2 Comments
Jul 16, 2023 · Liked by Michael Parekh

I think that any focus on what the "numbers in the ANN" mean is fruitless. We cannot do much of that with human brains either. The approach humans use is to mold and interrogate minds with various approaches to teaching and with psychological tests.

Reinforcement learning should reduce hallucinations by effectively forcing the AI not to "bullshit", to use facts and sources, and to just say "I don't know" when facts and sources are absent. Just as humans get caught in false narratives when they are not corrected, AIs need the same constant coaching. Without it, AI-generated output could be steered in bad directions, just as we have seen with "rogue chatbots".

My sense is we need automated versions of Asimov's robot psychologist, Susan Calvin. In this case, lots of highly trained AIs to keep applying feedback and correcting the AI's responses. IMO, trying to "understand what's going on in the ANN connection architecture and synapse weights" is not the way to address the problems.

Jul 16, 2023 · Liked by Michael Parekh

Brilliant. Now I need to stop thinking “When I can’t tell what my kids are up to, it’s usually not a good thing.” :)
