Baldur Bjarnason has a fascinating write-up entitled “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con”.

Essentially, Baldur Bjarnason observes that LLMs frequently end up following the same path as those who do “cold reading”. He notes that many people are becoming convinced that language models are intelligent. He argues there is no reason to believe this, and offers two possibilities:

  1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
  2. The intelligence illusion is in the mind of the user and not in the LLM itself.

He falls squarely in the second camp. I love the way he lays out the reasoning: he describes, quite clearly, how cold reading works, then mirrors those same steps in how an LLM works. He doesn’t just list the parallels; he provides examples and further clarification for each.

For example, in a cold reading, the first step is that the “Audience Selects Itself”:

> Most people aren’t interested in psychics or the like, so the initial audience pool is already generally more open-minded and less critical than the population in general.

The first step for the LLM as mentalist? The Audience Selects Itself.

> People sceptical about “AI” chatbots are less likely to use them. Those who actively disbelieve the possibility of chatbot “intelligence” won’t get pulled in by the bot. The most active audience will be early adopters, tech enthusiasts, and genuine believers in AGI who will all generally be less critical and more open-minded.

This is a fantastic read and well worth your time.