Not Real Intelligence Bias

This note was inspired by the “AI effect” article on Wikipedia. I don’t like calling it the AI effect, since it is more accurately a bias.

The “Not Real Intelligence Bias” occurs when applications of artificial intelligence are deployed in systems but discounted as intelligent by the users of those systems.

Defining what intelligence is and what it is not is very hard, and there is no consensus in academic research. Various theories of mind have been proposed, but ultimately, in my opinion, a consensus in philosophy, supported by evidence from neuroscience and coherent with physical laws, must be reached.

The Not Real Intelligence Bias is a type of “no true Scotsman” fallacy: whenever an advance in AI solves a task, it is dismissed as “not real AI” but merely computation.

This is still true with Large Language Models: there are many articles arguing that LLMs are not “real” AI and that we shouldn’t use that term to describe them. What most of these authors probably mean is that LLMs are neither sentient nor Artificial General Intelligence (AGI).

Let’s take the example of LLMs. There are two fundamental arguments:

  • Dismissal of LLMs: LLMs are “stochastic parrots” that generate text by predicting the next most likely token in a sequence, based on massive patterns learned from their training data.
  • Counter-claim: the coherence and complexity of their outputs suggest that, to achieve such high-level prediction, they must have constructed functional internal representations that act like reasoning and understanding.
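To make the “predicting the next most likely token” mechanism concrete, here is a minimal toy sketch: a bigram model estimated from a tiny hypothetical corpus, with greedy decoding. Real LLMs use neural networks trained on vast corpora, but the core loop — pick the most likely next token, append it, repeat — is the same idea.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical, just for the sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most likely token to follow `token` (greedy decoding)."""
    followers = bigrams[token]
    return followers.most_common(1)[0][0] if followers else None

# Generate a short continuation starting from "the".
token, output = "the", ["the"]
for _ in range(4):
    token = predict_next(token)
    if token is None:
        break
    output.append(token)

print(" ".join(output))  # → "the cat sat on the"
```

The “stochastic parrot” critique says this is all an LLM does, just at enormous scale; the counter-claim is that scaling this objective forces the model to build internal representations far richer than a lookup table of counts.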

If a system can answer philosophical questions, revise its own chain of thought, and solve novel problems, is that not, functionally speaking, intelligence, regardless of the underlying mechanism?

Conversely, there could also be an overestimate of LLMs (and of AI in general). Moreover, focusing too heavily on proving or disproving “real” intelligence can distract from critical, practical issues such as bias and alignment, robustness, ethics, and ecological and political impact.