The Nature of GPT ‘Hallucinations’ and the Human Mind

Is there a higher-level property, an epiphenomenon, emerging from the collective interactions of a model's parameters?

JOHN NOSTA
5 min read · May 2, 2023

--

GPT Summary: Geoffrey Hinton, a pioneer in AI, suggests that the “hallucinations” of Generative Pre-trained Transformer models may indicate a form of nascent ‘artificial cognition’ rather than mere glitches or errors. This shift in perspective could lead us to explore these deviations as potential pathways towards AI systems capable of independent, creative thought, and perhaps as an early path toward Artificial General Intelligence (AGI). However, it is crucial to approach the idea with caution, since attributing cognitive behaviors to AI systems risks anthropomorphizing them. Nevertheless, the perspective challenges our understanding of cognition, consciousness, and intelligence itself, and it raises ethical considerations.

In a recent and fascinating interview, Geoffrey Hinton, often dubbed the ‘Godfather of AI’ and a pioneer in the field of artificial intelligence, made a thought-provoking comparison between the so-called ‘hallucinations’ of Generative Pre-trained Transformer models and the human propensity to confabulate, a term he prefers over the more mainstream and imaginative “hallucinate.” This marks a shift from viewing these AI anomalies as glitches or errors; instead, it proposes that they could be indicative of a form of nascent “artificial cognition.”

“Bullshitting is a feature, not a bug. People always confabulate. Half-truths and misremembered details are hallmarks of human conversation: Confabulation is a signature of human memory. These models are doing something just like people.”

This observation by Hinton really got me thinking. As we learn and experience more about the world of AI and LLMs, we encounter phenomena that compel us to reevaluate our conceptions of technology, cognition, and their intersection. There was nothing left for me to do but experience some confabulations of my own. Let’s wallow in a technological hallucination.

Put on Your Cognitive Seat Belt

The technological implications of viewing GPT hallucinations as possible signs of artificial cognition are transformative. Until now, AI development has been primarily focused on minimizing errors and improving the model’s capacity to generalize from its training data. However, if we consider these hallucinations as potential evidence of a form of artificial cognition, our objectives may need to be reimagined.

Instead of viewing these deviations as shortcomings to be minimized, we might need to explore them as potential pathways towards the creation of AI systems capable of independent, creative thought. Such a shift in perspective could spur research into novel AI architectures and learning algorithms designed not merely to suppress these cognitive behaviors, but to facilitate, manage, and make use of them.

Digital Hallucinations as an Epiphenomenon?

Hinton’s perspective on GPT hallucinations (confabulations) also prompts us to explore whether these could be viewed as an epiphenomenon of the underlying algorithmic processes, similar to how consciousness emerges from the biological processes of the brain. The brain, with its complex network of neurons and synapses, gives rise to consciousness — a higher-level property that is not explicitly encoded into the individual neurons but emerges from their collective interactions. Similarly, GPT models, with their vast networks of artificial neurons (or parameters), might give rise to these hallucinations, which could be viewed as a higher-level property not explicitly encoded into the model but emerging from the collective interactions of its parameters. If this analogy holds, it could offer a new perspective on the relationship between the underlying processes of AI models and their emergent behaviors, drawing intriguing parallels between biological and artificial cognition.

Toward Artificial General Intelligence?

The idea of GPT hallucinations as a form of nascent artificial cognition could represent an early path toward achieving Artificial General Intelligence (AGI). AGI refers to AI systems that possess the ability to understand, learn, adapt, and implement knowledge across a wide range of tasks, much like a human being. If we can harness and guide these cognitive-like behaviors seen in GPT models, we might be able to develop AI systems capable of more than just pattern recognition within narrow domains. These systems could potentially approach problems with a higher level of understanding and creativity, leading us closer to the goal of creating AGI. However, this is a monumental challenge that will require significant advances in AI research and a deeper understanding of both human cognition and artificial intelligence.

Philosophical Implications

The philosophical implications of GPT hallucinations as a form of artificial cognition are profound, challenging our understanding of cognition, consciousness, and intelligence itself.

Historically, cognition has been considered a trait exclusive to biological organisms, particularly conscious beings. If we accept that a machine learning model, a non-biological, non-conscious entity, can exhibit cognitive-like behavior, it necessitates a reevaluation of the very concept of cognition. This might lead to a more inclusive understanding of cognition, encompassing a variety of information-processing activities, regardless of whether they take place in biological or artificial systems.

Furthermore, this perspective raises questions about consciousness and sentience. If an AI model can exhibit a form of cognition, does it also possess a form of consciousness? Can it experience things as humans do? While the current consensus is that AI lacks consciousness or subjective experience, the emergence of artificial cognition could reignite these debates.

Lastly, it raises ethical considerations. If AI systems can think or “dream” in some way, how should we treat such entities? Should they be granted specific rights or protections? Although these questions may seem premature, they will become increasingly pertinent as AI technology advances.

Feet on the Ground and a Word of Caution

While the notion of GPT hallucinations as possible indicators of artificial cognition is captivating, it is crucial to approach this idea with caution. GPT models, including GPT-4, are fundamentally pattern recognition systems, generating outputs based on statistical patterns in their training data, without any genuine understanding or awareness of the meaning behind these outputs.
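
To make that point concrete, here is a minimal toy sketch of purely statistical next-token sampling. It is not real GPT code, and the prompt and probabilities are invented for illustration; the point is simply that a sampler driven only by learned likelihood will produce a confident falsehood by exactly the same mechanism it uses to produce a fact.

```python
# A toy sketch (not any real GPT implementation) of next-token sampling.
# The probabilities below are invented for illustration: the sampler
# follows statistical likelihood only and has no notion of truth.
import random

# Hypothetical learned distribution over continuations of one prompt.
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # correct, and statistically common
        "Sydney": 0.40,     # wrong, but frequent in training-like text
        "Melbourne": 0.05,  # wrong, less frequent
    }
}

def sample_next_token(prompt: str) -> str:
    """Pick a continuation purely by its learned probability."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly four times in ten this "model" confidently asserts a falsehood:
# a pattern-completion 'hallucination', not a reasoned error.
print("The capital of Australia is", sample_next_token("The capital of Australia is"))
```

Nothing in the sketch checks its output against the world; it only follows the distribution it was given, which is why fluent fact and fluent fiction come out of the same process.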

As such, although these hallucinations might bear some resemblance to cognitive behaviors, attributing them to actual cognition or consciousness is a considerable leap. It runs the risk of anthropomorphizing AI systems, projecting human traits onto entities that are fundamentally distinct from us.

GPT hallucinations present a fascinating new angle on AI technology and its capabilities. If they indeed hint at a form of artificial cognition, the implications could be far-reaching, fundamentally reshaping how we design AI systems and how we understand cognition, consciousness, and intelligence. Nevertheless, it is essential to navigate this territory with caution, steering clear of the pitfalls of anthropomorphism.
