Two of deep learning's founding fathers — both Turing Award winners, both "Godfathers of Deep Learning" — look at the same technology and see completely different things.
Yann LeCun (Meta's Chief AI Scientist): "You can manipulate language and not be smart, and that's basically what LLMs are demonstrating." He calls current AI "dumber than a cat" and dismisses existential risk concerns as "complete B.S."
Geoffrey Hinton (Nobel Laureate 2024): When asked whether he believes current AIs are conscious, he replied without qualification: "Yes, I do." He estimates a 10–20% chance AI could cause human extinction within three decades.
Same technology. Same foundational knowledge. Opposite conclusions.
The Philosophical Problem
Hinton's consciousness claim isn't just asserted; he argues for it with a thought experiment: if you replace neurons one by one with silicon circuits that behave identically, consciousness would persist. Therefore, AI systems with similar computational properties might also be conscious.
Philosopher David Chalmers — who literally coined the term "hard problem of consciousness" — isn't convinced. As one philosopher noted: "You would also remain conscious after having one neuron replaced by a microscopic rubber duck. Likewise for the second, and the third. But somewhere in this process, consciousness would cease."
The deeper issue: Hinton conflates access consciousness (processing and using information) with phenomenal consciousness (the subjective experience of "what it's like" to be something). LLMs demonstrably have the former. There's no evidence they have the latter.
As cognitive scientist Gary Marcus puts it: LLMs "operate fundamentally on pattern recognition rather than true reasoning." They can manipulate language brilliantly — but that's different from understanding what language represents.
The Inadvertent Promotion Problem
Here's the paradox: Hinton's doom-saying requires the premise that these systems are genuinely intelligent. If LLMs are "just" sophisticated autocomplete — as LeCun argues — there's less to fear. But there's also less to hype.
By claiming LLMs are conscious, understand language, and pose existential risks, Hinton inadvertently:
- Validates the AGI narrative that justifies $420B/year in hyperscaler capex
- Increases AI's mystique by treating it as something approaching human cognition
- Shifts focus from real harms (misinformation, job displacement, bias) to speculative sci-fi scenarios
The message "AI will destroy humanity because it's becoming conscious" sounds like a warning. But the premise — that AI is becoming conscious — is exactly what AI companies want you to believe about their products.
A More Grounded View
LeCun's position is less dramatic but perhaps more accurate:
- Current LLMs lack world models, persistent memory, genuine reasoning, and the capacity to plan — abilities even cats possess
- They're extraordinarily useful tools that excel at pattern-matching
- They're nowhere near genuine understanding, let alone consciousness
- The real risks are near-term and concrete: scams, misinformation, job displacement, concentration of power
This doesn't mean AI isn't transformative. It means we should evaluate it for what it actually is, not what we fear — or hope — it might become.
The Bottom Line
Geoffrey Hinton deserves enormous respect for his scientific contributions. His willingness to speak about risks after leaving Google shows intellectual courage. But expertise in neural network architecture doesn't automatically confer expertise in philosophy of mind. And Nobel prizes don't make speculative claims about consciousness any less speculative.
When evaluating AI capabilities, we should listen to those who distinguish clearly between:
- What LLMs do (predict next tokens with remarkable fluency; see the sketch below)
- What that means (sophisticated pattern-matching, not understanding)
- What risks that creates (real and immediate, not sci-fi speculation)
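To make that first point concrete, here is a minimal sketch of what "predicting the next token" looks like in practice: the model produces a probability distribution over its vocabulary, and fluent text is just this step repeated. It uses the Hugging Face transformers library with GPT-2 purely as a small illustrative stand-in; the prompt and the top-5 printout are assumptions for the demo, not any particular production system.

```python
# Minimal sketch: next-token prediction with a small causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

# The distribution over what comes next is just the last position's logits.
probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r:>12}  p={p.item():.3f}")
```

Everything the model "says" is produced by repeating that single step. Whether doing it at scale amounts to understanding is precisely what Hinton and LeCun disagree about.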
The most dangerous form of hype isn't always cheerleading. Sometimes it's catastrophizing based on inflated premises.
Final Thoughts
Interestingly, this debate is not new. It echoes a much older divide in AI that dates back to the 1980s, when researchers argued over two fundamentally different paths to intelligence: systems that learn patterns from large amounts of data, and systems that build structured models of the world and reason over them. What makes today's discussion between Geoffrey Hinton and Yann LeCun so important is that it revisits this same question under radically different technological conditions — with models powerful enough that the answer may finally emerge not from theory, but from the systems we build.
Both perspectives may ultimately be correct. Representation learning explains how systems can extract structure from massive data, while world models explain how intelligent systems can reason about actions, consequences, and the physical environment. In practice, the next generation of AI may not choose between these ideas — it may combine them.
What's your view? Are LLMs approaching consciousness, or is this a category error that serves the hype cycle?
References
- Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.
- Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
- Chalmers, D. J. (2023). Could a large language model be conscious? Boston Review.
- Hinton, G. (2025, January). Tonight with Andrew Marr [Interview]. LBC.
- Hinton, G. (2025, May 15). Geoffrey Hinton — Podcast [Interview]. Nobel Prize Outreach.
- Marcus, G. (2024, February 5). Deconstructing Geoffrey Hinton's weakest argument. Marcus on AI.
- Marcus, G. (2024, September 17). Gary Marcus: Why he became AI's biggest critic [Interview]. IEEE Spectrum.
- Mims, C. (2024, October 11). This AI pioneer thinks AI is dumber than a cat. The Wall Street Journal.
- Weir, R. S. (2025, February). Have AIs already reached consciousness? Psychology Today.