Remember HAL 9000, David from Prometheus (2012), and other delightfully evil intelligent robots and computers? They're great fiction, but Marc Wittmann, a research scientist at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany, says they are destined to remain fiction. Speaking to Psychology Today earlier this month, he explained why AI cannot actually be conscious:
It's a fallacy to think that just because a brain and a computer are both called machines, they are the same kind of thing. It is easy to apply the single word "machine" to two quite different objects. But the fact remains that a brain and a metal machine are completely different entities. A computer works by electricity flowing through its parts, but the parts themselves always remain the same. In principle, you could shut down a computer and store it in a dust-free environment, turn it on again after 100 years, and it would continue to process data.
Marc Wittmann, "A Matter of Time: Why AI Will Never Be Conscious," Psychology Today, August 3, 2024
Of course, living things are constantly and necessarily changing. If that change is not growth or controlled stasis, it is decline and, ultimately, collapse. Consciousness is, above all, the awareness of this constant change.
Quoting microprocessor pioneer Federico Faggin's book Irreducible (Essentia 2024), Wittmann writes, “A living organism is never the same, either physically or psychologically, from one moment to the next. Computer hardware, on the other hand, remains the same physical structure from the moment it leaves the factory until it stops working or is discarded.”
Wittmann adds:
Physical time as change and becoming is reflected in physiological time, which in turn is reflected in the conscious experience of constant change felt as the passage of time. Consciousness as we know it is embedded in the principle of life, which is a dynamic state of becoming. We humans are part of nature. That is what connects the time of physics, the time of biology, and the time of consciousness.
Wittmann, "A Matter of Time: Why AI Will Never Be Conscious"
Without the ability to experience what is happening from one moment of perception to the next, we could not really be aware of our surroundings.
Consciousness is certainly difficult to explain and study, but it is clear that consciousness is not merely computational. A GPS unit, no matter how sophisticated, does not experience a road trip; it computes it. It is the conscious passenger who experiences the trip.
But some are convinced that conscious AI is on the way.
Futurist and inventor Ray Kurzweil told The Guardian this June that we could still get there within five years.
So 2029 is the year for both human-level intelligence and for the slightly different Artificial General Intelligence (AGI). Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will be achieved in most respects. (There may be a transitional period of a few years beyond 2029 in which AI does not surpass the top humans in a few key skills, like writing Oscar-winning screenplays or generating deep new philosophical insights, though eventually it will.) AGI means AI that can do everything any human can do, but to a superior level. AGI sounds more difficult, but it's coming at the same time.
Zoë Corbyn, "AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045'," The Guardian, June 29, 2024
Of course, AI cannot do these things without consciousness, and in Kurzweil's scenario, as with HAL 9000 and David, consciousness seems to be taken for granted.
Given that we cannot even easily define consciousness, it is remarkable that some people are convinced that computation, at a certain level of sophistication, somehow translates into consciousness. The very fact that such a view rests on no provable premises makes it difficult to refute.
Why is there so little discussion of the obvious barriers to AI consciousness?
Writing in ZME Science, Tibi Puiu reflects on Kurzweil’s predictions (and the transhumanist vision in general):
Kurzweil's predictions are bold, and while not necessarily accurate, they push the boundaries of how we think about the future. As we approach his predicted date, the debate about the singularity is only intensifying. It remains to be seen whether his vision will come true, but it's clear that the questions he raises are more important than ever.
Tibi Puiu, "AI Expert Ray Kurzweil Says We're Just a Few Years Away from Human-Level AI (And It Could Change Everything)," ZME Science, August 9, 2024
Well, maybe. But wouldn't it be a good idea to start by calmly considering why conscious AI is out of reach? Are the barriers merely practical ones that we can overcome?
Saying “We can send humans to Pluto!” is different from saying “We can build a time machine!” The barriers to tourism to Pluto may all be practical and technological. The barriers to time travel are probably tied to the nature of our universe.
There is something wrong with any discussion of conscious AI that does not address which kind of problem we face.
Wittmann is the author of Altered States of Consciousness: Experiences Out of Time and Self (MIT Press, 2018).
Also read: When materialist assumptions about the mind start to sound outdated… This 2017 profile of Roger Penrose and his theory of consciousness was written before the gradual changes and major disruptions that have since shaken the field, and the difference shows: Paulson's Penrose profile reads as if materialism would triumph, but that seems much less likely now than it did in 2017.