Ghost in the Shell: Will AI Ever Be Conscious?

As society gets closer to human-level AI, scientists debate what it means to exist.

Eric James Beyer

Imagine you undergo a procedure in which every neuron in your brain is gradually replaced by functionally equivalent electronic components. Let’s say the replacement occurs a single neuron at a time, and that behaviorally, nothing about you changes. From the outside, you are still “you,” even to your closest friends and loved ones.

What would happen to your consciousness? Would it incrementally disappear, one neuron at a time? Would it suddenly blink out of existence after the replacement of some consciousness-critical particle in your posterior cortex? Or would you simply remain you, fully aware of your lived experience and sentience (and either pleased or horrified that your mind could theoretically be preserved forever)? 

This famous consciousness thought experiment, proposed by the philosopher David Chalmers in his 1995 paper Absent Qualia, Fading Qualia, Dancing Qualia, raises just about every salient question there is in the debate surrounding the possibility of consciousness in artificial intelligence. 

If the prospect of understanding the origins of our own consciousness and that of other species is, as every single person studying it will tell you, daunting, then replicating it in machines is ambitious to an absurd degree. 

Will AI ever be conscious? As with all things consciousness-related, the answer is that nobody really knows at this point, and many think it may well be impossible for us to know whether the slippery phenomenon ever does show up in a machine.

Take the thought experiment just described. If consciousness is a unique characteristic of biological systems, then even if your brain’s robotic replacement allowed you to function in exactly the same manner as you had before the procedure, there would be no one at home on the inside, and you’d be a zombie-esque shell of your former self. Those closest to you would have every reason to take your consciousness as a given, but they’d be wrong. 

The possibility that we might mistakenly infer consciousness on the basis of outward behavior is not an absurd proposition. It’s conceivable that once we succeed in building artificial general intelligence, the kind that isn’t narrow like everything out there right now but can adapt, learn, and apply itself in a wide range of contexts, the technology will feel conscious to us, regardless of whether it actually is.

Imagine a sort of Alexa or Siri on steroids, a program you can converse with that is as adept as any human at communicating with varied intonation and creative wit. The line quickly blurs.

That said, it might not be necessary, desirable, or even possible for AI to attain or feature any kind of consciousness.

In Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark, professor of physics at MIT and president of the Future of Life Institute, laments, “If you mention the “C-word” to an AI researcher, neuroscientist, or psychologist, they may roll their eyes. If they’re your mentor, they might instead take pity on you and try to talk you out of wasting your time on what they consider a hopeless and unscientific problem.”

It’s a reasonable, if slightly dismissive, position to take. Why even bother taking the consciousness problem into account? Tech titans like Google and IBM have already made impressive strides in creating self-teaching algorithms that can out-think and out-pace any human brain (albeit in narrowly defined circumstances), and deep-learning programs in the field of medicine are also outperforming doctors in some areas of tumor identification and blood-work assessment. These technologies, while not perfect, perform well, and they’re only getting better at what they do.

Douglas Hofstadter, the pioneering cognitive scientist who wrote the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid, is among those who think we absolutely need to bother, and for good reason.

In a 2013 interview with The Atlantic, Hofstadter explains his belief that we’re largely missing the point if we don’t take things like the nature of conscious intelligence into account. Referencing Deep Blue, the famous IBM-developed chess program that beat Garry Kasparov in 1997, he says, “Okay, […] Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”

Hofstadter’s perspective is critical. If these hyper-capable algorithms aren’t built with a proper understanding of our own minds informing them, an understanding that is still very much inchoate, how could we know if they attain conscious intelligence? More pressingly, without a clear understanding of the phenomenon of consciousness, will charging into the future with this technology create more problems than it solves?

In Artificial Intelligence: A Guide for Thinking Humans, Melanie Mitchell, a former graduate student of Hofstadter, describes the fear of reckless AI development that her mentor once expressed to a room full of Google engineers at a 2014 meeting at the company’s headquarters in Mountain View, California.

“I find it very scary, very troubling, very sad, and I find it terrible, horrifying, bizarre, baffling, bewildering, that people are rushing ahead blindly and deliriously in creating these things.” 

That’s a fair number of unsavory adjectives to string together. But when language like that comes from someone the philosopher Daniel Dennett says is better than anybody else at studying the phenomena of the mind, it makes you appreciate the potential gravity of what’s at stake.

Conscious AI: not in our lifetime

While Hofstadter’s worries are perfectly valid on some level, others, like Mitch Kapor, the entrepreneur and co-founder of the Electronic Frontier Foundation and Mozilla, think we shouldn’t work ourselves into a panic just yet. Speaking to Vanity Fair in 2014, Kapor warns, “Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon.”

Tegmark labels those who feel as Kapor does, that AGI is hundreds of years off, “techno-skeptics.” Among the ranks of this group are Rodney Brooks, the former MIT professor and co-founder of iRobot, the company behind the Roomba robotic vacuum cleaner, and Andrew Ng, former chief scientist at Baidu, China’s Google, whom Tegmark reports as having said that “Fearing a rise of killer robots is like worrying about overpopulation on Mars.”

That might sound like hyperbole, but consider the fact that no existing software even comes close to rivaling the brain in terms of overall computing ability.

Before his death in 2018, Paul Allen, Microsoft co-founder and founder of the Allen Institute for Brain Science, wrote alongside Mark Greaves in the MIT Technology Review that achieving the singularity, the point where technology develops beyond the human ability to monitor, predict, or understand it, will take far more than just designing increasingly competent machines:

“To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us.” 

Like-minded figures such as Naveen Joshi, the founder of Allerin, a company that deals in big data and machine learning, assert that we’re “leaps and bounds” away from achieving AGI. However, as Joshi admits in an article in Forbes, the sheer pace of our development in AI could easily change his mind.

It’s on the hor-AI-zon 

It’s certainly possible that the scales are tipping in favor of those who believe AGI will be achieved sometime before the century is out. In 2013, Nick Bostrom of Oxford University and Vincent Mueller of the European Society for Cognitive Systems published a survey in Fundamental Issues of Artificial Intelligence that gauged the perception of experts in the AI field regarding the timeframe in which the technology could reach human-like levels.

The report reveals “a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.” 

Futurist Ray Kurzweil, the computer scientist behind pioneering music-synthesizer and text-to-speech technologies, also believes the singularity is fast approaching. Kurzweil is so confident in the speed of this development that he’s betting on it, literally: he has wagered Kapor $10,000 that by 2029 a machine intelligence will be able to pass the Turing test, a challenge that determines whether a computer can fool a human judge into thinking it is human.

Shortly after that, as he says in a recent talk with the Society for Science, humanity will merge with the technology it has created, uploading our minds to the cloud. As admirable as that optimism is, it seems unlikely given our still-forming understanding of the brain and its relationship to consciousness.

Christof Koch, an early advocate of the push to identify the neural correlates of consciousness, takes a more grounded approach while retaining some optimism that human-like AI will appear in the near future. Writing in Scientific American in 2019, he says, “Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft.”

Koch is also one of the contributing authors to neuroscientist Giulio Tononi’s integrated information theory of consciousness. As Tegmark puts it, the theory argues that “consciousness is the way information feels when being processed in certain complex ways.” IIT asserts that the consciousness of any system can be assessed by a metric called Φ (phi), a mathematical measure detailing how much causal power is inherent in that system.

In his book The Quest for Consciousness: A Neurobiological Approach, Koch equates phi with the degree to which a system is “more than the sum of its parts.” He argues that phi can be a property of any entity, biological or non-biological.

Essentially, this measure could be used to denote how aware the inner workings of a system are of the other inner workings of that system. If Φ is 0, then there is no such awareness, and the system feels nothing.
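
To make the “more than the sum of its parts” intuition slightly more concrete, here is a deliberately toy sketch in Python. It is not Tononi’s actual Φ, which involves searching over every possible partition of a system and a much richer causal analysis; it only compares how much a tiny two-node network’s past tells you about its future when you look at the whole system versus each node in isolation. The dynamics, the step and mutual_information helpers, and the “integration” score are illustrative inventions for this sketch, not part of any IIT software.

# Toy illustration of integration: the whole system can carry information
# about its own future that none of its parts carries on its own.
# This is NOT Tononi's Phi; it is a simplified, hypothetical sketch.
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    # I(X;Y) in bits, estimated from a list of equally likely (x, y) outcomes.
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

def step(state):
    # Dynamics: each node simply copies the *other* node's previous state.
    a, b = state
    return (b, a)

states = list(itertools.product([0, 1], repeat=2))  # uniform prior over (a, b)

whole = mutual_information([(s, step(s)) for s in states])          # whole system
part_a = mutual_information([(s[0], step(s)[0]) for s in states])   # node A alone
part_b = mutual_information([(s[1], step(s)[1]) for s in states])   # node B alone

print(f"whole: {whole:.1f} bits, parts: {part_a + part_b:.1f} bits, "
      f"integration: {whole - part_a - part_b:.1f} bits")
# Prints: whole: 2.0 bits, parts: 0.0 bits, integration: 2.0 bits

Because each node’s future depends only on the other node, neither part predicts itself at all, yet the whole predicts itself perfectly; the network is, in this toy sense, more than the sum of its parts. IIT’s wager is that a measure in this spirit, applied with far more rigor, tracks whether there is anyone home.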

The theory is one of many, to be sure, but it’s notable for its attempt at mathematical measurability, helping to make the immaterial feeling of consciousness something tangible. If proven right, it would essentially preclude the possibility of conventional computers being conscious, something Tononi elaborates on in an interview with the BBC:

“If integrated information theory is correct, computers could behave exactly like you and me – indeed you might [even] be able to have a conversation with them that is as rewarding, or more rewarding, than with you or me – and yet there would literally be nobody there.”

Optimism of a (human) kind

The interweaving of consciousness and AI represents something of a civilizational high-wire act. There may be no other field of scientific inquiry in which we are advancing so quickly while understanding so little about what we’re potentially doing.

If we manage, whether by intent or accident, to create machines that experience the world subjectively, the ethical implications would be monumental. It would also be a watershed moment for our species, and we would have to grapple with what it means to have essentially created new life. Whether these remain a distant possibility or await us just around the corner, we would do well to start considering them more seriously. 

In any case, it may be useful to think about these issues with less dread and more cautious optimism. This is exactly the tone that Tegmark strikes at the end of his book, in which he offers the following analogy:

“When MIT students come to my office for career advice, I usually start by asking them where they see themselves in a decade. If a student were to reply “Perhaps I’ll be in a cancer ward, or in a cemetery after getting hit by a bus,” I’d give her a hard time […] Devoting 100% of one’s efforts to avoiding diseases and accidents is a great recipe for hypochondria and paranoia, not happiness.” 

Whatever form the mind of AGI takes, it will be influenced by and reflect our own. It seems that now is the perfect time for humanity to prioritize the project of collectively working out just what ethical and moral principles are dear to us. Doing so would not only be instructive in how to treat one another with dignity but would also help ensure that artificial intelligence, when it can, does the same.
