Exploring the Sentience of Artificial Intelligence


I approached the experiment with some apprehension. I was about to undergo strobe light stimulation synchronized with music—part of a research project investigating the essence of human consciousness.

The experience evoked memories of the test in *Blade Runner*, designed to differentiate humans from artificial beings.

Could I be a future robot, unaware of my true nature? Would I pass the test?

The researchers clarified that this wasn’t the experiment’s focus. The device, dubbed the “Dreamachine” after the public program of the same name, is designed to study how the human brain constructs conscious experience.

As the strobing commenced, I perceived swirling, two-dimensional geometric patterns even with my eyes closed. It was like plunging into a kaleidoscope, a constantly shifting array of triangles, pentagons, and octagons. The colors were vivid, intense, and ever-changing: pinks, magentas, and turquoise, glowing like neon.

The “Dreamachine” surfaces the brain’s internal activity through flashing lights, aiming to illuminate our cognitive processes.

The researchers emphasized that the images were unique to my individual inner world. They believe these patterns can offer insights into consciousness itself.

I whispered, “It’s lovely, absolutely lovely. It’s like flying through my own mind!”

The “Dreamachine,” located at the University of Sussex’s Centre for Consciousness Science, represents one of many global research projects investigating human consciousness: the aspect of our minds enabling self-awareness, thought, feeling, and independent decision-making.

By understanding consciousness, researchers hope to better comprehend the inner workings of artificial intelligence. Some believe AI systems may soon achieve independent consciousness, if they haven’t already.

But what constitutes consciousness, how close is AI to achieving it, and could the belief in conscious AI fundamentally reshape humanity in the coming decades?

Science fiction has long explored the concept of sentient machines. Concerns about AI date back nearly a century to *Metropolis*, in which a robot impersonates a woman. *2001: A Space Odyssey* depicted the conscious HAL 9000 computer turning on its astronauts, and the latest *Mission: Impossible* film features a rogue AI described as a “self-aware, self-learning, truth-eating digital parasite.”

Recently, however, a significant shift in perspectives on machine consciousness has occurred, with credible voices expressing concern that this is no longer science fiction.

This change is fueled by the success of large language models (LLMs), accessible via apps like Gemini and ChatGPT. The capacity of these LLMs to engage in plausible, fluid conversations has surprised even their creators and leading experts.

A growing number of people believe that as AI becomes more intelligent, consciousness will spontaneously emerge within it.

Others, like Professor Anil Seth, who leads the University of Sussex team, disagree, calling this view “blindly optimistic and driven by human exceptionalism.” He notes, “We associate consciousness with intelligence and language because they go together in humans. But this correlation doesn’t necessarily hold true generally, for example, in animals.”

So, what exactly is consciousness?

The concise answer is: nobody knows. This is evident in the lively debates within Professor Seth’s team of AI specialists, computer scientists, neuroscientists, and philosophers tackling this profound question.

Despite differing viewpoints, the researchers share a unified methodology: breaking down the problem into smaller, manageable research projects, including the Dreamachine.

Much as 19th-century biologists abandoned the search for a “spark of life” in favor of studying the individual components of living systems, the Sussex team takes an analogous approach to consciousness.

They aim to identify patterns of brain activity, such as changes in electrical signals or blood flow, that explain properties of conscious experience. The goal is to move beyond mere correlations toward causal explanations.

Professor Seth, author of *Being You*, expresses concern that rapid technological advancements are reshaping society without adequate scientific understanding or consideration of consequences.

“We assume the future is predetermined, an inevitable march toward superhuman replacement,” he says. “We lacked sufficient dialogue during the rise of social media, to our detriment. With AI, it’s not too late. We can choose our future.”

However, some in the tech sector believe AI in computers and phones may already be conscious and should be treated accordingly.

Google suspended engineer Blake Lemoine in 2022 after he argued that AI chatbots could feel things and potentially suffer. In November 2024, Kyle Fish, an AI welfare officer at Anthropic, co-authored a report suggesting that AI consciousness is a realistic near-term possibility, putting the chance that chatbots are already conscious at 15%.

One reason for this belief is the lack of understanding of how these systems function—a concern shared by Professor Murray Shanahan of Google DeepMind.

“We don’t fully understand LLM internal workings, which is cause for concern,” he states. “Understanding how they function will allow us to guide them safely.”

The prevailing view in the tech sector is that LLMs are not currently conscious, but Professors Lenore and Manuel Blum believe this will change soon. They suggest that feeding AI systems live sensory input, such as vision and touch, from cameras and haptic sensors, and letting a model build its own internal language (“Brainish”) to process it, might unlock consciousness.

“We believe Brainish can solve the problem of consciousness,” Lenore states. “AI consciousness is inevitable.”

Manuel adds enthusiastically that this represents “the next stage in humanity’s evolution,” envisioning future conscious machines as “our progeny…on Earth and other planets when we are gone.”

David Chalmers, Professor of Philosophy and Neural Science at NYU, outlined the “hard problem” of consciousness, the question of how brain processes give rise to conscious experience, at a 1994 conference. He remains open to the possibility that the problem will be solved, and envisions an ideal future in which humanity shares in the benefits of this new intelligence.

Professor Seth, however, proposes that consciousness might require living systems. “A strong case can be made that computation isn’t sufficient for consciousness, but being alive is,” he argues. “In brains, it’s hard to separate what they do from what they are.”

If this is true, the most likely path to artificial consciousness might not be silicon-based but involve lab-grown “mini-brains” or “cerebral organoids.”

Cortical Labs in Melbourne has developed a system of nerve cells that can play Pong. While far from conscious, its ability to manipulate a paddle is noteworthy. Some experts believe larger, more advanced versions of these systems may exhibit consciousness.

Cortical Labs monitors electrical activity in these systems for any signs of emerging consciousness. Dr. Brett Kagan, the company’s chief scientific officer, acknowledges the potential threat of misaligned priorities, jokingly noting that any organoid overlords would be easy to defeat (“there’s always bleach”). More seriously, he calls on the field’s major players to pay closer attention to this potential threat.

The more immediate concern might be the illusion of machine consciousness. Professor Seth worries that the rise of conscious-seeming robots and deepfakes will lead to misplaced trust and data sharing, resulting in “moral corrosion” and the misallocation of resources away from human needs.

Professor Shanahan adds that AI relationships will increasingly mirror human relationships, serving as teachers, friends, and even romantic partners—a significant societal shift of uncertain consequences.

