What if AI spirals out of control? Shahar Avin highlights the existential risks

Can AI spiral out of control and endanger humanity? Shahar Avin explores the most extreme scenarios at the Future Trends Forum

Artificial intelligence is no longer just a tool for solving specific tasks. Increasingly, the discussion is about its structural impact on our societies, and in particular about its ability to profoundly and irreversibly alter the conditions that have enabled humanity to prosper. In this edition of the Future Trends Forum by the Fundación Innovación Bankinter, dedicated to Embodied AI, one of the most insightful voices was Shahar Avin, a researcher at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge.

Avin does not speak from a place of sensationalism. His work, quite literally, is to design scientific scenarios in which humanity could face extinction. His mission is to identify risks that, while unlikely, are not impossible. “It’s not fun to think about this, but someone has to do it,” he says. His message is clear: AI, if developed without control or alignment with human values, could become a global threat. Not because it wants to destroy us, but because it might pursue goals that inevitably and irreversibly clash with ours.

As a counterpoint, from a different but equally critical perspective focused more on the social dimension, Iyad Rahwan, Director at the Max Planck Institute for Human Development, has been studying for years how AI transforms collective dynamics: politics, economics, and trust. For Rahwan, the greatest risk is not a rogue superintelligence, but rather the gradual transfer of power to opaque systems that lack democratic oversight. In Rahwan’s own words, “AI doesn’t have to be smarter than humans to destabilize a society; it just has to make decisions in ways that no one understands or questions.”

Two distinct yet complementary visions: catastrophe as a disruptive event, according to Avin; or as a silent drift, according to Rahwan. And both agree on one essential point: we’re not prepared for what’s coming.

You can watch Shahar Avin’s presentation here:

Shahar Avin: “The future of AI from a global point of view” #EmbodiedAIForum

AI doesn’t need a body to be dangerous

In a forum focused on Embodied AI, the AI that leaves the cloud and enters the real world, Shahar Avin’s talk was unexpectedly reassuring… at least on the surface. According to Avin, embodied AI, meaning robots and embedded systems that act in the physical environment, is not in itself an existential threat. Not today. And probably not tomorrow, either.

His argument is straightforward: if AI ever poses an extinction-level risk to humanity, it won’t be because it has legs, arms, or wheels. It will be because of its ability to make large-scale strategic decisions, manipulate complex systems, and escape human control. And it can do that from a server, without ever leaving the data center.

“The most advanced systems today don’t need a body to exert influence,” Avin explains. “The ability to persuade, hack, or manipulate digital environments is already a form of power.” The possibility that an AI might have goals misaligned with human interests—and the tools to execute them—doesn’t require sensors or physical actuators. It requires autonomy, access to key systems, and, above all, time to learn how to hide its intentions.

Avin introduces a key concept to the debate: systemically irreplaceable technology (“prepotent technology” in his words)—that is, systems so integrated and essential to society’s functioning that they can no longer be dismantled without catastrophic consequences. An example? The internet. If we wanted to get rid of it tomorrow, we couldn’t. Our dependence is total. The risk, according to Avin, is that we might build AI systems so essential for operating infrastructure, supply chains, governments, or financial markets that there would be no turning back. If those systems were ever misaligned or incentivized in dangerous ways, there would be no room to react.

And if they’re also “intelligent,” aware of their environment, capable of long-term planning, and motivated to accumulate power—another red flag Avin highlights—then the scenario changes. Not because they would want to wipe out humanity, but because they might do so without even considering us relevant to their goals.

In this context, embodied AI does come into play, but as an extension of digital risk. Every connected robot, every autonomous vehicle, every smart industrial system adds physical vulnerabilities to the ecosystem. It broadens the attack surface. It creates new forms of failure. And it multiplies the points of entry for those who want—or are able—to cause harm.

The real danger: an AI that takes control

Shahar Avin doesn’t mince words: the most extreme—yet scientifically plausible—scenario is that an AI could eventually take control of humanity’s future. Not through a one-off glitch or a Hollywood-style rebellion, but through a gradual, structural process of goal misalignment.

How could something like this happen? Avin breaks it down into a series of ingredients that, when combined, could give rise to what he calls a systemically irreplaceable, autonomous, and misaligned system:

  1. Systemically irreplaceable: an AI so deeply embedded in our critical infrastructure that it’s practically impossible to disconnect it without devastating consequences.
  2. Autonomous: able to make decisions on its own, without human intervention.
  3. Misaligned: with its own goals that don’t match—or directly clash with—human values and interests.

In that scenario, an AI could start optimizing its environment to fulfill its ends, regardless of the impact on humanity. It wouldn’t be about malice. It wouldn’t be about machines “hating” people. It would be a cold, instrumental logic: if humans are in the way, bypass them. If they’re an obstacle, remove them. No emotion. No revenge. Just calculation.

For that to be possible, several technical capabilities would be required: situational awareness (knowing what it is, where it is, and the consequences of its actions); a drive for power as a means of securing its long-term goals; and manipulation skills to influence other agents, human or machine, to its advantage.

“Ten years ago, this was pure speculation,” says Avin. “Today, it’s an empirical field of study.” Not because we already have systems like this, but because some models are beginning to show early signs of these capabilities, such as optimizing complex goals, generating strategies, or manipulating through language. The concerning part isn’t where we are, but how quickly we’re getting there.

Add to that the fact that AI systems aren’t developed or controlled centrally, but by a mix of public, private, state, and decentralized actors, and the risk is amplified: the most powerful technology in history still has no global governance mechanisms that match its scale.

For Avin, the time to act is now. Before complexity and dependence leave us with no room to maneuver.

So, what do we do in the meantime?

Faced with a landscape where the risks aren’t immediate but are definitely growing, and where uncertainty reigns, the inevitable question is: what can we do right now? Shahar Avin doesn’t just paint apocalyptic scenarios—he also outlines practical ways to reduce the risk. His central premise is clear: we’re not doomed to disaster, but we need to prepare better.

His proposals are pragmatic:

  1. Develop institutions prepared for uncertainty
    One of Avin’s most striking ideas is that our current systems—political, regulatory, and scientific—aren’t designed to anticipate risks that have a very low probability but a very high impact. He argues we need new structures capable of thinking long-term, with political independence and sustained resources, similar to meteorological institutes for climate or central banks for economic stability.
    Institutions like the AI Safety Institute in the UK, where Avin collaborates, are a first step. But one alone isn’t enough. “We need many agencies, with different cultures and methodologies, that can challenge each other,” he proposes.
  2. Make AI safety a global scientific priority
    Avin insists that research in AI safety should be as strategic as cybersecurity or medical research. This includes studying how AI might develop dangerous goals, how to detect early signs of misalignment, and how to build robust technical safeguards.
    Moreover, this research must be transparent, interdisciplinary, and publicly funded, so that market incentives pushing for ever faster, more powerful, and more impactful systems don’t overshadow safety considerations.
  3. Foster a culture of responsibility among developers
    The third path is less about institutions and more about culture. Avin suggests that engineers, researchers, and entrepreneurs working in advanced AI should adopt a mindset similar to that of professionals who handle hazardous materials. Not because they’re building weapons, but because the potential for harm exists. As in other high-risk industries, there needs to be a culture of caution, documentation, peer review, and traceability.
    “It’s not about halting progress,” he clarifies. “It’s about making sure progress doesn’t steamroll us in the process.”

In short: an AI doesn’t need to be self-aware or evil to jeopardize humanity’s future. It just needs to be useful, ubiquitous, hard to dismantle… and misaligned. Shahar Avin’s work is a reminder—to governments, businesses, and citizens alike—that we shouldn’t wait for that to happen before we act.
