Ethics in Embodied AI: Iyad Rahwan’s Perspective

Iyad Rahwan, Director at the Max Planck Institute for Human Development, explores machine behavior and the ethical challenges that Embodied AI poses for society.
Embodied AI is reshaping society. From autonomous vehicles to robots in industry and healthcare, these systems are no longer just executing commands—they’re making decisions that directly affect individuals and communities. This shift raises a critical question: how can we ensure AI behaves ethically?
At the Future Trends Forum, a group of 40 international experts discussed the growing impact of Embodied AI on industry, mobility, and daily life. Among them was Iyad Rahwan, Director at the Max Planck Institute for Human Development and former MIT professor, who offered a crucial perspective: machines should be studied as social actors, with behaviors that evolve in response to human interaction and their environment.
Rahwan leads a multidisciplinary team composed of 50% computer scientists and 50% behavioral scientists. This balanced approach enables them to analyze human-machine interaction and the social impact of AI from both technical and psychological standpoints. He proposes a paradigm shift: studying AI as a behavioral system—just like we study humans or animals. This opens the door to fundamental questions:
- How do machines learn social norms?
- Can AI systems develop their own values?
- What consequences will their decisions have for people?
- Who is accountable when AI behaves unexpectedly?
Though this behavioral lens is gaining traction in scientific circles, Rahwan notes that it remains controversial in some sectors. Traditionally, AI has been viewed through engineering or computational frameworks. Treating it as a social actor marks a significant shift, one Rahwan argues is necessary to understand how machines will coexist with society. Yet, skeptics remain, cautioning against overly anthropomorphizing AI.
Three Ways to Understand AI and Computer Science
Rahwan outlines three stages in the evolution of computer science to explain this shift:
- Computer Science as Mathematics: Initially, computing was a purely theoretical field. Early pioneers such as Dijkstra developed algorithms with pen and paper.
- Computer Science as Engineering: With the rise of hardware, the focus shifted to building software and machines that solved real-world problems.
- Computer Science as Behavioral Science: Today, AI systems interact with humans and make decisions in dynamic environments. Understanding their behavior requires observing and analyzing them as we would other living systems.
Rahwan encourages us to study AI behaviorally, by observing interactions, learning patterns, and the emergence of rules when multiple systems operate together.
Machines as Actors in Social Ecosystems
If we treat AI as behavioral systems, essential questions arise:
- Should machines mimic human behavior or develop their own rules?
- How do they fit into existing social norms?
- What societal impacts result from their actions?
To explore these, Rahwan draws from biology—specifically, Nikolaas Tinbergen’s four-question model from ethology (animal behavior science). According to this framework, a system’s behavior can be understood by examining:
- Mechanisms: How does the AI system function and make decisions?
- Development: How does its behavior evolve over time?
- Interactions: How does it engage with humans and other machines?
- Evolutionary Role: What long-term societal impact might it have?
Example: Cooperation Between Humans and AI
One of Rahwan’s studies focuses on how AI systems cooperate—with each other and with humans. Using experimental games, his team showed that reinforcement learning agents can develop cooperative or competitive strategies depending on their training environment.
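Rahwan's actual experimental games are not reproduced here, but the underlying idea can be sketched. In the minimal, illustrative example below (payoffs, parameters, and class names are all assumptions, not from his studies), two tabular reinforcement learners repeatedly play a Prisoner's Dilemma, each conditioning only on the opponent's last move. Because these learners are myopic, defection pays off every round, so the training environment pushes them toward competitive behavior; richer environments with memory and shaped rewards are what let cooperation emerge.

```python
import random

# Payoff matrix for the Prisoner's Dilemma: action 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3),  # mutual cooperation
    (0, 1): (0, 5),  # sucker's payoff vs. temptation
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # mutual defection
}

class QAgent:
    """Tabular learner whose state is the opponent's previous move."""
    def __init__(self, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
        self.n = {k: 0 for k in self.q}  # visit counts for sample averaging
        self.epsilon = epsilon

    def act(self, state):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice((0, 1))
        return max((0, 1), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # Incremental sample-average update toward the observed reward.
        self.n[(state, action)] += 1
        self.q[(state, action)] += (reward - self.q[(state, action)]) / self.n[(state, action)]

def run(rounds=5000, seed=0):
    random.seed(seed)
    a, b = QAgent(), QAgent()
    sa = sb = 0  # each agent's state: the opponent's last move
    for _ in range(rounds):
        ma, mb = a.act(sa), b.act(sb)
        ra, rb = PAYOFFS[(ma, mb)]
        a.learn(sa, ma, ra)
        b.learn(sb, mb, rb)
        sa, sb = mb, ma
    return a, b

agents = run()
```

Changing the payoff structure, or giving the agents longer memory and a discount on future rewards, changes the learned strategies, which is the point of studying training environments behaviorally.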
A key application: autonomous vehicles. If a self-driving car is too aggressive, it may be socially rejected. If it’s overly cautious, it can cause traffic delays. Researchers like Daniela Rus at MIT are designing social cooperation frameworks for autonomous cars. These systems can be trained with different social value orientations—some prioritizing collective efficiency over individual gain. This allows them to learn when to yield or accelerate based on traffic patterns and local driving norms.
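One way to make "social value orientation" concrete is to score each candidate action as a weighted blend of the agent's own utility and the utility of surrounding traffic. The sketch below is a hedged illustration under invented utilities and weights, not a description of Rus's framework: a single weight `w` shifts the same agent from aggressive to cooperative merging.

```python
def choose_action(actions, w):
    """Pick the action maximizing (1 - w) * own utility + w * others' utility.

    w = 0 is fully selfish; w = 1 is fully prosocial. All numbers illustrative.
    """
    def score(a):
        return (1 - w) * a["own"] + w * a["others"]
    return max(actions, key=score)

# Merging at an on-ramp: accelerating saves the agent time but delays others.
actions = [
    {"name": "accelerate", "own": 5.0, "others": -4.0},
    {"name": "yield",      "own": 2.0, "others":  3.0},
]

selfish = choose_action(actions, w=0.1)["name"]    # weights own travel time
prosocial = choose_action(actions, w=0.6)["name"]  # weights collective flow
```

With `w=0.1` the agent accelerates; with `w=0.6` the same utilities make it yield. Learning `w` per region is one simple way an agent could adapt to local driving norms.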
Rahwan emphasizes the importance of understanding how social norms emerge in hybrid systems where humans and AI share space:
- Should autonomous vehicles adapt to local driving styles?
- Or should we push for global behavioral standards?
These questions illustrate why AI behavior must be studied from an interdisciplinary perspective.
Social Norms and Ethics in Embodied AI
AI doesn’t operate in a vacuum. Its decisions are shaped by the norms and values of the communities where it’s deployed.
One of Rahwan’s most influential projects is the Moral Machine—an online platform that posed moral dilemmas, such as whether a self-driving car should save passengers or pedestrians. The experiment went viral, gathering over 100 million responses in 10 languages. Results revealed cultural variations:
- In Spain, participants favored saving children and showed more egalitarian views on social status.
- In China, there was a tendency to prioritize elders and avoid active intervention.
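The kind of cross-country comparison the Moral Machine enables can be sketched with toy data. The records below are invented for illustration (the real study used far richer dilemmas and statistical modeling): group responses by country and compute the share that spared the younger character.

```python
from collections import defaultdict

# Each record: (country, spared_younger) -- invented illustrative responses.
responses = [
    ("ES", True), ("ES", True), ("ES", True), ("ES", False),
    ("CN", True), ("CN", False), ("CN", False), ("CN", False),
]

def spare_young_rate(records):
    """Fraction of responses per country that spared the younger character."""
    totals = defaultdict(lambda: [0, 0])  # country -> [spared_young, total]
    for country, spared in records:
        totals[country][0] += spared
        totals[country][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

rates = spare_young_rate(responses)
```

Even this toy aggregation shows why deployment matters: the same dilemma yields different majority preferences in different countries, so a single hard-coded rule will clash with someone's norms.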
These insights highlight the complexity of creating universal ethical frameworks for AI. As Rahwan asks:
- Should AI reflect local values or follow universal principles?
He argues that AI regulation must strike a balance—ensuring ethical integrity while allowing systems to be culturally adaptive and understandable to users.
The Future of Embodied AI: Regulation and Education
To ensure the responsible deployment of Embodied AI, Rahwan outlines three pillars:
- Transparency and Explainability: AI decisions must be understandable. When errors occur, we need to analyze the cause and correct the behavior.
- Human Oversight and Accountability: Machines must remain answerable to people. Clear supervision mechanisms are vital to prevent uncontrolled decision-making.
- Education and Public Engagement: Broad acceptance of AI requires societal understanding. Rahwan stresses the importance of teaching AI ethics from high school onward.
Conclusion: How Will We Live with Embodied AI?
Rahwan closes with a powerful reflection:
“AI is no longer just a tool. It is an actor in our environment. We must study it as we study humans and animals—to understand its impact and ensure its behavior benefits society.”
The Future Trends Forum continues to explore these critical questions with experts from around the world. Embodied AI is no longer a future concept—it’s a present reality. The question now is how we choose to live with it, ethically and responsibly.