AI-generated summary
The AI Index Report 2025, produced by Stanford’s Institute for Human-Centered AI, offers a comprehensive global overview of artificial intelligence developments, challenges, and trends shaping the field. Key highlights include dramatic improvements in AI model performance—some surpassing human levels in technical tasks—alongside widespread adoption, with 78% of organizations using AI by 2024. Industry dominates AI innovation, producing nearly 90% of advanced models, supported by unprecedented private investment totaling $252 billion, particularly in generative AI. However, rapid technological progress reveals growing tensions between innovation and governance, as ethical and security incidents increase despite intensified regulation. Regional disparities in AI perception and access persist, with optimism prevalent in Asia and caution in the West.
A significant shift documented in the report is AI’s evolution from a purely analytical tool to “embodied AI,” integrated into physical systems like robots, autonomous vehicles, and medical devices. This transition expands AI’s capabilities and raises fresh technical, ethical, and societal challenges related to human-machine interaction in real-world environments. The report underscores the importance of adaptive, human-centric governance and ethical design, especially as AI becomes a physical actor in daily life. Complementary analysis from the Physical AI report calls for strategic investment, integrated education, and new metrics to ensure responsible deployment. Together, these insights highlight the need for coordinated, forward-looking strategies—particularly in Europe—to harness AI’s transformative potential while managing its risks.
An analysis of the AI Index Report 2025, prepared by Stanford HAI, and its connection with the Future Trends Forum's Embodied AI report: performance, governance, ethics, and investment in an AI that is already transforming the physical world.
The AI Index Report 2025, one of the world’s most influential publications on artificial intelligence, produced by the Stanford Institute for Human-Centered AI, has just been released. This report, globally recognized for its rigor and comprehensive vision, provides an up-to-date overview of the developments, challenges, and dynamics that are defining AI development on a global scale.
We highlight six key themes from the report that mark the pulse of technological evolution this year:
- Increasing performance of AI models: In demanding benchmarks such as GPQA or SWE-bench, models have improved their performance by up to 67 percentage points in just one year, approaching or even surpassing human performance in technical tasks.
- Mass adoption in everyday life and business: AI has moved from laboratories to hospitals, factories, and vehicles. By 2024, 78% of organizations were using AI, up from 55% the previous year.
- Industrial leadership in model development: Almost 90% of the most advanced models in 2024 were developed by private industry, signaling a shift in the axis of innovation from academia to big tech.
- Unprecedented increase in private investment: Private investment in AI reached $252 billion, with notable growth in the generative AI area, which attracted $33.9 billion, 18.7% more than in 2023.
- Tension between progress and governance: Although governments are increasing their investment and regulation (the US, for example, doubled its regulations in 2024), the occurrence of ethical and security incidents is also on the rise, revealing a worrying gap in responsible AI.
- Regional inequalities in perception and access: While countries such as China, Indonesia or Thailand show high levels of optimism about AI, caution prevails in the West. At the same time, improvements in efficiency and cost reduction are democratizing access to advanced models.
From the lab to the world: AI Index Report 2025 confirms that AI is becoming physical
One of the most significant transformations documented in the AI Index Report 2025 is the shift from artificial intelligence as an analytical tool to its embodiment in systems that act, move, manipulate, and make decisions in the physical environment. From medicine to autonomous transportation, AI becomes a real-world player.
In 2023, the U.S. Food and Drug Administration (FDA) approved 223 AI-enabled medical devices, an exponential increase over previous years. According to Carme Torras, research professor at the CSIC and participant in our Future Trends Forum on physical AI (Embodied AI), robots can play a crucial role in the care of people with Alzheimer's, helping them remember everyday tasks such as taking medication or performing cognitive exercises, and providing emotional support. On the streets, companies such as Waymo already provide more than 150,000 autonomous rides per week in the US, and platforms such as Apollo Go are expanding autonomous mobility across several cities in China. All of this indicates that AI is no longer just in the cloud: it has a body, sensors, and wheels.
In this context, the Embodied AI report offers an essential layer of analysis: it defines this new stage as physical or "embodied" AI, in which artificial intelligence is integrated into robotic systems, sensors, actuators, and physical bodies capable of learning and adapting. This concept expands the capabilities of AI and opens up new dimensions of human-machine interaction, with profound technical, ethical, and societal implications.
Technology that thinks and acts: from benchmarks to intelligent agents
The AI Index Report 2025 also highlights advances in the creation of intelligent agents capable of solving complex tasks, especially in time-limited contexts.
In parallel, our physical AI report argues that these advances must be seen from a logic of adaptive embodiment: when these models are inserted into robots or devices that operate in factories, hospitals or homes, not only their abstract performance matters, but also their capacity for interaction, multimodal perception and continuous learning in dynamic environments.
Global AI, but with rules still under construction
Stanford reveals a landscape in which investment and development are dominated by powers such as the US and China. Europe lags behind in the number of models developed, although it stands out for its leadership in regulation. Meanwhile, AI-related incidents grew by 56% in 2024, and systematic evaluation of models against responsible-AI criteria remains the exception rather than the rule. As Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development and also a participant in our Future Trends Forum, points out, "AI systems can only be as fair as the data that feeds them," highlighting the crucial importance of addressing bias in datasets to build equitable systems.
Here, the physical AI report offers a clear proposal: to leverage European leadership in robotics, technological ethics and regulation to lead physical AI from a human-centric approach. This includes hybrid regulatory frameworks, ethical design principles, and new metrics for assessing human-machine interaction in the real world.
Action strategies: from data to decisions
While the AI Index Report 2025 does not make direct recommendations, it does offer critical indicators that urge action:
- 90% of the featured models come from industry, yet their average transparency score barely reaches 58%, according to the Foundation Model Transparency Index.
- In 2024, there were 233 reported incidents involving AI systems, an all-time high that underscores the need for robust preventive and reactive mechanisms.
- While 81% of computer science teachers in the U.S. believe AI should be part of the school curriculum, less than half feel ready to teach it.
- Regional differences in access to technology, education, and perception of AI create gaps that could widen if not addressed in a coordinated manner.
In this context, the physical AI report complements the diagnosis with operational proposals:
- Integrated training and curricula, which prepare talent to design and implement AI in physical environments.
- Strategic investment in robotics, sensors and intelligent agents, combining cognitive and physical capabilities.
- New metrics to assess social and ethical impact, especially in real human interactions.
- Adaptive governance models, which balance innovation with security and promote citizen trust in the AI that "walks among us."
The conclusion is clear: Stanford’s data is a warning. The physical AI report is an action guide.
Conclusion: Stanford measures the pulse and Europe needs a strategy
Stanford's AI Index Report 2025 offers an accurate and quantified snapshot of the global advancement of artificial intelligence. It shows how AI has moved beyond the laboratory to become an active part of the productive and social fabric, while also warning of the risks and asymmetries that accompany this accelerated deployment.
In this context, the physical AI report by the Bankinter Innovation Foundation provides a complementary approach. Although it does not have the empirical scope of the Stanford study, it introduces a structured perspective on the growing role of AI in the physical world and the challenges this poses in terms of regulation, education, investment, and ethics.
What the physical AI report provides are not definitive answers, but key questions, working hypotheses, and preliminary proposals to begin to sort out a rapidly consolidating field. In particular, it stresses that Europe could play a relevant role if it manages to articulate its strengths in robotics, regulation, and human-centred design.
In short, while Stanford gives us the data on AI's change and growth, documents such as the physical AI report are an invitation to turn that information into strategic direction and shared vision. Because in this new AI cycle – an AI that already acts, interacts, and transforms – it is not enough to observe: we have to decide how we want to participate.