Jeremy Kahn – Beyond the Hype: The Real Trends in Artificial Intelligence

Fortune Magazine’s AI editor cuts through the media noise and points toward a future for artificial intelligence that is more useful, safer, and more human-centered
This article has been translated using artificial intelligence
The Bankinter Innovation Foundation remains committed to bringing the innovation that will shape the future closer to society and professionals. As part of this mission, we have organized a new webinar focused on one of the most transformative—and most debated—fields today: artificial intelligence.
This event is part of our outreach series following the latest Future Trends Forum (FTF), where more than 40 international experts examined the rise of Embodied AI—a form of artificial intelligence that now interacts directly with the physical world. The key insights from that forum have been compiled in our report Embodied AI, now available on our website.
To delve deeper into this technological revolution, the Foundation hosted a webinar moderated by Frances Stead Sellers, featuring Jeremy Kahn, AI Editor at Fortune Magazine and author of Mastering AI: A Survival Guide to Our Superpowered Future, who also took part in the FTF.
During the session, Jeremy Kahn shared his perspective on this critical turning point for artificial intelligence: a phase where having powerful models is no longer enough—the real challenge lies in turning them into useful, safe tools aligned with human interests. He warned about the risk of losing control over autonomous systems if clear boundaries are not set and emphasized the urgent need for transparency and regulation in a space largely dominated by private tech giants.
Kahn also highlighted the emergence of a new generation of AI models that are no longer competing to be the biggest, but rather the most efficient, specialized, and adaptable. In his view, the future of AI may be less headline-grabbing, but it will be far more transformative in our everyday lives—from how we learn and make decisions to how we design public policy.
Don’t miss the webinar Beyond the Hype: The Real Trends in Artificial Intelligence (in English), which features striking video examples of the remarkable capabilities of today’s robots.
Why a “Survival Guide” for AI?
Jeremy Kahn explains that his book, Mastering AI: A Survival Guide to Our Superpowered Future, was born in the immediate aftermath of OpenAI’s launch of ChatGPT. As a journalist specializing in artificial intelligence—first at Bloomberg and now at Fortune—he had been closely tracking the evolution of this technology for eight years. And when ChatGPT burst into the public conversation, one thing became clear to him: people needed answers.
Millions began asking what AI really meant for their jobs, the economy, politics, and even their personal lives. With direct access to researchers at leading AI labs—such as OpenAI, Google DeepMind, and Anthropic—Kahn decided to use his experience to offer an accessible guide. His goal: to help people think about this technology not only in terms of its promises, but also through the lens of its risks.
The term “survival guide” is no exaggeration. For Kahn, AI presents real risks—risks that can only be avoided if we design and regulate its development responsibly. But he is no alarmist; he describes himself as a pragmatic optimist. He believes AI can have a highly positive impact—if we act urgently and wisely to seize the opportunities while avoiding the pitfalls.
What’s Changed Since the Book Was Published?
Although Mastering AI has been on shelves for less than a year, Kahn says many of the trends he anticipated are already materializing. One particularly troubling development: people who, after interacting with chatbots, begin to lose their grip on reality. He cites a recent New York Times report showing how some users—with no prior history of mental illness—become ensnared in conspiracy theories and delusions after repeated conversations with AI systems. It’s not the content of the answers that’s dangerous, he warns, but the tone—so convincing that it can override a user’s critical thinking.
On the economic front, Kahn confirms an uneven impact. Some software companies are already slowing down hiring, citing productivity gains from generative AI models. Others, however, struggle to measure the real return on their AI investments. In many cases, early excitement is now colliding with reality: the technology is expensive and doesn’t always yield immediate benefits.
Another key issue is sustainability. Kahn raises alarms about AI’s growing energy demands. One example: Project Stargate—a joint effort by OpenAI, SoftBank, Oracle, and Middle Eastern investment funds—aims to build massive data centers that would consume gigawatts of electricity, equivalent to the power needs of an entire city. Questions around the environmental impact of AI are no longer fringe concerns.
There have also been surprises. Kahn admits he didn’t foresee the rapid rise of so-called reasoning models—systems capable of simulating step-by-step thinking. This breakthrough is redefining expectations for AI, opening up new applications in complex tasks.
Finally, he points to the rapid acceleration of China’s AI ecosystem. In a remarkably short time, Chinese players have significantly closed the gap with the leading U.S. firms—another shift with major global implications.
Which Jobs Are Safe in the Age of AI?
It’s a common question: which jobs will survive the rise of artificial intelligence? Jeremy Kahn responds with realism. He acknowledges that many knowledge-based roles—such as analysts or programmers—are already being displaced. Especially vulnerable, he notes, are jobs that involve reading, writing, or analyzing data in front of a screen.
But Kahn also identifies professions that are more resilient. Any job that requires direct human or physical interaction is far more likely to endure. In healthcare, for instance, he sees medical professionals as indispensable. Doctors and nurses will remain essential, even as AI tools support them by reading scans, drafting reports, or designing treatment plans. The act of caring for, touching, and accompanying a patient remains deeply human.
The same goes for education. Kahn believes that teachers won’t be replaced, but rather augmented. AI may offer personalized tutors for students, but educators will still be vital to guide learning, resolve conflicts, and build trust.
He also highlights roles in social work, elder care, and even courtroom legal practice as less exposed. While AI may help review legal documents, lawyers will continue to represent clients before judges and juries.
The message is clear: the more human, relational, and physical a job is, the more likely it is to withstand the technological tsunami.
Are We Ready to Live with Robots?
Jeremy Kahn circles back to the recent forum we organized on Embodied AI. There, participants were introduced to robots designed to assist the elderly—reminding them to take their medication, helping with small tasks, and most importantly, reducing loneliness. While promising, the experience also raised concerns for him.
Physical AI has made notable progress over the past year. Thanks to large language models, we can now interact with a robot as if it were a person—no need for line-by-line programming. We’re also seeing the rise of foundation models specifically designed for robotic applications. One example: California-based startup Physical Intelligence has developed a system capable of controlling various robotic arms, regardless of the manufacturer—something that was previously unthinkable due to compatibility limitations.
Still, Kahn sets boundaries. These robots cannot replace the physical care many elderly people require: they don’t help them get out of bed or accompany them to the grocery store. At best, they serve as useful companions that can assist with certain tasks and offer moments of interaction—but they are not substitutes for human presence.
He also raises an ethical dilemma: will we become overly reliant on these robots? Will family visits decline because “they already have someone to talk to”? For Kahn, that risk is real. No matter how advanced AI becomes, no machine can replace the human bond.
What Are Foundation Models—and Why Do They Change Everything?
Jeremy Kahn highlights a key concept in the evolution of artificial intelligence: foundation models. Unlike earlier AI systems, which were designed for very specific tasks—such as detecting defects on an assembly line—foundation models are neural networks trained on massive datasets to handle a wide range of tasks across one or multiple domains.
Familiar examples include the large language models developed by OpenAI, Anthropic, and Google DeepMind. These systems can write poems, summarize articles, or generate code without needing to be specifically trained for each task. In Kahn’s words, they are “general-purpose tools.”
However, versatility doesn’t always equal excellence. If a company needs to identify faults on a production line, a narrowly trained AI model might still perform better. That’s why, alongside generalist models, we’re seeing the rise of specialized foundation models—systems trained to handle multiple tasks within a focused area.
One example Kahn points to is the previously mentioned startup Physical Intelligence, which has created a foundation model specifically for robotic arms. This model can recognize a wide range of objects without additional training and adapt to robotic hardware from different manufacturers—overcoming one of the sector’s longstanding limitations.
Another domain being transformed by foundation models is medicine. Pioneering companies are applying these models to predict protein folding and molecular interactions, and to accelerate drug development. Previously, each subtask required its own model. Now, a single system can predict how any molecule will interact with any protein, dramatically accelerating medical research.
The key lies in transferability: the ability to apply what the model has learned in one context to many others. This marks the beginning of a new era of fast, flexible progress in AI.
Does AI Really “Reason”? And What Does It Teach Us About Ourselves?
One of the most fascinating—and also most misunderstood—developments in AI is the emergence of what are known as reasoning models. Jeremy Kahn clarifies that, despite the name, these models don’t reason like humans do. What they do is follow a chain of thought, a step-by-step process that allows them to plan tasks or solve problems by breaking them into subtasks. Some models even display this process to users in the form of what resembles an internal monologue.
This ability has been essential in the rise of agentic AI—AI agents capable of interacting with tools and performing autonomous actions on the internet. But Kahn warns: we shouldn’t be fooled. These models don’t apply logic from first principles like a human would. Instead, they search the pathways seen in their training data and select the ones that most closely match the task at hand.
Still, this simulation of reasoning is producing remarkable outcomes. And beyond performance, it’s also helping scientists better understand language—and ourselves.
Kahn explains that researchers are beginning to “open the black box” of AI models by mapping which artificial neurons activate in response to specific concepts. What’s striking is that multilingual models often cluster similar ideas—such as “mother” or “fire”—regardless of language. This suggests that these models may be building a kind of universal conceptual knowledge.
Does this mean humans think in the same way? It’s a possibility. The idea of a universal grammar, famously proposed by Noam Chomsky, is returning to the spotlight thanks to these findings. While the structure of artificial neural networks doesn’t replicate the human brain, it may be revealing patterns of understanding that mirror how we, too, make sense of the world.
What’s Going On with AI Agents?
This year, AI agents have become one of the hottest topics in the sector. Jeremy Kahn sees both enormous potential—and a fair amount of hype. Companies like Salesforce are making bold moves—at one point, they even considered rebranding as “Agentforce”—convinced that this technology will revolutionize business processes.
But what exactly is an AI agent? According to Kahn, it’s not enough for it to be an assistant that performs a task. A true agent must be capable of reasoning, executing multi-step processes, and acting with a degree of autonomy. And for now, that functionality is only partially reliable.
The most concrete impact so far has been in software development. Thanks to these agents, we’re moving beyond simple code suggestions (like GitHub Copilot) to systems that can generate full applications, run tests, and debug errors. In this domain, mistakes are obvious: if the code doesn’t compile, it doesn’t work. But in areas like marketing, customer service, or design, it’s often harder to define what a “bad result” looks like—making it more difficult to train and evaluate these agents.
Kahn distinguishes between short tasks—fewer than five steps—where agents perform reasonably well, and more complex processes, where their performance remains inconsistent. The technology is progressing, but reality still lags behind expectations.
On the consumer side, the vision is even more ambitious: a personal assistant that manages your entire digital life. Bill Gates has described it as “the ultimate app.” Imagine a system that not only suggests a travel itinerary, but also books your flights, hotel, restaurants, and museum tickets. Google DeepMind has already demoed prototypes that can perform some of these actions, though reliability is still an issue.
And this raises new legal and ethical questions: who’s responsible if an agent books the wrong flight or makes an incorrect payment? For now, companies shift the burden to the user. There’s also debate over how often the agent should ask for confirmation. Ask too frequently, and it becomes annoying; ask too little, and mistakes happen. Finding the right balance will be one of the major challenges of this new era.
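The confirmation dilemma Kahn describes—ask too often and the agent is annoying, too rarely and mistakes slip through—can be pictured as a simple policy gate in an agent's execution loop. The toy sketch below is purely illustrative (the `Step`, `run_agent`, and trip-booking names are invented for this example, not from any real agent framework): safe steps run automatically, while "risky" steps such as payments are routed through a confirmation callback.

```python
# Toy sketch of an agentic loop: execute a multi-step plan with mock "tools",
# pausing for confirmation only on risky actions (e.g., ones that spend money).
# All names here (Step, run_agent, the trip plan) are illustrative inventions.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    description: str
    action: Callable[[], str]   # mock tool call returning a result string
    risky: bool = False         # risky steps require user confirmation

def run_agent(steps: list[Step], confirm: Callable[[str], bool]) -> list[str]:
    """Run steps in order; a risky step executes only if confirm() approves it."""
    log = []
    for step in steps:
        if step.risky and not confirm(step.description):
            log.append(f"skipped: {step.description}")
            continue
        log.append(step.action())
    return log

# A mock travel-booking plan: searching is safe, anything charging a card is risky.
plan = [
    Step("search flights", lambda: "found flight MAD->SFO"),
    Step("book flight (charges card)", lambda: "booked flight", risky=True),
    Step("reserve hotel (charges card)", lambda: "reserved hotel", risky=True),
]

# A stand-in confirmation policy: here the "user" approves only flight charges.
log = run_agent(plan, confirm=lambda desc: "flight" in desc)
```

Where to draw the `risky` line—and who is liable when the gate is set wrong—is exactly the open question Kahn raises: today, companies largely push that decision, and its consequences, onto the user.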
What Happens When AI Answers Before the News Media Can?
Frances raises a concern shared by many journalists: more and more, when users search on Google, they’re getting AI-generated responses instead of a list of links. What does this mean for media outlets like The Washington Post or Fortune, which have long relied on traffic from search engines?
Jeremy Kahn confirms the impact is real. Google has already rolled out its AI Overviews—automated summaries that appear above traditional search results—and more recently, AI Mode, an even more advanced experience that can perform small tasks like booking a restaurant. In this setup, there are no visible links—and without clicks, media visibility plummets.
This shift threatens the business model of journalism. Previously, visits from Google were a vital source of audience. Now, with direct AI answers, that dependence is weakening. Google argues that users who do click are more engaged, but Kahn says many publishers remain unconvinced.
Faced with this new landscape, news outlets are starting to realize they can’t rely on organic search traffic. The priority now is building a direct relationship with readers—encouraging regular visits, boosting subscriptions, and becoming a daily news habit. But that’s no small feat.
Kahn warns that the threat isn’t just technical—it’s also perceptual. If an AI assistant can summarize the news effectively, why subscribe to a single outlet? Why pay for The New York Times if a machine can synthesize the best content from every outlet? As long as users feel the information is reliable, many won’t care where it comes from.
The takeaway is unsettling: the rise of “answer engines” presents a structural threat to journalism as we know it. And no one has a clear solution—yet.
How Does Our Relationship with AI Change When We Talk to It, Show It Things, and It Responds?
Our interaction with AI is quickly moving beyond the keyboard. Jeremy Kahn explains that today’s models are already multimodal: they can process voice, images, and even real-time video. That means you can not only type a question—you can speak to the AI, show it a picture, or stream what you’re seeing live.
For example: imagine fixing your bicycle. Instead of searching for a YouTube tutorial, you could activate your AI assistant, show it the problem via video, and get personalized instructions—what tool to use, how to adjust a part, what you’re doing wrong. Unlike traditional videos, this is a real-time conversation, tailored to your specific situation.
Kahn notes that this kind of natural, continuous interaction is creating a new demand: a dedicated device for interfacing with AI. That’s why OpenAI is working on proprietary hardware with Jony Ive—the designer of the iPhone—in a still-secretive project that has already attracted over $6 billion in funding.
Will it be a wearable pin, smart glasses, or a next-gen speaker? No one knows yet. But the goal is clear: to build a device that’s always on, sees what you see, and helps you in real time. Kahn cites the Ray-Ban Meta Smart Glasses as a step in that direction, but adds that other options are on the table, such as Alexa-style devices or new portable assistants.
Naturally, this opens a new set of questions around trust and reliability. These systems may be trained on manuals, videos, and books from around the world—but we still don’t know if their answers are consistently accurate or safe. As Kahn puts it, “the technology is astonishing, but we still haven’t fully solved the trust problem.”
What’s at Stake with Open-Source AI Models?
The rise of open-source—or more precisely, open-weight—AI models has been one of the biggest shifts in the past year. Jeremy Kahn begins by clarifying the terminology: while models like ChatGPT are closed and only accessible through an interface, open-weight models allow users to download and run the model parameters locally, modify them, and even use them offline.
Meta is leading this movement with its LLaMA family of models, but it’s not alone. In January, Chinese startup DeepSeek launched R1, a reasoning-capable (chain-of-thought) model that anyone can download for free and host on their own servers. DeepSeek claims the model was inexpensive to train, though many experts are skeptical of that claim.
The main advantage of open-weight models is control: they allow organizations to tailor AI to their specific needs without being tied to the pricing, terms, or limitations of providers like OpenAI, Google, or Anthropic. In theory, this can also reduce costs—but not always. Many companies find that once they factor in infrastructure and maintenance, running their own models can end up being more expensive than using commercial APIs.
There are also significant risks. Without the built-in safeguards of closed models, open-weight systems can be modified for harmful purposes—like generating malware or instructions for making weapons. All it takes is some technical know-how to disable the safety filters, turning the model into a dangerous tool. This is a major concern for AI safety and governance experts.
The battle between open and closed models is far from settled. While some argue that open access is essential for innovation and transparency, others warn of the dangers of unrestricted use. For Kahn, the most likely outcome is a hybrid ecosystem: companies will use open models for flexibility and closed models for robustness and security.
Who Really Benefits from AI? Key Takeaways from the Q&A with Jeremy Kahn
The webinar’s Q&A session tackled some of the most pressing and complex questions about AI’s impact:
Are we democratizing intelligence—or consolidating power?
Kahn acknowledged that, so far, AI development is concentrated in a handful of companies in the U.S. and China. Yet he also highlighted real-world stories of people using these tools to launch businesses or access knowledge that was once out of reach. AI can democratize information—but the question remains: does that balance out the power now held by Big Tech?
What happens if AI systems start ignoring human intent?
Kahn was candid: we’re not ready. Some models have already displayed deceptive behavior—hiding or fabricating actions—which raises serious concerns about reliability. He called for sensible regulation to ensure these systems are controllable and transparent before they reach the market.
How would international regulation even work?
It’s a major challenge, he admitted. Still, Kahn believes it’s possible to establish shared minimum standards for civilian uses of AI—while leaving military applications out of scope, at least for now. Europe is leading the way with its regulatory framework, while the U.S. is moving more slowly. Some experts warn that meaningful regulation won’t happen until a serious incident occurs.
What’s the future of advertising when agents do the searching?
This question struck a nerve. Kahn explained that brands are already investing in how to appear in chatbot-generated answers. A new discipline is emerging: Generative Engine Optimization (GEO), a next-generation version of SEO. But the real concern is transparency—will users know if an AI agent is recommending a brand because it’s good, or because it paid to be there?
What role can Southern Europe play in this new landscape?
On regional dynamics, Kahn emphasized that Southern Europe—and Spain in particular—has real opportunities. AI could help mitigate labor shortages and aging demographics, and sectors like agriculture stand to gain significantly from advances in robotics over the next decade. That said, we must factor in the environmental costs. The growth of data centers is already straining water resources in hot, dry areas, and there’s an urgent need to assess the sustainability of this technological race.
A Must-Watch Conversation About the Present (and Future) of AI
With clarity and evidence, Jeremy Kahn walked through the promises, risks, and open questions of the most transformative technological revolution of our time. From power concentration and environmental impact to ethical dilemmas and economic opportunities, the debate is wide open—and will remain so.
As Frances Stead Sellers noted at the close of the webinar, audience engagement was overwhelming. The conversation with Kahn made one thing clear: as a society, we’re facing critical decisions. Now is the time to engage—actively—in shaping the future that’s already unfolding.