Governing AI: The Challenge of Making Decisions with Machines in the Room

AI-generated summary

The integration of artificial intelligence (AI) as an active participant in high-stakes decision-making is increasingly becoming a reality in sectors like healthcare, justice, and emergency management. AI systems now do more than automate routine tasks; they influence strategic decisions by processing vast and varied data sources to provide recommendations on complex issues, such as patient prioritization in hospitals or policy responses to crises. This data-driven approach offers advantages like reduced errors, better resource allocation, and enhanced adaptability. However, challenges arise from AI’s dependence on data quality, algorithmic opacity, and its inability to replicate human intuition, ethical judgment, and contextual understanding.

Recognizing these limitations, strong governance frameworks are being developed globally to regulate AI’s use, especially in sensitive areas. The European AI Act and the Council of Europe’s Framework Convention emphasize transparency, fairness, human oversight, and rights protection, while countries like Colombia are adapting their policies accordingly. Compliance with these regulations not only avoids penalties but also builds trust and operational efficiency. The emerging consensus advocates a collaborative model where AI supports—rather than replaces—human decision-makers, freeing them to apply expert judgment and moral responsibility. Ultimately, the goal is to align AI development with human values, ensuring it enhances talent and well-being without undermining critical thinking or creating power monopolies.

Artificial intelligence is no longer just automating tasks; it is also starting to influence strategic decisions, from business investment to public policy.

Let us imagine a board of directors in which, among the human advisors, there is also an artificial intelligence system. Not as a passive observer, but as an active participant, capable of suggesting investments, warning about risks or prioritizing decisions. This scenario is no longer science fiction: it is happening in sectors such as health, justice and emergency management. But what actually happens when there’s a machine in the room where critical decisions are being made? How do we govern AI (without being governed by it)?

Until recently, artificial intelligence was mostly used to automate repetitive and structured tasks. Today, its role has been transformed: it has become an ally in high-impact decision-making. In fact, thanks to machine learning techniques, AI systems are no longer only processing large volumes of data, but are also beginning to influence strategic business and political governance choices: from which patient to see first in a saturated ICU to which policy to apply in the face of an energy or health crisis.

This phenomenon is part of a broader paradigm: that of data-driven decision making. According to the German Research Center for Artificial Intelligence (DFKI), this approach offers clear competitive advantages: it reduces errors, improves resource allocation, and enables a more agile response to changes in the environment. AI thus acts as a catalyst, processing not only structured data, but also unconventional information – such as digital interactions or environmental variables – to generate strategic insights.

AI in hospitals, courts, and crisis centers

The deployment of AI systems in sensitive sectors is already a reality. In medicine, algorithms are used to predict complications, assign hospital beds, or suggest personalized treatments. In March 2025, the European SHAIPED initiative – part of the European HealthData@EU infrastructure – began testing AI models in real-world clinical settings to support medical decisions with multi-source health data.

In China, since January 2025, the DeepSeek system has been applied on a large scale in tertiary hospitals, managing clinical flows and improving diagnostic accuracy. Meanwhile, the AI4PEP network, led from Latin America, seeks to democratize the use of AI to anticipate pandemics and manage public health crises, based on local science and principles of digital equity.

In the courts, the issue is more controversial. The COMPAS algorithm in the U.S. has been criticized for replicating racial biases when predicting criminal recidivism, despite not explicitly using ethnicity as a variable. In this regard, the Council of Europe, through the JuLIA project, has warned about the risks of algorithmic opacity in judicial processes and called for clear ethical frameworks to ensure impartiality and accountability.

The fact is that the promise of “rational” AI is based on its ability to consider millions of variables, evaluate multiple scenarios, and offer recommendations quickly. But that promise has limitations. The quality of algorithmic decisions depends on the quality of the input data: if the data is incomplete, biased, or outdated, the results will be just as flawed. In addition, many algorithms work like black boxes: they offer conclusions without clearly explaining how they were reached. This opacity generates mistrust, especially in areas such as health or justice, where understanding the basis of a decision is essential.
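By way of illustration, the contrast between an opaque model and an interpretable one can be sketched in a few lines of Python. Everything here is hypothetical – the patient data, the weights, and the function names are invented for the example – but it shows why the basis of a decision matters: a black box yields only a score, while an interpretable model exposes each feature’s contribution so the recommendation can be audited and contested.

```python
def black_box_score(patient):
    # Opaque model: the caller sees only the final number.
    # (Stand-in for a deep network whose internals are not inspectable.)
    return 0.3 * patient["age_norm"] + 0.6 * patient["lab_norm"] + 0.1

def interpretable_score(patient, weights):
    # Transparent linear model: each feature's contribution is explicit,
    # so a clinician or judge can examine the basis of the recommendation.
    contributions = {k: weights[k] * patient[k] for k in weights}
    return sum(contributions.values()), contributions

# Hypothetical, normalized patient features and model weights.
patient = {"age_norm": 0.8, "lab_norm": 0.5}
weights = {"age_norm": 0.3, "lab_norm": 0.6}

score, parts = interpretable_score(patient, weights)
print(f"score = {score:.2f}")
for feature, value in parts.items():
    print(f"  {feature}: {value:+.2f}")
```

Both functions may produce similar numbers; the difference is that only the second one answers the question regulators and affected citizens actually ask: *why* this score?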

Another key problem is the lack of “expert intuition.” An experienced manager, even without immediate access to all the data, can detect patterns, anticipate improbable scenarios and make creative decisions under ambiguous conditions. AI, for now, lacks that capability: it cannot distinguish contextual nuances, nor can it fully assume the ethical responsibility for its recommendations.

AI governance: between regulation and opportunity

The need for strong AI governance has thus become a strategic axis for both governments and companies. In 2024, the EU adopted the European AI Act (EU Regulation 2024/1689), which establishes differentiated obligations according to the level of risk of the system. For sectors such as health, justice or critical infrastructure, the legislation requires transparency, explainability, human oversight and control of bias.

In parallel, the Council of Europe’s Framework Convention on Artificial Intelligence, adopted in May 2024, links the development of AI to the protection of human rights and democracy. For their part, Latin American countries such as Chile, Mexico and Colombia are beginning to adapt their regulatory frameworks. Colombia, for example, included explicit algorithmic guidelines in its CONPES 4144 policy and in court rulings such as T-067/25.

Compliance with these frameworks will not only prevent penalties or rejected procurements: it will also translate into reputational trust and operational efficiency. In fact, as the FUTURE-AI initiative – made up of more than a hundred experts from 51 countries – underlines, companies that commit to auditable, human-centric systems will have a structural advantage over those that privilege only speed or scale.

In addition to laws, principles are needed. AI must be explainable, fair, secure, and auditable, and its development must be aligned with human values, not just efficiency goals. As the Bankinter Innovation Foundation’s Megatrends 2025 report highlights, one of the great challenges is to ensure that this technology is used to enhance talent and well-being, not to replace human judgment or create monopolies of computational power.

Thus, in the face of a polarized debate – utopians versus alarmists – a third way is emerging: a collaborative approach. AI should not replace humans, but free up time and energy for expert judgment to flourish. Theoretical models such as Cohen and March’s (1972) garbage can model, which describes decisions as chaotic and uncertain processes, can find in AI an ally that orders the “information chaos” without replacing critical sense.

To do this, it is key to redefine the role of the decision-maker, who should no longer be a mere receiver of data, but a strategic translator who interprets, validates and transforms algorithmic outputs into effective and humane decisions. Because AI can suggest, but it cannot yet take responsibility for a moral or creative choice.