Responsible AI: How to move from regulation to practice

AI-generated summary

The European Union’s AI Act, the world’s first comprehensive legislation dedicated to artificial intelligence, marks a shift from aspirational principles to enforceable obligations emphasizing rights protection, security, and transparency. Isabelle Hupont of the European Commission’s Joint Research Centre highlights the act’s citizen-focused, risk-based approach, underscoring that AI affects profound societal dimensions including justice, health, and fundamental rights. The regulation targets the use of AI rather than the technology itself, classifying applications into four risk categories—unacceptable, high, limited, and minimal—to balance innovation with necessary safeguards. High-risk AI systems, such as credit scoring or university admissions algorithms, face stringent requirements including data quality assessments, discrimination checks, and meaningful human oversight.

Implementing the AI Act extends beyond legal compliance, requiring organizations to address algorithmic biases, ensure transparency, and foster an internal culture of AI governance that integrates technical, ethical, and legal expertise. Challenges remain, as many European organizations are still unprepared to manage AI risks effectively. The act’s embrace of “blessed be the brake” reflects a strategic pause to build trust and improve quality, positioning Europe to lead not just in AI innovation but in creating a secure, ethical ecosystem where technology and citizen rights advance hand in hand.

How Europe has decided to act before artificial intelligence transforms entire sectors without clear rules.

With the approval of the AI Act – the world’s first comprehensive law dedicated specifically to artificial intelligence (AI) – the European Union is entering a stage in which the protection of rights, security and transparency are no longer general aspirations but verifiable obligations. The real challenge, however, begins now: how can this regulation be put into practice in companies, SMEs, public administrations and educational settings?

The vision of Isabelle Hupont, a researcher at the European Commission’s Joint Research Centre, provides a particularly useful framework for understanding this transition: an AI focused on citizens, not just users; a risk-based model; and a simple but powerful idea – “blessed be the brake” – because sometimes the regulatory pause is what allows us to build trust in the long term.

As Hupont explains, AI does not only affect how we use an application; it also touches profound dimensions of public life: security, justice, health, fundamental rights. Algorithmic decisions are already part of the social fabric, and that is why the subject of regulation must be the person in all their complexity, not the isolated consumer.

Europe, Hupont recalls, has been criticized for years for regulating too quickly or for getting ahead of technologies that are still immature. But the goal has never been to intervene in innovation itself, but to ensure that its deployment respects shared democratic principles. The regulation is thus understood as a coherent step within a broader strategy for protecting citizens in the digital age.

Regulating the use, not the algorithm: a “future-proof” regulation

One of the most important elements of the AI Act is its risk-based approach. The reason is simple: technology changes too quickly to regulate specific techniques that could become obsolete in a matter of months. That is why the EU decided not to focus on the algorithms themselves, but on what they are used for.

According to the official summary, the AI Act classifies AI systems into four levels of risk: unacceptable, high, limited, and minimal.

  • Minimal risk: practices with limited impact, such as unlocking a phone with facial recognition.
  • Limited risk: applications that must inform the user, but whose impact on rights is reduced.
  • High risk: where the core of the regulation is concentrated. Examples include credit scoring, where an algorithm decides whether or not to grant a loan, and access to university or similar selection processes, in which a wrong classification can violate fundamental rights.
  • Unacceptable risk: prohibited practices, such as social scoring systems based on individual behavior without transparency.

This tiered model balances innovation and security: not all applications require the same degree of control, but those that have a critical impact must meet strict standards. In high-risk cases, in particular, the regulation requires documenting how the system works, verifying the quality of the data, assessing possible discrimination and, above all, ensuring meaningful human oversight.
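To make that tiered logic concrete, the sketch below shows how an organization might track these obligations internally before deploying a high-risk system. It is a minimal, hypothetical illustration: the AI Act defines risk tiers and obligations in legal terms, not as a data structure, and every name here is invented for the example.

```python
# Illustrative sketch only: the AI Act states these tiers and obligations in
# legal terms, not as code. All names and fields here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class HighRiskChecklist:
    """Internal checklist mirroring the obligations mentioned above for
    high-risk systems: documentation, data quality, discrimination
    assessment and meaningful human oversight."""
    system_name: str
    intended_use: str
    risk_tier: RiskTier = RiskTier.HIGH
    operation_documented: bool = False
    data_quality_verified: bool = False
    discrimination_assessed: bool = False
    human_oversight_defined: bool = False

    def ready_for_deployment(self) -> bool:
        # Every obligation must be satisfied before the system goes live.
        return all([
            self.operation_documented,
            self.data_quality_verified,
            self.discrimination_assessed,
            self.human_oversight_defined,
        ])

checklist = HighRiskChecklist(
    system_name="credit-scoring-model",            # example use case from the article
    intended_use="decide whether to grant a loan",
    operation_documented=True,
    data_quality_verified=True,
)
print(checklist.ready_for_deployment())  # False: bias assessment and oversight still pending
```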

When Hupont says “blessed be the brake”, she does not suggest that technology should stop, but that a pause in the deployment of high-risk systems improves the quality of the final product and protects citizens from irreversible consequences. And although part of the public debate insists that Europe is “slowing down” innovation, a well-applied brake can become a competitive advantage. As with the GDPR, the introduction of rigorous standards raises quality, reinforces trust and facilitates adoption in critical sectors such as health or education.

Real practice: biases, human oversight and organisational culture

The move from theory to practice involves recognizing that algorithms are not neutral. Hupont gives a particularly clear example: biased automation in facial analysis systems. The researcher shows how these models can detect micro-expressions or facial features, but she also warns that facial recognition – especially when used in police contexts – can lead to misidentification and discrimination.
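One way to surface that kind of disparity is to compare error rates across demographic groups on an evaluation set. The sketch below is a minimal, hypothetical illustration of such a check: the groups and records are invented, and a real bias assessment would be far more thorough.

```python
# Minimal sketch: comparing false positive rates across demographic groups
# in an identification setting. The evaluation records are made up.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_match, true_match) tuples."""
    fp = defaultdict(int)    # false positives per group
    negatives = defaultdict(int)  # cases where there was no true match
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n > 0}

# Hypothetical evaluation log of a face-matching system
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rate_by_group(records))
# e.g. {'group_a': 0.33..., 'group_b': 0.66...} -> a disparity worth investigating
```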

The AI Act does not prohibit all forms of facial recognition, but it does impose strict limits in cases where it may violate fundamental rights. This is an essential red line to prevent automation from displacing human responsibility. The regulations also require organizations to learn more about their own systems. That means understanding how models are trained, what data they use, what biases they can incorporate, what control mechanisms they require, and how to communicate this to the citizen.

Applying these principles requires organizations to adopt internal AI governance processes. It is not enough to comply with the regulations in the abstract: datasets must be reviewed, traceability mechanisms must be established, models must be documented and, above all, teams must be set up that integrate technical, legal and ethical profiles. Although the AI Act does not explicitly mandate the appointment of an “AI ethics officer”, the spirit of the law goes in that direction: clear roles, defined responsibilities and a culture of transparency.
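As a concrete illustration of what such documentation and traceability might look like in practice, the sketch below keeps a minimal model record alongside the model artefacts. It is a hypothetical, model-card-style example, not a format prescribed by the AI Act; every field name and value is invented.

```python
# Sketch of a minimal model documentation record that a governance team might
# maintain. Field names and values are hypothetical, not mandated by the AI Act.
import json
from datetime import date

model_record = {
    "model": "admissions-ranking-v2",               # hypothetical system name
    "risk_tier": "high",
    "training_data": {
        "sources": ["applications_2019_2023"],       # provenance for traceability
        "known_gaps": "under-representation of part-time applicants",
    },
    "bias_checks": ["false positive rate by gender", "selection rate by region"],
    "human_oversight": "admissions officer reviews every automated rejection",
    "responsible_roles": {"technical": "ML lead", "legal": "DPO", "ethics": "AI ethics officer"},
    "last_reviewed": date.today().isoformat(),
}

# Persist the record next to the model artefacts so audits can trace decisions back to it.
with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)
```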

However, a Deloitte study on generative AI readiness and governance found that only 18% of European executives considered themselves “highly prepared” in risk management and AI governance. Many organizations still have a long way to go: the regulation may exist, but whether it is complied with in daily operations depends on governance, culture and technical knowledge.

The AI Act does not only call for legal compliance: it requires responsible management of a technology that is increasingly integrated into everyday life. And in this sense, the regulation does not seek to limit artificial intelligence, but to ensure that it is developed in a framework of trust, security and respect for fundamental rights. Europe is not competing only on the speed or size of the models, but on something more difficult to imitate: an ecosystem where technological innovation and citizen protection advance together.