Balancing without overdoing it: the pioneering law on AI

AI-generated summary

On March 13, 2024, the European Parliament approved the AI Act, the world’s first comprehensive law regulating artificial intelligence across sectors. This legislation aims to ensure that AI development prioritizes the protection of citizens’ fundamental rights, health, safety, democracy, and the environment, while still fostering innovation and maintaining a competitive internal market. Unlike other global powers, Europe emphasizes a human-centric and trustworthy AI, balancing technological advancement with strict safeguards on privacy and freedom. This approach reflects lessons learned from earlier regulations like the GDPR, which, despite initial criticism, enhanced users’ rights without harming the economy.

The AI Act adopts a risk-based framework, classifying AI systems into unacceptable, high, limited, and minimal risk categories, with corresponding obligations for providers and users. It prohibits unethical uses such as biometric categorization, subliminal manipulation, and social scoring, while requiring high-risk systems—like those monitoring exam candidates—to undergo external human rights impact assessments. By regulating AI based on its application rather than the technology itself, the Act is designed to be adaptable to rapid technological changes. The law underscores that AI must serve humanity, ensuring transparency and preventing discrimination, ultimately positioning Europe as a global leader in ethical AI governance.

The European Union has marked a milestone by approving the world's first legislation on artificial intelligence, the AI Act, with which it seeks to combine security and respect for individual freedoms.

To regulate or not to regulate, that is the dilemma, Shakespeare might have said in the age of artificial intelligence: Europe has decided to regulate. On March 13, 2024, the European Parliament approved the world’s first cross-cutting law on artificial intelligence: the AI Act, the result of lengthy negotiations between EU member countries. It is an important piece of a broader regulatory ecosystem, one that also includes the GDPR and aims to protect users’ fundamental rights.

“I prefer to talk about citizens rather than users,” clarifies Isabelle Hupont Torres, who holds a PhD in artificial intelligence, is a scientific officer at the Joint Research Centre of the European Commission, and is a member of the think tank of the Bankinter Innovation Foundation. Hupont provided scientific advice on the work that led to the AI Act, which emerged as a synthesis of a multidisciplinary debate among the sometimes divergent positions of the member states.

It would be impossible to summarize all the measures that have affected artificial intelligence at the European level in recent years: they span industry, public administration, the digital sector, and scientific research, and their ethical and political dimensions touch on privacy, the labor market, and human-machine interaction. The AI Act, however, is the first step toward solid, unified legislation, making Europe the first supranational entity to adopt a law regulating the defining technology of the 21st century.

For a human-centric AI

From the very beginning of the text, the focus on protecting citizens’ rights and privacy is evident. The first paragraph reads: “The purpose of this regulation is to promote the spread of trustworthy human-centric artificial intelligence, and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, as well as the environment, against the harmful effects of artificial intelligence systems in the Union, at the same time that innovation is supported and the functioning of the internal market is improved”.

The legislator’s first concern, therefore, is that artificial intelligence be human-centric and trustworthy; only secondarily is emphasis placed on innovation and the proper functioning of the market. Throughout the text there is an evident attempt to balance the demands of technological and economic competitiveness, so as to keep pace with other world powers, with the safeguarding of the EU’s fundamental values, especially those related to citizens’ privacy and freedom.

On the other hand, law and technology, with all their intersections with ethics, economics, and politics, are the two major fields in which the effective integration of artificial intelligence, innovation, the global market, and society is at stake. It is up to each state or international organisation to do its best to find the long-awaited balance. And if, compared to China and the United States, Europe is undeniably lagging behind technologically, this is partly due to its stricter legislation on rights and privacy, which nonetheless also produces virtuous effects.

The rights of individuals, first and foremost

The United States especially, but also some EU member countries, have criticized the European orientation, fearing it could slow innovation and create a competitive disadvantage for companies on the continent in developing and adopting this technology. But there is the precedent of the GDPR, whose arrival six years ago drew the same criticism yet produced no collapse of the European economy, while serving to correct some distortions generated by Big Tech. Some services arrived in Europe a few weeks later than elsewhere, but substantially transformed and improved, as in the case of the ChatGPT and Replika applications.

Hupont’s personal opinion on the matter is categorical: “The risk that the AI Act acts as a deterrent exists, but I prefer the European perspective that puts fundamental human rights first. In addition, in my opinion, it is a competitive advantage to release ethical algorithms, even if they take six more months to come out. In that sense: blessed be the brake.” Indeed, the AI Act, Hupont confirms, “is very human-centric, in the sense that, for example, if I am denied credit, I want to know why. It is not enough to be told that an algorithm said so. Supervision is therefore necessary, along with a series of rules that protect citizens from discrimination and abuse.”

Europe has also been criticized for choosing to regulate something still in flux: many consider it too early and argue that we should wait for the technologies to reach the market. In reality, it may even be too late, since AI is already among us, and the consequences of the slowness in regulating social networks are obvious to everyone, with risks on the democratic front as well.

The AI Act seeks a balance between the need to regulate with clear rules and the need for enough flexibility that the law does not have to be rewritten every time a new technology emerges. As Hupont explains, “When it came to writing it, legislators were faced with the problem of a technology that surprises you every month with a new algorithm capable of doing even more sophisticated things. And while technology evolves rapidly, humans do not evolve so quickly.”

The risk-level approach

However, according to the researcher, the approach adopted is the right one: “The AI Act is what we call ‘future proof’, because legislators did something that I consider very intelligent: they decided not to regulate the technology, the algorithm, but the use that will be made of it. Based on this use, a risk level is assigned that determines whether or not an AI system must demonstrate compliance with the regulation.” This approach serves to strike the balance according to a criterion of proportionality.

The AI Act is based on the principle that technology must be developed and used safely and ethically. For this reason, it classifies AI systems according to their level of risk to the safety and rights of individuals, and sets out a series of requirements and obligations for the providers and users of such systems. The regulatory framework distinguishes four categories of risk: unacceptable, high, limited, and minimal.
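The use-based logic Hupont describes can be pictured as a mapping from applications to risk tiers, each tier carrying its own obligations. The sketch below is purely illustrative: the use-case names, the tier descriptions, and the `classify` helper are invented for this example, and the Act's actual annexes are far more detailed than any toy lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "external assessment and oversight required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of uses to tiers, drawing on examples
# mentioned in the article (not the Act's legal taxonomy).
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "exam proctoring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a given *use* of an AI system.

    The key point of the Act's design: the same underlying
    algorithm can fall into different tiers depending on how
    it is deployed, which is what makes the law 'future proof'.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("social scoring").value)   # prohibited outright
print(classify("exam proctoring").value)  # external assessment and oversight required
```

Note how the design choice mirrors the Act: regulating deployment contexts rather than algorithms means new models slot into the existing tiers without requiring new legislation.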

Looking at the definition of uses considered unacceptable, one notices the marked attention to the ethical dimension: consider the prohibition on using biometric categorization to extract sensitive data, on employing AI techniques to subliminally manipulate people’s decisions or behavior, or on using artificial intelligence for social scoring.

AI systems considered high-risk (such as those used to monitor students during exams or candidates during job interviews) must undergo an external assessment of their impact on fundamental human rights. The AI Act is a first but substantial reminder that technology is at the service of human beings, and not the other way around. And that means, above all, respect for fundamental rights.