
Algorithm audits for better artificial intelligence

Artificial intelligence must become more humane, particularly as it takes part in decisions that affect us. Algorithm audits are designed to ensure exactly that.

We rely more and more on artificial intelligence, even though we may not always be aware of it. We follow its recommendations about which show to watch next, let it adjust our workouts with data from our wearables, and even use it to help educate our children. More importantly, companies and public organizations increasingly use it to engage with us.

The impact of artificial intelligence on our lives is already very significant, and it will multiply in the coming years. This raises important questions: to what extent will an algorithm treat us fairly? To what extent are algorithm-driven decisions ethical? And, above all, who can ensure that they are?

Algorithm auditing for more ethical artificial intelligence

Algorithm auditing is the industry's answer for verifying that an algorithm's impact on our lives will indeed be positive. To this end, some companies are dedicated to combining social science with technological development, ultimately building tools grounded in ethical principles.

In Spain, some auditing firms already specialize in this area, and a Spanish Artificial Intelligence Oversight Agency will be created to audit the artificial intelligence algorithms that affect citizens.

Algorithm audits usually involve several phases, which include checking the type of data an algorithm uses, the impact that algorithm has on different groups, and how humans interact with the output the algorithm delivers.

Algorithm audits also check that algorithms comply with the legislation in force and help identify the risks their application poses to the company's or organization's business. This evaluation and measurement must continue over time, and corrections must be made whenever necessary.
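As a rough illustration, here is a minimal Python sketch of how an audit team might turn these phases into a working checklist. The phase names and questions are illustrative, not a formal standard.

```python
# A minimal sketch of the audit phases described above, organized as a
# checklist an audit team could walk through. Names and questions are
# illustrative, not a formal methodology.
audit_plan = {
    "Data review": [
        "What data does the algorithm use, and where does it come from?",
        "Are protected attributes present, proxied, or absent?",
    ],
    "Impact on groups": [
        "How do the algorithm's outcomes differ across demographic groups?",
    ],
    "Human interaction": [
        "How do people act on the output the algorithm delivers?",
    ],
    "Legal compliance and business risk": [
        "Does the system comply with the legislation in force?",
        "What risks does its application pose to the organization?",
    ],
}

# Walk the checklist, e.g. to seed the questions section of an audit report.
for phase, questions in audit_plan.items():
    print(phase)
    for question in questions:
        print(f"  - {question}")
```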

Questionable data quality

The quality of the data processed by algorithms is currently one of artificial intelligence's biggest problems. For this reason, evaluating this data is one of the main points of an algorithm audit.

For Peter Eckersley, co-founder of the AI Objectives Institute and a member of our think tank, the Future Trends Forum, algorithm biases are largely due to the fact that algorithms are, in turn, fed by biased data sources.

“Algorithms can make bad decisions if they are based on poor-quality data, if they cannot identify the most important causal variables, or if they cannot explicitly capture the complexity of decisions with uncertain or competing objectives,” he explains.

The expert adds that some of the more obvious responses to these difficulties can actually make the problem worse. “For example, trying to bypass an algorithm’s bias by removing data from protected categories is worse than including that data and then correcting for it. Likewise, choosing a single type of correction can be much worse than combining different ways of measuring an algorithm’s fairness. However, many organizations and companies have not yet understood this.”
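To see why combining measures matters, the following minimal Python sketch evaluates the same predictions under two common fairness metrics: demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true-positive rates). The data is hypothetical and hand-built so that the two metrics disagree.

```python
# A minimal sketch (hypothetical, hand-built data) of why a single fairness
# measure is not enough: the same predictions can look fair under one metric
# and clearly unfair under another.

# Two groups of ten people each: true outcomes and model predictions.
group_a = {"y_true": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
           "y_pred": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]}
group_b = {"y_true": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
           "y_pred": [1, 0, 0, 0, 0, 1, 1, 1, 1, 0]}

def selection_rate(g):
    """Share of people in the group who receive a positive prediction."""
    return sum(g["y_pred"]) / len(g["y_pred"])

def true_positive_rate(g):
    """Among group members whose true outcome is positive, the share predicted positive."""
    positives = [pred for true, pred in zip(g["y_true"], g["y_pred"]) if true == 1]
    return sum(positives) / len(positives)

# Demographic parity: both groups get positive predictions at the same rate.
dp_gap = abs(selection_rate(group_a) - selection_rate(group_b))
# Equal opportunity: but deserving members of group B are far less likely
# to be correctly identified.
eo_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 -> looks "fair"
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.80 -> clearly unfair
```

An audit that reported only the first gap would declare this model fair; the second shows it systematically fails one group.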

It is at this point that algorithm auditing—performed by professionals who know these risks well and can therefore help detect and correct them—can be particularly useful.

“An ethical algorithm audit can help with all this, especially if it encourages the teams that build and deploy decision-supporting algorithms to be aware of the negative impact their work can have on citizens, customers, and others affected by these decisions. These audits may also encourage institutions to be more careful when approaching the automation of these types of tasks,” says Eckersley.

The consequences of bias

This last issue, automating decision-making, is key at a time when this task is increasingly being delegated to artificial intelligence.

For the expert, “modern bureaucracies are increasingly looking to automate decision-making about human beings, or at least to do so partially. This extends across a wide range of contexts, from admission to educational institutions to access to jobs, loans, insurance, and social services. It even reaches court rulings and pre-trial phases. In all of these, we see governments and private companies making greater use of artificial intelligence and other algorithmic decision-making tools.”

The consequences of doing this work without proper oversight are already becoming apparent. For example, minority neighborhoods in the U.S. pay more for auto insurance than predominantly white neighborhoods, as Eckersley himself reflected in the Future Trends Forum’s Disruptive Business Models report.

How to create more ethical artificial intelligence

One of the main questions when building more ethical artificial intelligence is where to start. Peter Eckersley proposes acting on two levels.

“At a higher level, we at the AI Objectives Institute believe that, if we want artificial intelligence to truly reflect human priorities and needs, it is essential for market incentives to do the same,” says the expert.

This means ensuring that for-profit entities have sufficient incentives to allocate their resources to issues such as economic justice, equality, or the generation of public and environmental benefits. Those incentives should also encourage complex feedback loops about the impacts on their employees, consumers, and communities. “If those market incentives are good, entrepreneurs and investors will work to build artificial intelligence that meets those needs.”

At a lower level, Eckersley points to a host of technical details that need to be managed well from the start, an area where algorithm auditing can help: selecting data sources and quality metrics correctly, consulting the right affected groups before making decisions, choosing multiple measures of unintended consequences, and defining precise optimization objectives.
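As one small illustration of that last point, the sketch below (with purely hypothetical numbers and weighting) shows what a precise optimization objective can look like when it explicitly trades accuracy off against a measured disparity between groups, rather than optimizing accuracy alone.

```python
# A minimal sketch (illustrative only) of a "precise optimization objective":
# the score to maximize penalizes a measured gap between groups instead of
# rewarding raw accuracy alone. The penalty weight is a hypothetical choice.
def audit_aware_objective(accuracy: float, group_gap: float, penalty: float = 2.0) -> float:
    """Score to maximize: predictive accuracy minus a weighted fairness gap."""
    return accuracy - penalty * group_gap

# Two hypothetical candidate models: the slightly less accurate but far
# fairer model wins under this objective.
print(audit_aware_objective(accuracy=0.92, group_gap=0.15))  # 0.62
print(audit_aware_objective(accuracy=0.89, group_gap=0.04))  # 0.81 -> preferred
```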

How to measure the social impact of an algorithm well

To improve the social impact of algorithms, a major objective of audits, establishing metrics that capture that impact is not enough. According to Peter Eckersley, the people affected by an algorithm must also be informed about what living with it entails.

“Algorithms with greater potential impact should have dedicated teams that study the data and make adjustments as soon as something seems to be going wrong,” he says.

Beyond that, the expert advocates measuring the impact of algorithms in several ways, within a strategy that includes primary measures for intended impacts and secondary measures that cover unintended consequences. The goal is to leave nothing to chance, which is essential if we aim to make artificial intelligence as humane as possible.
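As a rough illustration of that strategy, the following minimal Python sketch monitors a hypothetical loan-approval model: one primary metric tracks the intended impact, two secondary metrics watch for unintended consequences, and a breach of any threshold escalates to human review. All metric names and thresholds are invented for the example.

```python
# A minimal sketch (hypothetical metrics and thresholds) of the strategy
# described above: primary measures for intended impacts, secondary
# measures for unintended consequences, and escalation on any breach.

# Hypothetical weekly snapshot of a loan-approval model.
snapshot = {
    "approval_accuracy": 0.91,   # primary: intended impact
    "appeal_rate": 0.07,         # secondary: unintended friction for applicants
    "group_approval_gap": 0.12,  # secondary: disparity across groups
}

# Each metric gets a direction ("min" must stay above, "max" must stay below).
thresholds = {
    "approval_accuracy": ("min", 0.85),
    "appeal_rate": ("max", 0.05),
    "group_approval_gap": ("max", 0.10),
}

def review_needed(values, limits):
    """Return the metrics that breached their threshold."""
    breaches = []
    for metric, (kind, limit) in limits.items():
        value = values[metric]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            breaches.append((metric, value, limit))
    return breaches

# Anything flagged here goes to the dedicated team Eckersley describes.
for metric, value, limit in review_needed(snapshot, thresholds):
    print(f"ALERT: {metric}={value} breached limit {limit} -> escalate to audit team")
```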
