Artificial Intelligence and governance

AI-generated summary

Artificial intelligence (AI) governance refers to the framework of interactions among businesses, governments, and scientific communities shaping the development and progress of AI technology. The governance of AI, like any transformative technology, faces challenges due to uncertainties stemming from balancing its vast opportunities against associated risks. As a General-Purpose Technology (GPT), AI has the potential to revolutionize many aspects of society, from communication to energy, but its rapid expansion complicates centralized control and governance.

Currently, AI development is dominated by strategic competition among governments and major global corporations, including Amazon, Apple, Alphabet, Microsoft, Facebook, Alibaba, and Tencent. These companies invest heavily in AI research, making centralized governance difficult. To address this, the Partnership on AI consortium was created, allowing key players to collaborate on promoting innovation and regulating AI to prevent misuse. However, there is concern about the privatization of AI governance, risking decisions being made without transparency or democratic accountability. Additionally, AI systems often reflect human biases embedded in their training data, challenging the notion that AI inherently surpasses human intelligence. Experts caution against “governance by numbers,” where reliance on metrics oversimplifies complex human intelligence and ethical considerations in AI development.

The challenges of AI governance stem from the balance between the opportunities it presents and the associated risks.

We define artificial intelligence governance as the model of relationships among the businesses, governments, and scientific communities involved in the progress and development of this technology.

The governance challenges of any technology, particularly artificial intelligence, are related to the uncertainty arising from the trade-off between its potential opportunities and associated risks.

AI's Governance

1. Artificial intelligence is a General-Purpose Technology (GPT). This type of technology transforms the key functions of civilization, from energy production to interpersonal communication. GPTs spread their functions rapidly throughout society; their economic potential is exciting, but that very breadth makes centralized control, or centralized governance objectives, difficult to achieve.

2. Artificial intelligence is currently caught up in an environment of strategic competition. Both governments and large global companies are investing substantial resources in its development. Seven of the ten largest corporations in the world by market value place artificial intelligence at their very core (Amazon, Apple, Alphabet, Microsoft, Facebook, Alibaba and Tencent), and they invest large sums of money in R&D, making centralized governance hard to achieve.

That is the reason behind the creation of the Partnership on AI, a consortium in which all of these companies (except Tencent) have joined forces to debate how to promote and regulate innovation and governance in artificial intelligence while, at the same time, preventing its potential misuse.


3. The privatization of artificial intelligence governance. Our experts warn of the risk that the private sector will capture the public agenda and that regulations will be drafted without transparency, accountability, or democratic oversight. The danger is that technology empires end up making the decisions about a technology that affects us all.


4. The "Rule of Technology". In his essay Manufacturing an Artificial Intelligence Revolution, biologist Yarden Katz describes the false impression that current systems have exceeded human capabilities to the point that we believe machines can handle many domains better than we can. These claims rest on a narrow, radically empirical view of human intelligence, which is what Alain Supiot dubbed "governance by numbers": a way of constraining thought through reliance on metrics.

Our expert David Weinberger points out that artificial intelligence relies on machine learning systems that learn from the data we feed them, which often reflects existing prejudices, biases, and human subjectivity. We use these systems because we believe their results are better than those humans could produce, but sometimes that superiority is merely a matter of greater speed and lower cost.