Learning with machines: algorithmic literacy for the digital age

AI-generated summary

Understanding algorithms is becoming as essential as reading and writing: the goal is not to turn everyone into programmers but to equip citizens with the ability to interpret how artificial intelligence (AI) shapes their perceptions, decisions, and beliefs. In 2025, the European Union is promoting advanced AI skills and computational thinking through the Digital Skills & Jobs Platform, marking algorithmic literacy as a fundamental element of digital citizenship. Algorithms already influence numerous aspects of daily life, from news feeds and job opportunities to how classes are organized, yet most people lack insight into how these systems work or how they shape their experience, leaving them vulnerable to misinformation and eroding their cognitive autonomy. Learning to coexist critically with AI-driven decision-making systems is therefore crucial.

International organizations like UNESCO and the EU emphasize three core competencies in AI literacy: understanding algorithm functions, critically assessing their biases and limitations, and using AI tools responsibly. Research shows that greater algorithmic understanding enhances individuals’ ability to detect misinformation and navigate digital content more wisely. Schools, universities, and businesses are integrating AI literacy into curricula and training, underscoring that frequent exposure alone does not guarantee comprehension; guided reflection and conceptual frameworks are necessary. Innovative educational projects employing Explainable AI aim to increase transparency by clarifying how AI recommendations are generated, fostering autonomy rather than blind acceptance. Ultimately, algorithmic literacy empowers humans to collaborate intelligently with machines, recognize inherent biases, and build ethical technological cultures, preparing society to actively engage in a future shaped by human-machine co-evolution.

Understanding how machines think is already an indispensable citizen skill: without that knowledge, we lose autonomy and opportunities.

Understanding how algorithms work will soon be as essential as knowing how to read or write: not to turn everyone into programmers, but to train citizens capable of interpreting how artificial intelligence influences what they see, decide, and come to believe. In 2025, the EU will reinforce this idea with the launch of the new Digital Skills & Jobs Platform, aimed at advanced skills in AI and computational thinking. So-called algorithmic literacy thus ceases to be an academic concept and becomes a necessary condition for digital citizenship.

Today, algorithms select the news we read, the results we see when searching for information, the career opportunities that appear on job platforms, and even the way a university class is organized with the support of AI. Yet most people cannot explain how these systems work or how they condition their experience. This lack of understanding opens the door to misinformation, polarization, and the loss of cognitive autonomy. We therefore need to learn to coexist critically with machines that make decisions alongside us.

Towards full algorithmic literacy

In recent years, international organizations such as UNESCO have published reference frameworks to guide this emerging competence. UNESCO’s AI literacy proposal defines three core capabilities: understanding what algorithms do, critically evaluating their limits and biases, and using AI-based tools safely and responsibly. The European Union, for its part, incorporates these principles into its initiatives on advanced digital skills, placing algorithmic literacy on a par with other fundamental skills for citizens in the 21st century.

Recent research reinforces this urgency. Studies of young people in digital environments show that those who better understand how algorithms operate are also better able to detect disinformation dynamics and to recognize the mechanisms that order and prioritize content on digital platforms. Understanding that systems recommend information based on our behavior is key to making more informed decisions.
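
To make that idea concrete, here is a minimal, hypothetical sketch in Python of behavior-based ranking; the field names and scoring rule are illustrative assumptions, not any real platform’s logic. Content similar to what a user has clicked before rises to the top of the feed, which is also why two people looking for the same topic can see different results.

```python
# Hypothetical sketch: rank a feed by overlap with the topics a user
# has clicked on before. Weights and fields are illustrative only.
from collections import Counter

def rank_feed(articles, click_history):
    """Order articles by how much they match the user's past clicks."""
    # Count how often each topic appears in the user's click history.
    topic_affinity = Counter(
        topic for item in click_history for topic in item["topics"]
    )

    def score(article):
        return sum(topic_affinity[topic] for topic in article["topics"])

    return sorted(articles, key=score, reverse=True)

history = [{"topics": ["sports", "politics"]}, {"topics": ["sports"]}]
feed = [
    {"title": "Science breakthrough", "topics": ["science"]},
    {"title": "Transfer rumors", "topics": ["sports"]},
]
print([a["title"] for a in rank_feed(feed, history)])
# ['Transfer rumors', 'Science breakthrough']: past behavior shapes what appears first.
```

A user with a different click history would see the same two articles in a different order; nothing about the articles themselves changed, only the behavioral signal feeding the ranking.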

Schools and universities are also adapting. The European Commission and the OECD have released an AI literacy framework for primary and secondary school that proposes integrating algorithmic thinking, data ethics, and critical analysis of automated systems across different subjects. Some schools are experimenting with exercises inspired by these frameworks, such as analyzing why two people get different results when searching for the same news, or how a platform decides which videos to show first.

In parallel, more and more universities include training on AI, algorithmic biases, and the critical evaluation of automated systems in undergraduate and postgraduate degrees in social sciences, communication, law, and education. These experiences show that daily exposure to algorithms does not guarantee understanding them: interpreting the logic of a “feed” requires guided reflection, specific vocabulary, and an adequate conceptual framework.

The business sector is moving in the same direction. For many organizations, algorithmic literacy has become a strategic requirement for decision-making, talent selection, risk management, and process automation. The Digital Skills & Jobs Platform brings together executive training resources and programs focused on AI literacy, and several companies have added specific modules so that managers and teams can correctly interpret the systems they use to process data or support internal decisions.

From literacy to human-machine co-evolution

Algorithmic literacy is not only an educational challenge: it is also a field of innovation. In the technological and university spheres, projects are emerging that develop cognitive tutors based on Explainable AI (XAI), designed to show why an answer is suggested, how a line of reasoning was generated, or what data supports a recommendation. These tools seek to increase transparency and allow students to understand the logic of the system rather than accept it uncritically.
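
The transparency idea is simple to sketch. Below is a minimal, hypothetical Python example of an explainable recommendation step: alongside the chosen activity, the tutor surfaces the signals that produced the choice. The scoring rule and field names are assumptions for illustration, not the design of any specific XAI project.

```python
# Hypothetical sketch: recommend a learning activity and expose the
# signals behind the choice. Signals and weights are illustrative only.

def recommend_with_explanation(student, activities):
    """Pick the next activity and report which signals drove the choice."""
    best, best_score, best_reasons = None, float("-inf"), []
    for activity in activities:
        score, reasons = 0.0, []
        # Signal 1: the activity targets a skill the student recently missed.
        if activity["skill"] in student["recent_errors"]:
            score += 2.0
            reasons.append(f"targets '{activity['skill']}', missed recently")
        # Signal 2: difficulty should sit close to the student's estimated level.
        gap = abs(activity["difficulty"] - student["level"])
        score += 1.0 - gap
        reasons.append(f"difficulty gap of {gap:.1f} from estimated level")
        if score > best_score:
            best, best_score, best_reasons = activity, score, reasons
    return best, best_reasons

student = {"level": 0.6, "recent_errors": {"fractions"}}
activities = [
    {"name": "Fraction drills", "skill": "fractions", "difficulty": 0.5},
    {"name": "Geometry intro", "skill": "geometry", "difficulty": 0.9},
]
choice, why = recommend_with_explanation(student, activities)
print(choice["name"], "->", "; ".join(why))
# Fraction drills -> targets 'fractions', missed recently; difficulty gap of 0.1 ...
```

Because the reasons travel with the recommendation, a student or teacher can inspect and challenge them, which is precisely the autonomy these projects aim to preserve.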

European research projects, such as those linked to educational platforms that incorporate XAI, work on models that adjust content according to the student’s behavior and explain why a specific activity is proposed. This transparency is essential to prevent personalization from leading to an educational “black box.” Students need to understand not only what a system recommends, but why. Only then can AI support autonomy rather than replace it.

The ultimate goal of algorithmic literacy is not to learn to distrust AI, nor to delegate everything to it, but to collaborate intelligently with machines while keeping human agency at the center. This means recognizing that algorithms are not neutral, that they respond to economic and social incentives, and that they can amplify inequalities if they are not carefully designed and monitored.

Today, knowing how to read algorithms means knowing how to read the world: detecting biases, understanding how automated decisions are made, anticipating impacts, and building more ethical technological cultures. Spaces for reflection such as the Future Trends Forum of the Bankinter Innovation Foundation contribute to this task: forming a collective intelligence that is more critical, more aware, and better prepared for a future in which humans and machines will learn (and decide) together. Because algorithmic literacy is not just about understanding technology. It is about understanding the present. And, above all, about preparing to participate fully in the digital future.