AI-generated summary
The Bankinter Innovation Foundation’s Akademia programme is distinguished by its rigorous student selection, cutting-edge curriculum, and exceptional faculty, fostering innovation-driven graduates ready to apply creative solutions in their fields. Aldan Creo, an alumnus and AI expert at Accenture Labs – The Dock, exemplifies this success. With top academic honors in Computer Engineering and international experience, Aldan transitioned into AI through a fortuitous research project in machine learning during his studies. His work spans natural language processing (NLP) and knowledge graphs, focusing on structuring complex data and enhancing AI language models to reduce inaccuracies known as hallucinations by integrating semantic knowledge with structured data.
Aldan highlights key challenges in NLP, including model hallucinations, scalability, and explainability, proposing hybrid approaches combining language models with knowledge graphs to improve accuracy, transparency, and data handling. His research has real-world applications in personalized medicine, climate prediction, and content recommendation systems, demonstrating AI’s broad transformative potential. Looking ahead, he foresees gradual, steady advancements integrating AI more deeply into daily life and business, emphasizing the importance of understanding AI’s historical and technical foundations for future innovators. He advocates for strengthened academia-industry collaboration through joint research and knowledge transfer programs, with initiatives like Akademia playing a pivotal role in bridging theoretical and practical advances in AI.
Aldan Creo is innovating in natural language processing, working to make systems more scalable, more understandable, and more reliable
At the Bankinter Innovation Foundation, we are very proud of the alumni who have been part of our Akademia programme.
The programme's uniqueness lies in its design and execution: it ranges from a meticulous student selection process to a practical, cutting-edge approach to the content of the classes, complemented by the excellence of the teachers. The result is students who are enthusiastic about innovation, ready to bring new ideas and creative solutions to their fields of expertise.
On this occasion we interviewed Aldan Creo, a former student of Akademia and expert in artificial intelligence. Aldan is a Technology Research Specialist at Accenture Labs – The Dock, with an impressive academic and professional track record in the field of artificial intelligence, especially in natural language processing and knowledge graphs.
Aldan graduated first in his class in Computer Engineering from the University of Santiago de Compostela. In addition, he completed exchange programmes at the Università della Svizzera Italiana (USI) and at the Sorbonne Université, standing out for his high grades at both institutions.
After graduation, Aldan moved to Dublin to focus on AI research applied to knowledge graphs at The Dock, Accenture's largest global AI research lab. Previously, he participated in research projects at the Singular Centre for Research in Intelligent Technologies (CiTIUS) and Gradiant, focused on the analysis of Large Language Models (LLMs), that is, models of the ChatGPT type, and on the implementation of text summarization algorithms. He has also contributed to open source through Google Summer of Code.
Aldan has been recognized with several awards and scholarships for his academic excellence and contributions to the field of AI. Among them, the Extraordinary Degree Award for graduating first in his class and his selection for the Nova 111 Student List in Spain stand out.
Below, we summarize the interview we had with Aldan:
Experience at Akademia: How did your participation in the Bankinter Foundation’s Akademia programme influence your professional career and academic interests, especially in the field of artificial intelligence?
For me, the experience was deeply enriching in three key aspects. Firstly, it allowed me to meet people with similar interests, especially in the field of entrepreneurship and, more specifically, in artificial intelligence. This turned out to be of great value, as it helped me build an active network of contacts related to my interests. That is crucial, as it makes you feel supported in your endeavors and connects you with people who share your passions.
My involvement in Akademia began in my final year of university, while I was finishing my final degree project (TFG), focused on natural language processing (NLP). Academic research can often be quite isolating, focusing exclusively on results and publications in a closed environment. However, Akademia helped me expand my vision, to understand how my research could be applied in real-life situations and have a significant impact. It was a revelation to realize that the work I was doing could transcend academic boundaries and contribute in a concrete, tangible way to society. This is the second aspect.
And the third fundamental aspect that marked me about Akademia was the unique perspective it provides on entrepreneurship. Often, we can have a somewhat limited or distorted vision of what entrepreneurship entails. The programme helped me understand that true entrepreneurship is based on developing a sustainable proposal, that is, an idea that not only has a momentary or ephemeral impact, but that lasts over time and has a relevant meaning. This new vision of entrepreneurship has been extremely valuable to me. Rather than focusing on the mere act of starting a business, the programme emphasizes the importance of sustainability and the evolution of ideas. Fernando Alfaro continually insisted on this point: the idea that entrepreneurship should not be an end in itself, but a means to create something that not only survives, but also adds value in a continuous and progressive way.
Transition to AI: With a strong background in computer science, how was your transition to specializing in artificial intelligence?
My transition to specialising in artificial intelligence began quite unexpectedly and fortuitously during my Erasmus stay at the Sorbonne Université in Paris. There, I had the opportunity to collaborate with a professor on a research project in artificial intelligence, specifically in machine learning. Despite having no previous experience in AI, I accepted the challenge and immersed myself in research on autonomous agents learning to solve a virtual game using reinforcement learning, a machine learning method based on rewarding desired behaviors and punishing unwanted ones: the agent (the entity being trained) perceives and interprets its environment, performs actions, and learns by trial and error. The interesting thing about this project was the application of experimental optimizations to a technique called population-based training, in which various trained agents undergo an evolutionary process: the most efficient agents replace the less effective ones, but with variations in their parameters to promote adaptability and continuous improvement. This approach was my introduction to AI research, and I was fascinated by the ability of this technology to transcend borders and offer new solutions.
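The population-based idea described above can be sketched in a few lines of Python. This is a deliberately toy version: the one-dimensional "parameter", the fitness function, and the mutation scale are all invented for illustration and are not taken from the actual project.

```python
import random

def evaluate(params):
    # Toy fitness: how close the parameter is to an (unknown) optimum of 0.7.
    return -abs(params - 0.7)

def population_based_step(population, mutation_scale=0.05):
    """One evolutionary step: the worst half is replaced by the best half, with noise."""
    ranked = sorted(population, key=evaluate, reverse=True)
    half = len(ranked) // 2
    survivors = ranked[:half]
    # Replace the least effective agents with mutated copies of the best ones.
    replacements = [p + random.uniform(-mutation_scale, mutation_scale)
                    for p in survivors]
    return survivors + replacements

random.seed(0)
population = [random.random() for _ in range(8)]
for _ in range(50):
    population = population_based_step(population)

best = max(population, key=evaluate)
```

Because the top half always survives unchanged, the best agent's fitness never decreases, while the mutated copies keep exploring nearby parameter values.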
My interest in AI increased when, while preparing a research report, I noticed the absence of AI tools capable of summarizing scientific papers for a specific use case. This gap inspired me to focus my final degree project (TFG) on the summarization of scientific texts, within the field of natural language processing, just a few months before technologies such as ChatGPT began to gain relevance.
My foray into artificial intelligence was therefore a combination of chance, curiosity and the identification of an unmet need. Upon completion of my degree, I continued my journey in NLP, which eventually led me to join Accenture in their research lab. It was a transition of exploring different fields within AI, from reinforcement learning to NLP, always driven by the search for practical and meaningful applications of this technology.
AI Research at Accenture: In your current role as an AI researcher at Accenture The Dock, what specific natural language processing and knowledge graph projects have you worked on and what have been your main contributions?
I have focused mainly on two areas: natural language processing and knowledge graphs. One of the key areas in which I have worked is the structuring of complex data through knowledge graphs. This involves the creation of interconnected networks of information, where each node represents a unique entity, such as a person or an event, and the edges represent the relationships between these nodes. For example, in a health context, we could construct a graph that connects patients, their medical conditions, locations, and other relevant factors. Using techniques such as knowledge graph embedding algorithms, which learn internal representations of these concepts and relationships, it is possible to uncover significant associations, such as the relationship between certain genes and diseases, which is crucial in pharmaceutical research. These algorithms help significantly reduce the time and resources needed to identify possible genetic relationships with certain diseases, speeding up the development of new drugs.
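As a rough illustration of the idea, a knowledge graph can be represented as a set of (head, relation, tail) triples and queried for connected entities. All entity and relation names below are invented for the example; real systems then learn vector embeddings on top of such triples rather than querying them directly.

```python
# A knowledge graph as (head, relation, tail) triples; all names are illustrative.
triples = [
    ("patient_1", "diagnosed_with", "diabetes"),
    ("patient_2", "diagnosed_with", "diabetes"),
    ("patient_2", "lives_in", "Dublin"),
    ("gene_TCF7L2", "associated_with", "diabetes"),
]

def neighbours(entity, relation=None):
    """Entities connected to `entity`, optionally filtered by relation type."""
    out = set()
    for h, r, t in triples:
        if relation is not None and r != relation:
            continue
        if h == entity:
            out.add(t)
        elif t == entity:
            out.add(h)
    return out

# Who or what is linked to "diabetes"?
linked = neighbours("diabetes")
```

The same triple structure is what embedding algorithms consume as training data, turning each entity and relation into a vector that supports prediction of missing links.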
In terms of natural language processing, we are opening a new line of work focused on improving the interpretation and generation of language in AI models. A key weakness of current algorithms is their tendency to produce plausible but false or meaningless output, known as hallucinations. To address this, we are exploring ways to integrate the semantic knowledge of language models with the topological understanding provided by knowledge graphs. This involves combining representations generated by language models with those of graphs to improve the accuracy and relevance of AI predictions and analyses. The goal is to obtain more logical and coherent answers, thus reducing the risk of hallucinations or logical errors in text generation.
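A very simplified way to picture the combination of the two representations is to place the language-model view and the graph view of an entity side by side. The vectors and entity names below are made up for illustration; in practice both embeddings are learned from data, and the fusion itself is usually trained rather than being a plain concatenation.

```python
# Toy vectors; in practice these would come from a language model and a
# knowledge-graph-embedding algorithm, each trained on real data.
semantic_embedding = {"sun": [0.9, 0.1], "night": [0.1, 0.8]}
graph_embedding = {"sun": [1.0, 0.0, 0.2], "night": [0.0, 1.0, 0.7]}

def hybrid(entity):
    """Concatenate the semantic and structural views of an entity."""
    return semantic_embedding[entity] + graph_embedding[entity]

vec = hybrid("sun")
```

The hybrid vector carries both what the language model knows about how the word is used and what the graph knows about how the entity is connected.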
Challenges in AI: Based on your experience, what do you consider to be the biggest current challenges in the field of natural language processing and how are you addressing them in your work?
One of the biggest challenges in natural language processing models is the one I just explained: that of model hallucinations. One way to approach this is by trying to combine these models with general-knowledge graphs. For example, ChatGPT could generate a sentence like “Tonight is a bright sun,” which is grammatically correct but logically false. If we wanted to address this problem, we could integrate a general knowledge graph into ChatGPT (or another language model). This type of graph includes nodes and connections representing facts and logical relationships of the real world. For example, nodes for “night”, “day”, “sun”, “moon”, and relationships that reflect facts such as “night is associated with darkness” and “day is associated with the sun”. When generating text, instead of relying solely on their previous language learning, the model would also query this knowledge graph. By processing a sentence such as “Tonight is a bright sun,” the system could contrast this statement with the graph information, recognizing that “night” and “bright sun” are incompatible according to general knowledge. Using this approach, the system benefits from a structured knowledge base that helps it validate the logical consistency of its responses, significantly improving the accuracy and relevance of its text generation.
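The consistency check described above can be sketched as a lookup against a small set of fact triples. The facts and relation names below are invented to mirror the "night" / "bright sun" example; a real system would query a large general-knowledge graph and use far more sophisticated entity matching.

```python
# A tiny general-knowledge graph; facts and relation names are invented
# for illustration, mirroring the "night" / "bright sun" example.
facts = {
    ("night", "associated_with", "darkness"),
    ("day", "associated_with", "sun"),
    ("night", "incompatible_with", "sun"),
}

def consistent(entity_a, entity_b):
    """Flag a generated statement that pairs two incompatible entities."""
    relations = {r for h, r, t in facts if {h, t} == {entity_a, entity_b}}
    return "incompatible_with" not in relations

# "Tonight is a bright sun" pairs "night" with "sun":
ok = consistent("night", "sun")
```

Here the model's output would be rejected or regenerated whenever the check fails, which is the validation step the graph contributes.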
My experience has also allowed me to identify two other major challenges in the field of natural language processing that are crucial: scalability and explainability. In NLP, we often come across huge data structures, and effectively handling such volumes of data is a considerable challenge. For example, when trying to process an entire book, current systems often fail. To address this problem, it is again interesting to combine NLP technologies with advanced knowledge graph algorithms, which are more capable of handling large volumes of data. This combination seeks to achieve more scalable systems that can efficiently process large amounts of information.
The final challenge is the explainability of NLP systems. Currently, these systems are very difficult to interpret in terms of why they generate the responses they do. With trillions of parameters involved, it’s nearly impossible to trace the logic behind every response generated. In contrast, algorithms based on knowledge graphs offer greater explainability. You can examine the graph and understand the logic behind the predictions, making them more transparent and reliable. Hybridizing language models with knowledge graphs not only increases confidence in the generated results, but also reduces the likelihood of unforeseen or erroneous results.
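The kind of explanation a graph affords can be illustrated with a breadth-first search that returns the chain of facts connecting two entities, so a prediction comes with a human-readable justification. The triples below are invented for the example and are not real biomedical data.

```python
from collections import deque

# Triples linking a gene to a disease through intermediate entities;
# entity names are illustrative, not real biomedical data.
triples = [
    ("gene_X", "expressed_in", "pancreas"),
    ("pancreas", "affected_by", "diabetes"),
    ("gene_Y", "expressed_in", "liver"),
]

def explain(start, goal):
    """Breadth-first search for a chain of facts connecting two entities."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for h, r, t in triples:
            if h == node and t not in seen:
                seen.add(t)
                queue.append((t, path + [(h, r, t)]))
    return None

path = explain("gene_X", "diabetes")
```

An end-to-end language model offers no analogue of `path`: the returned chain of triples is exactly the transparent reasoning trail that makes graph-based predictions auditable.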
In short, the union of language models with knowledge graphs is a way to build systems that are more scalable and efficient in the handling of large data sets, and also more understandable and reliable.
Practical applications of AI: Could you share any concrete examples of how your work in natural language processing and knowledge graphs is being applied or could be applied in the real world?
Our work has applications in very diverse and promising areas. A concrete example is the prediction of associations between genes and diseases, which represents a significant advance in personalized medicine and biomedical research. These technologies also have potential applications in managing climate crises: we can use them to improve climate predictions, which is crucial for planning and responding to natural disasters. The ability to make accurate predictions in this field can have a substantial impact on how we prepare for and respond to climate change.
Another area of application is the improvement of recommendation algorithms for streaming services, such as Netflix. Although I have no specific knowledge of how Netflix implements its recommendation algorithms, it is plausible that it uses knowledge graphs. These graphs can help to better understand user preferences and recommend content in a more accurate and personalized way, based on profiles of similar tastes among different users.
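As a toy sketch of this graph-based recommendation idea, one can link users to the titles they like and suggest what the most similar user has watched. The users and titles below are made up, and real recommenders rely on far richer signals and learned representations.

```python
# User-item "likes" as graph edges; users and titles are made up.
likes = {
    "ana":   {"Dark", "Black Mirror", "Mindhunter"},
    "bruno": {"Dark", "Black Mirror"},
    "carla": {"The Crown"},
}

def recommend(user):
    """Suggest titles liked by the most similar user (by taste overlap)."""
    others = [(len(likes[user] & likes[o]), o) for o in likes if o != user]
    overlap, nearest = max(others)
    if overlap == 0:
        return set()  # No shared tastes: nothing to base a suggestion on.
    return likes[nearest] - likes[user]

suggestions = recommend("bruno")
```

Since "bruno" and "ana" share two titles, "bruno" is recommended what "ana" has watched that he has not, which is the "profiles of similar tastes" intuition in its simplest form.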
In short, the work we develop in our team, for example with our AmpliGraph library, has the potential to be applied in multiple fields, from health to entertainment, demonstrating the versatility and transformative scope of artificial intelligence in the real world.
Future of AI: How do you see the future of natural language processing and knowledge graphs in the coming years, and what impact do you think they will have on industry and society?
The future of natural language processing and knowledge graphs seems to be heading towards constant and progressive evolution, rather than immediate disruptive changes. In the short term, I do not expect a revolution comparable to the emergence of artificial general intelligence, but rather a gradual and increasing integration of these technologies into everyday life and business processes.
Research in NLP and knowledge graphs will continue at an accelerated pace, as it has been doing so far, with significant advances in terms of robustness and reliability. These advances, however, will not necessarily represent a radically new innovation, but rather an improvement of existing capabilities. For example, the integration of NLP and knowledge graphs will result in more robust systems, but not fundamentally different in nature.
In practical terms, this means that we will see greater adoption of these technologies in everyday tasks and in business applications. Artificial intelligence will start to appear in more common aspects of our lives, such as managing emails or generating news summaries. This integration will be similar to the way smartphones gradually became an essential part of everyday life.
At the enterprise level, artificial intelligence, including NLP and knowledge graphs, is already attracting a great deal of interest. Companies are actively studying how to apply these technologies to their business processes. Although implementation may take time, it is clear that the trend is towards wider and more efficient adoption of artificial intelligence in various sectors.
Tips for aspiring AI experts: For those interested in pursuing a career in AI, especially in areas such as natural language processing, what advice or recommendations would you give based on your experience?
My main advice to aspiring AI experts is to deeply understand the technical underpinnings and historical evolution of this discipline. It is essential to recognize that many of the fundamental concepts and technologies in AI were conceived more than 50 years ago. Although the conceptual foundations have remained constant, the ability to implement these ideas has evolved dramatically due to advances in hardware and the availability of large amounts of data.
Understanding this evolution is crucial. In the 1960s, for example, although the fundamental ideas of AI already existed, we did not have the computational capacity or the data necessary to develop effective AI systems. Today, although technologies and libraries have changed, the underlying principles remain largely the same. Therefore, it is critical not only to familiarize yourself with today's tools and technologies, but also to understand how and why they were developed and what problems they were trying to solve.
In summary, for those interested in a career in AI, I highly recommend in-depth technical training that not only encompasses current technologies and techniques, but also includes a study of the history and evolution of AI. This will provide a richer and more comprehensive understanding of the discipline, allowing not only to apply current technology, but also to innovate and contribute to the field in the future.
Collaboration between academia and industry: From your perspective, how can collaboration between academia and industry be strengthened to drive significant advances in AI, especially in areas such as natural language processing and knowledge graphs?
An effective strategy to achieve this is through joint research projects between universities and companies. These projects not only promote the transfer of knowledge, but also allow theoretical advances to be directly applied in practical environments, benefiting both the academic and industrial spheres.
In addition, programs that focus on knowledge transfer play a critical role. Research carried out in universities can and should be transferred to companies efficiently. I think this model is already working well, but there is always room to improve and expand these initiatives.
Finally, entrepreneurship programmes that connect academia with industry are essential. Programmes such as Akademia, which I consider a benchmark, play a key role in this regard.
Thank you very much, Aldan!
If you want to read the testimonials of other Akademia alumni, you can find them here.
And if you want to know more about the Akademia program, we invite you to visit the Foundation’s website.