This is a question that has been on our minds for some time now, every time we read about the latest advances and promises of artificial intelligence (AI). The bombardment of information about AI's progress is continuous and comes from many fronts with different objectives: some convey excessive optimism (e.g. superintelligent or conscious AI), while others paint dystopian scenarios (e.g. an AI that will exterminate humanity). In either case, the opinions and information offered are often far removed from reality and pursue confused objectives, leaving us with a mistaken idea of AI and its real transformative potential. Shedding light and clarity on this issue is the aim of a recently published paper from Tecnalia titled “Can transformative AI shape a new age for our civilization? Navigating between speculation and reality.”
From speculation to reality: The evolution of AI
AI and its capabilities have been the subject of speculation, confined to science fiction, since the field's inception. Today, we can say with certainty that it has become a mature discipline, evolving into one of the forces with the greatest transformative potential for our civilization. In recent decades, the evolution of AI has moved from theoretical speculation to tangible achievements. The most recent systems based on foundation models, and in particular those based on generative AI, are demonstrating remarkable capabilities, surpassing human performance in specific areas such as strategic games, natural language processing and image recognition, among others.
However, AI’s impact on our civilization goes beyond its technical and algorithmic achievements. Its integration into critical social functions, from healthcare to governance, can be a double-edged sword. On the one hand, AI promises to solve complex problems such as climate modeling, personalized medicine, and economic optimization. On the other hand, it raises concerns about surveillance, job displacement, algorithmic biases, or endangering democratic values, for example. This highlights the urgency of establishing ethical frameworks and solid governance structures for AI.
We are on the verge of a possible leap forward at the level of civilization; although this leap may have already taken its first steps, its impact on our society remains uncertain. Whether it leads to unprecedented prosperity, a catastrophic outcome, or another bubble that bursts will depend not only on technological advances but also on how our civilization decides to harness and regulate this powerful tool of unparalleled transformative potential.
The idea of transformative progress is not new. History provides examples of innovations that have catalyzed epochal changes, such as the invention of the steam engine during the Industrial Revolution or the rise of the internet in the Information Age. However, what distinguishes AI from previous advances is its potential to serve as a general-purpose technology, with capabilities spanning almost all domains of human activity. As research advances, discussions increasingly focus on whether AI can represent not only a significant technological advance but a crucial change in civilization itself. Although the excitement around AI is palpable, so are the ethical, social and existential challenges it poses.
Ethics as a cornerstone: Shaping AI’s development
To assess whether AI really does represent a turning point for our civilization, we must raise our perspective; otherwise, we run the risk of treating it as just another technology and losing the vision that current circumstances demand. In this vision, ethics must be a key tool: an integral part of the design and implementation of AI from the outset, not an afterthought. How ethics frames AI is crucial to its development as a truly transformative technology and to its subsequent integration into our civilization. This framing affects not only its adoption and regulation but also the development of applications in key areas across almost every facet of human life.
Furthermore, ethics helps us understand the impact and consequences of our actions and decisions, as well as our interests, moral values and the future of our civilization. In a changing world, certain aspects of our humanity, our essence, must remain constant while AI transforms everything around us. And to know which aspects to maintain, and how, we must use ethics as a tool to help us weigh up change. We need AI that is aligned with our values, not solely driven by economic interests. It would be a shame to have a multi-purpose tool at our fingertips with the potential to overcome some of the most pressing challenges facing our civilization and to be left with only its ability to generate money.
However, applying ethics to AI is not without its challenges. First of all, the diversity of ethical approaches can generate conflicts when trying to establish a unified ethical framework, since different approaches can suggest different and even contradictory actions in the same situation. Furthermore, ethics is not universal and can vary significantly depending on the cultural and social context; what is considered ethical in one culture may not be in another.
This variability complicates the creation and implementation of global ethical standards, as it is necessary to consider and respect these cultural and social differences. It is here that we find the European AI Act as an effort to reconcile the different ethical perspectives in its regulatory framework. This is the first comprehensive regulation of AI by a major regulator and can be seen as the formalization of a social contract between governments, developers, companies and the general public.
“The question of whether machines can think is as relevant as that of whether submarines can swim” — Edsger W. Dijkstra
Human vs. machine: Rethinking intelligence
Despite the great advances taking place practically every month, AI still depends to a large extent on humans, and it lacks full autonomy and human capacities such as the complex reasoning needed to fully understand its actions or perceive its environment. Achieving the quintessence of AI, whether in the form of artificial general intelligence (AGI) or a conscious or similar entity, is the race that the big tech companies and the all-powerful laboratories have been presenting to us for some time now. This is what is being sold for the short or medium term, but we fear it will have to wait, or be partially ruled out, given how the capabilities of this type of AI are currently conceived; for now, closer to fiction than science.
If it proves possible at all, some watered-down version will be sold to us as the ultimate achievement, like the goal that the company OpenAI has set itself: “highly autonomous systems that outperform humans at most economically valuable work”; although even that would be a remarkable advance. There is no consensus today on the definition and scope of this quintessence, so we will be at the mercy of whatever is sold to us. Furthermore, we may not need this quintessence to have an AI that transforms our civilization. An airplane does not fly like a bird, and a submarine does not swim or dive like a sperm whale, yet both advances have been milestones despite their differences from biology.
So, the debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains (“general purpose”) autonomously and effectively. It is important to recognize that the value and usefulness of machines do not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI (e.g. neural networks), it need not be the definitive model for the design of superior AI. By freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. This conceptual flexibility highlights the potential of AI as an innovation that would not be limited by biology, but would draw on it.
And there are many challenges ahead before we can even think about this transformative AI in the short term. Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we receive, possible misalignment with our human values, the negative perception of AI we may be acquiring, the view of AI as our competitor, excessive dependence on human experience, the perceived futility of ethics in AI, the loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, or the possibility of an “AI-cracy”, for example. Likewise, scientific-technological factors may appear as barriers that we must overcome before achieving a truly transformative AI, such as: the data paradox, the difficulty of recognizing the emergence of new capabilities in AI, world modeling, challenges in sustainability and physical limitations, or the lack of consensus in the theoretical foundations of computing on the possibility of human-level AI, among other factors.
However, there are also some “green shoots” suggesting it could become possible; probably not when looked at individually, but through the intersection of many of them, feeding back into each other in a knock-on effect.
“The question is not what AI will be like in 10 years’ time, but what we want it to be like” — Peter Norvig
AI’s role in scientific explosion
From a scientific-technological point of view, we have autonomous multi-agent systems, advances in neuro-computing, interactive AI, advances in specialized hardware, highly sophisticated virtual environments, causal modeling, open-world learning, self-improving and self-learning systems, and quantum computing, among others. From a non-scientific-technological point of view, we could point to the integration of interdisciplinary approaches, advances in global collaboration, significant investments in AI, and the emergence of new approaches to the generation and processing (learning) of data. But what really suggests a transformation at the level of civilization through AI is a possible “scientific explosion”, or as Dario Amodei (CEO and co-founder of Anthropic) recently put it, “the compressed 21st century”. AI is starting to play an increasingly broad role in science, spanning numerous fields and acting both as a catalyst for scientific advances and as an essential tool in the research process (e.g. AlphaFold, LucaProt, the “AI Scientists”, etc.). This development could mark the beginning of a new era characterized by accelerated discoveries, driving progress at the frontiers of knowledge and achieving results that overcome the limitations of current methodologies. Such acceleration has the potential to address crucial societal challenges, such as climate change, public health, and the green and digital transitions, among others.
Redefining consciousness and identity
Finally, we would like to focus on an aspect that we consider relevant: what would happen after achieving an AI with such characteristics? Profound changes could arise in the ethical and philosophical frameworks that guide our interaction with this disruptive technology, perhaps requiring new forms of philosophical thought. This could challenge current conceptions of consciousness and identity. For example, functionalist theories suggest that consciousness could be defined by processes, not by the biological substrate, which would imply reinterpreting Cartesian dualism and expanding the concept of “personhood” to non-biological entities based on their rationality and self-awareness. From an ethical point of view, the question would arise of whether AI could become a moral agent. According to the Theory of Responsibility, those with power have moral obligations towards the forms of life they impact. Thus, an AI with transformative power could assume responsibilities toward humans and its own existence. More extreme and futuristic positions, such as the transhumanist movement championed by Nick Bostrom, see AI as a logical stage in human evolution, warning of existential risks if such systems are not aligned with human values. Authors such as the philosopher Yuval Noah Harari suggest that a “religion of artificial intelligence” could emerge, attributing an almost divine status to these entities and redefining current religious and philosophical systems. In this context, a “metaphilosophy” or “metareligion” could emerge to reconcile humanity with synthetic intelligence, transforming our notions of purpose and morality.
Long-term vision: AI for collective well-being
We want to conclude by emphasizing the relevance of adopting a long-term vision in the development and application of AI. Instead of focusing solely on immediate achievements, we propose a strategic approach that ensures that AI systems are designed and used for the benefit of collective well-being and global advancement as a civilization. Achieving this goal requires balancing technical progress with a solid ethical commitment, and fostering an education that enables future generations to interact critically and effectively with these technologies.
Jesús López Lobo has a degree in Computer Engineering (University of Deusto, 2003), a Master’s in Advanced AI (UNED, 2014), and a PhD in Information and Communication Technologies in Mobile Networks (University of the Basque Country UPV/EHU, 2018). He is currently a scientific researcher at the applied research center TECNALIA, and also a collaborating professor at the Open University of Catalonia (UOC). His mission is to explore, develop and transfer scientific-technological solutions in AI that generate value for society and organizations. His field of specialization gravitates towards Adaptive AI, which addresses the challenges posed by dynamic and changing environments for machine learning systems. He is also interested in the ethics of AI, the governance of AI and the alignment of AI with human values, among other topics. Finally, he has participated in several research and innovation projects, published several scientific articles in high-impact journals and conferences, and contributed to the dissemination of AI at the national and international levels.
Javier del Ser Lorente is a Telecommunications Engineer (University of the Basque Country UPV/EHU, 2003), Doctor in Control Engineering and Industrial Electronics (University of Navarra, 2006), and Doctor in Information and Communication Technologies (University of Alcalá de Henares, 2013). He is currently the scientific and technological director of AI at TECNALIA, and also a distinguished professor at the University of the Basque Country (UPV/EHU). His research interests focus on applied AI (with special attention to trustworthy and responsible AI, learning in an open world, and explainable AI) for emerging paradigms in industry, healthcare, transportation, and mobility, among many other fields. He has published more than 470 articles in journals and conferences, directed 19 doctoral theses, edited four books, invented nine patents and directed several applied research projects. He is a Senior Member of the IEEE and has received several awards for his research career. He has been included in Stanford University's list of the top 2% most influential AI researchers worldwide (since 2021), and was part of the team that developed the AI R&D&I strategy for the Spanish Government (2019).