Abstract
The recent appearance and massive user adoption of ChatGPT and related technologies is rapidly turning AI from what was, for most people, merely a distant concern rooted in theoretical speculation into one of the most pressing and significant developments we face as individuals and as a society. To better navigate the abrupt changes we are experiencing, we must consider the history and broader context of AI, and spur a dialogue that involves as wide an array of voices as possible. This special issue seeks to contribute by offering scholars in psychology and the social sciences a platform from which to share their unique perspectives and reflections, so that they may enrich our understanding of this global, urgent and multi-faceted phenomenon.
The prospect of actual Artificial Intelligence (AI) has loomed over us for a long time: not only as a theoretical possibility actively pursued by a select few, be it for scientific, philosophical, technological or commercial goals, but more broadly, as a source of inspiration for narratives that have permeated our collective discourse. And here at the outset we must offer two qualifications to the foregoing statement. On the one hand, the seeming vagueness of calling such a span merely a ‘long time’, with no further precision, reflects how extremely challenging it is to pin a specific date on such a fundamentally human and existential affair as the creation at our hands of a new kind of being endowed with the distinguishing attribute long heralded as our birthright among creatures: thinking and reason. This leads us immediately to the second qualification: how are we to understand the precise (or blurry) boundaries of those loaded and deeply significant terms, ‘intelligence’ and ‘artificial’? In a sense, all the works presented in this special issue unfold against the backdrop of such a need for further conceptual clarity.
However, in spite of the enormous intellectual allure and the theoretical importance of AI, until recently there seemed to be no real reason for most people to actively engage with the thought of our machines increasingly resembling us, and perhaps eventually surpassing us in skill, nor with the immediate and distant consequences of such a development. But the recent appearance of ChatGPT and its competitors, with their rapid improvements and, more importantly, their having given virtually every person so inclined direct access to these technologies, has underscored how imminent this prospect now appears. We have only begun to intuit the extent of the disruption that this new reality will bring to industries and to our day-to-day lives, and while some scurry to make the most of the new blink-and-you-miss-it opportunities, others are soberly pondering the risks it all entails.
Insofar as the avowed goals of AI at the very least imply the creation of a new type of mind, psychology as a discipline, like the social sciences in general, has an enormous contribution to make to the ongoing discussion surrounding it. In this special issue of Studies in Psychology we sought to share and amplify a wide array of visions, so that researchers and theorists could add their voices and help frame the conversation in a way that makes good use of the accumulated wisdom and know-how of our field. We wanted to hear not only from thinkers already working in AI-adjacent areas but also from any social scientists willing to offer their perspective and experience and to engage seriously with the questions brought forth by AI approaching its maturity. We believe the field of AI, and our collective reflection at large, has much to gain from such ‘outsider’ perspectives. Even if we are not involved in the technical inner workings of the current generation of AI software, our lives are already deeply affected by its consequences, and thus we have a right to question the paths humanity is embarking on, adding our voices to the discussion.
And the reflections of social scientists are needed since the questions surrounding AI are not only technological but deeply human, and they have a tendency to transform and reemerge over and over, as many of the pieces in this selection will show. The discussions in AI often recapitulate some of the oldest debates in philosophy, and therefore, there is much value in making those connections clearer and more explicit. After all, John McCarthy himself — who first juxtaposed ‘artificial’ and ‘intelligence’ and thus gave the emerging field its official name — wisely pointed out that ‘AI research not based on stated philosophical presuppositions usually turns out to be based on unstated philosophical presuppositions. These are often so wrong as to interfere with developing intelligent systems’ (McCarthy, 1999, p. 72).
These presuppositions have led us to consider such questions as: Is the Turing test, once a cornerstone of the pursuit of AI, still relevant? And if so, what have we learned after seven decades of musings on it? How might we prevent our natural capacity for empathy from being hijacked — as the case of Joseph Weizenbaum’s ELIZA program revealed — into projecting a mind onto things that don’t have one? And how could we tell whether they do or don’t, which seems to be the crux of the matter?
As AI programs grow more complex and powerful, a massive debate may well ensue regarding their potential consciousness, agency and selfhood; what ethical elements, then, should we bear in mind when interacting with intelligent systems, perhaps especially with regard to their autonomy? Will the impressive visual renditions by tools such as DALL-E and Midjourney, as well as the prose written by GPT-4 (and their successors), eventually force us to reconsider the notions of creativity and authorship?
And what about the interplay between popular culture and the research, theory and deployment of AI? Whether we like it or not, certain fictional stories — particularly cherished blockbuster franchises such as Terminator — seem to operate as veritable Schelling points in the global conversation on AI and its development. But are they helping or hurting our collective understanding of these coming technologies? And, in the case of the latter, how can we replace them?
When we envisioned this issue, we aimed to give voice to a multiplicity of perspectives dealing with these and other questions along similar wavelengths, particularly those not necessarily at the centre of the conversation surrounding AI. And we notice that, although each of the works gathered here has its own perspective and focus, some overarching threads can be found that join them together.
In And once AI finally beats the Turing test, then what?, Rosas (2024) revisits his earlier musings on the field, written at a time when the technical advancements we have seen in recent years (recent months, even!) seemed but the stuff of fantasy. Armed with the experience of the ensuing decades, he re-engages with questions about the deep nature of the Turing test and whether it has been adequately understood, the difference between reality and simulation and the motivations that may well hide behind the avowed goals of AI researchers in the creation of new minds and new beings.
In Gamifying programs, Musa Giuliano (2024) explores how games have shaped both human evolution and AI development. By looking at the history of many AI systems as game-playing systems, we better understand the painstaking and gradual steps involved in their design and get an opportunity to ponder how much our own species’ need to play influenced our endowing our mechanical progeny with a penchant for games. This glimpse at the past hopes to shed light on the future. What does this interrelated ‘gamefulness’ help us anticipate concerning our future interactions with machines? What kinds of games are we likely to play with them in the future? What games will they, instead, only play among themselves?
Carretero and Gartner (2024), conversely, focus not on how the past of AI will influence its future but rather on how its present may well influence our past, namely, by impacting our understanding of it. In Artificial Intelligence and historical thinking: a dialogic exploration of ChatGPT, they set out to address the role that these emerging technologies should play in the different spheres through which the past is represented. By sharing and analysing a real interaction with ChatGPT regarding the Spanish Reconquista, they explore the limits of the program as a pedagogical aid that might deepen students’ historical thinking, and also consider the ethical and multidisciplinary orientation that must be in play if this transmission of historical knowledge is to be beneficial.
Much like Turing examined just how skillfully a machine might imitate a thinking being, Brescó de Luna and Jiménez-Alonso (2024) reflect on what happens — both at a personal and social level — when said imitation involves a very specific loved one, once they have passed away. In Deathbots. Discussing the use of Artificial Intelligence in grief, they deal with the many ethical and psychological questions raised by the adoption of such ‘thanatechnologies’ as a way for the bereaved to ‘talk’ with the departed. Their comprehensive exploration of this phenomenon is enriched by concepts drawn from cultural psychology, with the aid of which they address, among other issues, the dependency and self-deception that these tools may lead mourners into, as well as their potentially beneficial therapeutic uses.
In Anxiety in the face of Artificial Intelligence. Between pragmatic fears and uncanny terrors, Rodríguez (2024) addresses one of the main emotional reactions that the public is experiencing at the accelerating approach of these technologies threatening to radically alter our world and our lives. He exhaustively surveys and analyses extant empirical approaches to the phenomenon of AI anxiety, offering sound methodological critiques and guidelines. By looking at how AI is portrayed in science fiction and its connection to research from social and cognitive psychology, he offers a fruitful distinction between the two main facets of the ‘technophobic’ fears, which contributes to a richer understanding of the complexity underlying AI anxiety.
In People and machines in communication, Jacomuzzi and Alioto (2024) seek to explore what differences remain between humans and intelligent machines, by following the route along which several famous social chatbots — each increasing by leaps and bounds in complexity — have left their mark on the history of AI and our interaction with it. In re-treading this historical and conceptual road, they critique the likening of minds and algorithms on the part of the Computational Theory of Mind and point to concepts such as enaction, autopoiesis and agency as offering evidence to dispute such an equivalence.
Finally, Baquero (2024) offers an intuitive and very personal reflection (dreamlike, at times) that seamlessly moves from the intersubjective attributional intricacies of computer chess to the make-believe powers of dreams as primeval forms of fiction. In Who passes the Turing test?, the stress is laid precisely on that ‘Who?’, a question that AI discourse should spend more time and attention on.
These seven contributions can, of course, cover only a tiny fraction of the manifold angles and challenges this important topic presents. However, it is our hope that they will offer meaningful context both to the history and to the treatment of themes that are crucial for debate in this field.
