Abstract
This essay offers a series of intuitions about certain paradoxes that artificial intelligence reveals when confronted with the Turing test. Using chess as an example, we ask whether intelligent behaviour can feasibly be distinguished from the ability to simulate it, or whether the average human is simply incapable of such discrimination. We propose sense-making and corporeity, as opposed to mere computation, as the central attributes of living beings. And in attempting to discern the limits of these simulations, we even consult ChatGPT's own opinion.
In what follows, I will offer a sampling of intuitions regarding certain paradoxes which arise when we inquire into the properties of AI when confronted with the Turing test.
Who passes the Turing test?
What is it that we are really evaluating? The simulation capabilities of complex artefacts or algorithms, or rather the inability of the average human being to accurately discriminate in such a context?
In the end, from a certain point of view, this is really about human discernment as the criterion for defining the features of an object. A problem which, we should agree, is very old yet nevertheless still controversial.
The obvious ambiguity lies in whether we are in fact evaluating the ability to simulate or actual intelligent behaviour. That the litmus test for this should be a human being’s ability to tell the two apart seems at times (and not infrequently, at that) trivial.
It must be admitted that the impossibility of distinguishing between AI and ‘human’ is undoubtedly falsifiable, but also that Magnus Carlsen, the former world chess champion, was able to identify his opponent’s moves as non-human in a real-world event. So much so that he angrily abandoned the match and withdrew from an upcoming tournament where he was to face the same rival. Such was the strength of his conviction (cf. Chess.com, 2022; CNN Español, 2022).
I therefore propose for us to consider the following scenario:
To ‘A’, ‘B’ is indistinguishable in appearance from ‘C’. Or rather, A does not notice that B isn’t C. However, ‘M’ can indeed tell that B is not C. Which means that B is only apparently indistinguishable from C to A.
There might be an ‘H’ that manages — or at least attempts — to prevent M from noticing that B and C aren’t equal. But then it is H who would be rendering B and C apparently indistinguishable, given that H can certainly tell them apart.
I invite the reader to read B as Deep Blue, as homage; C as a human chess player; and A as a chess aficionado, also a human player and even a good one. M, in turn, will stand for Magnus, the world champion until very recently, and H for the initial of the first name of the player penalized for cheating online with the help of a chess engine, and suspected of doing the same in official matches (though this has not been proven).
Besides, we already know, or could know, that B is in fact not a human or even a living being. This, of course, is not the focus of Turing’s conundrum. But if so, then how relevant can it really be for us, when what actually concerns us in the real world is whether bio-digital hybridization is creating not merely new forms of functioning but new forms of ‘existence’ (Benasayag, 2021)?
The core issue lies, however (and not in a small way), in ‘fakes’ and their powerful potential for predation. But then again, this problem is all too human — political, if you will — and not strictly technical or cognitive (Sadin, 2018, 2022).
Living beings
If we anchor the existence of living beings in their ability to create meaning, as opposed to mere calculation, such a distinction warrants being considered qualitative in nature, not quantitative. And as biologists have abundantly demonstrated, the ‘intelligence’ of living beings, or their development, if you will, lies not in the calculating powers of brain-like formations but rather in a contextually located corporeity which, apart from calculating, interprets and imbues the world with meaning. Calculating, or the capacity to calculate, is not in itself a self-sufficient adaptation but only one relatively relevant component among many (Benasayag, 2019).
As Kull (2022) states, each living being, as a semiotic being, makes choices, interprets and creates meaning, but this comes as a result of an enormously complex chain of long-term environmentally embedded processes, which cannot be reduced to mere calculations.
Drifting
If we allow, then, that we really do know the non-human nature of AI — unless we are discussing endowing it with life or humanity — I can find no substantive difference between being unable to detect the non-humanness of AI and the make-believe powers of a nightmare or a hallucination. Apart, that is, from the odd cult that sings the praises of the human-made ‘artifice’ over the products of mere biological or psychological drift. A dream’s protagonists are astoundingly real and smart, at times even cunning and obscure as to their intentions. They can make our hearts race, our voices turn into screams, or even throw us out of bed. Thousands of years ago, dreams — which, as Borges (1980) says, might be the most ancient artistic or dramatic genre — had already passed the Turing test.
A substance that causes hallucinations materializes an experience or image which blurs the line demarcating ‘real’ from ‘not real’. We do not scrutinize whether the substance passes the Turing test because its effect is known beforehand. The proper comparison would have to take place with someone wholly unaware of such a hallucinatory effect. But then again, who is it that passes the Turing test or doesn’t? The person who consumes peyote knowing what it can do? The person who takes it not knowing what awaits? Or the peyote itself?
I simply ask ChatGPT
- ‘Do you pass the Turing test?’
ChatGPT
‘No, I cannot pass the Turing test, since I am an artificial intelligence program and I do not have the ability to consciously carry out conversations or have a level of intelligence comparable to that of a human being. The Turing test is used to evaluate whether a machine can imitate human behaviour well enough that a human judge cannot distinguish between the machine and a human in a conversation. My main function is to process and generate text based on the data I was trained with, but I have no consciousness, emotions or self-awareness.’
Paradoxically, its sensible argument seeking to convince me that it does not pass the test makes me think, oddly enough, that it actually does. It is clear that I am not Magnus, I have been warned of what peyote can do and, up until now, I wake from my nocturnal nightmares each morning to inhabit the real ones. Please excuse the irony.
