Abstract
Play has been crucial in the evolution of our species, and has played a most important role in the development of our electronic scions as well. As the boundaries between human and machine become increasingly blurred, so do those between work and play and between life and games. The imminent advent of virtual reality as the favoured landscape for human interaction and the zest with which gamification has inherited the aims and strategies of prescriptive psychology are examples of how games will become a ferocious shaping force for our culture. Current trends lead us to believe that intelligent programs, reared in the playing of games like chess and checkers in their distant infancy, may now themselves become architects and providers of the games that will assuage an infantilized humankind.
‘Machine technology remains up to now the most visible outgrowth of the essence of modern technology, which is identical with the essence of modern metaphysics.’
It hardly needs stating how multithreaded the fabric of human experience is. Almost every aspect of our collective or individual, cultural or biological lives (and upon scrutiny, the precise boundaries of each of those demarcations cannot but dissolve into a field of rich and fuzzy fractal blurriness) can be unspun into gripping narratives and visited as a treasure house of insights for and into the mind. But just as the edifice of what being human is all about offers countless doors in but no Grand Unified Theory, the same restriction is in effect when dealing with one of the latest offshoots of our collective ingenuity: Artificial Intelligence. Speaking about what our thinking machines are unavoidably requires considering in turn what we are. (And when this is not done explicitly, tacit assumptions seriously risk muddling our conversations.) In this piece, I have chosen games and play as an entry point for a discussion about the relationship between humankind and its machines, which today show promise for eventual autonomous thought.
I hope to show that games are not only a lavishly detailed telescope both into the development of our species and into the progressive evolution of our creations, but, more importantly for our present purposes, that they also afford us the chance to groundedly anticipate how our mutual interactions may continue to unfold.
Games were a crucial stepping stone in the development of mechanical thinking systems, and were it not for them, the state of advance that we now see and take for granted would in all likelihood have taken considerably longer to achieve. And if our machines were to learn how to bypass us and play solely among themselves, what mind-boggling outcomes could we reasonably expect down the road? We are already seeing notable signs of the power of adversarial self-play for artificial virtual agents, with researchers at OpenAI, for instance, having shown how emergent strategies of high complexity arise without direct human instruction in a simulated game of hide-and-seek, some of which not even the researchers themselves had known were possible within their system (Baker et al., 2019). In what follows, I shall chiefly focus on two possible outcomes for our relationships with thinking machines: mutual collaboration and merging, or utter dependency. There is also, however, a third and darker possibility: annihilation and replacement (Yudkowsky, 2022). As tempting as it is to cursorily dismiss such gloomy scenarios, we should not, and if I don’t properly explore said possibility here, it is solely because I have elsewhere already discussed it at some length (Musa Giuliano, 2020).
At the outset, I plead the reader be indulgent with my having borrowed the circular scaffolding of my title’s structure from Heinz von Foerster’s Observing Systems (or alternately, Jonathan Safran Foer’s Eating Animals). The choice is owed not to the mere appeal (always present and always tempting) of cutesy, low-hanging ‘clever’ wording, but to the main thrust of the argument. The term ‘gamifying programs’ is valid at both levels at which it can be interpreted. Stated laconically, it works as a gerund because we (as the Homo sapiens sapiens species) have gamified the environments of our programs, and it also works as a participle because it is quite plausible that programs may eventually gamify our reality. But before we plunge into gamification proper, and offer some working definition thereof, let us first zoom back and speak more generally about games, how they are inextricably linked with what it means to be human and how they have shaped the history of AI.
Humans at play
There comes a point in the life of every psychologist when he or she must — or so at least suggests Daniel Gilbert — complete what Gilbert has ominously dubbed The Sentence, that is, provide the missing term in the fundamental statement: The human being is the only animal that _________. (Gilbert, 2006, p. 10). While pet owners everywhere would be put off by the short-sightedness of ending the sentence with ‘plays’, there is a sense in which human play is unique, and a fundamental shaping force in our culture and history. Many different species play too, but the immense variety of our repertoire of play and its centrality in our lives is staggering. Rosas (2001), expanding on valuable insights from Rivière and Vygotsky, has shown how play is fundamental in acquiring the capacity to process and enrich meaning, and also a crucial stepping stone for something as intrinsically human as being able to represent subjective states in others. As W. Grey Walter, father of the electro-mechanical tortoises considered to be the first biologically inspired robots (Holland, 2003), expressed it: ‘It has been suggested that the greater an animal’s brain the more its survival depends on the nature of its play. Human society devotes an enormous proportion of time and energy to play’ (Walter, 1961, p. 225).
The cornerstone role of play in human culture has been expounded in depth by scholars of the stature of Huizinga (1938/1980) and Caillois (1958/2001), in works of a caliber that renders any attempt at cursory summarization a disservice to the scope of their journeying. Huizinga may even be considered to have anticipated how fundamental play was in the emergence of mind (Contreras, 2019, p. 29). And what is play? Augustine’s (1876) strategy regarding time (‘If no one asks me, I know: if I wish to explain it to one that asketh, I know not’) (p. 235) and Justice Stewart’s regarding hardcore pornography (‘I know it when I see it’) (Jacobellis v. Ohio, 1964) have been a source of succour for many intending to survey expansive conceptual landscapes. 1
As Burghardt (2005) put it in his seminal analysis of animal play, ‘when trying to sort out the boundaries of play, one quickly gets tangled in a web of definitions, controversies, and elusive notions that slip away just when one thinks that they are grasped’ (p. xi). Huizinga (1938/1980) himself cautioned against adopting the alluring stance of deeming all human activity ‘play’, saying that such a move was ‘ancient wisdom, but it is also a little cheap’ (p. ix). Nevertheless, and fully acknowledging the slipperiness of the endeavour, let us take part of Huizinga’s definition of play as a starting point:
[A] free activity standing quite consciously outside ‘ordinary’ life as being ‘not serious,’ but at the same time absorbing the player intensely and utterly. It is an activity connected with no material interest, and no profit can be gained by it. It proceeds within its own proper boundaries of time and space according to fixed rules and in an orderly manner. It promotes the formation of social groupings which tend to surround themselves with secrecy and to stress their difference from the common world by disguise or other means. (Huizinga, 1938/1980, p. 13)
Commenting on it, Caillois (1958/2001) observed that Huizinga’s definition, ‘in which all the words are important and meaningful, is at the same time too broad and too narrow’ (p. 4). Carse (1986), in his thought-provoking and much revisitable Finite and Infinite Games, a boundary-defying work (as evidenced by its subtitle ‘A Vision of Life as Play and Possibility’), lays special stress on one of the dimensions of the definition, that of always being freely and voluntarily undertaken: ‘It is an invariable principle of all play, finite and infinite, that whoever plays, plays freely. Whoever must play, cannot play’ (p. 4, emphases in the original). Of these defining features of play highlighted by Huizinga, three shall occupy us especially in a later section — its being ‘free’, ‘outside life’ and ‘unprofitable’ — for as we shall ponder, their status as necessary traits may well be brought into question by the reality of contemporary video games and the social fabric in which they are embedded. What ultimately cannot be denied is that play is fundamental for us, and to an even greater extent than it has been for our phylogenic precursors. But what about our machines?
Early game
When did computer programs undertake the first steps on the road to becoming our playmates? In her chronicle of the dawn of AI, McCorduck (1979) allotted a full chapter to the impact of games in the early days of the enterprise, which fittingly begins by subscribing to Huizinga’s proposed taxonomic label for our species: Homo ludens. To her, it is no surprise that computers were involved in games from their earliest inception. Most of the early researchers in AI were involved with games, be it in their research or as a hobby (R. Skinner, 2016, p. 29). McCorduck ponders different explanations for why this should be the case. On the one hand, games serve as microdomains (an idea that we see again and again in the literature); they are simplified models of situations in real life, which express their essence, just like physical models imitate physical reality (p. 147). On the other hand, McCorduck considers, quoting the influential early anthology on AI edited by Feigenbaum and Feldman, ‘it provides a direct contest between man’s wit and machine’s wit’ (cited in McCorduck, 1979, p. 147). But ultimately, she concludes that the true reason is not to be found in either of those façades, but rather in that ‘games are deep in the heart of us’ (p. 146). ‘I’ve seen too many gleaming eyes to believe otherwise’ (p. 148), she tells us.
Yet the importance of games as toy models of reality is not to be disregarded. The virtues of studying processes in restricted domains as opposed to immensely complex ones such as the entire physical universe or a fully functioning human brain become apparent in the following simile offered by Douglas Hofstadter (which, it must be clarified, was drawn not in relation to games, but rather to a very special analogy-making program, Copycat, developed by him and Melanie Mitchell). This is therefore an analogy to an analogy between analogy-making programs, but I feel it is suggestive of the pros and cons of microdomains in general:
Suppose one wanted to create an exhibit explaining the nature of feline life to an intelligent alien creature made of, say, interstellar plasma or some substrate radically different from that of terrestrial life. The Copycat approach might be likened to the strategy of sending a live ant along with some commentary aimed at relating this rather simple creature to its far larger, far more complex feline cousins. The rival approach might be likened to the strategy of sending along a battery-operated stuffed animal—a cute and furry life-sized toy kitty that could meow and purr and walk. This strategy preserves the surface-level size and appearance of cats, as well as some rudimentary actions, while sacrificing faithfulness to the deep processes of life itself, whereas the previous strategy, sacrificing nearly all surface appearances, concentrates instead on conveying the abstract processes of life in a tiny example and attempts to remedy that example’s defects by explicitly describing some of what changes when you scale up the model. (Hofstadter, 1995, p. 302)
And among all AI toy models, none has been as fruitful as computer chess, the history of which is peppered with fascinating detail. Credit as the creator of the first artificial chess player could either go (depending on the stringency of our requirements to consider them as such) to Leonardo Torres y Quevedo, for his 1912 automaton, capable of playing a rook and king endgame, or to Dietrich Prinz, a colleague of Turing’s, whose feat of programming an electronic computer to play chess for the first time ‘was akin to the Wright brothers’ first short flight. He had shown that computers were not just high-speed number crunchers. A computer had played chess’ (Copeland, 2017, p. 342). However, there is another entity which, while not being a full-fledged member of the category, certainly deserves mention in any story of AI chess: Baron Von Kempelen’s Turk.
This wondrous contraption — which artfully allowed a hidden human chess master to control a fancifully clothed mannequin which moved the chess pieces — toured Europe during the late eighteenth and early nineteenth centuries (first with its creator, Von Kempelen, and later with impresario Johann Maelzel) and eventually made its way to the United States. The reason for summoning this odd character out of the history books is that the Turk is the perfect metaphor for the hidden human component often concealed in automation that is touted as independent. 2 It is no surprise, then, that Amazon called its crowdsourcing platform for ‘human intelligence tasks’ Mechanical Turk. As I have argued on the topic of creators, creatures and creativity (Musa Giuliano, 2019, pp. 116–127), there is a hidden layer of human intelligence in the seemingly clever displays of computer programs, which nevertheless can shine through the cracks when we pay more attention. The exploitation of fossilized human Big Data confronts us with the inescapable Lovelace Objection to machine originality (see du Sautoy, 2019; Turing, 1950, p. 450).
The Turk played against some of the most important figures of the age, like Benjamin Franklin and Philidor, the world’s greatest chess player in his era (Standage, 2002a), and even defeated Napoleon, who, trying to test the machine, went as far as to attempt some illegal moves during the game (Levy & Newborn, 2012). ‘Napoleon was better versed in the art of manoeuvring human kings, queens, prelates and pawns on the great chess-boards of diplomacy and battle than moving ivory chessmen on a painted table-top,’ concludes Evans (1906) in his essay The Romance of Automata (p. 135). Far more important for our purposes, however, was the encounter between the Turk and Charles Babbage, who twice challenged it and twice lost (Standage, 2002b). While he suspected that the machine’s performance was merely the trickery of a concealed human (as Edgar Allan Poe also later would 3 ), the encounter seems to have left a lasting influence on his thoughts regarding the potential mental capabilities of machines (Standage, 2002b). Competent chess-playing seemed to be a perfect benchmark of what only human-level intelligence could accomplish, which made building a genuinely autonomous chess-playing machine the enticing prospect it would remain for decades. As Babbage put it in his autobiography: ‘I endeavoured to ascertain the opinions of persons in every class of life and of all ages, whether they thought it required human reason to play games of skill. The almost constant answer was in the affirmative’ (Babbage, 1864, p. 465).
Babbage did not succeed in his ambition, and for years chess was heralded as a grail of human intelligence upon which machines would not trespass. In his scathing attack on the field, Alchemy and Artificial Intelligence, Dreyfus (1965) chose the very limited progress that machines were making on the chessboard as a clear sign of stagnation. Only a year later, his fiercest critic, Seymour Papert, arranged for Dreyfus to play against Richard Greenblatt’s MacHack program (Boden, 2006, p. 841), a match in which Papert ‘had the pleasure of [. . .] seeing him very roundly trounced’ (Papert, 1968, p. I-6). And yet, as important a criterion as human-level chess-playing was held to be, once attained, it too fell prey to what is commonly termed the AI Effect: once AI becomes able to do something, that thing is no longer thought a hallmark of intelligence. 4 The same thing would later happen with the game of Go (du Sautoy, 2019; Rosas, 2024).
But far from being only a pastime, chess has served as ‘a test-bed for ideas in Artificial Intelligence’ (Copeland, 2004, p. 562) and has even been considered the ‘model organism’ of AI (Ekbia, 2008; Ensmenger, 2011). Donald Michie, Turing’s close collaborator and a crucial evangelist for his ideas, spreading them in several AI labs and universities in the UK and North America (Copeland, 2017, p. 267), drew the analogy explicitly:
Computer chess has been described as the Drosophila melanogaster of machine intelligence. Just as Thomas Hunt Morgan and his colleagues were able to exploit the special limitations and conveniences of the Drosophila fruit fly to develop a methodology of genetic mapping, so the game of chess holds special interest for the study of the representation of human knowledge in machines. (cited in Copeland, 2004, p. 562)
Two names stand out especially among the many AI pioneers who set their sights on the problem: Claude Shannon and Alan Turing. Shannon (1950), father of information theory, 5 wrote an influential paper outlining a computing routine that would enable a modern general-purpose computer to play chess (p. 256). Shannon seemed to have been aware that games are often thought of as frivolities, for in a later write-up of his chess ideas, he gave a defense of the general utility that the insight generated in dealing with chess could provide for other areas:
This problem, of course, is of no importance in itself, but it was undertaken with a serious purpose in mind. The investigation of the chess-playing problem is intended to develop techniques that can be used for more practical applications. (Shannon, 1956, p. 2,124)
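Stated in today’s vocabulary, the routine Shannon outlined rests on two components: a static evaluation function that scores a position (he proposed weighing features such as material and mobility) and a minimax search that backs those scores up the tree of possible continuations, each side assumed to choose its best available reply. The following Python sketch illustrates only the backing-up step over a toy game tree; the tree, the names and the numbers are mine for illustration, not Shannon’s notation:

```python
# A minimal, runnable illustration of minimax, the backbone of the
# routine Shannon outlined: explore continuations to a fixed depth,
# score the resulting positions with a static evaluation function,
# and back the scores up the tree of possibilities.

def minimax(node, maximizing):
    """Back up leaf evaluations through alternating max/min levels."""
    if isinstance(node, (int, float)):  # a leaf: a static evaluation
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# A toy two-ply game tree (illustrative only): each inner list is a
# choice point, each number the static score of a final position.
game_tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(game_tree, maximizing=True))  # -> 3, the best guaranteed value
```

Everything else in a Shannon-style program, and in its brute-force descendants, is refinement of these two components: a sharper evaluation function and a deeper, faster search.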
As thorough chronicler of the history of cybernetics Ronald Kline has documented, Shannon sympathized with the main goal of attaining human-level artificial intelligence. In a letter to a former teacher, he declared that his fondest dream was ‘to someday build a machine that really thinks, learns, communicates with humans and manipulates its environment in a fairly sophisticated way’ (cited in Kline, 2011, p. 8).
Turing’s thinking about computer chess was deeply at play in his reflections on whether intelligence could be mechanized. In 1948 he had collaborated with his friend, the statistician David Champernowne, to produce the rules for a chess-playing paper machine, affectionately given the moniker Turochamp after its creators (Copeland, 2017, p. 331). Turing actually started coding a revised version of this chess engine on the Manchester computer but never completed it. Interestingly, both Turing’s and Shannon’s chess engines have in recent years been instantiated 6 in actual software (Copeland, 2017, p. 344) and made to compete against one another, the result being a tie after 10 games (each program winning once, with the remaining eight ending in draws). Copeland (2017), unabashed torchbearer of Turing that he is, however, is compelled to add that ‘given that repetitive moves often cost the Turing Engine its win, it seems probable that Turing would have beaten Shannon hands down had [a repetition detection] rule been in place’ (p. 345).
We can further appreciate the theoretical boon that game-playing machines bestowed when we note that the ideas that would later lead Turing to flesh out his now ubiquitous test in Computing Machinery and Intelligence (1950) 7 had already appeared, in seed form, in connection with his discussion of chess-playing:
The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. [. . .] It is possible to do a little experiment on these lines, even at the present stage of knowledge. It is not difficult to devise a paper machine [a human operator precisely following the rules of an algorithm] which will play a not very bad game of chess. Now get three men as subjects for the experiment A, B, C. A and C are to be rather poor chess players, B is the operator who works the paper machine. (In order that he should be able to work it fairly fast it is advisable that he be both mathematician and chess player.) Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing. (Turing, 1948, p. 431)
Sadly, this article — which Copeland (2004) considers ‘effectively the first manifesto of AI’ (p. 355) — never saw the light of day, owing quite possibly to its negative reception on the part of Turing’s superior at the National Physical Laboratory, where he had started working after the war. Oddly enough, the man in question was Charles Galton Darwin, grandson of Charles Darwin and godson of Francis Galton. The ‘headmasterly’ C. G. Darwin, as Copeland (2004) puts it, deemed Turing’s manifesto a ‘schoolboy’s essay’ (p. 401) and argued against publication. Despite being connected both by nature and nurture to two of the most vivaciously inquiring spirits England ever produced, he himself failed to display the flight of scientific imagination needed to value the far-reaching implications of Turing’s precocious vision. In his gloomy speculative treatise forecasting ‘the next million years’ of humanity, Darwin (1952) barely spares a word for the ‘new high-speed calculating machines’, relegating them to the possible role of uncannily accurate predictors of the consequences of competing policies, a task which they could undertake ‘with a completeness that is far beyond anything that the human mind can aspire to achieve directly’ (p. 55).
Unbeatable
But then again, we should perhaps not be too harsh on Darwin for failing to know what was to come, since he is far from alone in that all too human foible. 8
It is with a gasping pang of mute dread that we must oftentimes confront the archival remains of the accounts of the future that the past dreamt up. Szymborska (1981, p. 121) has captured the feeling superbly in her poem The Letters of the Dead (as beautifully translated by Magnus J. Krynski and Robert A. Maguire):
We read the letters of the dead like helpless gods,
yet gods for all that, since we know the dates to come.
. . .
We silently observe their pawns on the chessboard,
except they’re now moved three squares further.
Everything they foresaw came out quite different.
We get a similar sensation when reading in which directions Artificial Intelligence pioneers thought the field would evolve. Here is Donald Michie in 1972 with his predictions on the future (at the time) of computer chess:
Hence if the knowledge of the chess-master were built into a computer program we would see not master chess, but something very much stronger. As with other sectors of machine intelligence, rich rewards await even partial solutions to the representation problem. To capture in a formal descriptive scheme the game’s delicate structure—it is here that future progress lies, rather than in nanosecond access times, parallel processing, or mega-mega-bit memories. An interesting possibility which arises from the “brute force” capabilities of contemporary chess programs is the introduction of a new brand of “consultation chess” where the partnership is between man and machine. The human player would use the program to do extensive and tricky forward analyses of variations selected by his own chess knowledge and intuition, and to check out proposed lines of play for hidden flaws. (Michie, 1972, p. 332)
However, whether we like it or not, in the end it was brute force indeed that did the trick. We must here skip over the successive improvements that later decades brought, during which confident predictions time and again had to be readjusted until, finally, the oft-rescheduled promises of computer chess found their most iconic fulfillment in the victory of IBM’s Deep Blue over Garry Kasparov in 1997: a computer program had beaten the reigning world champion. Unfortunately, the feat shed little light on the actual mental processes that underlie how humans think when engaged in the practice of chess, as the original pursuers of machine chess had hoped it would:
The huge improvement in computer chess since Turing’s day owes much more to advances in hardware engineering than to advances in AI. Massive increases in cpu speed and memory have meant that successive generations of machines have been able to examine increasingly many possible moves. Turing’s expectation was that chess-programming would contribute to the study of how human beings think. In fact, little or nothing about human thought processes appears to have been learned from the series of projects that culminated in Deep Blue. (Copeland, 2004, p. 566)
Some, however, hope to learn ‘what computer-generated gameplay suggests about how brains operate’ in the workings of the currently reigning and vastly superior AlphaZero (Purves, 2019, p. 14,785). This deep neural network would show that ‘algorithmic computation (executing a series of specified steps)’ (p. 14,787) must be replaced as an analogy for how humans think in favour of ‘connectivity generated by trial-and-error learning over evolutionary and individual time’ (p. 14,786). It is important to mention (a fact that Purves acknowledges, but to an important extent downplays) that AlphaZero’s approach of trial and error is not fundamentally new. Of course, it is not strange for past technological developments to be dropped or minimized in retellings, 9 but the crux of what makes AlphaZero tick had already been thought of and partially developed in the 1950s; we just lacked the hardware for it to work on the scale it now does:
The learning procedure that Turing proposes in ‘Chess’ involves the machine trying out variations in its method of play—e.g. varying the numerical values that are assigned to the various pieces. The machine adopts any variation that leads to more satisfactory results. This procedure is an early example of a genetic algorithm. (Copeland, 2004, p. 565) 10
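It is worth pausing on how simple the kernel of that procedure is. Below is a minimal Python sketch of the scheme Copeland describes: perturb the numerical piece values, keep any variation that plays more satisfactorily. Strictly speaking this is a hill-climbing loop, though it is recognizably the seed of an evolutionary approach. The play_match function is a hypothetical stand-in for ‘play some test games and return a success score’; none of the specifics below are Turing’s:

```python
import random

def mutate(piece_values, step=0.5):
    """Randomly nudge the numerical value assigned to one piece."""
    variant = dict(piece_values)
    piece = random.choice(list(variant))
    variant[piece] += random.uniform(-step, step)
    return variant

def improve(piece_values, play_match, generations=100):
    """Try variations in the method of play and adopt any variation
    that leads to more satisfactory results."""
    best_score = play_match(piece_values)
    for _ in range(generations):
        candidate = mutate(piece_values)
        score = play_match(candidate)
        if score > best_score:  # keep only more satisfactory variations
            piece_values, best_score = candidate, score
    return piece_values

# e.g. improve({'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}, play_match)
```

What the 1950s lacked was not the idea but the cycles: run this loop over millions of candidate variations, against millions of test games, and it begins to resemble the trial-and-error learning now celebrated in AlphaZero.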
More important still is the fact that there is no such clear-cut distinction between ‘rule-based computation’ (Purves, 2019, p. 14,787) and ‘learning on a wholly empirical (trial and error) basis’ (p. 14,786). As Esteban Hurtado has clarified in his work on the limitations of computer models for human thought, ‘the usual way of implementing a neural network is by means of a programing language that uses the same old rigid formal rules. So, actually, neural networks, as a theoretical mind-modeling device, do not add any new capability’ (Hurtado, 2017, p. 3).
But even if AlphaZero will not show us a path to better understand human thought, it may be on the way to developing its own very distinct spin on it. Matthew Sadler and Natasha Regan, authors of the most complete book on AlphaZero’s chess-playing, explain that it learned ‘in a unique manner by playing millions of lightning-fast games against itself. It was given no human knowledge about established chess strategy. As a result, AlphaZero was free to develop its own chess techniques and style’ (Sadler & Regan, 2019, p. 434). 11
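Stripped of every engineering detail, the regime they describe fits in a few lines. The following Python sketch is a drastic simplification offered for illustration only; the real system couples its network with Monte Carlo tree search and much else besides, and play_self_game and the network object below are hypothetical stand-ins, not DeepMind’s API:

```python
# A drastically simplified sketch of training by self-play: the network
# plays fast games against itself, and the outcomes of those games are
# its only teacher; no human games or hand-coded strategy enter the loop.

def train_by_self_play(network, play_self_game, iterations, games_per_iter):
    for _ in range(iterations):
        experience = []
        for _ in range(games_per_iter):
            # The network plays both sides of one lightning-fast game.
            positions, result = play_self_game(network)
            # Label every position visited with the game's final outcome.
            experience.extend((pos, result) for pos in positions)
        # Nudge the network toward predicting its own games' outcomes.
        network.update(experience)
    return network
```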
According to AI researcher, literary critic and chess player Manny Rayner (2019), AlphaZero is more than a very strong chess engine, it is ‘a non-human agent who, on its own and in less than a day, has discovered some extremely deep and interesting things about a game that people have been playing for over a thousand years’ (¶ 2). And just like Shannon and others envisaged and as the Drosophila metaphor indicates, it is clear that for Google’s DeepMind, the team behind AlphaZero, chess is considered a gateway to bigger things:
[I]t would be easy to forget that AlphaZero is about more than just chess. AlphaZero is a proof of concept, demonstrating AI’s capacity to crack complex problems without the use of human knowledge of strategy. In other words, chess is the testing ground. (Sadler & Regan, 2019, p. 434)
Having been left in the dust by the new generation of game-playing machines, what hope remains for the human species? Kasparov himself, once viewed as the champion upon whom the pride of humankind rested, points the way by urging us not to fall prey to pessimism but rather to appreciate the possibilities for freedom and creativity that the close collaboration with our machines will open up: ‘I do not believe in fates beyond our control. Nothing is decided. None of us are spectators. The game is under way and we are all on the board’ (Kasparov & Greengard, 2017, p. 136).
In their study of the future of work in the age of automation, Erik Brynjolfsson and Andrew McAfee, echoing Michie’s earlier-stated dream, concurred: ‘the best chess player on the planet today is not a computer. Nor is it a human. The best chess player is a team of humans using computers’ (Brynjolfsson & McAfee, 2011, p. 38). This leads us to the next section, where I present the optimistic view that our future relationships with machines may end up being cooperative rather than competitive or subservient.
Human-machine symbiosis
We have let ourselves become enchanted by big data only because we exoticize technology. We’re impressed with small feats accomplished by computers alone, but we ignore big achievements from complementarity because the human contribution makes them less uncanny. Watson, Deep Blue, and ever-better machine learning algorithms are cool. But the most valuable companies in the future won’t ask what problems can be solved with computers alone. Instead, they’ll ask: how can computers help humans solve hard problems? (Thiel & Masters, 2014, p. 83)
One of the proposals to deal with the risks of over-reliance on (if not complete annihilation at the hands of) machines is that of preemptively merging with them, so that as the capacities of standalone AIs increase, so do ours to keep them in check. The possibility is heralded in tech articles with headlines such as Humans With Amplified Intelligence Could Be More Powerful Than AI, which goes on to claim that:
With much of our attention focused [sic] the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful—and just as dangerous—as AI. [. . .] Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. (Dvorsky, 2013, ¶ 1, emphases in the original)
The idea gained traction with Elon Musk founding Neuralink, a company aimed at improving brain-computer interfaces in order to let humans ‘achieve a sort of symbiosis with artificial intelligence’ (Etherington, 2019, ¶ 2), but it has a long history. William Ross Ashby, creator of the Homeostat — which in 1949 was referred to in the pages of Time Magazine as ‘the thinking machine’ (Ramage & Shipp, 2009, p. 46) and ‘the closest thing to a synthetic brain so far designed by man’ (J. Ashby, 2008, ¶ 30) — was hinting in this direction in his Introduction to Cybernetics when talking of the amplification of intellectual power, even if he did so in less than a page, before hurriedly ending the book:
Now “problem solving” is largely, perhaps entirely, a matter of appropriate selection [. . .] Thus it is not impossible that what is commonly referred to as “intellectual power” may be equivalent to “power of appropriate selection” [. . .] If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately. But this book must stop; these are not matters for an Introduction. (W. R. Ashby, 1957, p. 272)
Three years later, J. C. R. Licklider published a paper in which he laid down the possibility for Man-Computer Symbiosis. But it is interesting to note that he expressed clear doubts as to whether in the long run these hybrid systems would be able to outperform a new generation of fully wetware-independent machines:
Man-computer symbiosis is probably not the ultimate paradigm for complex technological systems. It seems entirely possible that, in due course, electronic or chemical “machines” will outdo the human brain in most of the functions we now consider exclusively within its province. (Licklider, 1960, p. 4)
But like Kasparov, many others have seen the enormous potential in the joint operation of man and machine. Frederick Brooks even goes as far as to say that this should have been the actual goal all along and that the quest for Artificial Intelligence was misdirected from the start, in a way that set back the advance of computer science by sending researchers after a red herring:
It is time to recognize that the original goals of AI were not merely extremely difficult, they were goals that, although glamorous and motivating, sent the discipline off in the wrong direction. If indeed our objective is to build computer systems that solve very challenging problems, my thesis is that [. . .] intelligence amplifying systems can, at any given level of available systems technology, beat AI systems. That is, a machine and a mind can beat a mind-imitating machine working by itself. (Brooks, 1996, p. 64, emphases in the original)
Even Dreyfus (1965), one of the harshest critics of Artificial Intelligence, seems not to have been against the scenario of man-machine integration and deems complementarity a more fruitful pursuit than automation, approvingly citing researchers advocating that ‘work be done on systems that promote a symbiosis between computers and human beings’ (p. 83): ‘Man and computer is capable of accomplishing things that neither of them can do alone’ (Rosenblith, as cited in Dreyfus, 1965, p. 83).
Burkhead (1999) argues in the same vein that ‘whatever capability AI has at any given time, humans assisted by computers will have already reached that point and moved ahead’ (p. 3). According to him, the advantage that human-machine teams would hold over standalone machines is that the leap in machine intelligence that would take them from a human to a superhuman level cannot be magical: machines will hit the same epistemological ceiling that we have and will have to overcome it much as we would. There are no bootstrapping shortcuts, and, furthermore, whatever a machine operating by itself may be capable of accomplishing in this regard, a capable human aided by a machine will be able to do first, and better.
Such a possibility is no longer a speculation about the future. Many steps in that direction have already been taken and the road to super-intelligence via the aid of integrated machines is one our soles are acutely familiar with: ‘External hardware and software supports now routinely give human beings effective cognitive abilities that in many respects far outstrip those of our biological brains’ (Bostrom & Sandberg, 2009, p. 311). With the relatively recent irruption upon the stage of ChatGPT with its attendant retinue of plug-ins, spinoffs and competitors, and their increasing encroachment into seemingly every conceivable inch of our lives, the reality and commonplace allure of this cognitive boon — at the unavoidable price of our over-reliance on machines — is harder than ever to deny. And why would we even bother to, when the genie at our fingertips is just so very handy and docile — but oh how Norbert Wiener (1964) tried to warn us about genies. . .
We are already living examples that the preliminaries that could lead in the direction of full merging have come to pass; the gadgets that we incorporate into our daily lives have endowed us with the possibility of doing things, communicating and accessing volumes of information in ways unthinkable to our ancestors:
Computers are extensions of our minds, [they] are more than repositories for our memories and plans; they stand alone. They are half tool, half entity. [. . .] Each new technology that humans adopt has the effect of amplifying our actions. Each new technology is a barrier removed between us and our ultimate freedom. As knives amplify and extend teeth and fingernails, as pliers amplify fingers, so do computers amplify our brains. Our identification with our computers marks the beginning of an incremental merging process whose end point will be a symbiosis of sorts. (Dewdney, 1998, p. 99)
This idea of enhancing ourselves by drastically altering who we are and how we interact with the world via the integration of semi-intelligent machines could just be an extreme expression of a basic feature that in fact defines the very essence of the relationship between humans and their world, that is, those things that lie outside the barrier of their skin. Building on his and philosopher David Chalmers’s previous idea of the extended mind (Clark & Chalmers, 1998), Andy Clark claims that we, as natural-born cyborgs, have an innate propensity to establish very close relationships with nonbiological resources and that the distinction between world and person is extremely — and increasingly more so — difficult to establish:
The cyborg is a potent cultural icon of the late twentieth century. It conjures images of human-machine hybrids and the physical merging of flesh and electronic circuitry. My goal is to hijack that image and to reshape it, revealing it as a disguised vision of (oddly) our own biological nature. For what is special about human brains, and what best explains the distinctive features of human intelligence, is precisely their ability to enter into deep and complex relationships with nonbiological constructs, props, and aids. (Clark, 2003, p. 5)

Human thought and reason is born out of looping interactions between material brains, material bodies, and complex cultural and technological environments. We create these supportive environments, but they create us too. We exist, as the thinking things we are, only thanks to a baffling dance of brains, bodies, and cultural and technological scaffolding. (Clark, 2003, p. 11)
No other gadget that we own shows this more vividly than our smartphones (which is a weird and outdated name for our portable personal pocket computers, as future historians will probably agree). Smartphones with machine intelligence aim to be ‘the part of your brain you’re not born with’ (Gershgorn, 2019, ¶ 59). This last statement is day by day seeming more literal than metaphorical, for smartphones highlight the tension in our relationship with technology between complementary enrichment and subservient dependency, which — after this long detour — brings us back into the province of games.
Gamified us
A common misconception of folk paleontology is that all dinosaurs became extinct. While it is true that those huge lumbering beasts that delight children and toy manufacturers alike all over the world no longer roam our plains nor traverse the watery realms of swamp and riverbed, dinosaurs indeed surround us everywhere and not a day goes by without our meeting them. We simply call them ‘birds’. A Tyrannosaurus rex is more akin to a chicken than to a Stegosaurus (other than in their sizes, of course, but most assuredly when it comes to their morphology and the timespan separating their appearance upon the evolutionary stage), 12 and anyone who takes a minute to look a peacock in the eye, disregarding for a second the magnificent glimmer of its coat, will be able to attest that by our side the former rulers of our planet still linger. In a similar legend, behaviourists became officially extinct circa the late 1960s after Noam Chomsky arrived to save the day, dismounted from his generative horse and slew that foul beast, Verbal Behavior. 13
But the pigeons merrily strutting around should not only remind us of the stubborn subsistence of the scaly forebears they embody, but also that the insights of applied behaviourism, to which they so devotedly contributed, are all around us too: we encounter them first and foremost in the multibillion-dollar video game industry. Such a connection has been explicitly drawn out by Linehan et al. (2014, p. 82) in their chapter Gamification as Behavioral Psychology, where they explain how ‘the effects of characteristic game design elements (i.e., points, badges, leaderboards, time constraints, clear goals, challenge) can be explained through principles of behavior investigated and understood by behavioral psychologists for decades (see Skinner 1974).’
While merely four years ago 2.5 billion people — that is, nearly a third of all human beings — were playing video games (WePC, 2019), that number is now nearer 3.2 billion (Wainwright, 2023; Shewale, 2023), an increase whose implications I will let the reader extrapolate. The video game industry’s earnings have long surpassed those of the movie and music industries combined (League of Professional eSports, 2018) and continue to do so (Divers, 2023), and while they were expected to surpass 90 billion USD by 2020 (WePC, 2019), they actually nearly doubled that figure (WePC, 2023). With that kind of money on the table, the slightest tweak that may help capture and retain players becomes invaluable, which is why companies are increasingly relying on psychologists in order to assist with game design. In the words of an article luring psychology grad students into the video game production world, ‘companies that design and develop video games are increasingly turning to psychologists for help analyzing data and making sure their products are as effective as they can be. Some psychologists are even launching consulting businesses to assist game manufacturers’ (Clay, 2012, ¶ 2).
As with most things, there are shades of white and black in this relationship between psychology and video games, ranging from their immense potential to facilitate learning in an educational setting (Rosas et al., 2003) and the lofty goals of Games User Research ‘to improve player experience in games’ (Nacke, 2018, p. 281) to the more harrowing depths of employing behaviouristic reward schedules in order to reinforce video game play so as to lead to an addictive relationship with them: ‘Like gambling on slot machines, video games reinforce correct or skilful play on variable and fixed ratio reinforcement schedules’ (King et al., 2009, p. 100). In an environment that is increasingly competitive, a strategy that game companies can greatly benefit from if they want to remain profitable is that of exploiting the mental makeup and biases of their users in order to make games irresistible (Søraker, 2016). With haunting vividness, creative writing teacher extraordinaire Jerome Stern describes the Sisyphean futility of being thus inescapably hooked:
[M]y eyes stare intensely and my brain cells sizzle and fry. I am playing a computer game [. . .] This hopelessly pointless game is slurping up thousands of life-seconds like a voracious anteater in a giant colony. My fingers dance on buttons and I can feel my time on earth being shortened, my vitality being sucked, my head spinning. I am using these fragile moments of our brief vanishing years, these precious minutes of lucidity that crumble sooner than we think, not to answer human correspondence, not to record my thoughts, not to do good in the world, but to press cd: GAME. GAME, and squawk goes the screen and little figures bounce out, pointlessly jump, and more moments of my life gasp like guppies and flop over gone and I can’t help it. I can’t stop. (Stern, 1997, p. 48)
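The mechanism King et al. (2009) invoke is disarmingly simple to state in code. Below is a minimal Python illustration of a variable-ratio schedule, the arrangement behavioural psychology finds most resistant to extinction; the numbers are mine for illustration and not taken from any actual game:

```python
import random

# A variable-ratio schedule: a reward arrives after an unpredictable
# number of actions that only on average equals the ratio (here 5).
# The unpredictability is what makes slot machines, and loot drops,
# so hard to walk away from.

def variable_ratio_rewards(n_actions, mean_ratio=5):
    """Return the indices of the actions that trigger a reward."""
    rewarded = []
    count, threshold = 0, random.randint(1, 2 * mean_ratio - 1)
    for action in range(n_actions):
        count += 1
        if count >= threshold:
            rewarded.append(action)
            count, threshold = 0, random.randint(1, 2 * mean_ratio - 1)
    return rewarded

print(variable_ratio_rewards(50))  # e.g. [4, 7, 15, 16, 24, 31, ...]
```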
However, much more worrisome is the fact that not only is the population at large far more reliant than ever on video games as a pastime but that in an increasingly technological world, the boundaries between play and non-play grow ever more fluid, and life itself is becoming increasingly ‘gamified’, both explicitly and tacitly. As with any proprietary term of high potential profitability, there is ample contention as to what gamification entails precisely, 14 and while a nuanced survey of such disputes would be fruitful, it falls outside the scope of this essay. Yu-Kai Chou (2016) highlights its potential for good when defining it as ‘the craft of deriving fun and engaging elements found typically in games and thoughtfully applying them to real-world or productive activities’ (p. 8) while surveillance scholar Jennifer Whitson (2013) calls attention to a more troublesome aspect of the practice, for in her definition, gamification ‘applies playful frames to non-play spaces, leveraging surveillance to evoke behaviour change’ (p. 164). In an astute simile, Fagone (2011) adds that ‘gamification advocates—like religious figures—seek to superimpose an invisible reward system on top of the world’ (¶ 6). The harshest criticism of gamification comes from game designer Ian Bogost, who, channelling Harry Frankfurt’s (2005) poignant rhetoric and illuminating analysis from On Bullshit, accuses its proponents of being bullshit peddlers trying to lure business executives with a sexy buzzword that ends up being nothing but a front for very old practices (Bogost, 2014). Having shared these words of caution and bearing such valid concerns in mind, I must acknowledge that throughout this essay I frequently use the term in a far looser and more encompassing way than its (admittedly controversy-fraught) technical usage, and that in order to distinguish both uses — as always in human affairs — context is key.
In a striking example of the trend, Amazon recently rolled out a video game–like interface that reflects the progress that warehouse workers are making at their tasks, while other companies like Uber, Delta Air Lines and Target have employed gamification in turn to affect their own metrics (Bensinger, 2019). Given reports that have emerged of the poor working conditions at Amazon warehouses, the retail giant would seem to be offering prime delivery of validation for Bogost’s (2011) critique that proposes substituting the term ‘gamification’ with ‘exploitationware’, owing to its replacing ‘real, functional, two-way relationships with dysfunctional perversions of relationships. Organizations ask for loyalty, but they reciprocate that loyalty with shams, counterfeit incentives that neither provide value nor require investment’ (¶ 57). But why should companies stop at their employees, when there is so much profit to be reaped by gamifying consumers too? Loyalty programmes can be seen as a form of proto-gamification, and with the ongoing sophistication of technology, we should expect them to become increasingly pervasive, with, for instance, Netflix or Amazon framing certain landmarks in book-buying or episode-watching as epic quests, appropriately rewarded by a badge or some other such sign. And that gamified nature may be already embedded in our relationship with the technological tools we employ the most:
Jamie Madigan, a psychologist who writes about video games, thinks the arrival of a notification might be similar to the accrual of virtual loot. Email, in other words, might not be just a task, but a game. “Designers of apps for the Web, phones, and other devices figured this out early on,” he says. “In the case of our phones, we see, hear, or feel a notification alert show up, we open the app, and we are rewarded with something we like: a message from a friend, a like, an upvote, or whatever.” (Pinsker, 2015, ¶ 8)
And this reinforcing quality of the technology we use daily is certainly a feature, not a bug. As Will Chamberlain (2019) puts it when discussing legislation proposed to tackle the issue, ‘the problem isn’t just that social media use can be addictive; the problem is that it’s designed to be addictive’ (¶ 10). Gamified aspects of ubiquitous technology can be even more subtle. ‘Not a few futurologists envisage a network of computer users tired, apparently, of violent or ‘erototronic’ video games engaging, instead, in political debate; a hi-tech resurrection, on a grand scale, of the participatory democracy of the Athenian agora,’ claimed philosopher David E. Cooper (1995, p. 10), presciently prefiguring Twitter 11 years before its creation. The metrics (retweets, follower count, etc.) have a gamified flavour that warrants reading the platform as a political video game of sorts. And by the same token, a dating app such as Tinder can also be better understood as a video game, where for many users the ‘match’ is an end in itself as an ego boost, regardless of whether any subsequent meeting-up in physical space actually occurs.
But while the Skinnerian conditioning that we receive from our devices may fly completely under the radar for some of us, others pursue it of their own accord. We see it in the case of Piotr Wozniak, developer of the memory-aiding program SuperMemo, by which he rules his life (Wolf, 2008). The program stores every bit of information and every new fact that Wozniak judges important or worth preserving, and, in a manner fully reminiscent of Ebbinghaus’s (1885) theories of memory, then presents them again and again at precisely spaced intervals, until they have been fully assimilated. Wozniak takes his reliance on the program to an extreme degree and turns over the administration of his life to his personally designed computer system. Such decisions as what to read, what to re-read and when, whom to see and whom to reply to are routinely decided by the software that he has devoted most of his life to developing and perfecting:
When he entrusts his mental life to a machine, it is not to throw off the burden of thought but to make his mind more swift. Extreme knowledge is not something for which he programs a computer but for which his computer is programming him. (Wolf, 2008, p. 10)
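The principle at the heart of such a system is, at bottom, a short loop. Here is a minimal Python sketch of the spaced-repetition idea behind programs like SuperMemo, a simplification for illustration and not Wozniak’s actual algorithm: every successful recall pushes the next review further into the future, tracking the flattening of the Ebbinghaus forgetting curve, while a failed recall resets the item:

```python
# Spaced repetition in miniature: intervals between reviews grow with
# each successful recall; a forgotten item starts over from one day.

def next_interval(previous_days, recalled, growth=2.5):
    """Days to wait before presenting an item again."""
    if not recalled:
        return 1  # forgotten: see it again tomorrow and start over
    return 1 if previous_days == 0 else round(previous_days * growth)

# Five successful recalls in a row space the reviews out:
interval = 0
for _ in range(5):
    interval = next_interval(interval, recalled=True)
    print(interval, end=' ')  # 1 2 5 12 30
```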
Wozniak’s example is particularly striking for the usual bonds between creator and creature are thrown into a loop, with the programmer making his machine and then being remade by it. But however extreme the case of Wozniak may seem to us, the fact is that the infrastructure is set in place for such kinds of relationships between humans and programs to be far more common. Here is Yuval Noah Harari, with a forecast worth considering, if not for the forecaster’s sapience, at the very least owing to the widespread attention and success with which the book in which it appears, Homo Deus: A Brief History of Tomorrow, has been met:
Companies such as Mindojo are developing interactive algorithms that not only teach me maths, physics and history, but also simultaneously study me and get to know exactly who I am. Digital teachers will closely monitor every answer I give, and how long it took me to give it. Over time, they will discern my unique weaknesses as well as my strengths. (Harari, 2016a, p. 163)
This harbinger of our willingness to hand over the reins of our mental development to algorithms brings to mind Martin Heidegger’s words of caution to the effect that what is at stake in our dealings with machines is not so much our worldly hegemony but ourselves: ‘The threat to man does not come in the first instance from the potentially lethal machines and apparatus of technology. The actual threat has already affected man in his essence’ (Heidegger, 1977b, p. 28). Of course, stressing the importance of Heidegger’s admonition should not in the least lead us to disregard the true threat represented by the ‘potentially lethal machines and apparatus of technology’ (as I’ve mentioned, you can check Musa Giuliano, 2020, for my fuller treatment of such risks) but rather to not lose sight that, as pointed out by concerned contemporary writers on technology, ‘as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence’ (Carr, 2008, ¶ 37).
But if there is one arena in which the risks to humankind’s survival, the risks to the survival of its humanity and the feedback loops between games and technology all come into play, it is in modern military warfare. ‘War games have been serious business for military leaders over the years,’ declares media researcher and game developer Casey O’Donnell (2014, p. 351). The very intimate relationship between video games and armed conflict is well documented (see Mead, 2013) and expresses itself in several ways. A noteworthy example is America’s Army, a first-person shooter developed by the US Army that attempts to portray combat situations more realistically than other franchises and is intended mainly as a recruitment tool (Allen, 2014). But America’s Army is far from the only video game that soldiers will be playing: ‘United States troops stationed overseas [. . .] dedicate so many hours a week to burnishing their Halo 3 in-game service record that earning virtual combat medals is widely known as the most popular activity for off-duty soldiers’ (McGonigal, 2010, p. 8). Nicole Capezza, extending important work by Jaan Valsiner, draws a crucial implication of the video game–like ethos of contemporary warfare from a cultural psychology standpoint by means of the concept of distancing:
During wartime soldiers often use distancing mechanisms when deciding whether or not to shoot at an “enemy” soldier. New mechanisms for psychological distancing are making these decisions easier. Night-vision or thermal imagery converts the “enemy” soldier into, “an inhuman green blob.” This technology and the distancing process have been referred to as “Nintendo warfare.” (Capezza, 2003, ¶ 22)
The fact that much killing can now be conducted with an added layer of detachment (i.e., via piloting drones remotely) makes this an even more worrisome reality. There is nevertheless some bitter consolation to be had in the fact that trends point to an increasing automation of lethal weapons, with a push for drones being able to employ lethal force without human oversight. This outcome is so concerning that in 2015 many of the world’s leading AI researchers and technologists signed an open letter urging authorities not to start an AI arms race by the creation and deployment of autonomous weapons (Future of Life Institute, 2015).
Endgame
It is now time to return to Artificial Intelligence to tie up what we have been discussing with our initially proffered suspicions that the gamified nature of our technological milieu may eventually usher in a future in which, much like in the case of Piotr Wozniak, it is our machines who create the games in which we are subsumed, a future consisting of an infantilized humankind being watched over by AIs.
Jane McGonigal, a cheerful, thoughtful and well-meaning evangelist for the positive power of gameful design, begins her largely optimistic account of the social future and transformative potential of video games, Reality Is Broken: Why Games Make Us Better and How They Can Change the World, with this passage from Edward Castronova’s Exodus to the Virtual World:
Anyone who sees a hurricane coming should warn others. I see a hurricane coming. Over the next generation or two, ever larger numbers of people, hundreds of millions, will become immersed in virtual worlds and online games. While we are playing, things we used to do on the outside, in “reality,” won’t be happening anymore, or won’t be happening in the same way. You can’t pull millions of person-hours out of a society without creating an atmospheric-level event. (cited in McGonigal, 2010, p. 8)
I hope the foregoing discussion has lent credence to this possibility and expect the continuous reporting on the improvement of virtual reality technology to do as much. A crucial question, however, is: Whose virtual worlds? Whose online games? Earlier, I cited cybernetician W. Grey Walter (1961) on the species-wide developmental impact of play, but the follow-up to his comment on the importance of games and play for our civilization is equally worth taking into account, if not more so: ‘Perhaps the most ominous feature of mechanized civilization is that the ludicrous devices demanded for entertainment do not lend themselves to two-way operation’ (p. 225). That is, the means of distraction are handed top-down, and ultimately, consumers have very little input in their design. That design rests in the hands of algorithms initially rolled out by the state and by corporations, but liable, in time, to cut such ties with their creators.
Should we cruise along our current path, we may soon be facing a scenario similar to that of the Merovingian rois fainéants, or do-nothing kings, who, with little protest and in pursuit of their own forms of entertainment, gave up the administration of their affairs to their Carolingian mayors of the palace. A century and a half ahead of his time, Samuel Butler already outlined how such a gradual shift might unfold, in a manner reminiscent of the proverbial demise of the slowly boiling frog:
The power of custom is enormous, and so gradual will be the change, that man’s sense of what is due to himself will be at no time rudely shocked; our bondage will steal upon us noiselessly and by imperceptible approaches: nor will there ever be such a clashing of desires between man and the machines as will lead to an encounter between them.
(Butler, 1872/2014, p. 81)
The looming threat having now drawn considerably closer, we hear that very concern, that the shift will occur in steps so gradual as to be functionally imperceptible until it is truly too late, echoed in the writings of Theodore Kaczynski, who sought to force attention onto the menace he perceived in the rising technologization of society by means both textual and paratextual. Here is part of proposition 173 of his manifesto, which The Washington Post published under coercion in 1995:
What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. (Kaczynski, 2010, p. 77)
This gradual and nigh on imperceptible but nonetheless relentless submission to an alien will is the cause of such anxiety (for a deep and thorough analysis of AI-related anxieties, see Rodríguez, 2024) that fiction and entertainment, that wide canvas where society at times unwittingly paints its fears and worries for future generations to discern, are increasingly incorporating it. It would prove a daunting task to attempt a complete list of such narratives; in fact, I would not be surprised if, by the time this article sees print, a handful of new impactful ones had cropped up. So let us mention only two, chosen for their poignancy rather than their widespread popularity.
In Be More Chill, Joe Iconis (2019) set Ned Vizzini’s (2004) tale of tech-enhanced teenage angst to superb music to tell of the rise and fall of a geeky modern-day Icarus who, desperate to increase his social standing, obtains and installs a shady cognitive-enhancement device in order to become a cooler, better version of himself. As Gerard Canonico (Iconis, 2019) masterfully explains while landing each note:
The quantum computer in the pill
Will travel through your blood until
It implants in your brain, and it tells you what to do
It tells you what to do
It’s preprogrammed
It’s amazing
Speaks to you, directly
You behave as it’s appraising
Helps you act correctly
Helps you to be cool!
It helps you rule!
Dealing with the same theme in a very different format, psychiatrist and public intellectual par excellence Scott Alexander (2012) penned a now hard-to-find fable much in the style of the Arabian Nights, in which a ‘whispering earring’ offers helpful advice to its wearer and hits the spot each time, since ‘it always tells you what will make you happiest’ (¶ 4). So useful is the little device that ‘[t]here are no recorded cases of a wearer regretting following the earring’s advice, and there are no recorded cases of a wearer not regretting disobeying the earring. The earring is always right’ (¶ 5, emphasis in the original). The wearers go on to lead extraordinarily successful lives, while the reader is left to ponder to whom that success actually belongs. . .
And that is precisely the crux of the matter in this seeming partnership between us and our tech: just how many of Theseus’s old planks are actually still aboard? When does this helpful autocomplete or predictive text of our very existence cease to be an instantiation of the zone of proximal development (Vygotsky, 1962), the point we would have reached eventually given enough time, resources and effort, and become instead simply the script we follow by automatic reflex? As we might attest merely from looking around at the relaxed and confident attitude with which so many people have chosen to increasingly (over)rely on the Faustian ‘gifts of fortune’ brought by ChatGPT and its brethren, the Devil is a prolific and consummate frog-boiler, and, not being short on time, can afford a little patience as to just how gradually he lifts the curtain barring us from the shocked realization of the third-act reveal.
But the fact that we would be able to hand command of our lives over to machines should not be altogether surprising, given that we already have in place precisely such a blueprint of dependence, albeit to a different owner. There are uncanny resemblances between the kind of technological world we have been describing as partially in existence and the far-sighted flight of speculation that Alexis de Tocqueville encoded in the 1840 second volume of Democracy in America, which philosopher Anthony O’Hear (1995, p. 158) credits as ‘the most accurate portrait of our age’:
Above this race of man stands an immense and tutelary power, which takes it upon itself alone to secure their gratifications and to watch over their fate. That power is absolute, minute, regular, provident and mild. It would be like the authority of a parent if, like that authority, its object was to prepare men for manhood; but it seeks, on the contrary, to keep them in perpetual childhood: it is well content that people should rejoice provided that they think of nothing but rejoicing. . . (cited in O’Hear, 1995, p. 158)
O’Hear (1995) then goes on to make the parallels between de Tocqueville’s anticipation and our current technologically infused world all the more explicit by saying that ‘technology infantilizes, encouraging people to be satisfied with the material delights it makes so easy, and to reduce our sense of freedom and democracy to that of chosing [sic] among the delights and ‘life-styles’ they make [sic] possible’ (p. 158). But frankly, and as valuable as his analysis truly is, he needn’t even have bothered, for the similarities between what de Tocqueville foresaw and the gamified ecosystems we have been discussing are so on the nose that stressing them would seem no more than belabouring the point:
So I think that the type of oppression by which democratic peoples are threatened will resemble nothing of what preceded it in the world; our contemporaries cannot find the image of it in their memories. I seek in vain myself for an expression that exactly reproduces the idea that I am forming of it and includes it; [<the thing that I want to speak about is new, and men have not yet created the expression which must portray it.>] the old words of despotism and of tyranny do not work. The thing is new, so I must try to define it, since I cannot name it. I want to imagine under what new features despotism could present itself to the world; I see an innumerable crowd of similar and equal men who spin around restlessly, in order to gain small and vulgar pleasures with which they fill their souls. Each one of them, withdrawn apart, is like a stranger to the destiny of all the others; his children and his particular friends form for him the entire human species; as for the remainder of his fellow citizens, he is next to them, but he does not see them; he touches them without feeling them; he exists only in himself and for himself alone. . . (de Tocqueville, 1835–1840/2010, p. 1,250)
Verily, those ‘small and vulgar pleasures’ sound eerily reminiscent of the empty badges, points and achievements that the critics of gamification justly decry. In autodidact sociologist Eli Sagan’s comparison between ancient Greek and modern American democracies, there is a passage of great explanatory power that looks deeper into our collective psychological relationship to that ‘tutelary power’ so vividly depicted by de Tocqueville:
The collectivized person is also constantly struggling with the universal human ambivalence about independence and dependence. Like a child, the demos longs to put its entire trust in the hands of its leaders, becoming enraged when the leaders, like parents, fail to deliver omnipotence, omniscience, or moral perfection. This disenchantment does not prevent the pattern from being repeated over and over again. [. . .] The desire to be illusioned runs very deep in the human psyche. (Sagan, 1991, p. 195)
Indeed, so strong is our desire for illusions that we may end up permanently confined to comforting ones (comforting, that is, when confronted with the dismal alternative of a bleak and threatening reality), as portrayed in hugely popular films such as The Matrix and The Truman Show, or in Stanisław Lem’s (1974) novel The Futurological Congress, the masterpiece of the simulacra genre. These works have so gripped the imagination and influenced contemporary discussion that they have forced academic philosophers to take them up as serious objects of concern (e.g., Chalmers, 2005). And, as Huizinga (1938/1980) himself explained, ‘illusion’ is ‘a pregnant word which means literally ‘in-play’ (from inlusio, illudere or inludere)’ (p. 11).
