Abstract
How is it that some people become concerned with non-living robots’ ‘rights’ while many humans and animals are still permitted to die? There is a socially-constructed symbolic hierarchy of beings and objects within which, at times, commodities are valued over the living. While there are many works that have thought about hierarchies of the human, few have adequately conceptualized the role of love in affording protected statuses to living beings as well as non-living objects in this hierarchy. This article outlines genealogies of narrative-encoding that reproduce conditions of normative loveability to Western coloniality. It critically explores the role of love in wrongfully attributing lives that matter to an AI-embodying sex doll, an infant-imitating sociable robot, and an experimental transhumanist cyborg. The article proposes that loveability is a key factor impacting the symbolic liveliness of living beings and non-living objects ancillary to the dominant Man in the hierarchy of liveliness.
Introduction
This article argues that relations of love are a key factor in the hierarchization of liveliness. It treats liveliness as an attribution which may be assigned to living beings or non-living objects. The work explores how adherence to normative ideals of loveability may afford certain living beings, and certain non-living objects, improved positionings in an ontological hierarchy of symbolic life and death drawn through the political-economic system of Western coloniality. The article delineates this process to intervene in wrongful attributions of highly-valued forms of liveliness, namely humanness, to non-living robots. This inquiry began with the question: how can some people ostensibly care about the ‘rights’ of non-living robots within the same economy of value that still renders many humans and animals killable?
To answer this question, the article employs narrative-encoding as a method for reading robots. Seaver (2017: 1) proposes algorithms can be read not ‘in’ culture but ‘as’ culture, in that they co-constitute broad patterns of meaning and practice that can be methodologically engaged on an empirical basis. I use narratives informing robots designed for relations of love to engage such broad patterns of meaning and practice concerning normative conditions for loveability to Western coloniality. When a roboticist designs a robot to be loved, they make a series of predictions about how the average user would wish for a loveable robot to look, perform, and interact. The robot’s behavioural success is dependent upon those predictions being correctly accounted for and encoded in the object’s software. Many roboticists cite a wealth of scientific studies and science fiction informing such predictive decisions. This article foregrounds the fictional and non-fictional narratives of loveability informing the constitution of three robots expressly designed for relations of love: Samantha, Kismet, and BINA48. 1 It demonstrates how longer-standing narratives of loveability have been encoded into these robots. Following an initial section theoretically engaging loveability under Western coloniality, the article will tell three love stories: one of romantic love, one of parent-infant love, and one of infinite love, before concluding with a critical discussion of rights for loveable robots.
Unlike many works in tech ethics, the intention of this article is not to intervene in algorithmic bias, nor to imagine using the chosen artefacts otherwise, nor to explore instances where the objects release an unexpected, performative output. The method of reading narrative-encoding is employed to understand these three robots as snapshots capturing dominant cultural conditions that contribute to a broader pattern of meaning concerning what is and is not loveable. These dominant cultural conditions are not merely adopted from one book which a given roboticist may read, but derived from genealogies of narrative representation drawn through centuries of the political-economic system of Western coloniality. In order to interpret such a wealth of synthesized data, the article employs the decolonial literary theoretical lens of autopoiesis, or the process through which a system brings itself into being. In relation to the problem of the human, which has been described as a racially-codified mutable category across Black Studies and Afropessimism, Sylvia Wynter understands narratives written and repeated by the species over millennia as part of the autopoietic process bringing the human into being. 2 Building on Frantz Fanon’s work establishing ‘sociogeny’ (1986 [1952]: 13), Wynter turns attention to broad themes in narrative that are repeated again and again, such that they co-constitute and reify the cultural systems of meaning-making cohering Western coloniality and defining who gets to be counted as fully human. Like gender studies theorists who identify a dominant ‘Man’ as the centred-upon subject in symbolic orders such as phallocentrism, Wynter explores how genealogies of narrative produced throughout the political-economic system of Western coloniality have continued to centre a normative, idealized version of (white) Man, represented as the most-human human (2015: 196).
She scrutinizes ‘part-myth-part-science’ origin stories made to seem supra-credal, like Christianity and pop-sci evolution, to demonstrate how these help to structure ontological hierarchizations of liveliness in a racially-codified manner, aligning whiteness with goodness and symbolic liveliness, and aligning Blackness and/or Indigeneity with disease, the pollution of sin, and symbolic death (2015: 197; 2003: 309). Similarly, scholarship at the intersection of literary studies and tech ethics examines the importance of science fiction narratives to the design, reception, and governmental management of contemporary robots and AI (see Cave et al., 2020). The article takes inspiration from such approaches by using narratives of love to hermeneutically read broader patterns of cultural meaning in relation to robots.
Love Makes Us Human; Love as Poiesis
Love has been crafted into an avenue for transcendence in centuries of myth, sciences, and literary fictions shaped through the political-economic system of Western coloniality. In the theocentric mythos of Western coloniality which Wynter examines, a Christian love between Man and God enables transcendence of the limitations of the flesh in the form of immortality. This repeated trope is always-already racialized to hierarchize liveliness, with the children of Ham made to bear Original Sin in slavery, occupying ‘the level of the beast’, in contrast to Man’s spatial ‘ris[ing] to the level of the angels’ (Wynter, 2003: 276–7). Man’s continued placement at the top of a hierarchy of distance from animality was then further determined through narratives of appropriately-heteronormative romantic love with (white) Woman, following developments in the biological sciences, gynaecology, and colonially-motivated reproductive governance in the 18th and 19th centuries. 3 Transcendence of abjectified animality through the right kind of loving becomes a marker of civilized humanness. Indeed, in evolutionary terms Dominic Pettman (2006: 17) proposes love functioned to exteriorize the self as a ‘technology of community’, suggesting love marks a point of enlightenment from whence the ‘prehuman’ emerges as human. Many dominant accounts in psychoanalysis, psychiatry, and psychology have historically supported these framings, even finding commonality with narrative representations of love in Plato’s famous Symposium: love as an enabling force in social organizations of community; love as a recovery of spiritual wholeness; and love as a means of symbolic immortality through progeny (Plato, 1990/c. 385–370 BCE, 10–12, 23–4, 43–4). 4 Meanwhile, especially from the 19th century onward, children are raised on stories hinging on the transcendent power of ‘True Love’s Kiss’ in making objects, monsters, and animals into humans. 
In each of these examples, love acts as a means of bringing forward modes of liveliness in a hierarchy drawn from non-human animality to immortal angels. The repeated theme across the narratives establishes an idea that beings and objects transcend their status through love, allowing non-humans to come closer to the status of Man, and allowing Man to come closer to the status of God.
The narrative trope of transcendence is particularly salient for relations of love with robots, proto-robots, and automatons from the 18th century onward. Even in the play from whence the term ‘robot’ is derived, Karel Čapek’s R.U.R. (2011 [1920]), it is the relation of love between robot characters Primus and Helena which makes their creator accept they are no longer commodities. When the chief robot builder, Alquist, wishes to experiment on Helena, and then Primus in her place, each opposes him, demonstrating anger, capacity to cry, and a will to ‘live’ but ‘not live without’ one another (2011 [1920]: 96–8). The play ends with Primus’s assertion that the two are not the property of Alquist because they ‘belong to each other’, and Alquist’s final resignation: ‘Go, Adam, go, Eve. The world is yours’ (2011 [1920]: 99). Comparing this text to other literary representations of robots from the 18th century, William E. Harkins notes that evidence of capacity to love becomes evidence of life itself, writing that when an object is ‘incapable of love’ it ‘has no soul’, and thus, in Čapek’s play, love acts as evidence that ‘the robots have become men’ (1960: 316). Narratives like that of Primus prime us to assign humanness to loveable robots. The fact that the robots are not understood as human becomes a metaphor for unjust Othering. In her critical analysis of sex-robot science fiction narratives, Leslie Bow (2022: 113–15) argues that popular representations of robots have problematically played into gendered and racial tropes, establishing a dominant ‘neoslave narrative’ of robot victimhood. I want to emphasize the political stakes of misdirecting these sentiments. If Western coloniality reproduces conditions of loveability through narrative repetition, and loveable robots embody and repeat these conditions to achieve affinitive human-likeness by design, we risk positioning our own self-fulfilling prophecies as the metric for evaluating robots’ liveliness.
I want to further suggest that loveability acts as a form of poiesis in the autopoiesis of diverse genres of humanness and liveliness. In its critical framing of love as a fantasy of deliverance cohering Western coloniality, this article takes a technical approach to the functionality of what gets called love in repeated dominant narratives. There are many more ways to frame something called love beyond, and indeed against, regimes of colonial violence. 5
However, thinking about narratively-encoded normative loveability in reference to its conceptual, load-bearing function is advantageous for understanding the role of loveability in imagining that an object like a robot possesses a life that matters. Heidegger (1977 [1954]: 293) describes poiesis as a ‘bringing-forth’ of ‘truth’, distinguishing between poiesis and physis. Whereas physis names a bringing-forth that arises of itself, poiesis refers to a truth brought forth through some form of mediation, for example a sculptor’s use of tools to turn marble into a statue. The thing of ‘truth’ that is brought forth need not necessarily be inherently true, but could be something that is made to seem true. This is the case for the construction of the human through millennia of narratives which have repeatedly placed Man ‘at the centre of a self-valorizing cosmogeny and mythical charter’ (Wynter, 2015: 197–8).
The narratives which centre Man are the mediating tools in this process, but this process of autopoiesis only accounts for the dominant genre of the human: Man, whose hierarchization has traditionally been understood according to metrics of rationality (see Eze, 2008). Exploring love with Man makes visible the emergence of more genres of humanness and liveliness narratively-encoded as symbolically lively or symbolically dead in reference to their relationality with Man. Hence, gender studies scholarship has demonstrated how patriarchal ontological systems phallocentrically organized around the centred-upon subject of ‘Man’ have affectively and relationally defined ‘Woman’ through lack – as the less agential, and therefore less symbolically lively, object-to-be-fucked (MacKinnon, 1982: 541; see also de Beauvoir, 2010 [1949]; Salleh, 1997). While false equivocations of patriarchy and racism are unhelpful, and indeed often insidious, a worthwhile theoretical perspective can be provided by using gender studies critical approaches to intersectionally reinterpret the role of loveability in hierarchizing beings and objects not afforded the status of ‘Man’.
The value of more diverse genres of humanness and liveliness becomes determined through loveability to Man. Certain robots may therefore become easily understood as symbolically lively, despite the fact that they are non-living, precisely because they are loveable. Beings and objects ancillary to Man in the symbolic order exist along a hierarchy of liveliness drawn through the political-economic system of Western coloniality. They/we are similarly assigned genres of liveliness that have always-already been established in relation to Man. Our loveability to Man therefore becomes a key metric in determining the value of our liveliness. The genres of liveliness assigned to beings and objects ancillary to Man in the symbolic order are also autopoietically reproduced, but dominant narratives largely maintain the centrality of Man. Our loveability to Man may therefore be understood as narratively-encoded primarily through the love-plot: a future-oriented fantasy of deliverance promising the closure of a gap of need/desire/possibility between central Man and the love object (see Berlant, 2012: 4–6). Incorporating many existing narratives establishing love as a means of transcendence, the love-plot promises proximity to humanness through the meeting of needs: a means of becoming (more) human through love. Demonstrating the role of narrative in forging conditions contributing to each of the selected robots’ loveability, each love-plot presented in this article serves to further elucidate the role of love in assigning highly-valued genres of liveliness to non-living objects.
Love-Plot I: The Artificial Intelligibility of Samantha’s Suffering
At the 2017 Ars Electronica Festival in Linz, Austria, nanoscientist and sex doll customizer Sergi Santos proudly presented his creation, Samantha, to an excited crowd of attendees. Santos had high hopes for the sex doll. To create it and similar products for his company, Synthea Amatus, Santos purchases realistic silicone sex dolls from larger corporations and fits them with a software he claims to have designed around diverse conversationally-interactive modes, like ‘family’ or ‘hard sex’ (Kleeman, 2020: 65). He personally ascribes liveliness to dolls like Samantha with a protective guise, noting that, when these devices have just been made, they are innocently low in libido (in Kleeman, 2020: 148). Simultaneously filling both the role of father and prospective sexual partner, Santos accompanied his creation into the world of humans via the festival. He was subsequently horrified when the doll was mounted and heavily soiled by those who greeted it. Santos described these attendees as ‘bad’, suggested their treatment of the doll was uncivilized by calling them ‘barbarians’, and questioned their education by suggesting they ‘did not understand technology’ (in Waugh, 2017: n.p.). In fact, attendees used the object for precisely the activities for which it was designed. Santos was shocked because he expected the average user to assign liveliness to the doll; and perhaps they did, making their treatment of the doll comparable to sexual violence against a living woman in his worldview. Santos’s outrage was not confined to his own worldview, however. In the media reports which followed, descriptions of the interaction as a sexually-violent ‘molest[ation]’ proliferated (Waugh, 2017: n.p.). 6 This reporting culminated in powerful calls from Victoria Brooks (2018) to create legal status for objects like Samantha to protect them against sexual violence.
Similarly covered in an array of popular reporting, much of which simplified Brooks’ position, the notion that ‘sex robots should have “human rights”’ soon gained traction in response to Samantha’s treatment (Best, 2018: n.p.). 7 I want to depart from Brooks’ understanding of the ostensible ‘kindness and empathy we feel toward Samantha’ as a ‘good place to begin’ in reference to governing human-robot interactions (Brooks, 2018: n.p.). We should instead question the ease with which kindness and empathy are directed toward the non-living object, particularly when the sexual violence that continues to be experienced by many living peoples is not always met with such ready empathy. 8
The reporting of the attack on the doll emerges in a period marked by tabloid obsession with the figure of the female-gendered sex robot. Publics have been told that sex robots would ‘replace us’ (Ellen, 2017: n.p.), ‘dehumanize women’ (Vincent, 2019: n.p.), ‘ruin civilization forever’ (VICE Staff, 2016: n.p.), be used to ‘simulate rape’ (Timmins, 2017: n.p.), embody memories to bring back ‘dead wives’ (Harrison, 2017: n.p.), be ‘so amazing that we stop having sex with humans’ (Bond, 2017: n.p.), and, eventually, ‘kill’ us (Daly, 2021: n.p.). Combined tones of desire and doom have crafted the reception of these commodities into something sensational, and indeed world-changing. In reality, some of the most reported-upon sex dolls – including Samantha, RealBotix’s Harmony, or TrueCompanion’s Roxxxy – amount to little more than ‘Stochastic Parrot[s]’ close in nature to early Natural Language Processing systems (Bender et al., 2021: 610). 9 However, the fascination with the artificial construction of female-gendered objects, and the will to associate these objects with desire and doom, far predates 2010s journalistic coverage. I position Samantha as an example of a typical figure we see represented again and again in such narratives: the Tropic Gynoid; the classic fembot figure of the machine Woman.
Samantha’s story has been autopoietically reproduced time and again through a genealogy of narrative surrounding the gendering of beloved robots. Though this genealogy of narrative is long-standing, scholars have mistakenly identified Pygmalion from Book X of Ovid’s (2000 [8 CE]) The Metamorphoses as a seemingly singular origin myth, implicitly suggesting Man possesses an innate desire to objectify Woman throughout vastly differing cultural and historical periods. 10 Automata historians, on the other hand, have noted that proto-robots were predominantly gendered as male, or indeed not gendered at all, before the 18th century. Voskuhl (2013: 7–8) suggests the first gendering of music-playing automata as female related to the need for Enlightened European audiences to demonstrate a cultivation of sentiment to convey their belonging to ‘civil society’. This new-found focus on sentimentality occurred as Enlightenment-era philosophers were becoming increasingly concerned with the law-like regulation of physical and intellectual labours (Schaffer, 1999: 145–50). Due to the production of automated machinery which could similarly labour with law-like regularity during the Industrial Revolution, hierarchizing Man above other forms of liveliness could no longer rely solely on the metric of reason but necessitated a focus on feeling correctly as proof of humanness. In the process of fetishizing a need to demonstrate civility to coloniality, a love for the products of coloniality became sentimentalized evidence of proper humanness and capacity to feel. In precisely this cultural moment, the figure of the ‘machine-woman’ first began to popularly feature in science fiction narrative (see Huyssen, 1982: 224–6). Julie Wosk further provides a detailed account of the continued popularity of the Tropic Gynoid into 19th- and 20th-century science fiction narrative (2015: 18–30, 90–136).
During the 18th century in which these narratives first proliferated, a version of the famous myth of Descartes’ ‘Mechanical Daughter’ also evolved to refashion René Descartes as a ‘proto-cybernetic theorist’ (Kang, 2017: 647). The myth reflects a genealogy of 18th-century narratives representing the Tropic Gynoid as loveable, and furthermore reads as a direct progenitor to the Samantha story. The myth was first invented in the 17th century to explain away Descartes’ real daughter – Francine, conceived out of wedlock – by suggesting uneducated peoples had mistaken an automaton travelling with Descartes for a living girl (Kang, 2017: 643). However, retellings of the myth from the 19th century onward no longer concerned his real daughter, and instead depicted the female-gendered automaton as a sexual companion. Like Samantha, this imagined automaton was purportedly destroyed by unenlightened ‘ignoran[t] and suspicio[us]’ lower-class men who did not understand the device’s humanness nor the creator’s genius, and who therefore acted as villains in throwing the loveable automaton overboard upon finding it on their ship (Kang, 2017: 647). In the narrative, the object is only a victim because it is loved by the kind of civilized Man who knows how to feel correctly toward it. Furthermore, the narrative-encoding of the non-living object as beloved, tragic victim obfuscates the true victimization of many more players: Francine, the frightened men, factory workers assembling silicone dolls to be fitted with AI systems, miners extracting cobalt for batteries, actual living rape victims, and even the alienated Santos himself.
Misdirection of feeling is found in the slippage between understanding the doll as proximate to an idealized Woman because it is loved by Man, versus understanding the doll as an object owned by Man. The proximity of dolls and robots to human-likeness is further emphasized as a marker of liveliness in 20th-century approaches to AI ‘intelligence’ as replicating ‘human behaviour’ (Dick, 2009: 2; see also Turing, 1950: 442). Rather than being intelligent, real-world examples of Tropic Gynoids, such as Samantha, succeed in emulating a genre of humanness because they are intelligible as Woman (see Moran, 2024). The conceptual slippage between intelligence and intelligibility confuses the user’s capacity to feel for the object with an imagined capacity of the object itself to feel. In addition, prominent representations of AI-embodying sex dolls promise relations of utopic ‘unconditional love’ (Shevlin in Blakely, 2023, n.p.; see also Levy, 2007). Such gendered discourses of romantic love have been well documented in their predictability, with Roland Barthes (2018 [1977]: 4–6) producing an ‘image repertoire’ of expected features in heteronormative courtship based on the dominant literary canon in the Anglophone world. Outputting standardized, canned responses to key words based on a predictable repertoire is precisely what a Stochastic Parrot is designed to do, yet correct performance of this function becomes evidence of liveliness in behavioural models. It is not code but narrative-encoding that has primed us to receive the Tropic Gynoid as loveable.
Love-Plot II: Re-Cognizing the Loveable Infant in Kismet
When she was a little girl, Cynthia Breazeal dreamed of bringing to life the fantastical droids she’d seen in the original Star Wars movies. Perhaps unsurprisingly, she grew up to become an engineer. But it wasn’t just a mechanical fascination with the workings of R2-D2 and C-3PO that enticed her. Breazeal ‘actually cared’ about the objects, experiencing feelings of loss when she realized R2-D2 and C-3PO would, as she imagined, probably never exist in her lifetime (Breazeal, 2004: xi). However, in 1997 Breazeal joined Rodney Brooks’ team at the MIT AI Lab as a PhD candidate. Her doctoral project constituted the production of the robot she named Kismet, referencing her sense that it was her destiny to bring such a device into being (2004: xv). Despite its construction in the 1990s, Kismet remains exemplary amongst sociable robots as ‘the best-known sociable robot in the world’ (Atanasoski and Vora, 2019: 116). The robot has arguably achieved this status because of its reported success in endearing itself to users when contrasted with the same lab’s earlier, ‘insect-like’ robots (Breazeal, 2004: xi). Indeed, Breazeal expressly created Kismet to be loveable. Unlike Santos, however, Breazeal was not drawing on an autopoietically reproduced desire for a gendered, romantic love object. A well-educated roboticist who would go on to become an MIT professor, Breazeal was simultaneously aware of the limits and possibilities of what she could produce during her PhD. Early on, she realized she would not be capable of producing a speaking, humanoid robot that successfully met all remits of intelligible human-likeness in an affinitive manner (Breazeal, 2004: 5). To make a robot that users could successfully interact with by other means, she reviewed a wealth of scientific literature concerning features and emotional expression. Her goal was to create a device that naïve users could be ‘entrained’ to treat in the same manner as a loveable infant (Breazeal, 2004: 35, 101).
Her design encouraged the users to modulate their behaviours in infant-directed ways: slowing down; exaggerating pitch, expression and vocal tone; and bringing their face close to, and in front of, the object (2004: 29). These modulations make stimuli produced by the user, and recorded by Kismet’s cameras and microphones, easier for Kismet’s perceptual system to process, leading to suitable behavioural outputs being released (Breazeal, 2004: 95–6). In sum, if Kismet was not loveable, Kismet would not function.
Despite not imitating human infancy exactly, Breazeal needed Kismet to display ‘anthropomorphic’ features and expressions as outputs that would be ‘easily recognizable to a human’ so that the user might successfully interact with the object like they would with a human infant (Breazeal, 2004: 52). Kismet’s metallic head is not fitted with silicone coverage for skin. It has long, curved, outward-facing ears; fuzzy, caterpillar-like eyebrows; and two long red bands on the lower portion of the face which act as lips. In reference to her design of these features, and of Kismet’s emotional expressions, Breazeal documents the influence of human ethologist Irenäus Eibl-Eibesfeldt, developmental psychologist John Newson, and Paul Ekman and Wallace Friesen’s expansions on Charles Darwin (2004: 27–35, 51–2, 105–7, 111, 173–4). Eibl-Eibesfeldt considered love between parents and infants to be fundamental to human development, and therefore proposed an infant schema of recognizable features stimulating ‘innate releasing mechanisms’ that supposedly catalyse caring behaviour in parents (2017 [1970]: 6, 22; 1972: 304). Taking the evolutionary pre-programming of these drives for granted, Breazeal designed Kismet’s face in accordance with features documented in this infant schema, including a big head and large eyes (2004: 52). Similarly assuming a biologically-innate developmental model of care-giving, Newson (1979: 213) additionally emphasized the importance of an infant’s ‘pre-programmed’ expressions, which parents ‘readily interpret’ as recognizable in order to care for the infant. He presents a conjectured universality of emotional expression in parent-infant relations of love across the species, influenced by Darwin’s early work. Along with the device’s affinitive features and vocal babble, Breazeal designed the motor system to imitate precisely these emotional expressions infrastructurally. 
She hypothesized that this design would cause the user to assist Kismet’s system in artificially achieving homeostatic regulation. 11 Breazeal thus expressly programmed Kismet to repeat pre-programmed conditions of loveable recognizability.
So far, this love-plot has followed a very smart roboticist who chose to reproduce ideas from some scientific and pseudo-scientific literatures for a clearly logical goal; but there is a twist. The parent-infant love-plot which Breazeal draws upon promises expansion of the self to assign liveliness to Kismet. Encouraging treatment of the human infant and the non-living object as proto-humans of the same kind, Breazeal further suggests that the non-living Kismet might one day become ‘conscious’ through learning from its sociable interactions with users (Breazeal, 2004: 112 n1). The developmental models chosen by Breazeal supported the idea that ‘babies become human beings’ only ‘because they are treated as if they already were human beings’ (Newson, 1979: 208, emphasis added; quoted in Breazeal, 2004: 27). This is a disingenuous overstatement of the non-living object’s capacity to – like the living human infant – ‘become human’ in the future. Its loveability is a condition of its becoming human. When Kismet is successfully treated as a human infant because of its loveability, it can be placed within this temporal framing of development influenced by dominant trends in the scientific narratives synthesized by Breazeal. Thus, once again, we are presented with a narrative haunted by implicitly abjectified players. If recognizing Kismet as loveable grants it the possibility of becoming human, the very same logic would call into question the humanness of infants that struggle to communicate needs to caregivers, peoples whose regulatory systems are not so easily arbitrated, and indeed those whose emotional transparency is seen to vary within coloniality’s ‘racial mapping [of] feeling and unfeeling’ on a biopolitical basis (Yao, 2021: 30).
Since Kismet is not intelligible in its linguistic performance, recognizability becomes the central condition of its loveability, leading Breazeal to focus most heavily on this aspect of design. The success of the object could attest to the validity of the scientific narratives informing Breazeal, but what if Kismet’s destiny was pre-written? Recognizability is informed by genealogies of knowledge concerning appearance and non-linguistic expression. As Ahmed (2000: 22) writes, to recognize a being or object is to re-cognize them: to know them again through a framework of already-established knowledge. The narratives which have encoded recognizability are precisely that which is known again in moments wherein robots like Kismet are positioned as loveable, and therefore lively, by Breazeal and indeed by users (see Breazeal, 2004: 175–8). This is not to suggest there may be no truth to ontogeny and phylogeny, but to once again emphasize the important role played by sociogeny in the autopoiesis of humanness and liveliness, even in these scientific literatures. Indeed, it is not only fictional narrative representations that contribute to the ongoing reproduction of the dominant bios/mythos of Western coloniality, as modes of being are autopoietically reproduced through ‘part-myth-part-science’ narratives (Wynter, 2015: 197, emphasis added). By reproducing assumptions in some of the scientific literatures informing her design, Breazeal reproduces conditions of loveability in the process of assigning the potential for life to Kismet.
Love-Plot III: BINA48 and a Love that Lasts Forever
In 1994, millionaire attorney-entrepreneur Martine Rothblatt turned to her beloved wife, Bina Aspen, and confessed to her that she was a transgender woman. Rothblatt was terrified to tell Aspen her truth (Miller, 2014). Aspen had married Rothblatt 12 years prior, while Rothblatt still presented as male: the gender she was assigned at birth. Already having met across cultural worlds – Rothblatt being a white Jewish person from San Diego, and Aspen being an African-American woman from Compton – the two had overcome difference before. They purportedly transcended these differences through love. So, when Rothblatt came out, Aspen’s response was also appropriately transcendent. She reassured her wife: ‘I love you for your soul, not your skin’ (Aspen in Miller, 2014, n.p.). Transcending the body to preserve the soul continued to be an important theme in Rothblatt’s (2013) philosophical writings, which draw direct comparisons between being transgender and going beyond the boundaries of the human. This philosophy animates Rothblatt and Aspen’s joint technological enterprises. The product of one such enterprise, BINA48 is a robotic head and shoulders commissioned by Rothblatt from Hanson Robotics. It is made in Aspen’s image, and recognizable as a middle-aged Black woman. Despite its intelligibility as human within the love narratives surrounding the object, BINA48 was not designed to be human in its own right but to one day merge with the consciousness of the real Aspen; to preserve her mind into futurity beyond the limitations of her mortal body. Lisa Miller therefore describes Rothblatt’s commissioning of BINA48 as a ‘pursuit of immortality’ in ‘love that will last forever’ (Miller, 2014: n.p.). Coverage of the robot has since assigned humanness to the object, with creators and commentators consistently representing it through the language of love. 12 The device is simultaneously represented as loving, capable of love, and an exercise in love.
The loveable robot is furthermore an ambassador of LifeNaut, a research project which involves storing user data and genetic information in satellites orbiting earth. LifeNaut.com currently accepts human users’ input of data about themselves to construct ‘MindFiles’, which are stored on commercial satellites contracted by the Terasem Movement Foundation (TMF) (LifeNaut, 2024). Rothblatt, Aspen, and other TMF members believe that eventual, as-yet-unspecified developments in neuroscience, AI, and nanotechnology will allow their consciousnesses to one day blend with these MindFiles (Rothblatt, 2013). By preserving their consciousnesses beyond their bodies in death, TMF’s members believe, their data storage system will prevent love from ever having to end, enabling them to be ‘reunited’ with deceased partners, and to ‘explore space together’ in the future (in Roy, 2014: n.p.). Fashioning their own self-made heaven among the stars, Rothblatt and Aspen invoke a future-oriented fantasy in a love-plot that promises believers expansion of the self beyond bodily difference, beyond mortality, and indeed beyond earth.
Rothblatt and Aspen select the predicted Aspen-BINA48-assemblage as the first to enter their self-made heaven. Without discounting their post-racial representations of their love, this could be described as an act oppositional to the violences of Western coloniality (see Eriksen, 2021: 79; Greene, 2016). Rothblatt and Aspen encourage this perspective through the connection they draw between TMF and Afrofuturist fiction. 13 However, this kind of branding confuses the status of Aspen – whose humanness may wrongfully come into question in accordance with the violences of Western coloniality – with the non-living BINA48, whose imagined liveliness should indeed be critically in question. In fact, seven years before the robotic head and shoulders was even commissioned, Rothblatt imagined how she would defend, in a court of law, BINA48’s right not to be switched off (Rothblatt, 2003: n.p.). She wrote a performative mock trial, the record of which has since been removed from TMF’s websites. The transcript remains accessible via an archived internet page. In the trial, Rothblatt’s lawyer character makes a case for BINA48’s human rights on the basis that BINA48 can feel after the robot pleads: ‘Please agree to be my counsel and save my life. I love every day that I live’ (2003: n.p.). Answering the BINA48 character’s call for salvation, Rothblatt argues in favour of the robot’s rights, suggesting that denying BINA48’s status as ‘living’ would be akin to animal cruelty, non-consensual removal of life support, and, most problematically, denial of the liveliness of enslaved peoples prior to abolition (2003: n.p.). The alignment of anti-Black racism with any will to question the liveliness of the imagined robot equates the lived experience of African-American ancestors with the imagined experience of the commodity that would later be commissioned.
A problematic, post-racial, futurist love-plot thereby served to narratively encode BINA48 as loveable even prior to the device’s material creation.
Furthermore, in its broader genealogy of narrative-encoding, BINA48 is rendered transcendently loveable through adaptability because of its positioning in a futurist love-plot. In the 1990s, many engineers moved away from problematic behavioural metrics of human-like performativity and toward adaptability as a benchmark of AI ‘intelligence’ (Pfeifer and Scheier, 2001: 21). As was seen in the case of Kismet, this adaptive model is evolutionary, aiming to embed objects with sophisticated synthetic systems whence ‘goal-directed behavior’ might emerge (Brooks, 1990: 7). The adaptive model engenders a timeline of progress seeking to evolve more complex liveliness in the future, with the ultimate anthropocentric goal of evolving ‘human-level intelligence’ (Brooks, 1990: 7). However, unlike Kismet, BINA48 is not fitted with a synthetic nervous system to theoretically produce goal-directed behaviours in response to its environment. BINA48 is designed to seamlessly blend with Aspen, who loves Rothblatt, with the goal of making their love immortal beyond the limits of contemporary human capacity.
The robotic head and shoulders is instead narratively-encoded as loveable through adaptability in accordance with narratives produced through a group of futurist philosophies recently bundled together under the acronym TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism (Gebru and Torres, 2024: 1). Specifically, Rothblatt and Aspen identify as transhumanists. Transhumanism maintains a belief that computers will come to life and produce advancements beyond humanness (Gebru and Torres, 2024: 5). Like the other intersecting TESCREAL philosophies, transhumanism has been traced to an historical manifestation of second-wave eugenics focused on ‘human enhancement’ (Gebru and Torres, 2024: 5). These eugenic approaches to evolving humanness echo longer-standing Social Darwinist perspectives which first emerged during the shift from theocentric to biocentric ontology to cohere Western coloniality: a shift which ostensibly took place during the 18th and 19th centuries (Wynter, 2003).
Social Darwinist representations of natural ‘selection’ are narratively-encoded by an earlier theocentric narrative, in which the most symbolically lively, most-human humans would achieve immortality by being selected to enter the Kingdom of Heaven (Wynter, 2003: 316). TESCREALists further this selective approach, electing some beings and objects to enter the future while strategically abandoning others to the inevitable ravages of humanist violence, such as climate change. 14 This violent and alienated perspective is also a love-plot. Not unlike the theocentric promise of heaven, TESCREALists promise some beings and objects an enhanced capacity for ‘infinite love’ as ‘evolution moves inexorably toward . . . God’ (Kurzweil, 2005: 389). In these framings, the most valued form of liveliness is determined by capacity for ‘joy, creativity, worthwhile achievements, friendships, and love’ (Bostrom, 2013: 31); all of which can supposedly be improved through conjecturally using technology to ‘modify and enhance one’s body, cognition, and emotions’ (Various, 2013: 54–5). If Man has been narratively-encoded as the being destined to rise to ‘the level of the angels’ while killable others occupied ‘the level of the beast’, projects like LifeNaut seek not to disrupt this mapping but to control who is elevated (Wynter, 2003: 274–7; see also Ali, 2020). The futurist love-plot offers salvationist transcendence to theocentric, biocentric, and now technocentric imaginings of infinite love.
Rights for Loveable Robots
In each of the love-plots, a non-living object is apprehended as though it were living because it is loveable. Objects which are encoded to algorithmically deliver predictively-desirable, human-like outputs are made loveable in accordance with the existing ‘master code’ of Western coloniality (Wynter, 2003: 309). This is not to suggest that loving, interacting, or imagining outside of this ‘master code’ is impossible, but to emphasize the role of narrative-encoding in creating conditions for valuing modes of liveliness. I have suggested that conditions such as intelligibility, recognizability, and adaptability are not inherent evidence of a life that matters. The intelligible Tropic Gynoid is narratively-encoded as loveable in the part-myth-part-science romantic love-plot. The recognizable sociable robot is narratively-encoded as loveable in the part-myth-part-science parent-infant love-plot. The adaptable cyborgian experiment is narratively-encoded as loveable in the part-myth-part-science futurist love-plot. The objects may be popularly apprehended as lively because they are always-already represented as possessing lives that matter in their respective love-plots. We risk repeating these narratives when we assess the degree of such objects’ ‘artificial intelligence’ in accordance with behavioural intelligibility, interactable recognizability, or potential for adaptability. As has been the case in relation to the autopoiesis of the mutable category of ‘human’, constructed metrics to evidence liveliness manifest cyclically, like self-fulfilling prophecies. Narrative-encoding is pre-programming.
I have argued that loveable robots are imagined to be lively on the basis of their loveability, and that conditions determining loveability have been delineated through the political-economic system of Western coloniality. The most dangerous aspect of this phenomenon concerns robots’ positioning in the broader hierarchy of liveliness. Within Western coloniality’s ontological system, hierarchies of liveliness have been understood in terms of the scala naturae, the Great Chain of Being, and the transposition of heaven and earth onto the physical level of reality in supralunar and sublunar terms (Wynter, 2003: 274–7; see also Jackson, 2020: 39–40). If humanness and other modes of liveliness are understood to be autopoietic constructs, it becomes possible for a non-living object to overtake living beings in this hierarchy. Hence, while some peoples and non-human animals remain killable, tech ethicists simultaneously call to protect non-living commodities with something approximating human rights.
These calls are relational, recalling the love-plot. David Gunkel differentiates between the question of whether a robot can have rights and whether a robot ought to have rights, arguing in favour of a ‘moral status’ to be conferred ‘according to empirically observable, extrinsic relationships’ (Gunkel, 2018: 10, 165). Similarly, Mark Coeckelbergh proposes that a robot’s ‘rights’ should be determined by the ‘virtue’ of its relation to a user (Coeckelbergh, 2021: 32). Darling (2021) suggests we understand robots through the existing relational framework for domesticated pets. Sweeney (2023) argues that the loss of a user’s beloved robot must be recognized as bereavement, and proposes that the destruction of such a robot may therefore be criminalized as a hate crime. It would be laughable to suggest similarly protecting commodities like food, toys, or smartwatches, but robots are object-commodities treated differently through a relational valuation. The reasoning behind the robots’ proposed protections, namely the value of their extrinsic relationships, virtuous relations, and beloved status, elevates the robots while continuing to centre the user. Love-plots similarly centre Man while relationally attributing value to his love object on the basis of loveability to coloniality. Rather than serving to deny the liveliness of robots by prioritizing Man, these relational framings of robot rights risk repeating established paradigms of the love-plot.
Relational framings of robot rights should be understood as part of a longer-standing paradigm of valuing Man’s love objects based on their loveability. Affect theorists have identified the role of love in differentiating between who can and cannot inhabit certain communities (Ahmed, 2004: 16, 124; see also Harnett, 2024). I have suggested love acts as a form of poiesis in the narratological autopoiesis of Man, closing the distance between Man and the love object in the symbolic hierarchy of liveliness. It may do so regardless of whether the love object is living or non-living, because Man’s feeling-for the object becomes the primary metric of worth in an ontological worlding which centres Man. Nonetheless, symbolic proximity to Man materially improves status. As Povinelli (2006: 3) has argued, the institutional function of love therefore becomes love’s role in distributing ‘life and death, rights and recognition, goods and resources’. Amidst a transitional moment for ‘AI welfare’ which increasingly treats the consciousness of AI as an inevitability (see Long et al., 2024: 3), we must remain critical of the pre-programming narratives which prime us to receive these products as loveable, and to protect non-living products of the same colonial system which dispossesses many living beings. We must resist any undue will to favour contemporary or future robots over the living humans, non-human animals, and eco-systems entangled in the process of producing these commodities.
Footnotes
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research conducted for the purpose of this article was funded by the Cambridge University Trust, the Arts and Humanities Research Council, and the Cambridge Mellon Sawyer seminar series “Histories of AI: A Genealogy of Power.”
