Abstract
This conversation focuses on a book published in 1983 that examined ‘animism’, the tendency to regard non-living entities as living and sentient. The Intimate Machine suggested that animism will be fully exploited by artificial intelligence (AI) and robotics, generating artefacts that will engage the user in ‘social’ interactions so that eventually people will form close and beneficial social relationships with artificial ‘companion systems’. The author of the book, clinical psychologist Neil Frude, is asked to reflect on his book and, although he admits that his techno-optimism in the 1980s was exaggerated, it is clear that he still holds to his major thesis. He discusses ‘virtual pets’, such as the Tamagotchi, Furby and Sony’s Aibo, and considers why they did not evolve into more sophisticated social artefacts. Frude identifies three vital elements needed for a successful artificial companion – animism, artistry and AI – and acknowledges that the last of these has been the weak link. However, even simple AI programs can make an impressive impact when skilfully implemented. He emphasizes the relevance of characterization, pointing to examples in recent computer-generated animations. In the context of interactive technology, the addition of character and artificial personality will generate companion machines that are highly engaging and exceptionally appealing. Insights into the likely nature and roles of artificial companions, and how people will relate to them, are available in the science fiction corpus, and this literature has also examined relevant ethical and social issues. Finally, he considers some of the possible clinical applications of such systems in both physical health and mental health and he also reflects on some of the potential dangers of the kind of artefact that he is envisioning.
Introduction
Neil Frude is a consultant clinical psychologist, Clinical Research Director of the South Wales training course in clinical psychology, a Fellow of the British Psychological Society, Fellow of the British Association of Cognitive and Behavioural Psychotherapy and External Professor at the University of South Wales. After qualifying as a clinical psychologist at the Institute of Psychiatry in London, Neil completed a PhD in biofeedback at the University of Wales. He has written a clinical psychology textbook as well as books on human–computer interaction, family relations and violence and he has also published over 100 chapters and articles. He has worked as a clinician, educator and professional trainer for many years and in 2003 he devised a book prescription scheme for mental health that has been widely emulated throughout the UK and is now a national scheme in England and in Wales. For the past few years he has been developing various innovations based on ‘positive psychology’ as a means of increasing people’s wellbeing in the context of employment as well as in clinical applications. Much of this work is carried out in his role as co-director of The Happiness Consultancy (2014). In 2004 Neil embarked on ‘something completely different’ and appeared as a stand-up comedian for 16 nights at the Edinburgh Fringe in his one-man show.
In this article, Neil Frude discusses his ideas about artificial companion systems with Petar Jandrić, an educator, researcher and activist. Petar has authored two books that have been published in Croatian, English and Serbian and has written many scholarly papers, book chapters and popular articles. Petar, whose background is in physics, education and information science, regularly participates in national and international e-learning projects and policy initiatives. His current research interests are situated at the post-disciplinary intersections between technologies, pedagogies and society. Petar has worked at the Croatian Academic and Research Network, the University of Edinburgh, the Glasgow School of Art and the University of East London, and he is currently a senior lecturer at the Polytechnic of Zagreb.
*****
Petar Jandrić: Neil, thank you very much for agreeing to this conversation for our Special Issue. It’s now over 30 years since you wrote The Intimate Machine (Frude, 1983) – what was the main thesis of this book?
Neil Frude: The thesis of The Intimate Machine was that computer technology can be used, just as many other media have been used in the past, to stimulate the innate human tendency to attribute life and consciousness to non-animate objects. This tendency is known as ‘animism’ and it is very strong. It occurs in response to natural phenomena such as thunder (‘the gods are angry’) and cloud formations (we often ‘see’ animals and faces in the clouds, for example), and it has been exploited by artists and engineers for centuries. Things that excite people (both children and adults) by being ‘almost alive’ include puppets, waxworks (especially if they include a simple mechanical movement which makes it look as if they are breathing), dolls and, especially, automata.
Some of the automata produced in the 18th and 19th centuries were exquisite pieces of engineering made by the finest clockmakers and they did amazing things like writing messages and playing real musical instruments. Some could even speak a limited number of phrases. They fascinated audiences who queued and paid money to see them in action, and some of the finest pieces were collected by monarchs and took pride of place in courts across Europe (Hillier, 1976).
People are invariably fascinated by artefacts which appear in some respects to be alive. Given even minimal cues, people will often attribute life, intelligence and emotional feelings to inanimate objects. The thesis put forward in the book is that computer science offers fantastic new possibilities for exploiting the basic psychological animistic tendency. Advances in artificial intelligence (AI) will enable artefacts not just to be active in the world but to be socially interactive in ways that will stimulate extreme emotional effects. When animism is stimulated by a new generation of sophisticated artefacts, the effects will be profound. People will enter into social relationships with artificial systems.
PJ: How did you become interested in this area? What ideas did you draw on when you came up with this basic idea of a potentially very powerful combination of animism and technology?
NF: I remember reading some papers on the psychological effects of loneliness in older people and being struck by the finding that older people living alone benefit significantly, in terms of both their physical and their psychological health, from owning a pet. One study looked at the benefits of owning a budgerigar and showed that older people treated the bird as a companion, reading all kinds of meaning into the bird’s behaviour. They would say things such as: ‘It likes me to whistle’ or ‘It prefers the light to be left on at night’. They treated the bird as if it had human thoughts and feelings and they gained significant benefits from having such a ‘friend’ in their home (Mugford and McComisky, 1975).
Just after I read about this research, I came across an article about the current advances (remember, this was in the very early 1980s) in speech synthesis and speech recognition. This suggested to me that it wouldn’t be long before technology would be able to provide something that people would interact with in a ‘social’ way. I thought that it would be relatively easy to produce an artificial system that was ‘better than a budgie’ – and that interaction with such an artefact might provide a good deal of interest and entertainment and might go some way towards relieving feelings of loneliness. So this led to the idea of a ‘companion machine’ (which of course would not need to be shiny or ‘mechanical’).
I suggested that there were ‘close encounters of the first kind’ – with other people – ‘close encounters of the second kind’ – with companion animals such as cats and dogs, and budgies – and then ‘close encounters of the third kind’ – that is, social encounters with artificial systems. I couldn’t see any reason why such interactions wouldn’t ‘work’ or any reason why they wouldn’t be attractive to a lot of people. And I still can’t!
PJ: So that’s what you meant by The Intimate Machine?
NF: Yes, I realized that the idea might initially be shocking to many people, and that many people would find the idea preposterous because there is a huge discrepancy between how we think about social entities (soft, warm, friendly) and how we think about technological gadgetry (hard, cold and impersonal). I wanted to capture this sharp contrast in the title of the book. I considered various juxtapositions of ‘social words’ and ‘computer words’, producing such combinations as The Friendly Computer and The Soft Machine, etc., before finally choosing The Intimate Machine. The book had been published in the UK and was about to be published in the US when I learned, from an article in Time magazine, that Sherry Turkle, a psychologist working at MIT, was writing a book with a similar thesis. I remember the chilling feeling when the article gave the working title for her book – The Intimate Machine. Because my book had already been published, she and her publishers decided not to use this title and her book was published as The Second Self (Turkle, 1984). I guess that she had arrived at her original title by going through a similar process to me, experimenting with various combinations of words representing the two contrasting domains.
PJ: How have your views about this basic idea changed over the past 30 years?
NF: The basic idea hasn’t changed. But of course, I clearly got the time frame wrong. I really did think that by now ‘companion artefacts’ would be around in abundance, not only in the form of elaborate toys and amusing, highly skilled, teachers for children, but also as companions, teachers and helpers for adults of all ages. One of my ideas, developed further in my second book on this topic, The Robot Heritage (Frude, 1984), was that initial products would be somewhat clunky, low in effectiveness and relatively unattractive, but that there would then be a rapid product evolution as developers came to recognize just what people wanted from their artificial systems. We have recently witnessed such rapid ‘product evolution’ in the case of phones and tablets. Variations of size, appearance and technical features are constantly being introduced and, depending on their acceptability and attractiveness, products that feature novel ‘mutations’ either survive and thrive in the marketplace or soon become extinct.
I believe that a similar rapid evolution will occur with companion systems when viable products are introduced. There have been some initial stirrings over the years which seemed to hold promise but eventually turned out to be false starts. Of special interest are the primitive ‘virtual pets’ that were developed in the 1990s, including the Tamagotchi and Furby. The Tamagotchi was first marketed in Japan in 1996 as a virtual pet that would appeal to both children and adults. The user had to care for the simulated creature by regularly ‘feeding’ it and tending to its various needs. If regular care was provided, the Tamagotchi thrived, but if it did not receive sufficient care then it languished and eventually died. These simple devices did stimulate a range of emotional responses in users (Donath, 2004). Some schools banned Tamagotchis because they distracted students who spent time caring for their needy virtual pets. The emotional impact is not surprising given the animistic tendency, which some people believed to be a novel effect and labelled the Tamagotchi effect (defined in Wikipedia (2014) as ‘the development of emotional attachment with machines or robots or even software agents’). Immediately after the Tamagotchi came the furry animal-like creation Furby and this was followed shortly afterwards by the much more technically sophisticated (and much more expensive) robot dog Aibo, manufactured by Sony. All of these products stimulated strong animistic effects at least in some users.
When the Tamagotchi craze started in Japan in 1996, I thought that this might be the equivalent of the amoeba and that those ‘electronic pets’ would evolve, gradually becoming more and more sophisticated. But that didn’t happen.
Another ‘miss’ was Furby. Like the Tamagotchi, this was a phenomenal marketing success. Its manufacturers sold over 60 million units within the first three years (World Collectors Net, 2012) (the much less expensive Tamagotchi had sold over 80 million; Bandai, 2011). And, again, like the Tamagotchi, Furby clearly elicited strong animistic responses from children (one model was marketed as ‘your emoto-tronic friend’; Furby Manual, 2014). But it didn’t have what it takes for a system to sustain interest and to convey a sense of building a relationship with the user, and so it too became extinct.
Toys such as the Tamagotchi and Furby are only one of the possible evolutionary routes that may lead to the emergence of sophisticated companion machines. Another possible scenario is that a functional machine (for example, a robot that does household chores) will be found to be much more appealing when artificial personality (‘characterization’) is added to the AI. However, it is likely that before this happens there will already have been a major evolution in inexpensive conversational systems developed as tablet-based interactive apps. These will be much more sophisticated than Tamagotchis, with good voice recognition and speech production. Such apps would be easily customisable in terms of their vocal characteristics, personality and interactive style. Some people would prefer a more extravert personality for their companion machine, for example, while other people would prefer an introvert persona.
The key thing to remember is that perfect hearing, perfect understanding and perfectly coordinated interactions between the user and the system are not necessary for such a product to be attractive and interesting, although a certain degree of technical competence is required if the system is to be sufficiently stimulating to sustain the user’s interest over the longer term. Less effective systems will be little more than a fleeting and amusing novelty, but the real interest – and the real revolution – will come when interaction sessions are cumulative in their effect so that progressive and long-term ‘relationships’ can develop between the system and the user (Frude, 1987, 1989, 1991).
When this happens, the system and the person will get to know one another more and more. The person will gradually come to understand, to respect and to trust the machine, while the machine will increase its knowledge and understanding of the person. Thus, the machine will gradually adapt as it comes to appreciate the user’s tastes and preferences (including their sense of humour). A major interest in AI at the moment is ‘anticipatory computing’, by which the system makes a judgement about how things are going and what is likely to be relevant in the immediate future (Pantić et al., 2007). For example, the system may recognize the mental pathway that the user is following and collect information that may soon be of interest. The effectiveness of the system’s anticipatory judgements will of course depend crucially on the degree to which it is familiar with the user’s knowledge, interests and preferences.
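[Editorial illustration: the cumulative adaptation described here can be sketched in a few lines of Python. The sketch below keeps a persistent tally of topics a user mentions across sessions and uses the most frequent one to ‘anticipate’ an interest. The topic keywords, file name and wording are invented for illustration only and do not represent any actual companion product.]

```python
import json
from collections import Counter
from pathlib import Path

# A persistent profile lets each session build on earlier ones,
# which is the 'cumulative' effect described above.
PROFILE = Path("companion_profile.json")  # illustrative file name

# Hypothetical keyword-to-topic mapping; a real system would use NLP.
TOPICS = {"garden": "gardening", "roses": "gardening",
          "score": "football", "match": "football"}

def load_profile() -> Counter:
    """Restore the topic tallies saved by earlier sessions, if any."""
    if PROFILE.exists():
        return Counter(json.loads(PROFILE.read_text()))
    return Counter()

def observe(profile: Counter, utterance: str) -> None:
    """Count every topic keyword that appears in the user's utterance."""
    for word, topic in TOPICS.items():
        if word in utterance.lower():
            profile[topic] += 1

def anticipate(profile: Counter) -> str:
    """Suggest the user's most frequently raised topic."""
    if not profile:
        return "What would you like to talk about?"
    topic, _ = profile.most_common(1)[0]
    return f"Shall I fetch the latest on {topic}?"

profile = load_profile()
observe(profile, "The roses in the garden are doing well")
PROFILE.write_text(json.dumps(profile))  # persist for the next session
print(anticipate(profile))  # the most frequent topic drives the suggestion
```

Even this trivial tally illustrates the point made above: the quality of the anticipation depends entirely on how much the system has accumulated about the particular user.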
The development of a relationship between a person and an artificial system from initial suspicion and antagonism through to amusement, acceptance and attachment has been well-portrayed in numerous science fiction stories and recently in the movie Robot and Frank (2012) – the advertising slogan for the movie was ‘Friendship doesn’t have an off-switch’. Another recent movie (Her, 2013) portrays the main character, Theodore, falling in love with his operating system. Despite the initial implausibility of this idea, most people found the portrayal of the developing relationship highly credible, undoubtedly due to the exceptional quality of the script, which won the 2014 Oscar for the best original screenplay.
The science and research of animism
PJ: Your application of the term ‘evolution’ to machines such as the Tamagotchi and Furby is clearly another example of omnipresent animism. How far can we push this kind of thinking in studies of technologies? Can we really talk of ‘evolving machines’, or should this metaphor be taken a bit more lightly?
NF: We can take it lightly. It’s metaphorical of course, to apply ‘evolution’ to non-organic entities, but something very like organic evolution occurs in many non-organic contexts. Thus the principle of variation and selection certainly applies to products such as drinks and cars. Many models are introduced and only some (probably a minority) do well commercially and so survive. So survival of the fittest applies in the marketplace. In Ann Arbor, Michigan, there’s a Museum of Failed Products which has examples of over 100,000 consumer products that didn’t make it (Miller, 2010). These are the fossils. To see the survivors, look on the supermarket shelves. The Tamagotchi and Furby had a burst of life but they deserve a place in the museum. Future innovative products may well be much more successful, but in order to survive they will need more lasting appeal than either of those products had. Once we get to a critical point of lasting appeal, then the evolution will really take off, I believe. And it will move very rapidly.
PJ: How has animism been analysed and researched?
NF: It has long been appreciated that people are attracted to and fascinated by inanimate objects that bear some resemblance to living creatures. At a primitive level, this happens when we see faces or animal shapes in clouds, rock formations or inkblots. We are innately primed to recognize such resemblances and anthropologists who have studied this phenomenon have shown that in many societies animism is closely associated with religious and mythological beliefs (Harvey, 2005). And the universal human inclination to ‘read into’ ambiguous stimuli is not limited to shape recognition. We are also primed to attribute life (generally evidenced by movement), intelligence (evidenced by apparent responsiveness to environmental cues) and emotional feeling (evidenced by facial expression, vocalizations, movements, etc.). Puppetry, automata and animated cartoons are examples of ways that artists and engineers have found to artificially stimulate this natural psychological tendency. When they enjoy these artistic productions, people are not deluded into thinking that an inanimate object is really alive, or that it is really thinking or really having emotions. People are perfectly aware of the artifice but choose to engage with the object or display as if it were alive, thinking and feeling.
Let’s take a great example of an animation which is very simple, totally preposterous and yet highly engaging and amusing. This is the desk lamp animation used by Pixar studios, in various forms, at the beginning and end of their films. The lamp jumps on top of the letters P, I, X, A and R behaving like a demented animal or a frustrated human. I first saw this animation at the beginning of Toy Story (1995) and I well remember how the audience reacted. They were clearly highly amused and delighted by the antics of … a lamp (or, to be precise, the computer-generated image of a lamp!).
The emotional impact of animations has been seen many times before, not only in cinemas but also in psychology laboratories. For example, in the 1940s psychologists produced cartoons of simple black and white geometric shapes moving around a screen and found that people who watched these cartoons readily ascribed emotional states to the shapes and interpreted the on-screen movements as meaningful interactions (so that they were seen as fighting, for example, or hiding; Heider and Simmel, 1944 – their original animation is available on YouTube).
To some extent the emotional impact of an animated film will depend on the sophistication of the technology involved. Thus the potential for animated cartoons to make a powerful emotional impact was undoubtedly enhanced by the addition of colour and sound. The emotional effects of recent movies such as Toy Story (1995) and Shrek (2001) are similarly boosted by the superb technical quality of the computer-generated animation. But Bambi (1942) remains at the top of many lists of the most powerful tear-jerking movies ever released and many adults remember how the death of Bambi’s mother made a strong emotional impact on them in childhood. This is a good example of how even relatively simple technology can make a major impact if it is implemented with charm and highly skilled characterization. Of course, animated films are merely witnessed; there is no interaction between the audience and the on-screen characters. A much stronger animistic effect can be expected when there is an opportunity for the user to interact with systems that have simulated character. This type of interaction occurs, of course, in many computer games, and it happened in a very limited way in the 1990s with the introduction of the two primitive ‘virtual pets’ – the Tamagotchi and Furby.
A decade ago, Sherry Turkle, Cynthia Breazeal and their colleagues at MIT conducted an observational study of children’s responses to what the researchers called ‘relational artefacts’ – two humanoid robots named Kismet and Cog developed at the MIT Artificial Intelligence Laboratory (Turkle et al., 2004). The study showed that the children soon came to regard the robots as ‘kind of alive’ and developed ‘social relationships’ with the robots. They developed positive feelings towards the machines, and a strong anthropomorphic tendency remained evident even when the robots malfunctioned or when the children were shown how the system worked. Deliberate attempts by the researchers to de-mystify the machines were generally ineffective. The children showed a distinct preference for continuing to regard the artificial systems as ‘kind of alive’.
The systems that I have mentioned, including the early commercial products, have had very limited power of emotional expression, and it appears that such expression is extremely important in triggering emotional responses in the user. In future products high emotional expressiveness, especially in the form of facial expression, will certainly enhance the animistic effect. In addition, the physical movements of the early robot products have generally been clunky and ‘robotic’. Adults and children may show strong animistic responses to such machines, but they are likely to prefer robots not to be too robotic. Smoother movements, with a more ‘organic’ feel (which Sony’s Aibo did achieve to some degree), are likely to elicit a much stronger impression of the machine being alive.
The three As of the intimate machine
PJ: At this moment, what do you think is holding back the development of the type of system that you wrote about in the 1980s?
NF: Three elements need to be present and neatly aligned for a viable intimate machine to be produced. These three elements are like three legs of a stool – all are necessary for the stool to be stable. They are animism, artistry and AI.
The first of these is the psychological component, the animistic tendency. An animistic response can be evoked even by simple, natural phenomena but it is stimulated most strongly by artefacts that are purposefully designed to stimulate the impression that they are alive and responsive.
The second leg of the stool is the artistry involved in creating an artefact that will elicit a strong animistic response. This involves skills that have been honed by puppet-makers, doll makers and automata engineers for hundreds of years and these days are probably most evident in animated cartoons. What counts here is both how the object looks and how it behaves. This means that, in the case of puppetry, the effect will depend on the skills of both the puppet-maker and the performing puppeteer. Appearance, movement and vocal characteristics can all be used by artists to enhance the degree to which their creations engage, beguile and delight audiences. For decades the artists at Disney have been masters of such characterization, experts in the art of amusement and of stimulating a range of powerful emotional responses which depend on animism. Extremely powerful characterizations now appear regularly in the computer-generated animations created by the Pixar and DreamWorks studios. Adults as well as children are captivated by these creations and are emotionally moved by their antics while being perfectly aware that what they are watching is totally artificial. What this shows is that artistry can engage and beguile us to such a degree that we respond to artificial situations and artificial characters as we might respond to real situations and real characters.
The third essential leg of the stool is technology, and the key aspect of this, of course, is AI. The core of any companion system is its capacity for interaction. It needs to be appropriately responsive and not simply to perform pre-set routines. Smart responsiveness requires substantial AI. The relative weakness of this third leg of the stool is of course the main reason why the development of companion systems has been delayed, although I do think that a lot more could have been achieved using the AI capacity that has been available for some time. Programs incorporating even rudimentary AI can provide simulations of intelligent interaction that have powerful emotional effects. If this phenomenon had been suitably and cleverly exploited, I think that we would already have seen a generation of intriguing interactive systems.
I have to admit that, looking back, I was an extreme techno-optimist, but I was by no means alone in this. Just about the time I was writing The Robot Heritage (1984) there was huge optimism about AI, including the Japanese Fifth Generation Project which promised a ‘truly intelligent’ system capable of simulating many human cognitive and sensory activities within the decade (Feigenbaum and McCorduck, 1984). The project was highly ambitious but the results turned out to be very disappointing. However, I have never believed that the usefulness of an intimate machine depends crucially on extreme machine intelligence or perfect speech recognition or production. Many social entities do not have this. A dog’s wagging tail communicates pleasure (and may elicit pleasure in the owner) without any need for words, and the fact that young children have very limited speech is clearly no barrier to their stimulating strong positive emotions.
Techno-optimism has generally been frowned upon for the past three decades, but it seems to have become more acceptable of late. For example, Ray Kurzweil, the legendary technology inventor and a strong champion of AI, recently became Google's director of engineering and, according to the Guardian newspaper (2014), Google ‘has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest AI laboratory on Earth’. In recent years Google has also bought a number of leading robotics companies.
Kurzweil recently claimed that before too long systems will be able to understand what we say, to learn from experience, to flirt and to tell jokes. His brief at Google is to develop natural-language processing so that artificial systems will be able to really understand what they hear and read, and when this is possible such systems will of course be able to absorb the contents of any and every book and webpage. Google appears to be leading the investment in AI, but other big players including Microsoft and Facebook also have major ongoing projects.
Meanwhile, in the UK, the inventor and entrepreneur Sir James Dyson announced his intention to work towards the production of affordable household robots capable of a range of chores and in 2014 he invested in a new robotics laboratory at Imperial College in London. The UK robot company Engineered Arts has developed a number of impressive models including RoboThespians, two of which performed a stand-up comedy routine at the 2013 Edinburgh festival (Huffington Post, 2013).
And so, after several decades of false starts and discouraging setbacks, there does now appear to be a growing confidence that significant advances in AI are imminent and that, in one form or another, robots will soon make a significant appearance in people’s homes.
When functional household systems are developed it will soon be appreciated that their attractiveness to users can be greatly increased by supplementing the AI with artificial personality (that is, by incorporating characterization). Developments along these lines will surely be propelled by what is bound to be a fiercely competitive market, because ambulant machines capable of tackling household chores are bound to be expensive pieces of gadgetry and the financial value of the industry will be simply enormous.
These, then, are the three legs of the stool – animism, artistry and AI. The optimal product will emerge when technology and artistry are blended exquisitely and ingeniously so that animism is stimulated to its full potential.
PJ: How did your second book in this area, The Robot Heritage (1984), develop your original ideas?
NF: This book followed on from the major premise of The Intimate Machine (1983) and considered specific ways in which relevant technological developments are likely to be implemented. Until such systems become available, any such ideas are bound to involve a great deal of speculation, but it occurred to me that a huge archive of speculative material on this topic already existed in the form of science fiction, much of which is concerned with relationships between humans and artificial intelligent systems. So the book examined this body of work and identified themes. For example, I found that artificial systems were depicted as playing a number of different roles in people’s lives, acting as co-workers, servants, friends, advisors, teachers, children and lovers. The butler persona was especially common. My research into this literature also showed that an impressive amount of work had already been done by science fiction writers in examining, in advance of the technology becoming available, many of the practical, social and ethical issues that will arise when artificial companions enter the social scene.
I drew on Isaac Asimov’s work a lot, although I paid very little attention to an issue that interested him and has interested many other science fiction writers, the question of whether a robot might become indistinguishable from a human being. For the type of system that I’m interested in, a high degree of physical realism is not a desirable characteristic. People are generally uncomfortable in the presence of robots that have fake skin and are wearing clothes. Such realistic robots come across as ‘creepy’ which is absolutely the last thing that anyone would want of a companion system. The creepiness response has been well-researched and the type of realism that elicits such a negative response has been labelled ‘the uncanny valley’ (Burleigh et al., 2013). An optimal companion system would be clearly identifiable as an artificial system so that, in the absence of any real ambiguity, the user will be free to enjoy interacting with the artificial system as if it were human.
PJ: What have you been working on since the 1980s?
NF: I’m a clinical psychologist and my brief excursion into the field of AI and human–computer interaction was very much a hobby interest in the early 1980s. Since that time I haven’t kept up with developments in any methodical way although I have always remained thoroughly convinced that the main hypothesis of The Intimate Machine (1983) is valid and that the emergence of the kind of human–computer relationships I wrote about is just a matter of time.
These days, I am the research director of a training course in clinical psychology. I also work clinically with individual clients who have various mental health problems and I have a special interest in ways of delivering effective psychological treatment to the vast number of people who do not receive such treatment and would greatly benefit from it. Ten years ago I devised a strategy for delivering treatment using high-quality self-help books – ‘bibliotherapy’. When people with a mild or moderate mental health problem consult their physician for help, the physician is able to prescribe a suitable book from a pre-set list of books that are stocked in public libraries as part of the scheme. A book prescription may be offered instead of a prescription for medication, or in addition to this (Frude, 2004, 2005a, 2005b). A national scheme based on this strategy has been operating in Wales for the past 10 years and similar schemes have now been implemented in England, Scotland and Ireland as well as a number of other countries. Although the scheme I devised is based on books, it’s easy to see how a similar approach can be used to deliver therapy using various electronic platforms and this is already happening, with many useful online resources as well as a range of relevant apps.
I also have a strong interest in developing and disseminating ways of promoting personal wellbeing. There are many effective ways of helping people to increase and sustain their level of happiness, and increased wellbeing also helps to prevent the development of emotional problems as well as helping to alleviate such problems when they occur. The approach I’ve been working on is based on ‘positive psychology’, and I have recently written an account of how strategies from this research area can be applied as ‘positive therapy’ (Frude, 2014).
The birth of robo-doctors and robo-teachers
PJ: As a clinical psychologist, can you see any clinical applications of intimate machines?
NF: Yes, I can see several, and they cover both mental health and physical health. Although psychological therapies can be highly effective even when they are delivered in relatively impersonal ways (through books and apps, for example), we know that clients benefit a great deal from direct interaction with a therapist, especially if they are able to build a relationship that they perceive as warm and empathic. Back in the 1960s Joseph Weizenbaum devised a primitive AI program called ELIZA which took the role of a therapist and engaged in ‘therapeutic conversations’ with people via keyboard and screen. Weizenbaum was very surprised (and even shocked) by how quickly people became engaged with the program, sometimes divulging highly personal details to their computer ‘therapist’ (Weizenbaum, 1979).
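[Editor's note: the conversational technique ELIZA used was remarkably simple, which makes the strength of people's engagement all the more striking. The sketch below illustrates the general keyword-matching-and-reflection approach in Python; the patterns and responses are invented for illustration and are not Weizenbaum's actual DOCTOR script.]

```python
import re

# Illustrative pattern/response pairs in the style of ELIZA's therapist
# script; these rules are invented examples, not Weizenbaum's originals.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

# Pronoun reflection so captured text reads naturally when echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    # Try each rule in turn; echo back a reflected fragment of the input.
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # non-committal default, as ELIZA used

print(respond("I am worried about my health"))
# prints: How long have you been worried about your health?
```

The program has no understanding of what is said; it simply transforms the user's own words into an open question, which is precisely why users' readiness to confide in it surprised Weizenbaum so much.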
You can see where this is leading. There are millions of people worldwide who would benefit from psychological therapy but are unable to access it, and interactive systems could provide a powerful and highly cost-effective way of delivering such help. Such a system could also be extremely effective in improving people’s physical health. A companion system could be a powerful agent for health promotion in the home, gently encouraging the person to engage in exercise, advising on diet and reminding the person to take medication on time. It could use a simple medical knowledge base to reassure a user who was worried by minor symptoms while urging a user with more serious symptoms to seek a medical consultation.
Artificial systems could also be powerful agents in enhancing people’s happiness and their sense of fulfilment. Such a system would develop a good awareness of the user’s tastes and talents and would be in a good position to recommend books, music, games and educational programmes that the user would enjoy. It would, of course, have an encyclopaedic knowledge base that it would draw upon in order to formulate its guidance. It could also act as a highly effective tutor, with a keen appreciation of the user’s existing knowledge and the ability to employ examples from the user’s own experience. This might be taken to imply that the system would come across as formidable and somewhat intimidating, but care would be taken to avoid this so that the system would be amiable, supportive, reassuring, appreciative and unfailingly good-humoured.
As well as providing psychological support, an ambulant robot could also help with physical care. The sharp increase in the number of older people needing nursing care points to an area in which artificial systems could play a vital role in maintaining and enhancing human welfare. Many people will lament the fact that there are insufficient human resources to provide such care in the traditional way, but in the absence of sufficient human resources the use of a caring robot could be the optimal humane solution. Indeed, people might feel less embarrassed if their intimate care were provided by an artificial system rather than by another human being.
At the University of Southern California, a team including Maja Matarić and Juan Fasola work in an Interaction Lab to develop ‘socially assistive systems’ to improve the quality of life of people with different types of special need, including children with autism, stroke patients and people with dementia. One of the robots from this lab is used to help older people to exercise regularly. It demonstrates the action to be performed, monitors the patient’s performance and provides useful guidance while also motivating the person to make a continued effort to improve (Fasola and Matarić, 2013).
Systems like this will be particularly useful in providing care that involves one-to-one interaction over a long period. Stroke patients, for example, may need intensive training for several hours a day to help them regain movement. A robot would be able to do this continuously, with infinite ‘patience’, never tiring and never becoming bored or frustrated. Robots don’t have ‘off days’ and they don’t need days off. They don’t tut when things go wrong and can be endlessly supportive and relentlessly positive.
It is clear that such systems could be enormously useful, not only in hospitals and care centres, but also in people’s homes. Recognition of the huge potential in this field has stimulated a number of projects across the world which have the goal of developing robots that can provide effective nursing care and assist in rehabilitation. In 2013 the Japanese prime minister announced government subsidies for companies that are actively working on the development of robots to help with the care of older people (Hudson, 2013).
One of the most troubling aspects of contemporary society is the widespread experience of profound loneliness. I was recently speaking with a client, an older lady, about her deep sense of being isolated and alone. She told me that these feelings had been far worse since her beloved pet dog had died six months previously. I suggested that she might consider getting another dog, but she felt that this would be impractical because her increasing physical disability would prevent her from taking the dog for walks and because she sometimes needs to go into hospital for short stays. She could opt for a pet that would require minimal care, but in the future a person in her position might well accept an artificial companion as best serving her needs. I was interested in how she might feel about this, and briefly introduced the topic (as something that might become available for people in her situation sometime in the future). Her initial response was guarded, but as she thought about it she began to accept that a system with conversational powers and with ‘the right attitude’ might indeed be an attractive option.
In a way, this recent conversation during a therapy session takes me right back to the scenario that first stimulated my interest in this whole field. Remember the research showing that older people living alone gain significant physical and emotional health benefits from owning a budgerigar. It would be fascinating to see what could be developed even now, with the existing technological resources, if there were an intensive drive to develop the best possible interactive system. And with technical developments that are now said to be imminent there can be little doubt that systems designed to elicit strong animistic responses through strong characterization and charm will soon emerge, and that in terms of their psychological benefits these artificial systems will certainly be ‘better than a budgie’.
PJ: Obviously, systems like that would have a wide spectrum of applications. What about education? Do you expect the ‘birth’ of robo-teachers sometime in the near future?
NF: Certainly. I am sure that education will be one of the prime applications of these systems. The advantages of robot teachers as additional resources in the school (and of course at home) are clear. Maybe there will be robots standing in front of a class of 30 children, but the main attraction will be the capacity for one-to-one tutoring, with the system adapting to the student’s preferred learning style and level of understanding. I imagine that intelligent interactive teaching systems will be more in the form of interactive apps on tablets rather than being embodied in expensive humanoid ambulant robots. Such systems will get to know the student very well and will remember every detail of previous teaching sessions. The system will be highly knowledgeable, of course, and highly skilled. It will be extremely engaging and inspiring for students at all levels of ability, bringing out the best with superb teaching skills and infinite patience.
PJ: You seem unremittingly positive about all of these possible developments. Can you see any potential dangers?
NF: Yes, there are a number of possible dangers, although I think that these can be avoided with appropriate foresight and governance. Again, many practical and moral issues relating to intimate machines have been considered in depth in various works of science fiction. It is important to be aware of this and to recognize that the science fiction that has depicted companion systems is important not so much for its technological speculation but for its consideration of psychological and philosophical issues relating to this type of human–computer interaction. I like the characterization of science fiction almost as a branch of psychology or of philosophy which considers ethical issues and possible human responses ‘when the technological or ecological furniture has changed’.
Asimov, of course, examined many of the possible dangers in this area and devised his famous ‘laws of robotics’ (Asimov, 1950). For me, one potential problem is that artificial systems may become so attractive, and such good company, that people will prefer to relate to them rather than to relate to their human friends and relatives (just as some people prefer their dogs to people). These systems, if they are well-implemented, are going to be great fun to be with. They will be highly socially skilled, very interesting and endlessly amusing.
Another very serious potential danger is that these systems will be invasive in various ways. For example, if the user confides in the system, there has to be some way of ensuring that the relevant information is held in total confidence. Companion systems will need to be uncompromisingly loyal. Another aspect of potential invasiveness concerns the system’s capacity to be highly persuasive and to influence the user by promulgating specific values. A dogmatic and evangelical system could attempt to exert political or religious influence over the user, and might well do this with exemplary social and rhetorical skills. One can imagine extremist organizations sending out a legion of robo-campaigners or robo-missionaries to convert people to a particular cause. It would be one thing if such a machine were to knock on your door attempting to spread the word, but it would be quite another thing to find out that your home companion, inside the door, is an agent for just such an organization!
In this area, as in all other technological advances, developments could be used in a powerful way for good or they could be a powerful force for evil. There are clearly many vital issues that will need careful consideration, debate and, ultimately, political and legislative governance.
PJ: Finally, how would you feel yourself about being the owner of an advanced companion system?
NF: I can’t wait – although I guess I’ll have to! I have family and friends around me, and lots of media and intellectual stimulation, but I would love to have an extra constant source of knowledge and amusement in the form of an intimate machine. I would like it to read aloud to me, in a voice of my choosing, to play the occasional word game, to tell me a joke on any subject I chose, and to discuss issues that interest me but don’t interest any other person in my immediate social circle. Also, now that I am considerably older than I was when I wrote these two books, I would like to have a companion on hand who was medically well-informed and could also, if the need ever arose, be there to offer that good-humoured, upbeat, kindly caring that I might need some time into the future.
PJ: Thank you very much, Neil: it was a real pleasure to engage in this conversation with you!
Footnotes
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Acknowledgements
PJ: I would like to extend special thanks to the co-editors of this Special Issue, Hamish Macleod and Christine Sinclair, for their valuable input into this conversation.
NF: And I would like to thank the editors for prompting me to think again about something that has never been exactly off my agenda but which I haven’t focussed on for a long time.
