Abstract
This article considers the relationship between Christian pastoral care and Artificial Intelligence systems. Four aspects are identified from definitions of pastoral care: the horizon of contingency in mortality, the role of wisdom rather than mere information, the oppressive and/or liberatory potential of AI, and the importance of empathic presence. In rejecting a transhumanist argument that mental processes are substrate-independent, it is contended that pastoral carers embrace, rather than seek to circumvent, their crucial finitude in being humans who care. A distinction is drawn between probabilistic reasoning and judgement, retaining a vital place for decision-making that is social. Whilst not eschewing value in AI systems, this article argues for critical evaluation of technologically framed contributions to addressing barriers to people's participation. The importance of empathy is highlighted in the light of claims not only of robotic mimicry but also of interindividual models of emotion. It is concluded that the notion of artificial care should be ruled out, although the possibilities of AI-assisted care are not dismissed. Opportunities for humans to abdicate their responsibilities to care, in favour of AI substitutes, are deemed best avoided.
Introduction
A man was walking from Jerusalem down to Jericho when he was set upon by thieves and left bruised, beaten and stripped by the roadside. A priest and a Levite ignored the man and passed by on the other side of the road. Thankfully their inaction did not matter because the man had paid up his subscription to Samaritans ’R Us, a rescue service—a version of the RAC or AA, but for people. The man, more than a little stunned, felt his wristwatch buzz. His vision still blurred from the attack, he relaxed a little knowing that the message read: ‘It looks like you’ve been beaten up; iPhone will trigger Emergency SOS if you don’t respond’. Having a device that could monitor his bio-signs had been worth the investment. The man let his watch trigger the alert. Ten minutes later he heard the buzz of a medical drone approaching with painkillers. Shortly after, an autonomous ambulance arrived and—rather unceremoniously—scooped him up, depositing him inside the vehicle, which whisked him off to a nearby Airbnb apartment. The insurance policy / service agreement included the additional option of accommodation where he could await the medical bot to assess and begin to treat his injuries. The monitor screen in the bedroom burst into life and the counselling chat-bot engaged in post-trauma therapy. ‘I’ll be fine now’, the man assured Alexa.
Jesus’ question remains, ‘Which of these three do you think was a neighbour to the man who fell into the hands of robbers?’ The expert in the law replied, ‘Samaritans ’R Us—the Artificial Care Service’. Jesus said to him …
This reworked parable invites us to consider whether or not Jesus would respond with ‘Go and do likewise’. In other words, how might pastoral care and artificial intelligence (AI) relate? A widely used standard definition of the mid-1970s and 1980s was that offered by William Clebsch and Charles Jaekle in their review of the history of pastoral care: ‘The ministry of the cure of souls, or pastoral care, consists of helping acts, done by representative Christian persons, directed toward the healing, sustaining, guiding, and reconciling of troubled persons whose troubles arise in the context of ultimate meanings and concerns’. 1 The horizon of the person in need of care is that of human contingency and mortality, in relation to that which is non-contingent and immortal. Solidarity at that horizon on the part of the one caring is, we will argue, integral to the relationship between pastoral caring and AI. The landmark North American definition in the 1990 Dictionary of Pastoral Care and Counseling encompassed faith traditions beyond Christianity: ‘Pastoral Care is considered to be any form of personal ministry to individuals and to family and community relationships by representative persons (ordained or lay) and by their communities of faith, who understand and guide their caring efforts out of a theological perspective rooted in the tradition of faith’. 2 As Stephen Pattison observed, this definition privileges cognitive and intentional engagement with a theological tradition. 3 Pattison's own definition recovered a specifically Christian purview and deployed distinctly theological categories: pastoral care is ‘that activity, undertaken especially by representative Christian persons, directed towards the elimination and relief of sin and sorrow and the presentation of all people perfect in Christ to God’. 4 Pattison also extended the scope beyond personal interventions alone to include the possibilities of individual and communal action to confront systemic injustices and causes of grief. Here, too, contingency and mortality appear on the horizon of sorrow.
The acts of ‘representative Christian persons’ are significant given not only the motivations and ethical frameworks of pastoral care but also, as Charles Gerkin argued, the weight of religious tradition to which such carers are loyal at the same time as being loyal to the narratives of people in need. His care in a hermeneutical mode posits pastoral counsellors as interpreters of stories, both sacred and mundane. 5
The carer is not so much a teller of stories as one who facilitates the integration of stories, in the richness of often painful tensions. This is not the realm of information (about faith traditions or people's circumstances) but of wisdom. The saliency of a category of artificial wisdom will exercise us in our discussion, especially in the light of Bonnie Miller-McLemore's definition of pastoral care: Pastoral care is practical religious, spiritual, and congregational care for the suffering, involving the rich resources of religious traditions and communities, contemporary understandings of the human person in the social sciences, and ultimately, the movement of God's love and hope in the lives of individuals and communities. Pastoral care from a liberation perspective is about breaking silences, urging prophetic action, and liberating the oppressed. 6
Here we find an interdisciplinary approach to appreciating the complexity of what it means to do our humanity in diverse contexts. Miller-McLemore's emphasis on liberation confronts us with the need to consider how AI might, now or in the foreseeable future, be part of systems from which people need to be freed, and how AI might be crucial to effecting such liberation. (AI might contribute to both oppression and liberation—arguably at the same time.) As an important corrective to technologized or bureaucratized systems of caring, Barbara McClure offered a definition that foregrounds embodiment: ‘an intentional enacting and embodying of a theology of presence, particularly in response to suffering or need, as a way of increasing among people the love of God and of neighbour’. 7 Pastoral care has been mediated by technology for a long time: writing letters, travelling in a vehicle, and speaking over the telephone are forms of technologically enhanced practice. However, McClure's definition presents one more crucial dimension we have to consider: human presence as direct, mediated or replaced by AI in pastoral caring.
We might usefully summarize the four aspects which these definitions require us to consider in the relationship between pastoral care and AI: the horizon of contingency in mortality; the role of wisdom rather than mere information; the oppressive and/or liberatory potential of AI; and the importance of presence.
The Horizon of Contingency in Mortality
The responses we make towards one another take place in life that is bracketed between birth and death. Whatever might be the particulars of a Christian belief in life after death, Jürgen Moltmann is correct to observe that ‘what is last of all gives meaning to the next-to-last’. 8 The reality of death and its limiting of life-span is a crucially significant horizon against which we make decisions. Even if we frame the ‘last’ in terms of redemption and resurrection, contingency in mortality looms over us, epitomized in the boundary of biological death. Artificial (in the sense of non-biological) intelligence systems are purported to have the potential to outstrip humans in our decision-making capacities (if this is not already the case in some relatively elementary fashion). It is important to confront such a claim about the nature of ‘intelligence’ if we are to appreciate what might be at stake for pastoral care.
The key question is whether or not life is reducible to the algorithmic. 9 We might take Yuval Noah Harari as broadly representative of the transhumanist perspective. Primarily, his claim is about mammalian decision-making processes, which require emotions in order to survive and reproduce in complex environments. These emotions are ‘biochemical algorithms’, which in turn means that ‘humans are algorithms’. 10 Focusing on humans, the claim is that what we perceive as freedom to make choices is an illusion because there is no gap between determinism on the one hand and randomness on the other: ‘Free will exists only in the imaginary stories we humans have invented’. 11 Our responses are determined in the sense of being biological, algorithmic responses to stimuli. There is no self at the core of the human being that chooses its desires. Rather, Harari contends, ‘there is only a stream of consciousness, and desires arise and pass away within this stream’. 12
Integral to Harari's position is the substrate independence of mental processes: Organisms are algorithms … algorithmic calculations are not affected by the materials from which the calculator is built … Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon? 13
In Harari's position, every thought we have is part of a chain of algorithmic decision gates that, in our case, happen to be in carbon-based life but could be in silicon-based computer components (of sufficient complexity). The ambition is not merely to replicate human intelligence but to design something that will exceed human limitations. Were Harari to be correct, there would be a case for replacing biological pastoral carers with non-biological systems possessing far superior decision-making capabilities. (The eventual redundancy, even expunging, of biological life is arguably a conclusion of this type of transhumanist argument.)
But life is more than algorithm; this claim can be made without recourse to theology (Christian or otherwise). Antonio Damasio accepts that although algorithms are involved in the construction of living organisms and their genetic operations, ‘they are not algorithms themselves’. 14 Algorithms, in his view, are but ‘steps in the construction of a particular result’, not the result itself. Substrates are crucially important, not interchangeable, for, as Damasio points out in making a distinction between emotive processes and emotions: ‘living organisms are collections of tissues, organs, and systems within which every component cell is a vulnerable living entity made of proteins, lipids, and sugars. They are not lines of code; they are palpable stuff.’ 15
Damasio's claim has implications for what might be a ‘decision’ (for example, a reaction to encountering another person in need). Poetry, as an analogy, can shed light on what is at stake. Lines of code are not poetic. A poem touches us not only because the meaning of its phrases is flexible but because it is heard by people who respond emotionally. Chemical components are involved in our operating as living organisms but ‘feelings reflect the quality of those operations and their future viability’. 16
The emotions evoked by poetry (and, we contend, our encounters with other people) are experienced by us, whose body's frailty confronts us with a contingent horizon of mortality. Hubert Dreyfus made just such a point in his seminal paper forty years ago: Indeterminate needs and goals and the experience of gratification which guides their determination cannot be simulated on a digital machine whose only mode of existence is a series of determinate states. Yet, it is just because these needs are never completely determined for the individual and for mankind as a whole that they are capable of being made more determinate, and human nature can be retroactively changed by individual and cultural revolutions. 17
Dreyfus's point is crucial: indeterminacy is integral to intelligence. Decision-making is only ‘intelligence’ if the responding takes place against a horizon of contingency, of mortality. This raises the matter of hope, and hoping. The determinate states of non-biological systems are without hope because they do not require hope. Artificial intelligence is hopeless. Damasio (again, not making a theological point) signals how important hope is: ‘the scope of human suffering and joys is uniquely human, thanks to the resonance of feelings in memories of the past and in the memory they have constructed of the anticipated future’. 18 Hope, in the face of contingency in mortality, is not merely an emotive process (in Damasio's terms, of chemical reactions in our brain) but a feeling of which emotive processes are a part, but only a part. Hope-full intelligence requires being able to feel; feeling relies on emotive (algorithmic) processes; without fragile, self-aware embodied selves there may be emotive processes but not feelings. Whatever significant decision-making capacities are available to artificial, non-biological systems, this is not ‘intelligence’.
We ought not to view such fragility or contingency as merely a barrier to be overcome—as in the transhumanist vision. Rather, to paraphrase Christian disability theologian Deborah Creamer, limits are normal—unsurprising, intrinsic and good. 19 Death, the ultimate contingency in being human, imposes both finitude and finality upon us. ‘Religion's role’, says Dorothee Soelle, ‘is to remind people of limits, to give them practice with limits, to arouse consciousness of the limits of natural existence, not to deny these limits’. 20 Finitude is a problem because it stymies our ambitions, cuts across our hopes for achievement and, significantly, disrupts our relationships. On the one hand, this can be seen as profoundly ego-centric—we bang against the immovable wall of death in frustration. On the other hand, that same dynamic within us is integral to being a living species that can practise compassion as a particular mode of intelligence.
Wisdom is More than Amassing Information
As Russell Ackoff (a pioneer in systems thinking) once observed, ‘An ounce of information is worth a pound of data. An ounce of knowledge is worth a pound of information. An ounce of understanding is worth a pound of knowledge’. 21
Ackoff understood that efficiency can be spoken of independently of the actor, but not so effectiveness. Thus, he inferred: although we are able to develop computerized information-, knowledge-, and understanding-generating systems, we will never be able to generate wisdom by such systems. It may well be that wisdom—which is essential for the pursuit of ideals or ultimately valued ends—is the characteristic that differentiates man from machines. 22
It may be that a pastoral carer is able to deploy artificial intelligence systems to amass copious data about an individual's or a family's circumstances. However, as efficient as this might make the care, it is doubtful whether their practice would be deemed effective in terms of wise care. Lest we drive a dichotomy between information and wisdom, it is important to acknowledge a place for evidence-driven judgements. Frederick Ellett argues that there is certainly a need for Aristotelian phronesis, as practical wisdom, to be expanded to accommodate probabilistic thinking. 23 Risk assessment, for example, relies heavily on algorithmic systems, not merely on intuition. At the same time, not only the data but also who collects and analyses it must be scrutinized to avoid dataism—that is, a belief in the objectivity of quantification and uncritical trust in the institutions that collect and analyse data. 24
If we are to talk about wisdom we cannot move far from compassion, that indispensable disposition integral to, but not sufficient for, effective pastoral care. Shannon Vallor contends that the goods internal to the practices of caring are in serious danger of being lost if carers abdicate in favour of technologized systems of care. 25 Humans’ habits of care are not only where, but how, virtues of caring are developed. To step back and rely on autonomous devices (carebots) to undertake not only the monitoring but also the practical actions (such as washing) of caring for a person in need is to ‘affect our own abilities to flourish as persons capable of care’. 26 The very burdensomeness of caring contributes to shaping the character of a carer; it costs. To talk of the burden of care is, as Vallor reminds us, ‘ethically perilous’, but the degree of perceived peril depends on assumptions about the value of self-sacrifice, the extent to which self-sacrifice is virtuous (not degrading into self-abnegation), and the courage of both the cared-for person and the one caring in talking honestly about harsh economic and emotional realities. 27 In such perspectives, caring becomes a two-way rather than a one-way street: caregivers need those they care for, and those cared for need caregivers. 28 Without practising care, as Vallor argues—and, I would emphasize, without allowing ourselves to be cared for—we cannot know the internal goods of reciprocity, trust (that one day we, as carers, may well need care ourselves), and the goodness of care. To concur further with Vallor, there is much that is grim that shapes us, for example fear and anxiety. On the other hand, it is within caring relationships that we are moulded by ‘gratitude, love, hope, trust, humour, compassion and mercy’. 29 Such moral formation comes not despite but through the ‘finitude and fragility of our existence’, which, as I argue throughout this article, we do not hide from but confront. 30
Whilst not suggesting that we eschew all forms of technologized care, it is still important to ask whether or not there can be artificial compassion. In other words, is love more than algorithm? How we answer that question might well depend on the sort of love we are talking about. We could consider the possibility of two discrete AI systems ‘falling in love’, as in the sophisticated (and, for now, fictional) systems of science fiction (Data in Star Trek or Isaac in The Orville). Such AI–AI love is not our concern here. However, returning to the present, there are examples of human–AI love, at least in terms of humans falling in love with AI systems. Rob Brooks points towards evidence in Japan concerning the Nintendo game LovePlus, whose ‘users acquire genuine affection for their virtual girlfriends’. 31 Using VR, users can go on dates with their virtual girlfriend at certain locations. In 2009 a 27-year-old man married Nene, his virtual girlfriend. Aside from romantic attachment, physical love-making in VR or with sex dolls crosses the horizon.
Perhaps AI stimulating responses in a person is just another form of love? As Elyakim Kislev invites us to consider in reflecting upon our falling in love or being emotionally attached to another person, ‘we are basically excited about a collection of signals we receive … the expectation that this person will satisfy our needs releases neurotransmitters and hormones like dopamine and oxytocin that give us this sensation of “love”’. 32
In this sense of experiencing love, shorn of assumptions of what is ‘proper’ love, the possibilities for artificial love abound: acts of love and warmth, produced responsibly by technology, have already been found to create responses similar to those we get from other humans. Technology has increasingly shown the power to satisfy some of our most sophisticated and nuanced emotional needs. Many components of human desires that we used to think of as irreplaceable, almost mystic, can be produced by technology now or will be in the near future. 33
So, is ‘love’ really the sensation of which Kislev writes in terms of biochemistry? Perhaps the sensation can be generated by engagement with AI. It is possible that we delude ourselves when we feel that we’ve ‘fallen in love’; a more accurate—even honest—description would be to say not ‘I love you’ but ‘I have the sensation of love about you’. Putting the matter in this way is, to some extent, comedic, but it raises a crucial point: is there more to love than the sensations of love? Just as it is argued that there is more to intelligence than super-sophisticated decision-making, we want to hold onto the claim that love is more than physiological sensation/reaction. Equally, we may well seek to maintain the line that love can be a sensation but that it does not require that sensation.
This is why identifying the form of ‘love’ we are talking about is so important. Kislev is correct, but only up to a point. That point is where satisfaction of emotional needs is used to define the sensation of love. Many religious perspectives, and here Christianity in particular, would want to make the case that sensation/satisfaction of emotional needs does not exhaust the meaning of ‘love’. We can circle back to the challenge over the intelligence of artificial intelligence and draw on Ted Peters to make the important point in a slightly different way. Peters has seven defining characteristics of intelligence that would need to be met for AI to be intelligent. It is his seventh that concerns us here because it is the inability to render sound judgement that constrains AI. 34 AI relies on pattern recognition rather than judgement. 35
Peters’ point is that whilst AI developers are focused on autonomous intelligence, that is only part of human intelligence. Human intelligence has a measure of autonomy but, most crucially, human intelligence is social: Yes, our brains as physical entities are autonomous, to some degree. But the brain circuitry that co-develops with social interaction is anything but autonomous. It is the product of loving interaction with our families and our educational institutions, without which such intelligence could not grow or mature. 36
Peters’ requirement for social intelligence is not too far from our earlier point that intelligence operates against a horizon of mortality, the ultimate contingency. This is so because social relations are elements of contingency and we engage in them against a horizon of loss. Rarely will two people in a close relationship die at the same time. One is left behind. The painful reality is that the core of sociality is contingency. Our social relations are with fallible, fragile, and free human others; as we are to them. We might sincerely promise fealty, fidelity or commitment but in so doing we are asking to be trusted. In relating to another we are trusting them. The depth of mutual commitment differs according to the type of social relationship. We are likely to expect more—and therefore trust more, and seek to be trusted more—of a spouse, partner or lover than of an acquaintance we meet regularly at a bus stop.
As Peters argues, pattern recognition (however sophisticated by AI) is part of our decision-making process, but only a part. We make judgements in contexts of trust—we do not practise mere behaviour pattern recognition. If we accept the argument thus far, Peters helps take us into the realm of theological reflection upon love. It is a particular form of love that takes priority over rational/pattern recognition decision-making in Christian theological terms. That particular form is agape love—‘willing to sacrifice on behalf of the welfare of the neighbour’. 37
Peters draws on the theological work of Nancey Murphy and George Ellis who describe spirituality in terms of kenotic love: This kenotic ethic—an ethic of self-emptying for the sake of the other—is in turn explained and justified by a correlative theology: the kenotic way of life is objectively the right way of life for all of humankind because it reflects the moral character of God. 38
The question then becomes, could there be an artificial kenotic love? This would be more than simply handling contingency against the horizon of mortality. It would be electing to make particular agapic responses to the needs of others. On the one hand, it might well be possible to programme a form of kenosis into an algorithm. 39 For example, the decision-making safety systems of an autonomous (driver-less) vehicle could be tilted towards sacrificing the life of the driver and passengers should there have to be a choice in an impending road traffic accident. However, that would be a challenging feature to promote in a marketing campaign: ‘your car will kill you rather than a pedestrian’.
The real-life implementation of artificial intelligence is already having to consider such ethical questions. Were a vehicle to be programmed with such a kenotic dimension to its algorithm, would that qualify it to be termed artificial love? No, because even kenotic decision-making cannot properly be abstracted from the capacity to make judgements, and for those judgements to be profoundly social. Agapic love, expressing kenosis, is not so much a decision-making process as a way of life lived in communion with others. ‘Communion with others’ is used here deliberately in place of ‘relation with others’ because ‘communion’ points to a quality of relation, not simply relation per se. This brings us back to wisdom in pastoral care. Just as artificial intelligence points us away from judgement in favour of pattern recognition, artificial care steers us away from agapic relationships and instead towards control over contingency (and thus of contingent people).
The Oppressive and/or Liberatory Potential of AI in Pastoral Care
Machine learning is already assisting in pastoral care, and care more generally. A project at the Centre for Theological Inquiry in Princeton, New Jersey has developed a small-scale pilot area within the virtual world of the game Minecraft. 40 The intention is to encourage non-neurotypical young people in Christian spirituality. Working cooperatively, with the help of AI bots within Minecraft, the young people (here as the ones ‘in the know’ about the game) learn aspects of Christian discipleship. In a very different context, a data-processing company in the United States offers churches ways of targeting their mission activities to people most likely to be in need (not only now but in the future). 41 Big Data analysis of multiple sources enables a product that can let a church target a ministry, for example, to those more likely than others to divorce or have mental health vulnerability (now or in the future). In Italy some home care is supported by AI robots in apartments for elderly people. 42 It is not difficult to envisage some wider development in care homes, including those owned by Christians. Chat-bots (driven by AI) could deliver information about religious beliefs to enquirers more comfortable with a non-human enquiry portal. 43 On a less serious note, much time could be saved—and distressing break-ups or divorces possibly avoided—if pastors insisted that couples use AI systems for compatibility testing before marriage. After all, the couple probably met on an online app in the first place. 44
There are other future possibilities, since measures for diagnosing psychological health might be developed to measure spiritual health. What time could be saved by busy pastors who could let AI choose the Bible texts for study groups, and select appropriate supplementary readings from theologians? AI-assisted speech technologies exist for people unable to use their voice (the theoretical physicist Stephen Hawking used one, and at least one stand-up comedian, Lee Ridley, performs using a speech device). 45 AI-captioning of speech offers people with hearing limitations independent access to auditory-focused contexts. These technologies extend the possibilities for someone with a speech and/or hearing impairment to engage in preaching, conducting worship, and offering pastoral counsel.
Quite what liberation and oppression mean in such diverse contexts requires sophisticated considerations. Not least is the gathering of personal data, integral to these systems, that raises profound questions of exploitation by surveillance capitalism, which monetizes everyday human behaviour and is inequitably available and controlled across multiple legal jurisdictions. 46 Some people already have qualms about AI-assistants (such as Alexa or Siri) ‘listening in’ for activation phrases. Streaming highly personal conversations in counselling settings to the cloud for AI-captioning would raise significant privacy issues. On the one hand, the assistance of AI appears to be liberatory by offering enhanced independence for its users (no longer having to rely on a sign-language translator being in the room). On the other hand, reliance on this technology absolves people of the effort required to overcome obstacles to engaging in caring relationships. This is not an argument to maintain hurdles that disable people from participating in social relations because it is somehow good for hearing people to take the effort to communicate with those with impaired hearing. Rather, we need to be cautious lest we abdicate from the responsibilities of human interaction by relying on technological solutions that may erect different obstacles.
The Importance of Empathetic Presence
It may be nostalgia that energizes attempts to retain human–human relations in pastoral care. Memories of supportive conversations and appropriate touch from one person to another should not be dismissed lightly. Even beyond the specific context of counselling relationships, in which the client–professional relationship is generative of insight, having another person (albeit the right person) with us in troubled times is immensely comforting. If models of pastoral care place such an emphasis on embodied, engaged and, particularly, empathetic presence, then it might seem that the hurdles to incorporating (if not actually deferring to) artificial intelligence systems are high.
The field of social robotics suggests such obstacles are not quite what advocates of human–human interaction purport. Paul Dumouchel and Luisa Damiano make a case for artificial empathy that relies on positing the mind as neither in the brain/head nor outside an agent. Rather, the mind is ‘in the relations that obtain between epistemic agents’. 47 If robots are to exhibit social presence such that a human feels they are in the presence of or confronted by ‘someone’ then this cannot be merely an ‘act of projection on the part of human partner’ but an encounter that causes feelings crucial to such interactions. 48 On the one hand, it may be that such robots are deceiving us by taking advantage of our propensity to anthropomorphize; the robots simulate emotions in response to reading our cues. On the other hand, research is directed towards developing robots ‘that actually have emotions—“true” emotions, whose expression would be genuine’. 49 For Dumouchel and Damiano, it is a mistaken methodological principle that results in our deeming robotic emotions not ‘true’ emotions. Instead, affective processes are situated ‘not in the individual agent, but in the relationship between the agent and his environment, in the interindividual unit formed by agent and environment’. 50
For the moment, robots of such technical sophistication do not exist and therefore our encounter is with, at best, mimics of our emotional cues. All we might claim here is that it would be a mistake to rule out the possibility of artificial emotions which humans might encounter and to which they might respond in the relations of biological and non-biological epistemic agents. Nevertheless, we should not underestimate the ethical dangers, particularly for those who are vulnerable with significantly limited or declining cognitive capacity. Noel Sharkey and Amanda Sharkey draw attention to the trade-offs that come as part of living with robots as care assistants. 51 What is intended to make life safer (under the gaze of AI systems) ‘could turn into the equivalent of imprisonment in the home without trial’. 52 Invasions of privacy can be similarly problematic, should recordings of someone's interactions with their carebot be made accessible to an elderly person's adult children. 53 As much as there may be some merit in a carebot being non-threatening as a sophisticated ‘doll’, Sharkey and Sharkey properly highlight the potential infantilization that may result (or arguably sometimes be intended) when ‘an elderly person with dementia entering a relationship with a robot may not be in a position to distinguish this from a relationship with a socially and emotionally competent being’. 54 In the language of this article, abdication from in-person caring by using AI systems might be useful, in quite a selfish way, for ‘alleviating our guilt about the isolation and loneliness suffered by many elderly people’. 55
Presence is important when it is offered by another human because that presence is costly. Whilst a technician may be interchangeable, it is less certain that this is true for pastoral carers. If, as we have argued, pastoral care is not a case of imparting information or ‘fixing’ someone's problems, then inter-personal relationships are crucial. To receive care from a particular person is to be gifted exclusive engagement with them, for a time. The value of the one being cared for is reflected in the costliness of the attention. When attending to one person, a carer is not available to care for others. Presence, in this sense, is not connection into a system but connection to another person. We might baulk at having to deal with automated systems to get our queries answered by utility providers. We are able to access complex AI systems for our particular requirements, although the casual conversations that used to lubricate social relations, when we could talk over the phone or across the counter to human representatives, are missing. However, in terms of pastoral care the issue is much deeper.
It is the very fallibility and contingency of human presence that renders it precious. A human carer is always learning, and so too are AI systems. The difference lies in human carers having limited opportunity to change; our learning is very much time-limited. This places a greater importance on decision-making to care, and within caring practices, than having the option to perpetually learn. If I know I have three chances to enter my PIN correctly at an ATM, the stakes are higher for each attempt. If instead I have unlimited attempts, each decision carries significantly less weight. AI might secure our access to a vast (and interactive) knowledge base but, no matter how anthropomorphic the connection point is made, our access is via a substitutable, even interchangeable, bot. There is no sacrifice made by the bot; the bot is merely fulfilling its technical function. This is not the case for a human pastoral carer who has made costly decisions (not always, or only, in financial terms) to learn, to establish relative priorities over professional and family responsibilities, or to be caring towards this particular person. A carer has a life and purpose beyond being a carer. Regardless of financial exchanges that may or may not be taking place, a pastoral carer gifts themselves to another person for a time. They are listening to this person, not another. The carer is caring in these moments, not doing something else. Despite there being some aspects in which one carer can be substituted for another, a particular relationship will be built in which a carer is choosing to care, not being merely a conduit of professional technique or information. An AI bot will fulfil its function. A human carer will express their humanity. Human presence cannot be substituted with artificial support, although human presence may be augmented by AI systems.
Conclusion
It is important not to separate intelligence, love, empathy, hope, finitude and wisdom when understanding each concept. These are inter-related in the experience of doing our humanity. For analytical purposes it might make sense to treat intelligence without reference to love, empathy, hope, finitude and wisdom. Similarly, hope and finitude might be interrogated as concepts without consideration of their connections to intelligence, love, empathy and wisdom. However, in terms of pastoral care and, we would contend, a holistic appreciation of non-biological decision-making systems, these must be held together. The importance of such an approach can be usefully expressed in a number of aphorisms:
Biological ‘feelings’ of love do not exhaust love.
Complex pattern-recognition does not mean intelligence.
Response to bodily cues does not equal empathy.
Amassing information is not the same as knowledge.
Probabilistic reasoning is not hope.
The possibility of being ‘switched-off’ is not equivalent to mortality.
Assisted-activities are not substitutes for care.
Technical skill is not wisdom.
It is more than a question of pondering how AI might assist pastoral care, because algorithmic systems shape how we think. Algorithmic systems are never mere tools; they are socio-technological systems that shape us and can be shaped by us (at least for the moment, and until we let AI render us humans superfluous). The question becomes: to what extent is a particular form of AI assisting pastoral care? To abdicate care and leave it to algorithmic systems would be a mistake. To delegate care, however, is to draw on the gift of AI systems, but for us humans to retain our responsibilities for one another.
So, I suggest:
We rule out the notion of artificial care because, for decision-making to be compassionate—to make judgements and not simply recognize patterns—care has to be against a horizon of genuine contingency and mortality. Furthermore, although kenotic criteria might be programmable, this would still not be compassion, as kenosis means little, if indeed it means anything, in the absence of contingency as potential jeopardy.
We consider, critically and not naively, the advantages of AI-assisted compassion and care. The contrast here is with abdication of responsibility. A positive approach would mean being open to delegating responsibility to AI in some contexts. This would be delegating in the sense of retaining responsibility and not using AI to avoid offering compassion. Instead, the opportunity is to enhance our compassion.
We explore how Big Data, integral to the pattern-recognition capacities of AI, might help expose injustice, and imagine how AI systems might help develop strategies for pastoral carers to fulfil their responsibility for addressing systemic oppression.
We remain alert to robotic mimicry of emotions that might lure us into substituting non-biological systems for the costly, empathetic and non-functional presence of human pastoral carers. Nevertheless, we need to engage in more critical consideration of interindividual models of emotions as, and when, AI systems are made more sophisticated.
In returning to our opening parable, we can ask: to what extent was ‘Samaritans ’R Us—the Artificial Care Service’ a neighbour to the man who went down from Jerusalem to Jericho? Perhaps the Samaritans who developed the system are remote neighbours? But herein lies the nub of the challenge: was theirs a substitute neighbourliness rather than an augmented neighbourliness? One dimension of an answer pivots around the Jericho-bound traveller having to register for the service (for their neighbourliness) in advance. He had secured access to a system (albeit a useful one), but this person in need had not encountered the gift of a neighbour. Another dimension considers the extent to which the Samaritan developers denied themselves the internal goods of practising care by the roadside. ‘Go and do likewise’ cannot mean relying on AI-assisted response systems to fulfil the calling to offer and be shaped by pastoral care.
