Abstract
The unique capabilities of artificial intelligence (AI) have forced theologians to develop analytical categories beyond the instrumentalist model of technology. Recent work examines AI in terms of whether it has the qualities of a person, its effects on character, and its embedding in structures of sin. Constructive responses have focused on principles, communities, and virtues. None of these responses fully addresses concerns raised by critical analyses, suggesting that moral theology is still searching for a replacement for the instrumentalist model of technology.
Artificial intelligence (AI) is transforming society: 1 social media algorithms frame many of our relationships; finance occurs “at the speed of light” thanks to high-frequency trading algorithms; 2 and higher education struggles with how to address student papers written by generative AI. Some developments in AI promise to bring great benefits, such as the use of AI to better tailor cancer treatments or to design new forms of assistive technologies. 3 Yet other aspects of an AI future—such as the creation of autonomous weapons—raise significant ethical problems. This article, while not denying the many benefits of AI, will focus on problems presented by its development and suggestions for how to prevent them.
The last ten years have seen the development of a significant amount of literature in secular ethics and social science addressing these problems. Governments, NGOs, religious groups, and professional societies also have produced policy white papers suggesting ways forward. 4 Different units in the Vatican have proactively sought to form partnerships to address the dangers of AI, with the Pontifical Academy of Life releasing the “Rome Call for AI Ethics” in partnership with Microsoft, IBM, and the Food and Agriculture Organization. 5 The Dicastery for Culture and Education has partnered with academics to sponsor a book on AI and the culture of encounter, guidelines for the practical implementation of AI, and other consultations. 6 These efforts hearken to the call of Pope Francis for the “ethical development of algorithms . . . to help create a new ethics for our time.” 7
Despite these responses by secular ethicists, policy bodies, and Vatican dicasteries, moral theology, as a field, has been relatively slow to address the question of AI. 8 That relative dearth of study is changing rapidly, though, with a growing number of papers and scholars writing on the topic. 9 What is especially interesting about these developments is that AI has forced Catholic ethicists to engage new analytical categories in regard to technology. Historically, moral theology has had relatively little to say about the subject, aside from certain weapons technology and some areas of reproductive and medical technology. 10 When moral theologians have analyzed technology, they have tended to take an instrumentalist stance, meaning that technology was considered primarily as a neutral tool. Consequently, moral analysis has focused on the intention of the user.
In contrast, straightforward instrumentalist analyses are rarely applied to AI, at least in academic works. 11 While much popular discourse still claims that AI is just a tool, few scholars believe a technology that seems so independent of human control to be a mere instrument. Few contemporary ethicists claim that all that matters with AI are the user’s intentions. Rather, the ethical stakes of AI are far more intricate, foregrounding the complexity that philosophers have long seen in much of modern technology.
This article will review the analytical categories being developed in secular and theological analyses of AI. It will consider analyses of AI falling under the categories of person (will AI become superintelligent?), paradigm (how does AI change how we view the world and interact with others?), and structure (how is AI shaped by existing structures of sin, such as racism?). 12 Then, it will survey some of the emerging constructive ethics in relation to AI, such as principles and virtue ethics, using the categorization of the first part to determine their strengths and weaknesses. Finally, it will suggest some avenues requiring further exploration.
Person
The majority of theological writing on AI has responded to questions about its potential personhood. These writings speculate about super-advanced AIs that could achieve or at least mimic consciousness and human intelligence. Such questions emerged almost simultaneously with AI itself, in Alan Turing’s discussion of a test to determine whether a machine could think, or be considered intelligent, based on whether it could fool a human interlocutor. 13
Many discussions in this vein have arisen out of debates over transhumanism. Hans Moravec and others have argued that people could achieve immortality by uploading their minds into superintelligent machines. 14 Theologians have criticized these suggestions for failing to acknowledge the importance of embodiment to the human person. 15 Moreover, the transhumanist, secularized vision of salvation is too narrow, missing the fulfillment to be found in the beatific vision.
A second strand of reflection on superintelligence, exemplified by Nick Bostrom’s work, focuses on AI as an existential risk. 16 A superintelligent AI that had the ability to improve its own programming would eventually become astronomically more intelligent than humans, leading to an intelligence explosion. As science fiction movies like Terminator or 2001: A Space Odyssey suggest, unless ethically constrained, a superintelligent AI might see us as a threat and destroy humanity. Even a non-malevolent machine could be dangerous. Imagine, as in Bostrom’s thought experiment, an AI programmed to maximize paperclip production. It might decide to do so by conquering the world and shifting all industrial production toward paperclips. This is an extreme example of the so-called “alignment problem,” the difficulty of explicitly programming our reasonable ends into machines in ways that do not lead to unreasonable outcomes. 17 These scholars thus argue for the urgency of determining ethical principles and programs that can constrain the existential risks of AI.
Such arguments contain a religious element insofar as they address the ultimate end of humanity. Scholars of religion have described discussions of transhumanism or existential risk as translations of the theological category of the Apocalypse into technological terms. 18 Both transhumanists and existential risk scholars assume that the development of superintelligent AI will introduce a break with the world as we know it, characterized by the dominance of machine-mediated forms of life with qualities that we can only dimly predict. Beyond this horizon, or “singularity” as Kurzweil terms it, 19 we can only see the future darkly. Transhumanists predict a utopian future of immortality, transformed experience, and material plenty, while those concerned with existential risk see a dystopian future of destruction. Both place what should be God’s transformative activity into the hands of a machine.
A third strand of literature concerns itself with whether humans can be in relationship with AI. If AI were truly intelligent, would it be a person? Would it have rights? Could an AI be spiritual? Given the shortage of human caregivers, should we enter into relationships of care with machines? 20 Can AI undertake pastoral tasks? 21 If one thinks machines possess true intelligence, and that this intelligence is a sufficient condition for personhood, one might answer all these questions in the affirmative.
Yet, there are basic philosophical reasons to doubt the possibility of AI’s intelligence and personhood, at least for the foreseeable future. As philosopher John Searle argued, computers lack the intentionality that is the basic requirement for conscious intelligence. 22 He compares AI to a man in a box who is handed combinations of Chinese characters and then, according to defined rules, must hand a different sequence of Chinese characters back out of the box. The man in the box does not understand Chinese, even if he is able to transform a query in Chinese into an answer in Chinese. He merely follows the rules; he has no grasp of semantic meaning. Similarly, even our most advanced large language models merely transform queries into likely responses based on rules derived from their maps of the relationships between words in a language. Meaning is a basic problem, since all computation relies on Claude Shannon’s theory of information, which explicitly disregards semantic meaning. 23
Similarly, drawing on Martin Heidegger, Hubert Dreyfus has argued that AI lacks a world, a horizon of meaning, that provides the basis for human common sense. 24 This phenomenological world is tied to our embodiment. While generative AI like ChatGPT might be said to construct a web of connections between terms, and robots with AI are starting to address embodiment, they still lack a final element of Heidegger’s understanding of “world” as a horizon of meaning: care. AI cannot yet care about itself or others in its world.
Further considerations raise doubts as to whether AI can be truly intelligent. Despite the more generalizable advances of generative AI, 25 AI’s most impressive feats, in which programs seem to surpass human intelligence, have come in very narrow tasks, like beating human experts at games such as Go or chess. Critics have noted that these victories generally have not come from AI merely teaching itself from data. Instead, they required ongoing tinkering by engineers who translated human expertise and strategies into the machine. For example, Deep Blue, the program that defeated chess grandmaster Garry Kasparov, was programmed to appear more uncertain than it was in order to put its opponent off balance psychologically. Likewise, considerable human ingenuity was programmed into AlphaGo, the AI that beat a professional human player at the board game Go. 26 Further, the intelligence of AI is a narrow kind of problem-solving intelligence. It is not a relational, meaning-filled encounter with the world and others.
Among theologians, Protestant theologian Noreen Herzfeld has perhaps gone furthest in challenging the personhood and intelligence of AI. She uses a Barthian lens to argue that AI cannot truly be in relationship with us. For Barth, the four aspects of encounter are: looking each other in the eye, speaking to each other, helping each other, and doing so freely and gladly. 27 Herzfeld documents how AI technologies lack a face, that they are incapable of self-revelation in speech, and lack the affective capacities necessary for relationship. 28 AI thus lacks capacities generally connected to personhood: intelligence, freedom, and relationality.
There are thus many reasons to doubt that AI is close to personhood. Still, I would be reluctant to deny that machine personhood could ever be possible. Some kind of radical rethinking of AI might arise that would address not only the concerns raised here but also the impoverished vision of intelligence held by the field of AI. As Turing argued, who can say that God might not infuse a soul into properly formed matter, even machine-matter? 29 Perhaps only organic matter can be the substructure for consciousness, but that has not been proven. Still, these discussions of AI personhood lie in the realm of speculative theology. They do not address the concrete issues facing us today.
Paradigm
Teresa Heffernan worries that conversations about superintelligent AI not only miss the concrete problems society faces but also serve to distract us from them. 30 AI’s implementation in social media, commerce, and politics is affecting us now, whereas discussions of AI personhood turn our attention to problems that might occur fifty years from now or might never occur at all. In contrast, the remaining two diagnostic lenses address present moral concerns. Instead of considering AI as person, these lenses examine AI’s effects on the human person’s mental and material condition. Recent research suggests that those effects may be profound.
One group of scholars examines how AI technology both emerges out of and reinforces a particular stance toward the world. Mid-twentieth-century critics of technology, like Heidegger or Jacques Ellul, argued that we should not consider technologies singly; instead, we should see each individual technology as the manifestation of a more basic stance toward the world. For Ellul, each particular technology embodied a general obsession with efficiency. 31 For phenomenologists like Heidegger or Edmund Husserl, every technology emerges from a reductionism that views creation as merely matter for use and manipulation. 32 Similarly, theologian Romano Guardini wrote about the underlying worldview that shapes how technological society approaches humans and nature. 33 For these authors, each individual technology is only an instantiation of a broader phenomenological stance.
At the same time, a new technological artifact like AI can reinforce this broader worldview, embedding it more deeply in both individual character and society. Thus, Albert Borgmann chronicles how basic, everyday technologies, like the microwave or central heating, emerge from what he calls a device paradigm, which views everything as a commodity. 34 Moreover, our use of these technologies has the effect of further bolstering that same outlook. Each new tool allows us to see as a manipulable commodity something that was not seen that way before, just as the microwave transformed our experience of food and cooking. In a slightly different way, Hartmut Rosa describes how new technologies reinforce demands for speed and efficiency. Email, for example, emerged as a response to such demands and, in theory, should leave us with more time than slower letter writing. 35 Yet its increased speed and availability in fact increase our workload, forcing us to work even faster and more efficiently to clear our ever-filling inboxes. This phenomenon is common: in many workplaces, automation does not eliminate work but merely shifts it and may even increase it. 36 Similarly, AI may speed up many of our tasks, but it will not leave us with more time, as it will create new tasks and force us to do some things faster and more efficiently.
Pope Francis describes this stance affecting the contemporary person as the “technocratic paradigm.” In line with prior authors, it is a stance that “exalts the concept of a subject who, using logical and rational procedures, progressively approaches and gains control over an external object . . . . It is as if the subject were to find itself in the presence of something formless, completely open to manipulation.” 37 It makes “the method and aims of science and technology an epistemological paradigm which shapes the lives of individuals and the workings of society,” contributing to “a reductionism which affects every aspect of human and social life.” 38 It has become difficult to use technologies “without being dominated by their internal logic.” 39 In Francis’s understanding, technologies are certainly not neutral instruments; they concretely shape our way of interacting with the world.
Current implementations of AI embody this technocratic paradigm. They not only enact a rapacious attitude toward the environment, in their energy use and in the destructive effects of mining the rare earth minerals necessary for computer chips, 40 but also embody a view of other people as manipulable resources. For example, Shoshana Zuboff discusses how AI applications use behavioral information about individual likes and habits (what she calls “behavioral surplus”) to alter what people see in social media feeds in order to manipulate their behavior. 41 She calls this system “surveillance capitalism.” China’s experiments in social credit more explicitly use the broader surveillance potentialities of AI to manipulate behavior. Such systems grow out of a worldview that fails to see people as free actors, and their use reinforces that worldview. 42
This demand for control can even infect our personal engagements. Sherry Turkle describes people’s attraction to AI and social media as arising out of a need to protect themselves from vulnerability to others. For this reason, individuals only engage with those who are like them or with technologies that are designed to respond to their emotional needs. 43 AI can reinforce an extremely manipulative worldview, in which we try to gain control over others, just as powerful institutions try to control us.
This controlling technocratic paradigm contrasts with the culture of encounter that Pope Francis calls us to cultivate. 44 Francis encourages us to meet the reality of the other through intersubjective engagement, especially through the accompaniment of the poor and marginalized. In Martin Buber’s terminology, we should encounter the other as a thou, a person, rather than an it, an object, as done within the technocratic paradigm. 45 Many authors fear that AI may disrupt this intersubjective engagement, with people becoming more enthralled by their screens as AI algorithms present them with a reflection of their own desires. 46 The Dicastery for Culture and Education-sponsored document Encountering Artificial Intelligence reflects on these obstacles to encounter. 47
Structures
A third diagnostic category sees shifts in the subjective paradigm with which people engage the world paralleled by objective structures of sin that materialize situations of injustice. 48 Critiques of technology arising from liberation theology urge us to take into account the standpoint of the poor. For example, the long history of racism in the US has led to lower socioeconomic status (SES) for Black Americans because of factors like lower household wealth due to historical job discrimination and redlining that prevented the accumulation of wealth in real estate. The resulting lower SES and bad neighborhoods are correlated with worse health outcomes, greater likelihood of involvement with the criminal justice system, and other detrimental effects. Historical injustice has led to unjust structures.
These unjust structures may be translated into AI applications and then expanded by them. For example, an algorithm that selected high-risk patients for increased health monitoring ended up biased against Black patients because it had calculated risk in terms of health-care cost rather than health outcomes. 49 Because Black patients have historically had fewer resources for health care, they have not been able to spend as much, and thus, their health care appears to cost less. Hence, AI did not identify them as high-risk. Or, to take another famous example, Amazon created a hiring algorithm that was biased against women. 50 Historically, Amazon has not employed or promoted many women, so when it trained its algorithm to identify features of its successful workers, the AI picked features that selected men and excluded women. In both of these cases, AI recognized an actual pattern in the world and used it to make predictions. In that narrow sense, these were successful applications. The problem is that they reflected and magnified the biased aspects of our world.
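The mechanism of proxy bias in the health-care example can be sketched in a few lines of illustrative Python (a hypothetical toy with invented numbers and names, not the actual system cited): when “risk” is operationalized as past spending, a patient with identical health need but historically fewer resources is ranked lower.

```python
# Toy sketch of proxy bias: two patients with identical underlying
# health need, but patient B has historically had fewer resources
# and so has spent less on care. All values are invented.
patients = [
    {"name": "A", "health_need": 8, "past_cost": 10_000},  # well-resourced
    {"name": "B", "health_need": 8, "past_cost": 4_000},   # under-resourced
]

def risk_by_cost(patient):
    # Biased proxy: past spending stands in for future risk.
    return patient["past_cost"]

def risk_by_need(patient):
    # Better proxy: a direct measure of health outcomes or need.
    return patient["health_need"]

# Ranking by cost flags only patient A for extra monitoring,
# even though both patients are equally sick.
flagged = max(patients, key=risk_by_cost)
```

The pattern the algorithm finds is real (A really did cost more), which is why the narrow sense of “success” in the text can coexist with an unjust result: the injustice enters through the choice of proxy, not through a computational error.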
Because of this, commentators like Cathy O’Neil fear that AI may intensify injustice and stereotypes, applying discrimination far more consistently than humans ever could. 51 Biases could be amplified, but some of these biased outcomes could also be eliminated. For example, the health-care algorithm could better define risk in terms of health outcomes rather than health costs. In other cases, better-selected training data may help. Joy Buolamwini discovered that AI facial recognition systems often failed to recognize Black faces, especially those of Black women. 52 The problem was, in part, that the databases used to train these systems lacked sufficient pictures of people of color. By including a broader diversity of faces in the training data, the programs could be improved. It is more difficult, though, to fix problems caused by fundamental injustices in our social system.
These biases may be difficult to address in a post hoc fashion for two reasons. First, machine learning algorithms are opaque to human analysis. Their power lies in finding patterns in data that humans cannot recognize, iterating over the data until the system can provide reasonably accurate predictions or identifications. Thus, even their programmers may not be able to tell which factors caused a certain result, or whether that result was due to improper bias. Second, the whole point of many of these systems is to find characteristics that predict differential outcomes: for example, that this person is more likely than another to watch a certain video. It is sometimes difficult to determine whether the use of those characteristics is unjust. These concerns force us to ask what a just society would look like; perhaps answering that question would be the only way to improve the justice of AI. 53
These biases are especially disconcerting because they target those who have already been disadvantaged, making their situation worse. Far from advancing a preferential option for the poor, existing biases against the poor may be amplified by AI. The most frightening thing for many of these people is that they might have no one to whom they can appeal when an AI goes against them. Mid-level bureaucrats may be unable to explain (given AI’s opacity) or overturn (given bureaucratic structures) the decision of an AI. More likely, the poor who are denied their benefits will be caught in a Kafkaesque phone tree, itself run by AI. 54
Ethical Responses
Ethicists, policymakers, and theologians have sought responses to these moral concerns, which would ensure that everyone receives the benefits of AI while staving off the worst injustices. Here, I describe three of the most promising and popular clusters of responses: principles, communities, and virtues. Though these are all valuable, they are each only a partial solution, needing to be supplemented with other approaches and novel ideas.
Principles
The most widespread ethical approach to AI seeks to develop principles that can shape laws and policies to govern AI design and implementation. 55 This approach aligns with James Gustafson’s policy form of moral discourse; it seeks shared values that can be concretely implemented by pluralist institutions to address pressing issues. 56 There have been hundreds of lists of different principles of AI ethics released by various organizations, but most of these lists agree on a few principles. 57 For example, given concerns over bias, AI ethics should aim at fairness. Because of the dangers of surveillance, AI programs should prioritize privacy. The opacity of AI demands that a human be accountable for its decisions and that these programs be made as transparent as possible. Fears of an AI apocalypse suggest the need for safety. Thus, some variants of fairness, privacy, accountability, safety, and transparency are usually included on lists of principles to guide law and policy. 58
These are all worthy goals and have the potential to be translated into laws and regulations like the European General Data Protection Regulation or China’s Personal Information Protection Law. Yet the principle-based approach’s ability to actually solve these problems is limited. First, specifying the principles in such a way that they will be effective is difficult. Though everyone may abstractly support fairness, what it actually means in regard to concrete policy is hotly debated. 59 This difficulty becomes even greater given the problems of translating a structurally unjust society’s data into unbiased predictions, as the last section described. Even one of the most widely cited examples of bias, the COMPAS recidivism algorithm, is debated: depending on which statistical definition of fairness one uses, it can appear biased or unbiased. 60
Similarly, society seems conflicted over the extent of privacy a person should have. Some people seem happy sharing the most intimate details of their lives over social media, while others are willing to sacrifice great amounts of privacy for national security or consumer convenience, even if they do not realize the full consequences of their actions. The theological justification of the value of privacy is an area in need of reflection. Though it is recognized by many Christian ethicists, 61 other theologians have noted that privacy has little support in the tradition. 62
Even accountability can become little more than a procedural hurdle. The most common way to implement accountability is to have a human in the loop or on the loop: in the former case, although an AI program may recommend an action, a human must make the final decision to proceed; in the latter, a human must at least be able to monitor the action and veto it. However, scholars of human-machine interaction have noted the danger of automation bias: when people consistently use a machine, they come to rely on and trust it. 63 Therefore, they may trust the machine over their own common sense, making human intervention useless as a failsafe.
Further practical concerns emerge with the implementation of AI. Though some good guidelines have been written, 64 many scholars doubt that ethical rules can truly be programmed into machines, given the amount of interpretation necessary to apply moral rules to concrete situations. 65 Even AI laws and regulations face the danger of regulatory capture, as has occurred in many industries. Tech companies sit on the committees proposing and specifying these principles. Therefore, the ethical implementation of principles faces unsolved practical problems.
Still, these principles are helpful general frameworks, even if they need negotiation in detail. A more theoretical concern with principles is whether they get to the heart of the dangers of AI for the human person. If the deepest problems with AI are how it shapes our worldview or how it embodies unjust structures, then these principles largely miss the mark. While some aspects of a principle-based approach may remediate structural injustice, they do not address the questions raised above in regard to paradigms at all. Therefore, further ethical approaches are necessary.
Community
To address the structural problems in a way that goes beyond debates over fairness, some commentators suggest a turn to smaller communities, especially those that have been marginalized in contemporary social structures. This approach is heralded by liberation theology, which calls us to take the epistemological standpoint of the poor in order to see injustice in society more clearly. 66 It is also supported by the emphasis on subsidiarity found in Catholic social teaching. In contrast, the impetus of many recent technological advances has been toward centralization, with AI development concentrated in a few major companies. Concerted efforts must be made to distribute the power and agency provided by technology so that it does not become an instrument of structural domination.
One way to do this is by addressing the concrete needs of the marginalized. Pope Francis suggests avoiding the technocratic paradigm by designing technology that resolves “people’s concrete problems, truly helping them live with more dignity and less suffering.” 67 This strategy has a long lineage in technology ethics, stretching back at least to the Catholic convert E. F. Schumacher’s emphasis on intermediate technologies appropriate for communities in the Global South, in opposition to centralized programs of industrialized development. 68
While such a goal could be realized by elite programmers working with communities, some theologians have called for AI technologies to be developed by marginalized communities themselves. For example, Philip Butler has taken a practical approach by attempting to develop a mental health app that is “connected to Black culture, Black tempos, and even to Black modes of embodiment.” 69 It is a chatbot that, through its design and through beta-testing with the Black community, serves the perspective and ends of that community. Similarly, Kate Ott has supported movements of design justice that involve local communities designing technologies, as well as movements such as Lo-TEK, which attempts to use indigenous design principles. 70 Many commentators suggest more training in AI development for youth in these communities; they also advocate for NGOs to provide space and resources for these efforts.
At the very least, many theologians repeat broader calls for more diversity in the tech industry itself. Disproportionately few programmers are women or people of color. The hope is that some of the structural influences or biases of AI technology could be avoided if such programmers were at the table when decisions are made.
While potentially promising, a few aspects of AI raise barriers to these approaches. First, AI is a capital and resource-intensive technology. As currently implemented, it requires accumulating and storing vast troves of data for training. Recent advances in machine learning could not have been achieved until companies had access to the data on the Internet. The computer chips necessary for AI are extremely expensive, and training and running the latest systems cost millions of dollars. 71 The advances in AI over the last twenty years have depended on quantitative increases in processing speed and data, which makes the technology more amenable to centralization rather than subsidiarity.
Second, this approach toward fixing structure by including other perspectives assumes that these perspectives will remain unchanged in their engagement with technology. But we already know that interactions with technology shape character in unintended ways. The danger would be that the technology will transform the perspective of the marginalized rather than the marginalized community transforming the technology. 72 Such an approach must at least be supplemented by attention to concerns raised in the discussion of paradigms.
Virtue
A promising resource for confronting the impact of AI technology on the self’s relationship to the world is virtue ethics. Virtue ethics has long considered the importance of character traits and dispositions for the moral life, so it is well placed to investigate how AI is shaping the contemporary worldview. Its greatest success in this field so far has been in showing how the use of AI can undermine virtue. For example, reliance on machine recommendations could lead to moral deskilling, just as autonomous weapons could undermine military courage. 73 As AI takes over more aspects of skilled activity, people lose the opportunities to exercise the prudence necessary for wise agency. 74 People fail to grow in the virtues of care when robots or screens take over caregiving. 75 In these ways, virtue ethics has provided much illumination to the effects of technology on character.
Constructive proposals stemming from virtue ethics have taken two major forms. The first and perhaps most common approach is to design a virtuous AI. 76 Using virtue ethics as a template, these thinkers try to imagine the capacities and moral training that would allow an AI to become virtuous in order to ensure its safety and morality. However, the virtuous AI approach raises its own problems. First, because of the affective and interpretive nature of prudent decision-making, it would seem that an AI would need conscious intelligence with affective capacities in order to be virtuous. 77 For all the reasons discussed in the personhood section, AI consciousness is unlikely in the near term. More importantly, virtuous machines would further offload human moral responsibility, thereby undermining human virtue and agency.
A second, more promising approach examines the virtues we need in order to use technology well. For example, Shannon Vallor describes a set of what she calls “technomoral” virtues that will allow people to properly engage in the practice of global technosocial life. 78 She discusses how these virtues can be formed through education and other practices. Whatever one may think of the adequacy of the list of virtues, 79 it is an admirable attempt to address the issue. The problem, however, is that her discussion mainly focuses on the dispositions that people bring to their use of technology. It is less explicit about how AI might continually reshape our character through our use of its applications. It does not matter what dispositions I initially bring to my use of AI if my daily use of it is continually undermining those dispositions (a problem of which Vallor is aware).
Therefore, a third approach looks to the sorts of practices that will help a person to keep from being malformed by using technology and adopting the technocratic paradigm. Borgmann, for example, discusses the need to continue engaging in communal, embodied, focal practices like the home-cooked, locally sourced family dinner in order to keep from seeing everything in the world as commodities. 80 Luis Vera and I have discussed the need for ongoing spiritual practices in order not to fall under the worldview of augmented reality devices or the reductionist framework governing many scientific technologies. 81 AI remains a site of ongoing tension and danger; in these analyses, people must always be vigilant as to how technology might be warping their relation to the world, much as in classical Christian spirituality, in which one must constantly guard against temptations and first movements.
AI as Tool
These analyses seem to point to an inescapably negative vision for the future. AI threatens to corrupt both society and the self, perhaps even leading to destruction. The constructive proposals of virtue ethics, liberation theology, and principles may not offer enough to counter AI’s formative effects. Even utopians paint dreamworlds in which humans have little to do. Is there nothing we can do to rein in this daimonic power that either threatens apocalypse or promises utopia? 82
Here, we see the problems of completely eschewing instrumentalism. Commentators have correctly seen that AI cannot be handled through a naïve instrumentalism, which views AI as a neutral tool. But that has left them without resources to deal with its structural and psychological problems, aside from principles that are inadequate to the task. It is thus imperative that we develop the resources to use this technology as a tool. After all, Pope John Paul II stated that “as a whole set of instruments which man uses in his work, technology is undoubtedly man’s ally. It facilitates his work, perfects, accelerates, and augments it.” Unless it is used as a tool, it can “become almost his enemy, as when the mechanization of work ‘supplants’ him, taking away all personal satisfaction and the incentive to creativity and responsibility.” 83 AI must be kept as this ally of labor rather than an independent entity, but keeping control of its powers will take effort and ingenuity.
What is needed is a new, more sophisticated instrumentalist approach. 84 As Marshall McLuhan described in terms of media, “we need to know in advance what the effects on the users will be before we build the particular medium.” 85 This knowledge requires examining all the secondary effects on lifestyle and beliefs, seeking to foresee how individuals and communities will be shaped by new applications. Consider AI in the workplace. So far, discussions of human labor have focused either on the threats automation poses to cognitive work (e.g., radiologists replaced by AI that reads MRI scans) or on the promise that automation might take away the repetitive and unpleasant aspects of jobs (e.g., AI managing routine paperwork), leaving only the creative and fulfilling parts. Probably neither is wholly accurate. We have been beguiled by a myth of automation; in fact, automation rarely makes cognitively complex jobs redundant or easier. 86 If poorly designed, automated systems can increase cognitive labor, for example by imposing the demanding yet tedious task of monitoring or by creating more work during the most stressful portions of a job. 87 For the foreseeable future, humans and AI will work as a team in medicine and other spheres. Focusing on AI as a force operating alone has distracted scholars from questions about human-AI teams, such as how to avoid automation bias.
To confront these issues, ethicists need to start asking questions like: How can an AI application serve the ends of particular practices instead of merely bureaucratic efficiency? How do specific applications shape the attention, agency, and character of the user? A more complex instrumentalism may suggest that some implementations of AI must be entirely forgone as contrary to the virtues and ends of a practice. Such analyses will be difficult, but many of the necessary conceptual tools are already at hand. The field of human-computer interaction has uncovered many ways that automation can be good or bad for the worker and attempts to confront these problems through, for example, the design of aircraft cockpits. Science and technology studies have described how technologies shape sociotechnical systems and how moralities are embedded within these systems. 88 Human-centered design has begun to wrestle with some of these problems. 89 Addressing these problems requires an ethical design of the entire sociotechnical system.
To design systems well, though, we have to know what our ends should be, as Paolo Benanti has emphasized. 90 The question of technological ends goes beyond the alignment problem’s demand that our efforts at optimization not be destructive. It requires a vision of a flourishing life, as well as the virtues and social forms necessary for that life. The field of AI ethics has already described the broad problems of social media’s effects on attention or the consumerism stirred by online shopping algorithms, but the breadth of those problems leaves little space for concrete suggestions of reform, aside from deleting one’s social media accounts. 91 It would be productive for ethicists to focus on specific cases, just as the casuists of old wrestled with early modernity one situation at a time. For example, Encountering Artificial Intelligence analyzed AI’s effects on independent spheres of life such as education, health care, and the family to determine the possible positive and negative effects on each specific area. 92 Perhaps ethicists could begin by examining professions and practices with defined ends, explicit virtues, and already worked-out ethical frameworks, such as medicine, architecture, or education. By analyzing concrete applications in particular fields, ethicists could help to transform AI into a tool that serves our ends.
Conclusion
Recognizing the significant problems raised by AI, Catholic ethicists and others have left behind the instrumentalist paradigm of technology. Though considering AI as a person has been less successful, engagement with the structural and character-forming features of AI has identified the serious moral issues that arise when AI mediates a relationship between the person and the world marked by exploitation and distraction. However, these critical frameworks have given us few resources for an ethical response. Even if all Catholics were to forgo AI, we would still live in a world shaped by it. More broadly, recent moral theology has struggled to address structures of sin, even as it has made great strides in identifying them. Virtue ethics provides resources for properly forming character, but it is still at an early stage of determining how to deal with the continual onslaught that AI tools can make on character. If moral theology aims to address AI, it will have to develop resources to judge specific implementations of technologies as they arise. It will have to become more sophisticated in its analyses, continuing to engage fields such as social science and technology design that can shed light on AI’s ongoing effects on character and community. Only through such efforts can moral theology help to ensure that AI serves the human good.
