Abstract
Researchers who use artificial intelligence (AI) and machine learning tools face pressure to pursue “ethical AI,” yet little is known about how researchers enact ethical standards in practice. The author investigates the development of AI ethics using the case of digital psychiatry, a field that uses machine learning to study mental illness and provide mental health care. Drawing on ethnographic research and interviews, the author analyzes how digital psychiatry researchers become “moral entrepreneurs,” actors who wield their social influence to define ethical conduct, through two practices. First, researchers engage in moral discovery, identifying gaps in regulation as opportunities to articulate ethical standards. Second, researchers engage in moral enclosure, specifying a community of people licensed to do moral regulation. Through these practices, digital psychiatry researchers demonstrate that ethical innovation is essential to their professional identity. Yet ultimately, the author shows how moral entrepreneurship erects barriers to participation in ethical decision making and constrains the focus of ethical consideration.
Extensive evidence demonstrates that technologies relying on artificial intelligence (AI) and machine learning carry the potential to embed bias (Brayne 2020; Noble 2018), violate privacy (Jacobson, Bentley, et al. 2020), and perpetuate and even worsen different sorts of inequality (Benjamin 2019; Eubanks 2018; Obermeyer et al. 2019; O’Neil 2016). There is mounting pressure to enact ethical standards for AI (Crawford 2021b; Johnson 2021; Meskó and Spiegel 2022). Despite the prevalence of these moral concerns, so far there are few formal regulations or settled social norms around ethics in technology. Meanwhile, AI tools are in use everywhere, in domains as diverse as credit scoring, court decisions, and health care. The situation fits a pattern of long-standing sociological interest: technological innovation outpaces social systems of legal and moral evaluation (Durkheim 1893).
AI innovation, and concerns about AI innovation, have also arrived in the field of psychiatry. Recent years have seen the advent of digital psychiatry, a domain of research and patient care that adapts digital technologies to study mental illness and provide mental health care (Berners-Lee 2024; Birk and Samuel 2020; Pickersgill 2019). The field may be roughly divided into digital therapeutics, which includes smartphone and computer-based applications that provide therapeutic care, and digital diagnostics, which describes efforts to use digital data to identify mental states and predict future mental distress (Onnela and Rauch 2016; Torous et al. 2021). Many digital psychiatry projects are powered by machine learning, for example therapeutic chatbots, which use natural language processing to engage in therapeutic, text-based dialogues, and “digital phenotyping” apps, which use machine learning techniques like neural networks to analyze data about how people use their smartphones in order to tailor interventions in real time (Perez-Pozuelo et al. 2021; Torous et al. 2021). Both academic research in digital psychiatry and private investment in digital mental health grew steadily across the late 2010s. These projects and their funding accelerated dramatically in the first year of the coronavirus pandemic, as mental health care shifted to virtual modalities and the need for care soared (Jacobson, Lekkas, et al. 2020; Pan et al. 2021; Pierce et al. 2020). Yet as digital psychiatry has expanded, it has encountered many of the ethical questions confronting the technology industry writ large (Costa and Milne 2024; Henwood and Marent 2019; Pickersgill 2019). Questions about how to prevent bias, preserve some standard of privacy, and appropriately manage digital data have no settled answers, leading some to describe digital psychiatry as the “Wild West” (Johnson 2021). This article investigates early attempts to answer ethical questions and settle digital psychiatry’s Wild West.
Using the case of digital psychiatry, this article demonstrates how a nascent field’s pioneers attempt to regulate it by becoming moral entrepreneurs (Becker 1963). In Howard Becker’s formulation, moral entrepreneurs are powerful social actors who wield their influence to instruct others on norms around their cause célèbre. Although Becker’s moral entrepreneurs are often professionals like doctors or police, the professionals who seek to regulate digital psychiatry include academic clinician-researchers, health care practitioners, legislators, and technologists working at private companies. Within this field, this article analyzes the moral entrepreneurship of the first group. I refer to the academic clinicians who develop and adapt digital technologies to treat patients and study mental illness as digital psychiatry researchers. I focus on this group because academic medicine is structured around activities that demand an articulation of ethics, as researchers submit study plans to university institutional review boards, obtain patients’ informed consent, compose grant applications, and pursue academic publishing. This affords many opportunities to observe how digital psychiatry researchers seek to close the gap between technological innovation and ethical regulation.
I argue that these moral entrepreneurs address moral questions, staking their claim in the “Wild West” of digital psychiatry, through two strategies. The first is a strategy of moral discovery. Digital psychiatry researchers identify new ethical challenges and argue they have discovered territory in need of regulation. The second is a strategy of moral enclosure. Researchers articulate a community of those they argue are licensed to do the work of moral regulation. Together, these strategies of moral entrepreneurship are bids for autonomy and assertions of expertise, building the classic markers of professionalism for digital psychiatry (Abbott 1988; Freidson 1970). More broadly, I demonstrate how moral entrepreneurship to regulate the “Wild West” of digital psychiatry is a project of erecting fences, ultimately constraining participation in moral decision making and narrowing the focus of moral concerns.
This article offers several contributions. First, I extend the concept of moral entrepreneurship. Becker focuses on moral entrepreneurs who are powerful because they are professionals, and he develops the concept to evaluate how moral entrepreneurship constructs outsiders. Here I draw out another dimension of moral entrepreneurship: how moral entrepreneurship constructs insiders as a strategy to professionalize digital psychiatry as a field. Second, I offer an empirical complement to sociological work on AI. Sociologists have been prominent authors of calls to develop an ethics of AI (White House Office of Science and Technology Policy 2022), but to date there are few studies of how ethical questions of AI and machine learning are already being answered on the ground (but see Ali et al. 2023; Berners-Lee 2024; Kiviat 2019, 2023; Orr and Davis 2020). This article is an empirical investigation of that work in progress. Finally, by attending to how social actors construct moral meaning locally, this article introduces nuance to literature that critiques the morality of machine learning projects (Lupton 2016; Sadowski 2020; Zuboff 2019). Rather than suggesting that machine learning tools are amoral or immoral, my analysis investigates the moral work that accompanies technological innovation by exploring researchers’ engagement with complex and open moral questions as they adapt technologies for what they see as the normative good of health care. At the same time, this article ultimately demonstrates how the professionalization of ethics makes moral status determination an elite practice, shaping and potentially constraining how questions about morality are asked and answered.
Background
The concept of moral entrepreneurship originates in Becker’s (1963) analysis of the social construction of deviance. Becker presents moral entrepreneurship as an explanation for how moral reform happens, as a group of people takes the initiative to create or enforce a rule. Rule creators, Becker argued, are typically moral crusaders, people with religious or quasi-religious motivation who seek to set bounds on the acceptability of things like sexual behavior, gambling, or alcohol consumption. Moral crusaders tend to be socially powerful actors, but they still often partner with domain experts like lawyers and lawmakers to actually enact their proposed rules. Rule enforcers, who may also be successful rule creators, have to justify both the continued relevance of the problem their rule solves and the rightfulness of their solution to it. Moral entrepreneurship has been carried forward in analyses of moral panics (Cohen 1972) and scandals (Adut 2004), and it frequently accompanies investigations into medicalization. New disease discovery has often been analyzed as a project of moral entrepreneurship undertaken by physicians, studied in moralized disorders ranging from hyperkinesis (Conrad 1975) to menopause (McCrea 1983) to fetal alcohol syndrome (Armstrong 1998). In the past few decades, it has become more common to signal the concept with terms such as norm entrepreneurship (Adut 2004) and ethics entrepreneurship (Ali et al. 2023) or to think about how moral entrepreneurs construct categories of “good” actors, not just deviant actors (Taylor 2010). Across diverse empirical applications, the concept of moral entrepreneurship describes the work of social actors enacting new concepts, norms, and standards.
I contextualize the work of digital psychiatry researchers amid that of other types of scientists who have worked at this intersection of morality and professionalism. Looking across cases, we see that scientists commonly deploy morality as a signature of credibility (Evans 2021). Studies of scientific labs demonstrate that professionals engage in symbolic acts of moral exclusion to guard social status (Daston and Galison 2007; Shapin 1994), creating boundaries to instantiate a moral order (Gieryn 1995; Lamont and Molnár 2002). Researchers seek to broadcast that they are good as one reason they should be listened to, and they manage the “moral challenges” of controversial research by adding moral expertise to the purview of their expert authority (Evans 2021). It follows that public trust in professionals is partly a trust that those professionals seek to do good, working broadly in the public interest. Morality is also a tool that professionals wield to distinguish their profession. Medical professionals such as doctors, and psychiatrists in particular, build prestige for their work by styling themselves as arbiters of morality (Abbott 1988; Freidson 1970, 2001; Porter 1995). Modern-day researchers in digital psychiatry work in this tradition of moral entrepreneurship to shore up power and influence. More broadly, the co-construction of moral and professional status designation reflects long-standing themes from science and technology studies on the reciprocal construction of science and ethics, for example studies of the “promissory discourse” of technology, where expectations can be set to justify technological development in the name of preexisting values (Henwood and Marent 2019; Pickersgill 2019; Wehrens et al. 2023).
Scholarship about contemporary innovation in technology, however, rarely focuses on innovators’ moral justifications (but see Neyland 2016; Orr and Davis 2020; Seaver 2021; Wehrens et al. 2023). Instead, analyses of AI and big data technologies tend to criticize the field’s values. Social scientists are prominent critics of new technologies’ potential to introduce bias and impinge on privacy (Benjamin 2019; Eubanks 2018; Joyce et al. 2021; Noble 2018). There are also critiques of the thinning or flattening process by which social data are transformed into quantitative metrics (Lupton 2016). Within the social science literature on digital health, analyses commonly criticize a link between machine learning–fueled self-monitoring and a neoliberal trend of responsibilization, where technology makes individuals feel responsible for their own health instead of being used to build systems for care, prevention, and support (Lupton 2016; Pickersgill 2019).
Morality also appears in critiques of the broader technology industry. These efforts analyze the technology industry’s expansionist tendencies and often describe this work using metaphors about colonization. As Zuboff (2019) wrote in an extended comparison of Google Street View to the colonization of the Americas, “These twenty-first century invaders do not ask permission; they forge ahead, papering the scorched earth with faux-legitimation practices. . . . They build their fortifications, fiercely defending their claimed territories, while gathering strength for the next incursion” (p. 180). Critiques of contemporary technological innovation are often framed as critiques of contemporary capitalism, termed digital capitalism or surveillance capitalism. They regard digital capitalism generally as a project of colonization, a campaign of extraction and dispossession whose target is data about people’s lives and daily behavior (Couldry and Mejias 2019; Sadowski 2020; Zuboff 2019). The “colonizing impulse” has also been attributed to the field of AI specifically (Crawford 2021a).
Academic digital psychiatry researchers are removed from formal markets in several ways, but their use of the “Wild West” metaphor raises the question of a connection between their work and the colonization critique. My goal here is to analyze that connection and, importantly, its limits. As I will explain, although the colonization critique suggests technological innovators are immoral actors, when researchers themselves refer to their field as the “Wild West,” they describe an unregulated frontier, a space of opportunity for both technological and ethical innovation. Digital psychiatry researchers confront a challenge at the heart of digital medicine: how to capitalize on AI’s potential to provide care while minimizing its harms. In my findings below, I analyze how digital psychiatry researchers become moral entrepreneurs, building a professional identity around moral assessment (Evans 2021).
Data and Methods
My analysis pairs ethnographic observation with semistructured interviews. For this project, I conducted participant observation in the world of digital psychiatry for three years between 2020 and 2022. My ethnographic work began in the style of a classic lab ethnography (Knorr Cetina 1999; Latour and Woolgar 1979). For 12 months in 2020 and 2021, I completed participant observation at a digital psychiatry lab I call the PsyTech Lab, a research group at an academic hospital that develops and tests digital technology for mental health care and research. This lab was made up of a mix of clinician-researchers, mostly psychiatrists and psychologists, and research assistants, working to invent and evaluate digital technology to provide psychiatric care and produce new knowledge about mental illness. The lab’s work often happened in collaboration with engineers, computer scientists, and private companies. In the year I spent with the lab, I observed meetings about data analysis, corporate partnership, and research strategy, and I participated in the daily life of the lab, including onboarding for new staff and the preparation of academic publications.
I supplemented data collection at my primary field site with participant observation of many additional events and phenomena. I attended four conferences on digital health technology and numerous stand-alone academic talks. I followed key interlocutors on social media. I read the academic publications my study participants wrote, read, and cited. Finally, I participated in three health care hackathons, multiday events in which teams of researchers and entrepreneurs develop and pitch technological solutions to problems in health care. In this article, I use pseudonyms for the people I interviewed and people who work at and collaborate with my ethnographic field site, as well as pseudonyms for the organizations they work for. I use real names when I discuss people’s published writing and conference presentations. Pseudonyms are indicated by first or last name only, whereas I use full names for people referred to by real name.
In a second phase of research, I completed 30 semistructured interviews with an expansive network of researchers, clinicians, programmers, and for-profit health technology innovators. Interviews focused on how individuals make decisions about technology to develop and use, how they evaluate other digital health innovators, and how they identify and navigate moral concerns. I completed initial fieldwork in person, but because of safety restrictions of the coronavirus pandemic, most of my data collection took place virtually, across a variety of digital platforms, services, and media.
Analysis for this article followed an abductive process (Tavory and Timmermans 2014). I began by reading through the full corpus of my data, including field notes, interview transcripts, primary source documents like interlocutors’ publications and media interviews, and other assembled images and documents, taking note of emerging themes. I noticed that digital psychiatry researchers frequently described their field as the “Wild West,” in a way that seemed connected to but quite different from the “colonizing impulse” described in theories of AI innovation (Crawford 2021a; Zuboff 2019). I used those notes on themes to produce an initial set of codes to inform subsequent, more targeted rounds of qualitative coding, focused on making sense of the “Wild West” metaphor and its relationship to my question about how ethics catch up to technological innovation. As my findings started to take shape, I iterated between theory and data, reading about moral entrepreneurship and theoretical analyses of AI innovation, adjusting my argument to better encapsulate my understanding of my case. The argument in this article is informed by the full scope of my fieldwork but draws from my data about a subset of projects that engage with machine learning techniques. I define machine learning as a subfield of AI that develops computer systems that can recognize patterns in and make predictions with data. I use the term AI, which refers more broadly to computer systems that perform tasks by simulating strategies of human reasoning, as the “public-facing term” common in discourse about the field, for example to describe topics like the “ethics of AI” (Joyce et al. 2021).
Findings
The Wild West
Digital psychiatry researchers commonly describe their field as the “Wild West.” They often invoke the phrase as a metaphor about moral ambiguity. The Wild West alludes to lawlessness and a lack of direction about what to do; researchers use it to describe digital psychiatry as a space beyond the horizon of existing regulation, uncharted territory with risks and opportunities. More broadly, digital psychiatry researchers use this term to cast themselves as their field’s moral entrepreneurs, the people rightly tasked with establishing the field’s moral order. As a historical concept, of course, the Wild West is also a metaphor about colonization. The Wild West was a space where migrants created their own destinies and imposed their own ideologies, displacing and destroying indigenous social orders and land-use practices. Digital psychiatry researchers do not see themselves as colonizers. They see themselves as moral actors on a frontier, acting on values of scientific discovery and patient care. At the same time, as I will elaborate in the “Discussion,” the project of proposing a moral order for an unregulated field is a project of introducing constraints. As they treat the terrain of digital psychiatry as the Wild West, digital psychiatry researchers make consequential decisions about how the field ought to be. When they succeed, these decisions preclude the field from being configured in other ways, and they install these researchers’ perspectives as an exclusive reality.
The Wild West metaphor often described a lack of formal regulation in the field of digital psychiatry and researchers’ lack of clarity around what to do. I first heard digital psychiatry described as the Wild West at a major digital psychiatry conference, an annual summit for researchers, clinicians, and companies working on technological innovation for psychiatric research and care. One year, the keynote speaker was Christine Grady, chief of the Department of Bioethics at the National Institutes of Health, who spoke about how digital psychiatry would change consent practices. Dr. Grady argued that many elements of digital psychiatry changed the game for informed consent, in particular the new types of data that could be collected, the continuous and passive nature of some digital psychiatry tools, and the known complexities and limitations of working with machine learning tools. “It’s the Wild West,” she summarized. New analytic tools like machine learning, new capabilities of digital data, and new research methodologies like continuous, passive monitoring using study subjects’ cellphones combined to create a new territory that, Dr. Grady argued, existing models of informed consent were insufficient to govern.
The phrase also came up repeatedly in interview conversations with digital psychiatry researchers. Sometimes researchers invoked the metaphor to describe their assessment of the unregulated commercial space of digital mental health. Mobile and web applications purporting to provide mental health care abound, but few have validated their interventions’ effectiveness in clinical trials or pursued U.S. Food and Drug Administration (FDA) approval (Benjamens, Dhunnoo, and Meskó 2020). Existing health data privacy laws do not apply to these apps; data in so-called wellness apps are not subject to the Health Insurance Portability and Accountability Act (HIPAA), which only regulates the disclosure of health information by a finite list of “covered entities” that includes health care providers and health insurers but not apps that have not sought recognition as medical devices (Marks 2021). As Dr. Sugarman, a clinical psychologist, said, “I worry a lot about companies making claims that go far beyond what the research says at this point. . . . There’s so little regulation of the digital mental health intervention space. There’s something like five programs out of the thousands and thousands and thousands that actually have been FDA-reviewed and approved. There aren’t really guidelines. . . . It’s kind of a Wild West out there. For the consumer, it’s very hard to know what to trust and what’s likely to actually be helpful.”
Dr. Sugarman’s concern was that commercially available digital mental health tools were unregulated, and that because the field is not bound by the evidentiary standards scientific research uses to determine tools’ efficacy, consumers are left not knowing which tools they can trust. In describing the field as the “Wild West,” she positioned herself in implicit contrast, as a moral person worried about the unregulated space of digital mental health interventions.
The Wild West metaphor often accompanied expressions of emotion like fear and worry, but those negative feelings tended to pair with an assessment of digital psychiatry as a space of opportunity. Dr. Meyerhoff, another clinical psychologist, agreed with the prevailing assessment that the state of regulation in digital psychiatry is “a little bit scary.” But she also perceived lack of regulation as a chance to enact her values. As she said, “It’s the Wild West, what’s happening out in the world. Ethics are [about] wanting to keep pace and be doing really cool stuff while still making sure that things work, clinically.” For Dr. Meyerhoff, the Wild West of digital psychiatry afforded an opportunity to bring her values about technological innovation (“doing really cool stuff”) and clinical efficacy (“making sure that things work, clinically”) together under the label of “ethics.”
As these examples show, researchers invoke the Wild West metaphor to refer to the lack of regulation that characterizes the space they work in, but also to frame that space as an arena in which to enact their values. As I will demonstrate, these researchers’ approach to developing moral standards for digital psychiatry is an attempt to establish themselves as the field’s moral entrepreneurs, professionals whose domain includes passing moral judgment. Moreover, their work entails a narrowing of what counts as worthy of moral consideration and a specification of the “right” way to work, study, and behave in the world of digital psychiatry. In the following sections, I analyze two strategies by which researchers identify and sequester territory in digital psychiatry’s “Wild West,” fashioning themselves as the moral entrepreneurs of their field as they seek to install its moral order.
Moral Discovery
If the landscape of digital psychiatry is the Wild West, digital psychiatry researchers understand themselves as the field’s pioneers. Researchers engage in a project of moral discovery, a strategy of identifying areas they argue are in need of regulation. In doing so, they behave much like Becker’s rule makers, actors who become moral entrepreneurs through their claim that they have discovered a behavior or situation in need of direction and guidance (Becker 1963). Moral discovery is a process of staking a claim. It is an argument researchers make about a need for regulation, but it is also an argument that they have discovered the terrain in need of regulation. As they engage in moral discovery, researchers start to develop moral entrepreneurship as a professional trait.
Digital psychiatry researchers’ work of moral discovery came through clearly in the PsyTech Lab’s efforts to understand liability faced by clinicians working with massive digital datasets used in machine learning analyses. Concern around clinician liability first arose when a project the PsyTech Lab was working on faced unexpected pushback. The project in question was a study that developed an algorithm to detect “emotionally laden” words in psychiatric patients’ text messages and private messages on social media as a new tool for patient care. Patients who chose to participate in the study agreed to let the research team search their messages. Relevant messages were then shared with patients’ therapists, who could choose to discuss them with the patients during appointments. The clinicians recruited to participate had a flurry of questions, but chief among them was a concern about the implications of collecting so much data. As Melinda, a research assistant in the PsyTech Lab, recalled, “One of the clinicians in the study was like, okay, this is great, but I’m worried. We’re collecting all this data. How much am I supposed to look at it? What if I don’t look at it and something bad happens? I’m concerned that entering into this study will mean I now have a responsibility for this data.”
The PsyTech team was interested in the answer to these questions, but they were especially interested in the opportunity to answer the questions. They decided to seize on the questions as a chance to engage in moral discovery, writing an academic paper to call out this problem and suggest standards around clinical liability when working with large tranches of digital data.
Spurred by the clinician’s concern, Dr. Alvi, a psychiatrist and the principal investigator of the lab, reached out to Daphna, a colleague he knew from conferences on ethics and medical technology. Dr. Alvi proposed that they collaborate on a paper that would present an answer to this question of clinical liability. Daphna, a lawyer and bioethicist, agreed to help. She added to the team Asta, a lawyer and fellow at the research center where Daphna worked. Melinda, who aspired to attend law school herself, was conscripted to lead the literature review and draft the paper.
“It’s really interesting,” Melinda reflected partway through the process of composing the paper, “because it’s like the Wild West. There’s no guidelines out there at all. I think because the technology is so new, it’s really hard to know how to put it into clinical practice. We don’t even really know what it means.”
Melinda’s reflection pointed to a key challenge of this work. Digital health care projects often collect much more data than researchers know how to use and, to Melinda’s point, much more than they know how to interpret. A simultaneous benefit and potential drawback of digital health data, and of big data generally, is that a signal may hide within the seemingly meaningless noise of billions of data points. What if a clinician failed to act on something that ultimately proved to be important? Could it be considered medical malpractice?
The team met biweekly for several months to draft the paper. With Daphna and Asta’s guidance, Melinda built out an argument following conventions of legal scholarship, citing case law about incidental findings and clinician disclosure requirements. The final version argued that legal precedent from existing cases, which focused on genomic data and medical imaging, provided some general guidance, but machine learning data in health care was a new beast. The paper ultimately argued that clinicians should address liability concerns by producing documentation of patients’ informed consent to have data produced about them, and by sharing raw data with patients directly. Yet although the team was focused on preparing an answer to the open questions they had identified, much of the discussion in their meetings was about the products of this process. For these researchers, the open questions were not just a practical concern, but also an opportunity. As Melinda observed, “Dr. Alvi wants the lab to be a thought leader in this.” Dr. Alvi and Daphna were united in their interest in using this paper as a way to establish themselves as moral actors who might legitimately claim this territory as their space of work. “Anchoring it around the paper makes sense,” Dr. Alvi said to Daphna in one of the team’s meetings. “That way we have something to show for it. And then if we can spin it off into a grant, that’s the best-case scenario.” The team was certainly interested in answering the question about clinician liability. But they were also interested in demonstrating their productivity and positioning themselves favorably to do more of this work, funded by grants. They saw their contribution not as pioneering how to use machine learning algorithms to identify signals in the noise of patients’ personal communications, but as pioneering ideas about how to work with these data ethically. This gap in regulatory guidance was the research team’s chance to stake a claim about how clinicians should handle massive amounts of digital data about their patients. They approached the project as an opportunity to assert moral leadership.
Digital psychiatry researchers also engaged in moral discovery in efforts to identify the “bad actors” of digital health. Moral discovery helped researchers stake a claim in their field as people who could identify open questions in need of ethical answers, but it also helped researchers claim moral high ground on ethical questions they could argue already had answers. To date, one of the most prominent moral discovery projects is an effort supervised by John Torous, a psychiatrist, to evaluate how commercial wellness apps handled their users’ data (Huckvale, Torous, and Larsen 2019). Dr. Torous became curious about popular apps marketed directly to consumers that claimed to help people monitor their health, in particular how their data management practices stacked up against their user agreements. Selecting a set of top-rated Android and iOS apps, Dr. Torous and two colleagues undertook qualitative coding of the apps’ privacy policies. They found that most did have privacy policies, and about half affirmed they shared user data with third parties. Then they hacked into the back end of the apps to track whether and where they actually sent user data. They found that nearly all the apps shared user data with third parties, mostly Google and Facebook for advertising analytics. The apps that shared user data included three apps in the researchers’ sample whose privacy policies explicitly stated they would not. The team wrote up these findings and published them in JAMA.
The language in the publication is measured. In the conclusion, the authors write that their goal was to “highlight deficits in the disclosure of data transmission practices” and ultimately to argue for the “continuing need for innovation around trust and transparency for health apps” (Huckvale et al. 2019:7). The paper’s principal contribution was not to decry the designers behind the apps as immoral actors but to argue for the need to keep innovating to make health apps trustworthy. The authors position this needed work as an opportunity for their own colleagues, concluding, “It is imperative for the health care community to respond with new methods and processes to review apps and ensure they remain safe” (p. 8). In this act of moral discovery, Dr. Torous and his colleagues justify their actions—hacking into apps and generating negative publicity for tools that may be efficacious, even if they do share user data with third parties—by articulating a research agenda for academic medicine, identifying apps’ data sharing practices as a topic for which the “health care community” ought to develop guidelines.
Researchers also explored the world of commercial apps in less technical ways, and they talked about them in ways that bolstered their authority to evaluate them. Dr. White, a psychologist, noted, “There are tons of apps in the app store that [say they] are treatment apps that are based on absolutely nothing.” Dr. White described a time he had downloaded an app with a similar name to a popular psychological treatment technique. It turned out this app was, in his words, a “fart machine app.” It was an extreme example, but Dr. White grouped it with other apps that make spurious claims to provide therapeutic interventions. “Some people are doing it just for the downloads,” he said, “or they have no empirical basis behind it, or they’re just scavenging data. Those people, they’re not really research groups.” Dr. White framed his more casual investigation of ineffective apps as a project of moral discovery, defining himself in opposition to app developers that were “not really research groups.” Dr. Slade, a clinical psychologist, articulated a related concern: that mental health apps that do appear to offer a therapeutic intervention but have not been evaluated for their effectiveness “may ruin certain brands of treatment that have just reached the public in terms of name recognition. . . . People will say, well, I tried CBT [cognitive behavioral therapy] and it’s stupid.” Moral discovery often included immoral discovery, the pronouncement of other work in the field as illegitimate or deleterious to the researchers’ own moral projects of providing evidence-based, effective therapy.
Moral discovery is a strategy of moral entrepreneurship that casts digital psychiatry researchers as their field’s pioneers. By identifying new spaces of moral inquiry opened by new technology, researchers argue they are the people equipped to both ask and answer digital psychiatry’s moral questions. Moral discovery is also a strategy to make moral status designations through discovery of immoral elements in the field. By staking a claim of moral discovery, digital psychiatry researchers claim ownership of the field.
Moral Enclosure
A second strategy of moral entrepreneurship is the project of moral enclosure. Moral enclosure describes digital psychiatry researchers’ work to articulate a community of their field’s moral actors. Whereas moral discovery has resonance with Becker’s concept of the rule creator, moral enclosure corresponds to Becker’s rule enforcers. In Becker’s formulation, rule enforcers are moral entrepreneurs who have had some success in creating new rules. Their task shifts toward enforcing the rules, rather than arguing for their creation. Becker’s focus on rule enforcers emphasizes their work as the continued construction of deviance. Moral enclosure, I argue in contrast, allows us to focus on how moral entrepreneurship also involves work to draw boundaries of community, constructing and maintaining the moral entrepreneurs as the in-group.
As they issued declarations about moral and immoral actions in the field of digital psychiatry, researchers also made moral determinations about the field’s various actors, designating an in-group and an out-group. Researchers built platforms to amplify the work of those they believed were doing this work right. Chief among these platforms was a major annual conference on mental health and technology, which the PsyTech group helped organize. “Ethics” was a signature focus for the conference, baked into its design: the entire third day of this three-day event was designated “Ethics Day,” featuring a full slate of panels and fireside chat discussions of different dimensions of ethics in digital psychiatry. In its first years, speakers were carefully curated. By 2021, the conference’s fifth iteration, the work of curation had transformed into a process of self-selection. In a planning meeting several months before the conference, Dr. Alvi told the PsyTech Lab that this year, they had filled their session slate with unsolicited submissions. As he explained, “We’ve had 11 submissions, so we actually need to reject some, which is a good problem to have. We don’t have to find all our speakers. They invited themselves.” Dr. Alvi was pleased not just by how many submissions they had received, but also by who they received submissions from. He continued, “What’s really cool is it’s people we know and whose work we know. We didn’t have to ask them. It speaks to this conference evolving into a true gathering place. You don’t just want people. You want the right people.”
Five years into running this annual conference, the in-group was self-selecting, and Dr. Alvi was thrilled. He had a clear vision of who the “right people” were to opine on ethics in digital psychiatry. That vision appeared to be shared, as the people Dr. Alvi might otherwise have invited to participate in the conference signed themselves up unprompted, proposing themed sessions on topics that met with Dr. Alvi’s approval. Importantly, though these self-selected speakers represented diverse orientations to the work of digital psychiatry, they were united by elite institutional affiliations, including top hospitals, wealthy universities, and major technology companies. Dr. Alvi articulated this vision explicitly: he told his research assistants that traditionally, the mix has been “one MIT or high-level engineering person, one federal person. . . . A mix of younger people that are doing interesting work. [This year] that’s Ziad [Obermeyer, a professor at the University of California-Berkeley]. So we have our big boxes checked.”
Dr. Alvi sought to build his conference as a who’s who of digital health. The construction of in-groups that constitutes moral enclosure was also a project of moralizing hierarchies of resource access and existing prestige.
Dr. Alvi was particularly forthcoming about his work of moral enclosure. I first met him while conducting pilot interviews to better understand digital psychiatry as a field. Dr. Alvi asked whom else I had been in touch with, and I mentioned Dr. Younger, a young psychiatrist. At the mention of Dr. Younger’s name, Dr. Alvi’s face lit up. He knew Dr. Younger well. The two crossed paths during their training, while Dr. Alvi was a resident and Dr. Younger was in medical school, and they had kept in touch since. They followed each other’s scholarship, spoke at conferences together, and peer-reviewed each other’s manuscripts. Almost wistfully, Dr. Alvi smiled and said, “Dr. Younger’s one of the good ones.” The statement got right to the point: Dr. Alvi argued that Dr. Younger was one of the moral researchers in the emerging world of digital psychiatry. He continued, elaborating what it meant to be “one of the good ones”: “Dr. Younger is cautious. He doesn’t overhype things. He emphasizes patient privacy.” Rather than highlight Dr. Younger’s brilliant study design, his skill set as a psychiatrist and programmer, or his knack for placing articles in top journals, Dr. Alvi emphasized Dr. Younger’s commitment to data security and his personal virtues of caution and modesty. In Dr. Alvi’s eyes, Dr. Younger’s goodness centered on his focus on his data. It is also significant that Dr. Alvi cast Dr. Younger not as “good” but rather as “one of the good ones,” a member of a group. Dr. Alvi identified Dr. Younger early in the latter’s career as a promising member of this group. He talked about Dr. Younger frequently in lab meetings, impressing upon the research assistants that Dr. Younger was an example of what the field should aspire toward.
Dr. Alvi was unusually straightforward in his willingness to talk about the “right way to do things,” the “right people,” and the “good ones,” but digital psychiatry researchers generally tended to hold strong ideas about how their work ought to be done, and by whom. On the first question, there was diversity: researchers held a range of opinions about specific research strategies. On the second, however, digital psychiatry researchers shared a sentiment that they were the “right people.” For example, digital psychiatry researchers espoused varying levels of wariness around researchers’ relationship to profit. Despite these differences of opinion, all agreed that what ultimately moralized their work was that they were the ones doing it, because their academic approach ensured a degree of morality. This opinion amounted to moral enclosure: though researchers accepted a diversity of approaches to pursuing commercial viability, they drew boundaries around the morality of researchers’ work in other ways, praising those who, like themselves, were bound by academic research ethics.
One researcher who fit this example was Dr. Archer, a child and adolescent psychiatrist. When I met Dr. Archer, she was beginning a career transition. After many years in academic medicine, she was retooling her professional life as the chief medical officer for a mental health technology startup. The company’s signature product was a HIPAA-compliant platform that delivered virtual care to children with anxiety and obsessive-compulsive disorder and tracked their progress with data analytics. The platform was marketed directly to consumers but advertised itself as an outpatient treatment program, not a medical device. Dr. Archer hoped the move would allow her to get digital mental health tools in patients’ hands faster. As she explained, “I’ve been trying to do this through academic medicine and I’ve realized that I’ve gotten as far as I can. Being part of a big hospital system, you just can’t be as nimble.” Dr. Archer had some concerns about the for-profit world, but she argued she would retain a moral perspective by carrying forward what she learned in academic medicine. “My hope is that we’ll be able to keep our intentions pure,” she said, “keep our eye on making sure we’re doing right by the patients.” Dr. Archer then doubled down on her assessment that academic researchers were the group poised to make and keep digital psychiatry moral. She said, “I’ve been raised in a very academic center. I’ve been raised to be very conscientious about ethics. I’m never going to feel comfortable doing something egregious or harmful.” Dr. Archer was somewhat wary of the profit motive, but she believed her academic training equipped her with a moral compass.
Dr. Archer’s sentiment that academic training made researchers the “right people” to do moral work in digital psychiatry was shared by Dr. Picker, a clinical psychologist. Dr. Picker often collaborated with private companies to pursue her research agenda. One company she worked with had created a chatbot powered by natural language processing. Like Dr. Archer’s virtual care platform, the chatbot’s platform was HIPAA compliant, marketed directly to consumers, and not subject to FDA regulation as a medical device. Dr. Picker used the tool to test the effectiveness of chatbot therapy for eating disorder prevention. She argued that it was precisely that collaboration that moralized the work. She explained, “Early on there was concern about these products not being evidence-based, and companies putting things out there that weren’t backed by science. But my reading is that’s all kind of changed. There are a lot of companies trying to do the right thing, building interventions that are based on science, . . . having scientists on their team.”
In contrast to a first wave of products that had no scientific backing, Dr. Picker praised what she saw as the more recent landscape, in which companies sought out scientists to develop scientifically informed interventions. Although Dr. Picker believed it was appropriate for a range of institutional actors to engage in digital mental health interventions, she affirmed that scientists such as herself were the crucial partners for establishing appropriate standards for the products.
Digital psychiatry researchers engage in moral enclosure as they articulate a community of researchers with shared values. Positioning themselves in contrast to actors whom they perceive to care less about patient care, scientific evidence, and research ethics generally, researchers construct a cohesive moral group. As I will elaborate in the following section, moral enclosure is also a way of enacting limits on what counts and is prioritized as moral. Successful moral enclosure secures researchers’ status as moral entrepreneurs but also constrains who can be a moral entrepreneur.
Discussion
In this article, I have argued that academic digital psychiatry researchers seek to regulate the “Wild West” of their field by becoming moral entrepreneurs, powerful social actors who assert what morality looks like. They pursue moral entrepreneurship through two interrelated strategies. First, they engage in moral discovery, identifying areas in their field that are not regulated, new moral questions that emerge from technological innovation, and work that violates the moral tenets that moral entrepreneurs hold. Second, they engage in moral enclosure, closing ranks around the topics and approaches they claim as their unique moral contribution and the actors they claim are the right people to do the work. Both strategies seek to build the stature of digital psychiatry researchers as the insiders of their field and endow them with jurisdictional authority and professional autonomy (Abbott 1988; Freidson 1970). But I also argue that these strategies foreclose other possibilities for how digital psychiatry might be shaped and regulated. In other words, these efforts to professionalize morality also constrain access to moral status determination and narrow the scope of what morality might be.
To understand how moral entrepreneurship can become a constraining force, it is helpful to consider the implications of the Wild West metaphor beyond how digital psychiatry researchers understand it. The Wild West is a space of moral ambiguity, but it is also a space that people of means approach as a blank canvas. Motivated by values of patient care and the protocols of scientific research—and by the political economy of academic publishing, grants, and tenure—researchers impose their own moral ideas on this nascent field, sketching out what it can be according to their own vision. When Dr. Torous publishes about hacking into apps with dubious commitment to their own data privacy policies, he engages in a sort of crime fighting, moralizing an otherwise immoral act (hacking) in pursuit of a greater good. Dr. Alvi, Daphna, Asta, and Melinda recommended that a solution to clinicians’ liability concerns when using machine learning datasets would be to provide patients with a copy of the raw data. This solution moralizes a sort of data literacy not broadly attainable; very few people have the resources or skills to make sense of datasets that might include billions of data points of no immediate or individual relevance to their health. Finally, efforts to build digital psychiatry as a community have focused on cultivating an in-group with a shared set of elite institutional affiliations and credentials. Digital psychiatry researchers have made ethics a priority. But a perhaps unanticipated side effect of centering moral entrepreneurship in the project of professional self-formation is a narrowing of what morality might be and who might determine it. Importantly, although I have focused in this article on how researchers make claims to moral authority, I have not demonstrated whether researchers are recognized as moral authorities by other groups like policymakers and technologists. This question will be an important topic for future research.
This article presents three major contributions. First, I invert the lens of inquiry with the concept of moral entrepreneurship, focusing not on how it defines deviance and establishes outsiders (Becker 1963) but rather how it defines belonging and constructs insiders. I suggest that the creation of the moral entrepreneur is a critical project for understanding how moral dilemmas transform into institutional norms and taken-for-granted practices. Second, I suggest that this investigation offers evidence to moderate some of the worst-case-scenario analyses of morality in AI, common in literature on digital and surveillance capitalism (Sadowski 2020; Zuboff 2019). Digital psychiatrists’ moral entrepreneurship demonstrates tremendous local attention to questions of ethics and morality, even if their answers to those questions pose dilemmas of their own. It is not the case that machine learning research in mental health is focused solely on domination, extraction, or profit. Moral entrepreneurship constrains conversations about what morality might be for digital psychiatry, but this is not nearly the same as an assertion that digital technology innovators seek to limit human freedom (Zuboff 2019) or replace health care and public health institutions with self-monitoring and self-care.
A final contribution of this article is its empirical focus. This article presents an investigation into how researchers seek to settle the “ethics of AI” on the ground, an understudied phenomenon even amid calls for the regulation of machine learning tools. To be sure, although the concept of moral entrepreneurship is flexible and may apply to many cases in which moral regulation seeks to catch up to technological innovation, the dynamics presented here may vary across institutions and technologies. In addition to studying variation in moral entrepreneurship strategies, it is important to consider the implications of moral entrepreneurship pursued as a strategy of self-regulation. Questions about professional self-formation and moral innovation are critical dimensions for the sociology of AI, in line with a growing body of scholarship that investigates how AI and machine learning reshape fundamental tasks of social cognition like ranking, ordering, and classifying (Burrell and Fourcade 2021; Fourcade and Healy 2017). The sociology of morality will be a crucial lens for the sociology of AI as we seek to comprehend how these powerful new tools for classification and decision making reorder society.
Acknowledgements
I would like to thank Nick Allen, Miriam Gleckman-Krut, Roi Livne, Jason Owen-Smith, Anna Woźny, Shira Zilberstein, two anonymous reviewers, and the editors for comments on earlier drafts.
