Abstract
To implement an Aristotelian virtue ethics framework to live well with artificial intelligence (as described in Smith and Vickers, 2024), we need teachers who can serve as mentors and role models for the next generation. Finding mentors who can both teach technical expertise and model its ethical deployment is challenging, and Aristotle provides few hints on how to uncover such mentors. The account of expertise in Plato’s Gorgias seems to align with Aristotle’s vision and provides additional detail. The upshot of the conversation between Socrates and his interlocutors is that an expert should have three capacities: (1) to consistently produce excellent products of their expertise, (2) to replicate their skillset in the next generation, and (3) to use that expertise ethically. A doctor serves as a paradigm case of an expert in Gorgias—while a doctor has the technical ability to both heal and poison, the doctor also has good judgment that will prevent them from using their powers for harm. In Plato’s day, expertise was transmitted through apprentice learning, and so a doctor would also train subsequent generations of doctors. In Gorgias, Plato establishes these criteria for a trustworthy expert who can pass on dangerous skills responsibly. While the dangerous skill at issue in the dialogue was oratory, we can apply the same criteria to another dangerous form of expertise: artificial intelligence. Correspondingly, expert engineers should possess the technical skills, understand how to use those technical skills ethically (and actually use them ethically), and be able to replicate both of these skill sets in the next generation of engineers. We can use this account of expertise to develop a clearer picture of the ideal faculty we would hire to teach in all disciplines, but particularly technical disciplines, with the capacity to shape artificial intelligence for good or ill.
Plato’s account provides the criteria required to pick out mentors for a robust virtue ethics-based training of the designers (and even the users) of technology: (1) technical ability, (2) teaching ability, and (3) ethical practices (at least within the scope of their technical domain).
Introduction
Large language models (LLMs) exploded into public consciousness with the release of ChatGPT (powered by GPT-3.5) in November 2022. By February 1, 2023, it had become the fastest-growing application in history. Many artificial intelligence (AI) companies—OpenAI, in particular—seem to prefer to ask for forgiveness rather than for permission before, for example, ingesting copyrighted data or mimicking the voices of people without their consent. 1 The New York Times is suing OpenAI, and Scarlett Johansson has threatened the same. 2 In addition, technology companies are pushing models out to the general public before they are fully tested (Roose, 2023) and sometimes over the objections of their in-house ethics and safety teams (Basilan, 2023). Moreover, small companies are breaking into markets the tech behemoths have agreed should be off-limits, sometimes with appalling consequences, for example, the extra-powerful facial recognition software produced by Clearview AI. 3
In this rapidly changing environment, it is vital that we learn to live well with technology (Smith and Vickers, 2024). This involves asking questions about how we should use (or avoid) technology in various situations, what research avenues engineers and developers should pursue, how we should teach the next generation to be responsible users of AI, and so on. While the dominant approaches to AI ethics are based on rules or principles, we have argued extensively elsewhere that principles are the wrong approach to AI ethics on the whole because they are insufficiently flexible to deal with novel cases (Smith and Vickers, 2024: 8–10). 4
Principles are reactive—as opposed to proactive—and lag far behind technological development. This is especially true now, as we are faced with what Vallor (2016) calls “acute technosocial opacity” (p. 6), the inability to reliably predict the future in the face of rapidly changing technology. In these circumstances, we need an alternative mechanism to ensure the appropriate norms, whatever they may be, can be stabilized and passed on in society. How can we ensure that the outcome of this technological arms race is, in fact, morally good (or at the very least morally neutral)? 5 Approaching technology ethics from a virtue ethics—rather than principlist 6 —perspective allows us to develop a better understanding of how to act well—and live well—with technology, without needing to predict the future or identify a specific set of rules to follow (Smith and Vickers, 2024).
The standard of correct action in Aristotelian virtue ethics is to act as the virtuous agent would in a given circumstance (modulo the skills and knowledge of the actual agent in question). 7 In order for this to provide guidance in the face of an opaque future, we need individuals who possess both a good understanding of the relevant technology and a virtuous character. 8 If there are such people, we can turn to them for advice or, in rare cases, treat them as moral role models that the rest of us can imitate, so that we can learn to live well with rapidly advancing technology. 9 We must be clear from the outset that we have a thoroughly Aristotelian (not neo-Aristotelian) approach to virtue ethics. This means that when we speak about “role models” or “mentors,” we are not referring to exemplars from today’s exemplarist tradition (q.v. Zagzebski, 2017). Rather, our notion of mentors and role models comes from the Aristotelian notion of the virtuous person who serves as a role model for aspiring virtuous people to imitate and the Platonic notion of a (hypothetical) expert in virtue, who (hypothetically) can train the next generation of virtuous people. 10
In order to produce the experts needed to create future generations of virtuous technologists, education—both formal and informal—is vital. Technological-ethical experts are essential to an Aristotelian virtue ethics approach to living well with AI. We previously proposed using an Aristotelian virtue ethics model to train the next generation of designers and users of technology (Smith and Vickers, 2024). We believe that the best way to live well with AI is to provide its user base and, especially, the designers of future technology with the necessary character training to be able to deal with ethical situations, both quotidian and novel. The future of living well with AI depends not upon technical fields, like computer science, but upon humane fields, like philosophy and education. This means the development and implementation of a large-scale educational program encompassing both moral development and technological skill acquisition, or at least a serious overhaul of existing educational programs. For example, technology education programs would need to intentionally include more content oriented toward the character development of their students. The goal of this program would be to produce at least some genuine (or near) experts in both ethics and AI technology. These experts could guide us in future technological development, policy making, and so on. Since right action in Aristotelian virtue ethics is a matter of doing as the sage—the moral expert—would, the presence of such experts would ensure that we, in fact, acted appropriately with respect to these novel technologies.
To be clear, this is not an article on the aims of education writ large. It does not defend a position on whether or not education should aim to make people ethically better or provide them with the tools to flourish. There is a vast scholarly literature on the aims of primary, secondary, and tertiary education, and we are not attempting to intervene in that conversation. Rather, this article addresses questions about education from a different perspective, considering a possible result of education rather than what education ought to aim for. 11 We have argued elsewhere that a particular sort of education is necessary in order for individuals to live well with AI (Smith and Vickers, 2024). Assuming that we want to live well with AI, this article then considers the types of teachers and mentors that are necessary to successfully create the sort of education that would allow us to do so. While we believe that this sort of education should begin much earlier, the majority of the education that shapes each subsequent generation of technologists (at least in terms of their technical abilities) happens at the tertiary level.
One might wonder what it looks like when this approach works as well as it possibly can, powered by the best possible educators. Aristotle is surprisingly sparse on details about what a virtuous role model looks like. To better understand the relevant sort of teacher required to train virtuous technologists, we turn to another ancient source: Plato’s Gorgias (Dodds, 1959). Gorgias is a useful source because in it, Socrates and his interlocutors examine what it is to be an expert and a teacher of expertise.
Nota bene: The use of English terms
A frequent objection we hear is that the characterization of expertise in Gorgias is simply not how people classify experts today; there are lots of experts in society who are amoral or immoral and plenty of experts who cannot teach. 12 This sort of objection fundamentally misunderstands how Socrates leverages the idea of expertise (and how we use it in this article). Socrates’ conception of expertise in Gorgias is not descriptive but prescriptive about what experts should be. We use this set of criteria prescriptively as applied to today’s context. Ideally, professors should be masters of their subjects who are able to produce high-quality products of their expertise, teachers who can train students to become subject matter experts in their own right, and practitioners with the discernment to use their expertise ethically. We realize that throughout this article, we use the terms “role model,” “mentor,” and “expert” in ways that clash with standard usage in the contemporary philosophy of education and epistemology literature. We apologize for any confusion this might cause. Our purpose is not to try to challenge contemporary uses of “role model,” “mentor,” and “expert,” but rather to use these terms to describe something in our own Aristotelian (and partially Platonic) virtue ethics framework.
A very brief sketch of our Aristotelian virtue ethics approach
The general position that we take for training the next generation to live well with AI is that of Aristotelian virtue ethics. We have discussed this approach in more detail elsewhere (Smith and Vickers, 2024), so we will provide only a brief sketch here. Aristotle argues that virtues and vices of character, which are the virtues necessary for living well, are what he calls “states of the soul” (NE 1106a13). States of the soul, according to Aristotle, are not our emotions themselves or the capacity for emotions, but rather the ways that our emotions are calibrated to motivate us to act well or poorly in relation to our circumstances (NE 1105b20-1107a25). We are virtuous when our emotions are calibrated such that they motivate us to respond correctly to our circumstances (NE 1106a21-24). For example, we are generous when we are correctly motivated to share our resources at the appropriate time, with the appropriate people, in the appropriate way, and in the appropriate amount, such that we help others impactfully without shortchanging ourselves (NE 1119b22-1122a17). We act with the virtue of calmness (πραότης) when we are appropriately angry to stand up against injustice to ourselves or others, but not so angry that we overreact and make a situation worse (NE 1108a4-9, 1125b26-1126b10).
Aristotle argues that we acquire the virtues of character through the process of habituation. Habituation is an apprentice-learning model of emotional training and regulation that allows humans to acquire the skills to successfully navigate ethically charged situations. Humans acquire these skills through an extended process that combines direct teaching with the imitation of role models, practice, and feedback from peers and mentors. Over time, habituation allows one to calibrate the emotional responses that help to guide and undergird the process of deliberation and decision that allows a person to act virtuously in ethically charged situations. As we have previously discussed (Smith and Vickers, 2024), Aristotle’s approach is similar to the apprentice-learning model, which Kim Sterelny describes as the way that new generations learn to act ethically based on the moral norms of their society (Sterelny, 2012). Sterelny’s work provides empirical evidence for the power of Aristotle’s model.
We acknowledge that our approach to ensuring that we live well with AI faces serious hurdles. Perhaps the most serious is a bootstrapping problem. While the program we propose is intended to produce technomoral experts, it also requires that there are people who are of at least sufficient moral quality to be worth imitating (Smith and Vickers, 2024). Thus, one major challenge to implementing a virtue ethics approach is that it requires moral role models and mentors—or at least sufficiently able moral coaches—to be able to train the next generation and model for them how to act ethically. Since, in virtue ethics, the ethically correct thing to do depends on both actor and situation, those learning to become virtuous need virtuous people—role models—upon whose actions they can model their own actions. Moreover, empirical evidence demonstrates that moral mentors or moral coaches are important for moral development. 13
This forces us to ask some questions about how to recruit the appropriate teachers and mentors for this part of the educational process. For example, are universities willing and able to change hiring practices to hire professors with the right sort of moral qualifications? What would those qualifications even be?
Techne in Gorgias, in brief
The account of expertise in Plato’s Gorgias aligns with Aristotle’s vision and provides three criteria for the sort of experts needed to mentor the next generation. Plato argues, in Gorgias, that an expert in a field has three capacities: (1) to be able to consistently produce excellent products of their expertise, (2) to be able to replicate their skillset in the next generation, and (3) to be able to discern how to use that expertise ethically (and pass that ability on to one’s students). By this standard, the faculty we recruit to teach in all disciplines—but particularly technical disciplines with the capacity to shape the future for good or ill—should be hired with three different criteria in mind: (1) technical ability, (2) teaching ability, and (3) the ability to make ethical judgments (at least within the scope of their technical domain). According to Plato, while the expert doctor has the ability to both heal and poison, the expert doctor also has the judgment that will prevent them from using their powers for harm. The expert doctor understands medical practice not just as a biological and biochemical enterprise but also as an ethical endeavor and acts in compliance with the aims of medicine. 14
Today, we may not wish to confer the term “expert” on only those who understand the ethics of their fields. Indeed, there is a culture of computer scientists and entrepreneurs, in particular, rising to prominence or putting out game-changing technologies by lying to investors, 15 moving fast and breaking things (and people), 16 and acting viciously in general. However, the persistence of this flagrant disregard for ethical practices and societal impacts will not lead us toward living well with AI. Cultural changes are most likely to happen if there are the correct sort of mentors in place to guide the next generation of technologists to make good ethical decisions. These mentor engineers should possess technical skills, understand how to use those technical skills ethically (and actually use them ethically), and be able to replicate both of these skill sets in the next generation of engineers. 17 This provides us with an understanding of what it would take to provide a robust virtue ethics-based training of the designers (and even the users) of technology.
Aristotle and Plato both describe the acquisition of virtue as a rigorous form of apprenticeship and believe that it requires appropriate role models. Aristotle’s description in Nicomachean Ethics of how children learn to act ethically corresponds well with recent anthropological studies about how humans from our deep past to the present acquire the skills to act ethically in community. 18 Aristotle argues that individuals acquire their ability to act ethically from a combination of guided, scaffolded practice, role models, community standards, and fine-tuning done by more advanced members of the ethical community (NE 1179a34-1181b24; Smith and Vickers, 2024). Aristotle, however, provides little in the way of details, especially in the selection of mentors and role models. Plato provides significantly deeper discussions of these issues. 19 Plato’s Gorgias provides an illustration of the problems with insufficient mentorship as part of the dramatic setting of the dialogue, as well as providing an explicit commentary about the nature of appropriate mentorship in the content of the conversation.
Plato’s Gorgias illustrates the requirement for appropriate expert mentors in any technical field. The dialogue showcases a dramatic contest between two supposed experts (Gorgias and Socrates) and the students who follow them. Gorgias claims to be an expert orator and teacher of oratory (449a). Both of his students—Polus and Callicles—are forceful speakers; both are also avowed amoral hedonists. This is somewhat ironic, as Gorgias claims that his students should come to him already virtuous or that, in the shocking case that they do not, he reluctantly agrees that he would be able to teach them virtue. 20 Socrates claims not to be an expert, but he clearly possesses exceptional skill with elenctic questioning; he is accompanied by a young friend, Chaerephon, who imitates Socrates’ questioning when he interacts with Polus. Indeed, Polus, Callicles, and Chaerephon all serve as understudies to their respective role models.
The dialogue raises two interrelated problems with the kind of teaching and mentorship that Gorgias undertakes by examining his students. First, Gorgias’ teaching is purely technical; Gorgias says he only trains the character of his students in rare circumstances (459d-461b, esp. 459d-460a). His students, showcased in the dialogue, are the result; Polus and Callicles are unrestrained, power-hungry hedonists who profess interest primarily in short-term gain (e.g. 466bff., 482cff.). Second, Gorgias’ students seem to view Gorgias as a role model primarily for his skill rather than his character. While Gorgias certainly claims to be an expert in oratory (449a), he leads a much more congenial conversation with Socrates than either of his students, even when Socrates appears to be winning the argument. Polus and—even more clearly—Callicles lack Gorgias’ skill, but more importantly lack his good character traits and his humility (e.g. Polus: 461b-463d, 466a-482c; Callicles: 505d-523a). The comparison between Socrates’ friend and Gorgias’ students illustrates something important about the nature of role models and ethical learning. Humans learn not just by imitation, but by overimitation; in other words, when humans learn from others by imitation, they copy even unnecessary steps (at least to start with). This differs from learning in other animals, where even in early trials, animals will omit imitating steps that seem superfluous (Lyons et al., 2007: 19791). Overimitation is one of the reasons that humans are uniquely capable of high-fidelity, high-bandwidth learning (Lyons et al., 2007).
In Gorgias, Socrates and Gorgias discuss what expertise is and differentiate expertise from other sorts of competence or good outcomes. They characterize an expert—someone with techne—as someone who possesses three features (448e-449e): (1) domain-related expertise, in other words, the ability to consistently make good products of the expertise (449d-e), (2) the ability to teach the expertise to others (449b), and (3) the ability to use the expertise wisely (449e). Without teaching how to use the expertise wisely, students may—and do—end up using their skills unwisely. Gorgias makes this plain early in the dialogue; he says that competitive skills like physical training and oratory can be used aggressively and in wrongdoing (456d-457c). As Socrates points out, for an expert to train someone in the same skill and have that person use the skill unjustly would be an unjust (and therefore unwise) usage of the expertise (459c-461b, esp. 460d-461b).
The way in which experts pass on their technai is important for how their students turn out. Socrates and Gorgias agree that there are two different mechanisms by which one can transfer information and skills to others; they refer to these two mechanisms as two types of persuasion. This distinction is important for understanding why embedding ethics modules—taught by someone who is a technical but not a moral expert—is insufficient for teaching the next generation of technologists.
The first type of persuasion Socrates terms “teaching,” and it produces “learning” in the students (454c). “Teaching” and “learning” in this case are technical terms that do not correspond to something like lecturing, but rather the type of direct mentorship that can only happen in small groups where students have direct access to and guidance from instructors (455a). It is not simply the method of transmission that is important in this more technical meaning of “teaching.” There is also a way in which students and teachers must approach the learning process. Teaching and learning also appear to require a joint undertaking between the participants and a genuine desire to investigate, rather than a competition (457c-d). Teaching and learning require the attitude that it is better to be refuted when saying something untrue than to believe in or convince someone of a false belief (458a). This, according to Socrates, is the correct method for producing experts. Someone who has learned carpentry is not simply someone who can build something but is a carpenter; someone who has learned medicine is a doctor (460b). In the same way, someone who has learned justice is a just person (460b). This means that ethical learning, just like these technical skills, occurs through apprentice learning from ethical experts. Ethics, just like any technical domain, requires practice and guidance: not simply a few lectures or activities embedded in a technical course, but proper coaching and mentorship from someone with the requisite skill set. All of these sorts of learning—as described in Gorgias—require not simply success but also mastery, so that one becomes an expert by learning. This mastery, importantly, involves not only the skill but also the ethical use of that skill.
The second type of persuasion, which Socrates terms “convincing,” results in conviction or believing (πεπιστευκέναι, 454c). Socrates and Gorgias agree that this is the only type of persuasion that can happen in large groups and through testimony alone. Practice takes time, and a speaker before a large crowd will not be able to provide the requisite time, guidance, and one-on-one or one-on-small-group attention for teaching to occur. Convincing is faster and does not require that all participants genuinely seek truth (458e-459b) or spend extended lengths of time undergoing apprentice learning. Convincing, unlike teaching, is effective on those who are not trained in the subject area (459b) and do not plan to be practitioners. Another factor that distinguishes the two is that convincing can produce both true and false beliefs (454d), but there is no such thing as false episteme (454d). Here, Socrates uses the word episteme as the result of the learning process (454d-e), showing again that episteme is an epistemic state produced by teaching. He then makes the results of these two processes explicit: convincing produces belief (πίστις) without knowing (εἰδέναι), while true teaching produces episteme (454e).
In Gorgias, we can see that what we now call “teaching” (as opposed to the more narrow use of the term in Gorgias), we might divide into two different types of transmission of information: what Plato terms “teaching” and “convincing.” Convincing is somewhat shallow and can come from a purely testimonial transmission. What is unique about teaching (in this technical sense, that is, something like apprenticeship) is that it is high-fidelity transmission; it transfers a stable epistemic state, episteme, from one person to another. In other words, with appropriate time, practice, and pedagogical skill, an expert can create another expert. This is a high bar; expertise also comes with the ability to use the skill wisely—and ethically—and to teach the next generation to do it as well. Teaching, in this technical sense, requires a certain sort of educator who brings with them not only the ability to create expert products but also the ability to lead by example and serve as an ethical role model for their students.
Why is Plato’s Gorgias a useful model?
Turning to Gorgias allows us to deepen our account of the role of academic instructors in moral education and habituation. We have argued that teaching future generations to live well with AI relies on their having virtuous character in conjunction with technical expertise. In order to become technical experts with virtuous character, our students require role models that they can imitate. Since so much learning occurs by imitation, it makes sense to ensure that the technical experts tasked with educating the students—experts that we can reasonably expect the students to imitate—are ethically as well as technically good. Gorgias, unlike other accounts of expertise, provides an account of experts and expertise that encompasses both the moral and technical dimensions.
As the goal of living well with technology requires us to create technology thoughtfully, with potential harms in mind, the technical and ethical aspects of education must, at least to some degree, come together. In other words, we cannot rely on students being exposed at one point to technical experts, at another to moral experts, and then fusing the two on their own in such a way that they come to understand what it is to use technology virtuously.
We hold that technical and ethical expertise are not two separate things that should be taught in isolation from each other. Many technical breakthroughs generate ethical issues alongside the technical advances. New technology frequently displaces human labor. For example, advancing military technology in the First and Second World Wars created the need for a large amount of human labor to do mathematically complex calculations. By virtue of this, large groups of women with mathematical ability and mathematics degrees were able to work at the forefront of mathematics. Subsequently, engineers built machines that could do calculations accurately and quickly, replacing the need for human computers; as a result, women were quickly edged out of technical jobs. In addition, as technology has developed and smartphones, laptops, and other portable devices have proliferated—and just as quickly become obsolete—a host of new (often miserable) jobs has been created in the extraction of lithium and rare earth minerals, as well as in reclaiming and repurposing the metals from decommissioned machines. Each of these technological changes comes with ethical implications—positive, negative, or some mixture of the two. Irrespective of debates around whether technology itself is ethically neutral, technologically driven change is rarely, if ever, ethically neutral. Designers, engineers, and any adopters of technology must be educated in a way that allows them to make informed ethical decisions about their design, creation, and use of that technology.
One potential counterargument would be that we can provide students with ethics instruction separate from their technical instruction; thus, we need not worry about the character of those teaching the technical skill. As we see from the examples of physical training and other competitive skills in Gorgias (456c-457c), those who are not taught how to use such skills in the appropriate contexts may not use them appropriately. Gorgias begins by arguing that the trainer is not to blame (rather the trainee is to blame) when the skills are misused (456c-457c) but later admits that anyone trained in such competitive skills must learn to use them virtuously (459b-460a). If one attempts to become a practitioner in a field (e.g. a technologist), one’s role models tend to be expert practitioners, rather than practitioners from other disciplines (e.g. one’s philosophy professor). Far too many students consider their general education courses or core requirements as annoying hurdles instead of legitimate training for their future careers. 21 Moreover, we cannot reasonably expect students to reliably distinguish the different character traits of the adults around them and to build a sort of composite moral model when they are still in such a complicated stage of moral development. It is important that students have technologically competent, ethical role models to follow.
Putting this into practice
The aim of amplifying our Aristotelian, apprentice-learning account of living well with AI is to make it more achievable. With a more thorough account of the sort of expert educators needed to create virtuous technological experts, we are one step closer. However, this also makes the potential challenges apparent. When considering how to implement the kind of educational framework we envision, there are clear theoretical and practical hurdles.
The first matter to consider is whether the kind of expertise we described is possible in the technical domain. The ability to create a high-fidelity reproduction of one’s technical skill and knowledge in the next generation, combined with the ability to provide a moral foundation for the use of said skills and knowledge, is a high bar. One difficulty here is that, as Vallor has noted (2016: 6), there is a great deal of opacity regarding the future of this technology. This technology changes rapidly, and in ways we cannot reliably predict. Further, the more complex AI tools are black boxes; even the technologists who create them cannot describe or predict their inner workings with precision and accuracy. AI tools are also proliferating and becoming ubiquitous across fields and businesses. Within the short time since the release of ChatGPT, LLMs now write first drafts—or even final drafts—of cover letters and marketing materials, proofread materials, provide critical feedback, and generate images. Some scholars are even using LLMs to write their academic articles or peer reviews. Our inability to predict the future use of AI may prevent high-fidelity reproduction of one’s knowledge in the subsequent generation of scholars; one may wonder whether such technical expertise is even possible. Simultaneously, technological opacity seems to demand—or at least encourage—a virtue ethics approach; principlist approaches lack the necessary flexibility to deal with novel situations, and consequentialist approaches require the ability to future-project risk and consequences in a way that seems to stymie all but the most brilliant technologists.
We argue that the process of identifying the technomoral experts required is possible but will require dedication and changes in hiring practices, especially by institutions of higher learning. Hiring committees in universities and technological research firms currently look at the published and peer-reviewed research or successful patent applications of prospective applicants, interview them, observe their lectures and teaching, and so on. These practices allow these institutions, on the whole, to recruit individuals who consistently produce excellent products of their expertise and thus fulfill the first criterion for the sort of expert described in Gorgias. However, the latter two criteria tend to be minimized or even outright ignored in hiring at institutions (at the very least those with traditional Carnegie rankings of R1 and R2).
Institutions vary widely in how committed they are to recruiting technologists with pedagogical excellence; this pedagogical skill is necessary for producing the next generation of experts (the second criterion). Several factors influence these decisions. One factor is that even those who serve as excellent teaching mentors may be lured away from teaching-focused roles by significantly higher salaries in industry. Another is that institutions gain both capital and prestige from researchers who bring in large grants and those whose research serves as the basis for patents. It is frequently in the interest of institutions to hire, tenure, and promote faculty with middling-to-poor pedagogical practices but impressive research accomplishments for the sake of funding, prestige, and publicity. Funding, prestige, and publicity tend to kick off a self-reinforcing spiral, begetting more of all three, as well as improved relations with donors. Even institutions that purport to be teaching-focused are trending toward recruiting faculty with impressive research records and demanding more research productivity over time. While this trend is understandable from a financial perspective, it is likely to endanger our ability to produce virtuous technologists in the next generation.
In addition, few institutions include ethical practices as part of their hiring qualifications (Gorgias’ third criterion). Of course, institutions may generally shy away from hiring those who have violated serious ethical norms, but this is insufficient for recruiting role models for the next generation. Even institutions that have tailored their engineering curricula based on virtue ethics and character education approaches shy away from including these criteria in hiring decisions. One concern about bringing these criteria into hiring decisions is that they would prevent institutions from filling positions. Indeed, one might grant that there are at least some technical and moral experts, but far too few to provide the volume of education needed or to produce the number of experts we will need in the future. While this is a reasonable concern, it seems possible that we can grow the number of experts available to teach over time—especially if we expand hiring decisions to consider all three criteria—and thereby increase our capacity to make technomoral experts. An additional concern may be that these criteria are insufficiently fine-grained for hiring decisions. In the following section, we attempt to demonstrate that these criteria can be operationalized when we have sufficient information about a technologist.
Example and counterexample: What we can learn from the case of real technologists
Aristotle was in the habit of using real individuals—either his contemporaries or those from the previous few centuries—to point to concrete examples of moral sages. Both authors of this article are skeptical about the prospect of following in Aristotle’s footsteps by identifying virtuous role models in the real world. It is challenging to find one person who meets the high standards Aristotle sets for virtue. However, we believe it is important to show that, although it might be challenging to find and recruit appropriate mentors for the next generation of technologists, it is entirely possible to analyze the character of a purported expert according to the criteria laid out above. In short, it is possible to operationalize Plato’s criteria.
Instead of holding up a single individual who embodies all of the qualities of a virtuous technologist, we will examine a notable figure from the history of computing, Norbert Wiener, with the aim of showing that the kind of character analysis we propose is possible. Wiener does not embody all three criteria but seems to clearly embody two of them while failing the third. To be clear, we are not arguing either that Wiener is virtuous or that he is an expert in the sense described. We do not think that people can be partially virtuous; Aristotle makes it plain that a virtuous person displays all of the virtues, including the metavirtue, phronesis, which directs one to use the appropriate virtue for the circumstances, and we hold a hard line on the unity of the virtues. 22 Further, we do not claim that Wiener was actually an expert in the sense we discuss in this article; he fails to fulfill one of the three criteria. Indeed, Wiener fundamentally lacked the ability to replicate his knowledge in the next generation of potential experts.
The purpose of evaluating Wiener on these criteria is to demonstrate that we have the ability to determine what sorts of characteristics might make a technologist capable of mentoring future technomoral experts. In other words, we show that analyzing the character of a real person in the way we would need to when seeking virtuous technology educators is possible, and not abstract to the point of impossibility. Wiener has a fascinating—and somewhat tragic—life story. We will not relate the details of Wiener’s life here. Instead we focus on specific aspects of Wiener’s character; we highlight where we believe he conforms to and deviates from the virtuous technologist as described above. Recall that Gorgias provides the following criteria for the true expert: (1) to consistently produce excellent products of their expertise, (2) to replicate their skillset in the next generation, and (3) to use that expertise ethically.
Norbert Wiener
Wiener is a useful example—and counterexample—of the virtuous technologist for several reasons. First, he demonstrated two of the three criteria we laid out: he consistently produced excellent products of his expertise, and he used his expertise ethically. This is an impressive achievement. Second, Wiener had prominent and well-documented character failings that prevented him from fulfilling the second criterion and may have contributed to the current disregard for ethics among AI innovators. Indeed, one biography described him as “the dark hero of the information age”. Third, there are several practical reasons that render Wiener a useful subject for analysis. Wiener is no longer alive, so his life can be assessed as a whole. In addition, Wiener left extensive writings, both personal and academic, that can be consulted; he wrote technical and popular works and a two-volume autobiography, and his notes are archived at the Massachusetts Institute of Technology (MIT). Wiener is also the subject of biographies 23 and appears as a character in histories of computing, technology, and AI 24 and is the focal point of a collection of essays about AI (Brockman, 2020). While secondhand information is insufficient on its own to evaluate someone’s character fully, taken together with his own writings it offers substantial material for such an examination. The purpose of this exercise is illustrative rather than judgmental; through it, we demonstrate the sorts of qualities that we hope for in moral coaches and those that hamper students’ moral development. Finally, we can see, through Wiener’s example, that these criteria are less challenging to operationalize than they might initially seem. We divide the section on Wiener into how he fulfills or fails to fulfill each of the criteria we extract from Plato’s Gorgias.
We describe the criteria in a different order; we first describe the two criteria that Wiener fulfills (Criteria 1 and 3) and then move on to the one he fails to fulfill (Criterion 2).
Criterion 1 (success): Wiener produced excellent products of his technical expertise
Throughout much of his life, Wiener consistently produced excellent products of his technical expertise. The excellent products of Wiener’s expertise are legion, and many serve as the basis for the technology we use today; Wiener was deeply embedded in the creation of information theory and developments in quantum mechanics. Unlike many of the early innovators in the field who became interested in the work through tinkering with hardware, Wiener was a theorist who never had the engineer’s touch. Moreover, Wiener was interested in big, pathbreaking mathematical ideas, rather than the kind of fine-tuning and precision problem-solving that sometimes absorbs the more practically minded members of these fields. This meant, in many cases, that Wiener’s ideas were not practically implementable until many years after he conceived them.
Wiener’s mathematical work on missile guidance and fire control during the two world wars demonstrates his prodigious mathematical abilities. During the First World War, Wiener wanted to do his patriotic duty to protect the United States and end the war. He was so nearsighted that he was completely unable to aim a gun himself and proved to be a highly ineffective soldier (Conway and Siegelman, 2005: 40–44). His contribution to the First World War was, instead, that he created a new set of mathematical procedures that could more accurately compute values between known coordinates to aid in the calculation of missile trajectories, improving on the methods available at the time (Conway and Siegelman, 2005: 43). On a more profound level, Wiener felt compelled to help the war effort in the Second World War. His heritage was Jewish, and he was appalled by the Nazi project. 25 In the Second World War, he was part of an interdisciplinary team and continued to improve his work on guidance systems, this time for the fire control apparatus for anti-aircraft guns. The work that Wiener did with Bigelow, which he described in an article known as the “Yellow Peril,” 26 was a radical step forward not only in weapons design, but in communication engineering more generally. The article, however, reached only a small audience while it remained classified. Moreover, the article was too technologically advanced to be built and implemented in the Second World War and ended up being shelved in favor of a more practically implementable project (although Wiener continued to leverage its insights into important developments in cybernetics) (Conway and Siegelman, 2005: 116–125). Wiener was frustrated that the technical innovations of his research were not shared widely so that other researchers could critique, improve, and build upon them (Conway and Siegelman, 2005: e.g. 127–128).
Criterion 3 (success): Wiener was committed to using his technical expertise ethically
Wiener clearly demonstrated his commitment to ethics; he did not shirk responsibility for designing weapons technology. After the Second World War, Wiener was disillusioned with military work; Vannevar Bush cut him out of the war effort in 1944, and Wiener ceased to work for the military from then on. Wiener distanced himself from the military, in part, because of a belief in open science: Wiener believed that scientific research ethics required the sharing of information for peer review and the honing of ideas. Once the war was over, he believed society was best served by innovative ideas being shared globally. In addition, he was horrified by the use of the atomic bomb in the war. He began to speak against the US military during the Cold War. His public antiwar and antinuclear stance, which he proclaimed in an op-ed, landed him on a list of potential communists, and McCarthy’s cronies investigated him for communist ties (Conway and Siegelman, 2005: 237–271). This serves as one among many examples of Wiener using his expertise ethically and publicly calling attention to potential ethical problems that might arise.
Wiener was also acutely aware of the implications that basic research in mathematics could have for both war and society. Wiener was well aware of the potential harmful impacts—along with the potential benefits—of the technology he designed; he coupled his technology with raising public awareness about potential impacts. 27 As Heims argues in the introduction to recent editions of The Human Use of Human Beings, “if shorn of Wiener’s benign social philosophy, what remains of cybernetics can be used within a highly mechanical and dehumanizing, even militaristic, outlook” (Heims, 1989: xx). Wiener was interested in the same project as the two authors of this article, asking questions like “how is the machine affecting people’s lives? Or still more pointedly: who reaps a benefit from it?” As Heims puts it, “Wiener urged scientists and engineers to practice the ‘imaginative forward glance’ so as to attempt assessing the impact of an innovation, even before making it known” (Heims, 1989: xx). With his “imaginative forward glance,” Wiener predicted a variety of scenarios, like the rise of the algorithm-driven gig economy (manifesting in, e.g., TaskRabbit and Uber) and the emergence of low-wage, low-skill jobs modifying data for training technology (manifesting in, e.g., Mechanical Turk).
Criterion 2 (failure): Wiener (mostly) failed to replicate his expertise (both technical and ethical) in the next generation
While Wiener was not the only figure to see the ethical implications of his mathematical work, he serves as a useful example of both good and bad practices for ethical mentorship. Wiener appears to fulfill the first and third criteria, which is a significant achievement. However, Wiener clearly failed on the second criterion. His most shining moment as a teacher was not about his pedagogy at all—instead, it showcases two other character virtues of his: his ability to innovate technologically and his interest in combating prejudice on the basis of race and ethnicity (particularly anti-Asian bias). In the 1930s, Wiener worked with a Chinese graduate student named Lee Yuk-Wing. 28 Wiener was interested in devising a new approach to electronic circuit design, based on the statistical models he developed over the previous decade. Lee liked Wiener’s idea, but realized that Wiener’s design had flaws; he significantly reworked Wiener’s design to create a streamlined network design, which had a variety of applications to telephone networks, radio signals, and improving electronic sound recordings. The two collaborated on a patent, and Lee formalized the design methods for his doctoral dissertation. When Lee defended his doctoral dissertation, the examining MIT faculty did not understand the work. Their questioning was hostile, and Lee shut down. Wiener intervened and told the examining faculty to go home and study the document until they understood it. Within two weeks, the examining faculty granted Lee his doctorate without further comment (Conway and Siegelman, 2005: 76–77). The work that Lee and Wiener did together was revolutionary, but Wiener’s contribution was more as an interdisciplinary collaborator and, eventually, a protector of Lee, rather than as a pedagogue. 29
Although he often fostered and nurtured other technological innovators—he mentored great figures in the history of computing and mathematics, such as Claude Shannon and Walter Pitts—Wiener cannot be said to have replicated his skillset in the next generation. Wiener lacked even the most basic pedagogical skills, aside from the ability to inspire and generate fascination among his students. For example, Wiener reportedly had such an intuitive grasp of mathematics that he often skipped steps in proofs or worked them out completely in his head, making it impossible for his students to follow (Conway and Siegelman, 2005: 83). Wiener also failed spectacularly to provide structure in his courses. As an illustration, Wiener once “walked into a packed lecture hall . . . wrote a large ‘4’ on the blackboard and walked out. Only later did his students figure out that he was leaving town for four weeks” (Conway and Siegelman, 2005: 84).
In addition, Wiener’s lack of emotional regulation caused him to alienate, quarrel with, and harm the well-being of his students and colleagues. Indeed, there is an argument to be made that the lack of ethics as an integral part of the AI revolution today can be traced back to a quarrel between Norbert Wiener and several other members of the technological community that created the basis for computing and AI (including John Von Neumann and Warren McCulloch). This deficit in emotional regulation lay at the root of most of Wiener’s personal failings, straining his relationships with friends, family, and acquaintances. One of his emotional storms caused him to sever relationships with his closest collaborators and students and likely contributed significantly to the death of his protege, Walter Pitts. As Conway and Siegelman recount, there were a variety of contributing factors to Wiener’s abrupt rupture from some of his closest colleagues and mentees. According to the Conway and Siegelman (2005) account, when Wiener was simultaneously suffering an emotional blow because publishers were uninterested in his autobiography, his wife convinced him that the young men in his research group had seduced his daughter and that his favorite student, Pitts, was not taking his research seriously (pp. 213–234). Pitts was a mathematical prodigy with a tragic backstory of abuse and homelessness (Conway and Siegelman, 2005: 138–143). Prior to this incident, Wiener had helped Pitts create a life at MIT, cultivated relationships on his behalf, and mentored and collaborated with Pitts up until the breach (Conway and Siegelman, 2005: 141–143). After Wiener severed the relationship, Pitts refused to fill out the paperwork required to finish his dissertation and steadily drank himself to death (Conway and Siegelman, 2005: 227–234).
Other students and collaborators with whom Wiener quarreled fared better than Pitts, but, subsequent to the rupture, many focused purely on the technology and were less interested in the ethical implications of their work.
For Aristotle, emotional regulation is the bedrock of virtue. The reason Aristotle argues that children must be raised well before they dive into ethics as an intellectual subject is that they must have acquired the basic abilities for controlling their emotions through habituation before they undergo the sort of fine-tuning required to help them act virtuously on a consistent basis (NE 1181a11-1181b23). 30 Some of the character virtues are more reliant on emotional regulation than others. For example, calmness (πραότης) is the virtue that allows one to act appropriately with respect to anger: being angry the appropriate amount, in the appropriate way for the circumstances, and for the appropriate amount of time (NE 1125b26-1126a3). In order to be calm in this sense, one must be sufficiently motivated by anger at injustice to act (rather than being simply insensate or passive). However, one must not be so angry that it clouds one’s judgment about how to handle the situation at hand, nor may one bottle up that anger and later lash out at others. Wiener’s lack of emotional regulation—alongside his pedagogical ineptitude—prevented him from being the sort of virtuous technologist that we desire to serve as a mentor to the next generation of technologists. And without his guidance and influence, the field of cybernetics, which was concerned with both technology and its ethical implications, became the field of AI, which is often considered to be a purely technical field.
In short, Wiener serves as a partial model for the sort of technologists needed, as well as a reminder that the sort of teachers and mentors that we provide to aspiring technologists has a massive impact on their futures. The example of Wiener demonstrates that these criteria are possible to operationalize and that it is possible to evaluate those whom we designate as mentors against these criteria. Wiener manages to achieve two out of the three criteria we lay out, which is impressive. He made his own technical innovations; even a fraction of his output of major innovations would qualify a technologist by the criteria we use. Further, he genuinely considered the ethical implications of his technical work in a deep and meaningful way. He did not merely write the first book on computer and AI ethics, but he made ethical choices at considerable cost to his own career and legacy. Indeed, these choices cut him off from the top-notch scientists of his day, since they prevented him from receiving the government clearance needed to work on innovative projects and barred him from military work. His private and public declarations about peace and research ethics placed him under government scrutiny and surveillance. Despite the many impressive personal qualities demonstrated by these choices, he seems to have failed at creating other experts and at passing on his care about ethics to the students he trained. This is, to be sure, a big failing, and it is one that possibly set research in computer and AI ethics back by decades.
Although Wiener is not a great mentor or role model, in this section we demonstrate both the possibility of and an approach to evaluating the virtues of technologists. In looking at Wiener biographically and over the course of a whole life, considering both his character and skill, we provide a more extensive version of the sort of evaluation that hiring, rank and tenure, and similar committees might do when evaluating technologists. Such committees likely cannot access quite so much or such rich material. Yet, we show here that the type of evaluation for which we advocate is feasible. Our suggestion is, of course, far more onerous than reading a CV and cover letter, but it is a possible—and ethically responsible—route for committees of this sort to take. In addition, we show that we can make some progress on determining appropriate mentors even if we lack perfectly crisp, clear standards by which to judge technologists. A combination of knowing the basic area in which we are looking, plus the criteria from Gorgias, gives us the ability to seriously consider both the mentors we are able to hire right now and how we might aspire to educate and cultivate increasingly better mentors over time. We cannot expect to find virtuous technologists in every hiring search, but it is worth taking these criteria seriously and beginning to implement them as we move forward into a rapidly technologically infused future.
A quick note on how our approach differs from Zagzebski’s approach to exemplars
One of our anonymous reviewers suggested that what we are proposing is an exemplarist moral theory, like that of Zagzebski (2017), in which moral concepts and terms are defined by direct reference to exemplars. Taking Zagzebski’s view as paradigmatic, the basic structure of exemplarist theories is that we have some more-or-less reliable way of identifying exemplars of certain moral properties (in Zagzebski’s approach, via the emotion of admiration): for example, “Someone who is brave is like THAT”; “Someone who does their duty is like THAT.” In exemplarist theories, the identification of the exemplar is basic; one can then investigate the deeper natures of our exemplars in order to provide detailed analyses of moral concepts. There are certain elements that we do share with Zagzebski.
While there is some similarity between Zagzebski’s work and ours, we are not proposing or working within an exemplarist theory, particularly in its contemporary form. We are adamantly working within an Aristotelian virtue theoretic framework (q.v. Smith and Vickers, 2024). We have argued elsewhere (Smith and Vickers, 2024) that living well and morally is a matter of eudaimonia and that eudaimonia is determined by reference to what is in some way the characteristic function of the human person, in the Aristotelian sense (NE 1097a20-1098a20). 31 Living well and being moral is a matter of being like an exemplar in the sense that the role model or mentor is, to us, an instance of someone who lives a eudaimon life by consistently engaging in virtuous activity (in this case, by possessing and acting according to virtues of character). 32 We see the role model as someone to be imitated and learned from; we do not take the exemplar as the basic means of defining moral terms, nor as someone who can be identified as excellent prior to a theory that explains why they are, in fact, good. As the prior section demonstrates, we, like Zagzebski, are concerned with the ability to analyze the character of actual people. And, like Zagzebski, we think that exemplars play an important role in moral education. People learn a substantial amount about how to be moral by imitating people who are in some way morally better than they are (NE 1180a1-b28). However, our biographical analysis of Wiener should be understood not as an investigation of an exemplar to discern their deeper nature, but as a proof of possibility: flawed individuals can be analyzed as nonetheless suitable for imitation in some respects, and in the way we suggest one might evaluate faculty as technomoral mentors.
Objections considered
Objection 1: Is the kind of expertise Gorgias describes possible?
One potential objection to the position laid out in this article is that there are both technical and moral issues with the type of expertise described in Gorgias. One might think the standards are far too high for creating the sorts of moral mentors needed. This worry has three sources: technical opacity, sociotechnical opacity, and the difficulty of being a moral mentor.
First, the technology itself is opaque, which makes it somewhat difficult to anticipate the ethical issues that may arise. Many of the common AI tools that are available are black boxes; we do not know exactly how they work. Indeed, one of the major projects that Anthropic is currently undertaking is to gain a greater understanding of how LLMs work (e.g. Templeton et al., 2024). That article was groundbreaking in its ability to help us understand the way in which LLMs map semantics. However, it simultaneously shows how far we are from a deep understanding of this technology, because it was the first article to map how monosemantic features can arise from polysemantic neurons in a neural network.
Second, one might worry that the Acute Technosocial Opacity that Vallor (2016) considers (pp. 6–10) creates problems for the kind of expertise we have in mind. Vallor denies that we can anticipate future technological development sufficiently to design the technical and character education necessary to cultivate the skills to live well. She argues that technological change is happening in new and unexpected ways, and we can no longer rely on future problems to resemble past problems. If Vallor is correct, then the kind of expert we have in mind may be near impossible to find or create.
Third, finding moral mentors is incredibly challenging. One reason is that there may simply not be many such people. It is very hard to find moral people, especially those who rise to the level of being genuinely virtuous. If there are not many of them to begin with, then our chances of finding them shrink. Furthermore, looking for them within the realm of technical experts shrinks the possible pool further. If few possess the requisite skill set, we cannot reasonably expect to design the education necessary to equip the next generation to handle future ethical problems.
Reply to objection 1
While technology, technological development, and technosocial changes may be somewhat opaque, virtue ethics responds well to novel situations. Unlike deontology, virtue ethics does not require a set of static principles, and unlike consequentialism, it does not require accurate methods for calculating risks and outcomes. Instead, in virtue ethics, someone attempting to act correctly must extrapolate what a virtuous person might do in the situation in which they find themselves. Although it may be challenging to determine what a virtuous person would do in a novel situation, especially in the absence of an unambiguous technomoral expert one might consult, our best tool is to uncover the proper analogies to other situations in which it is clear how the virtuous person would act. Fortunately, the technological change that we see today is not nearly so radical as Vallor might suppose. In fact, some of the luminaries of the early history of computer science predicted many of the major changes we see today. In particular, Norbert Wiener’s book The Human Use of Human Beings anticipated many of the major changes of today, including the algorithm-driven gig economy and the rise of a technological underclass, who spend their monotonous days doing the sorts of work that allow corporations to improve their AI models (e.g. Mechanical Turk workers or those who work to label driving videos for Tesla). Indeed, studying the fundamental mechanisms underlying today’s technology can lead one to anticipate future ethical problems. Vickers emphasizes this in her ethics of technology courses, providing students with a thorough understanding of the history of technology and the ethical issues that grew out of technological change.
Moreover, technological and technosocial opacity does not rule out moral role models. Serving as a moral role model is a high bar in any era. Yet, there remain some moral mentors to whom the rest of us can look as role models. In addition, we have reason to believe that there is a ratchet mechanism for improving technomoral expertise over time. Using figures as partial role models, as we do in this article with Wiener, can serve as a form of moral guidance. In addition, Aristotle provides a useful roadmap for ethics pedagogy (as we described in Smith and Vickers, 2024). As technology ethics courses and modules proliferate at universities, professors have the ability to use a cognitive apprenticeship model and model for students how they pinpoint the ethically salient features of situations involving technology and make decisions based on those features. These instructors can serve as moral coaches, helping students develop the skill sets needed for dealing with novel situations.
Objection 2: Is it possible to create hiring criteria that would capture this sort of expertise?
Another potential objection to this position is that it is difficult to identify those who might serve as moral mentors—or even ethical coaches—for students, and it will therefore be difficult to recruit and hire faculty who fit this profile. Indeed, Plato is enigmatic about how we should determine who is virtuous; he thinks that we are often wrong about whom we call virtuous (e.g. Pl. Meno 87cff.). This sort of misidentification is a major theme of Gorgias. Socrates calls attention to it most clearly near the end of the dialogue, when he discusses Pericles, Cimon, Miltiades, and Themistocles (Grg. 515c-519d). These were some of the most renowned and beloved Greek politicians. Socrates argues that these famous men cannot have been virtuous—despite the fact that they are widely acclaimed for their virtues—because in each case the people whom they purportedly governed, instead of becoming more virtuous, became more mercurial and unruly and took out their emotions on those very politicians. 33 Aristotle is equally enigmatic, but in the other direction: he assumes we will be able to pick out who the virtuous people are and are likely to agree on role models, yet provides no guidelines for doing so. For this reason, it seems challenging to use character as a factor in hiring decisions.
Reply to objection 2
We agree that it is challenging to identify suitable ethical coaches—and even more difficult to identify moral role models. Yet, giving up and insisting that no assessment of a faculty candidate’s character can be made is the wrong approach. While we acknowledge the difficulty, we argue that it is not hopeless to judge the capacity for ethical sensitivity, even in the academic hiring process. Already, faculty assess their potential colleagues for likability and fit during on-campus interviews. Almost everyone involved in hiring would acknowledge that making evaluations of fit and likability is an important part of the hiring process, despite the fact that it requires a complex and often poorly specified set of intuitions, gut reactions, and best guesses based on a small number of interactions and carefully curated behavior on the part of the candidate. Although the process is highly imperfect, it allows hiring committees to make better choices about hiring and, in some cases, to dismiss candidates from consideration. 34 We think there are several indicators that there is some hope for assessing character during the hiring process. First, we think that there are some clear ways to assess ethical sensitivity, which is a prerequisite for suitable ethical coaches. Second, humans already assess their peers and famous figures and hold up those they deem worthy as role models for their children and students. This means that with sufficient data, we can make at least some determinations about an individual’s character. Third, we believe that Gorgias provides us with an operationalizable set of integrated moral and practical criteria by which we can assess faculty and which fits well with our broader virtue ethics view. While it is difficult to operationalize these criteria, we think that our ability to assess Wiener via these criteria provides hope for choosing appropriate mentors for the next generation.
One of the clearest markers of those who may be able to serve as suitable mentors is sensitivity to the ethical questions at stake in technological development, even among those who are not technological-ethical experts themselves. In working closely with engineers and computer scientists, Vickers has noticed that intellectual curiosity and humility about the ethical issues that might be at stake often signal someone as a suitable mentor. Engineering professors, students, and professionals sometimes approach Vickers with a variety of questions, from the possible ethical implications of their projects to how they can help their students learn to anticipate possible ethical harms of their future work. While this may sound like a small step, these engineers contrast sharply with those whom Vickers sometimes encounters, either in person or as an ethics reviewer for their articles, who actively try to insulate themselves from considering the ethical implications of their work at any deep level. Moreover, in Vickers’s experience, the students of engineering professors who are sensitive to ethical issues often demonstrate a similar intellectual curiosity and humility about the ethical stakes of their own work, along with deep engagement in the ethics modules or courses they take. There are simple interview techniques that give faculty candidates the space to talk about how they grapple with the possible ethical implications of their work, which can help universities determine which candidates have the potential to be ethical coaches.
In addition, we hope that individuals have at least some ability to pick out virtuous behavior and agents who often behave virtuously. 35 Whether or not we are genuinely capable of picking out virtuous people, parents regularly reference role models for virtuous action when teaching their children, and teachers do the same for their students. 36 Usually, we have a lot of data on those individuals we hold up as role models, either because they are people we know personally and well or because they are held up as heroic figures from history. For those we know, we live with them, observe them intensively (albeit informally), test what it is like to behave the way they do, and so on. For historical or public figures, data may be readily available in the form of biographies or other public information. This is, in part, why it feels almost like a betrayal when one suddenly discovers that someone one admires is a sexual predator or has some other vicious proclivity. Translating data about virtuous individuals, whether those we personally observe or historical figures we dissect, into a measurable metric for hiring is challenging, especially since our view leans heavily on the noncodifiability of ethics.
In the section describing Wiener above, we describe how the criteria from Gorgias can aid in evaluating technologists, even if it is difficult to imagine a rubric to operationalize this evaluation in hiring decisions. The Wiener example demonstrates, in particular, how specific character virtues and vices can influence whether or not one serves as a good mentor and role model. Clearly, Wiener was mathematically brilliant, showed deep sensitivity to the ethical implications of his work, and modeled this sensitivity for his students and for the public at large. He also served—at least for some period of time—as a mentor and champion for vulnerable students at MIT, including Pitts and Lee. However, Wiener’s lack of emotional regulation wreaked havoc on his own life and the lives of his students (especially Pitts), and it marked his failure as both a mentor and a role model for the next generation of technologists.
While Wiener’s virtues and vices are much easier to see in hindsight, there are some ways in which hiring committees might get a glimpse of the characters of faculty candidates. As we described near the beginning of this section, there are questions interviewers might ask that can help determine the ethical sensitivity of candidates. 37 Hiring committees can also craft interview questions that allow candidates to describe how they think through ethical issues that might arise. In addition to asking faculty candidates to give a teaching demonstration, a university might provide candidates with an opportunity to coach students through ethical case studies. Another assessment mechanism hiring committees might employ is asking for character references in addition to academic reference letters. None of these methods is foolproof, and a variety of factors may influence performance on these character measures. Still, it is worth considering how the hiring process might expand to allow institutions to more fully gauge individuals against the criteria we lay out for technomoral expertise. Certainly, this augmented hiring process requires exploration. Presumably, different institutions would try different approaches; some would succeed and propagate while others would fail.
Despite these difficulties, we demonstrated in the section on Wiener that it is possible to assess individuals according to the criteria that we drew from Gorgias. In addition to a general sense that some individuals are closer to or farther from being moral role models, we showed in our analysis of Wiener that, in at least some cases, we are able to see whether a purported expert (1) is capable of great intellectual acts, (2) can produce other experts and pass their technomoral skills along to the next generation, and (3) considers the ethical implications of their technical work in a suitably deep way. Thus, with sufficient information, we can engage in the kind of thinking that finding and hiring true experts requires. Further, while it would be wonderful to be in a position to identify and imitate only moral experts right from the start, this is not necessary. We believe that proper training and scrutiny of those who can serve as partial role models can incrementally increase the general moral quality of at least some technical experts through the appropriate educational processes. If we find the best people we can to serve as moral coaches and mentors, the hope is that the next generation will contain at least some people who are as good as, or even slightly better than, their mentors. Over time, employing such methods could increase the number of moral mentors and provide a greater variety of role models for each subsequent generation.
Acknowledgements
We would like to thank several people who read and commented upon drafts of this article, including Avery McDowell, Dr Brandon Richardson (Chapman University), Dr Dylan Popowicz (American River College), and Dr Steve Tammelleo (University of San Diego). We would also like to thank the audiences at PESGB 2025 and NAAPE 2025, where we presented this article, and the anonymous reviewers from Theory and Research in Education.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
