Abstract
This paper follows the reaction of the radiology profession to artificial intelligence (AI). We examine the effort of radiology, as a powerful medical specialty, to maintain its professional jurisdiction while allowing AI's disruption. We study the discursive work of radiologists as evident in their academic publications. Our results suggest that radiologists simultaneously hold multiple perspectives regarding AI, which allow them to be both conservative and innovative in their relations to it: to accept it, subordinate it, reject it and surrender to it, all at the same time. These perspectives are: (a) to integrate AI tools and skills into the radiology profession by cooperating and coproducing with AI experts while preserving the core values and structures of the radiology profession; (b) to absorb AI into radiology as (yet another) technology, subordinating it to radiologists’ authority; (c) to fight-off the threat posed by AI by undermining the legitimacy and capabilities of AI in radiology and strengthening professional boundaries; and (d) to assimilate the radiology profession into the field of AI. These perspectives enable radiologists, as a powerful medical specialty, to engage in a rhetorical dance with the equally powerful AI specialty and to challenge techno-optimistic approaches to innovation.
Introduction
The entrance of artificial intelligence (AI) 1 into expert labor is becoming a major topic in the study of professional work (Brayne, 2017; Browder et al., 2022; Christin, 2017; Faulconbridge et al., 2021; Goto, 2022; Lange et al., 2019; Lebovitz et al., 2022). In low and middle-status occupations, workers are more easily subjected to algorithmic management (Kellogg et al., 2020; Möhlmann et al., 2021). However, in high-status professions, experts have an advantageous position (Suddaby and Viale, 2011), where they benefit from their competitive abstractions (Abbott, 1988), their tacit knowledge (Polanyi, 1966; Lebovitz et al., 2021) and sophisticated resistance power (Christin, 2017; Goto, 2022). All these complicate techno-optimistic projections of AI automating or replacing expert-professional labor (Susskind and Susskind, 2015). The medical profession is of special interest in this context as it is both powerful and territorial regarding its expertise domains, even amid commercialization and the increasing power of other stakeholders in healthcare (Parry and Parry, 2018; Timmermans and Oh, 2010). In this article, we focus on the reaction of the medical profession of radiology as AI enters its territory and aims to disrupt it.
The theory of institutional work, and specifically the aspects of maintenance work in professions (Currie et al., 2012; Dacin et al., 2010; Scott et al., 2000; Zilber, 2009), can contribute to the discussion of AI's disruption of expert-professional labor. The notion of maintenance in this framework refers to the “supporting, repairing, or recreating” of professions as institutions (Lawrence and Suddaby, 2006: 230) and is perceived as essential for the stability of social structures over time (Dacin et al., 2010). Institutional theory has repeatedly shown that professionals are well-versed in preserving their powerful positions in organizational and institutional environments (Adler and Kwon, 2013; Murray, 2010). Specifically, Suddaby and Viale (2011) argue that in many cases, professionals are the ones who shape new fields of action, create identities, rules and standards, redefine a field's boundaries and reproduce its hierarchies and social capital (Currie et al., 2012; Malsch and Gendron, 2013). Hence, even in the face of the wide institutional disruption generated by AI, the power of radiology as a medical profession to reshape the imaging jurisdiction to its benefit should not be underestimated.
As studies of AI's disruption of expert labor show, maintenance work varies between different professional fields and is highly dependent on regulatory environments (Brayne and Christin, 2021), the profession's level of professionalization, the particularized sensemaking of the technology (Goto, 2022; Lebovitz et al., 2022) and organizational factors (Brayne, 2017; Goto, 2022). In addition, the type of task at hand is also important. While quantitative “number crunching” and high-frequency trading have led to AI dominance in finance expert labor (Hansen, 2021; Lange et al., 2019; MacKenzie, 2021; Pasquale, 2015), meta-quantitative professions, which require non-numeric, highly integrative, assembling, and associative cognitive skills, such as language comprehension (Goto, 2022), remain less vulnerable to current-day AI technologies. However, even meta-quantitative and professionalized professions cannot reject AI completely due to its reported improvements and the institutional pressures for its acceptance (Brayne, 2017; Ribes et al., 2019).
In radiology specifically, improvements in computer vision based on deep learning (DL) algorithms have been described by prominent AI experts as a case of complete automation of a medical task by the technology (Hinton, 2016). AI experts have argued that the technology they produce will be able to replace radiologists’ work and diagnose imaging scans autonomously, eliminating the need for an expert radiologist performing this task in hospitals (Kim et al., 2021). This claim has, understandably, given rise to a professional struggle (Abbott, 1988) between radiologists and AI experts over the imaging professional jurisdiction. The institutional pressure to use AI in expert labor forces radiologists to engage with this disruptive technology while maintaining their expert position and traditional power structures.
In these circumstances, the institutional order is an ongoing process and thus “unfinished” (DiMaggio, 1988: 12). Therefore, institutions in general and professional projects in our case, require ongoing work and exchanges based on support, corrections or reactions among stakeholders (Zilber, 2009). Such mechanisms are needed for dealing with change and uncertainty and are crucial for recreating professional stability (Lawrence and Suddaby, 2006).
The maintenance work of radiologists presented in this article is thus dualistic and complex (DiMaggio, 1991); it is conducted through spinning a web of scientific arguments regarding AI experts and technologies, creating a rhetorical dance of integration, subordination, rejection and assimilation at the intersection of the two types of expertise. The dance metaphor (cf. Ortmann and Sydow, 2018; Wallenburg et al., 2021) encapsulates the constant attraction-rejection dynamic between radiology and AI, the efforts to decipher the impact of the technology on radiologists’ work, and the containment of various voices in the professional community in the face of the uncertainties involved in change. As previous studies of professional maintenance show (Murray, 2010; Wright et al., 2017; Zilber, 2009), intersections at the boundaries of powerful professions require sophisticated maneuverings and movements, both rhetorical and practical. For radiologists, the rhetorical dance in their scientific argumentations enables them to integrate the AI expertise with their own, while maintaining their professional boundaries.
The profession of radiology and its use of technology
Radiologists are responsible for reading and interpreting graphical imaging scans, produced by several types of technological instruments, mainly Roentgen, computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound. In the modern hospital, almost every admitted patient requires the opinion of the radiologists (Jalal et al., 2019). Radiologists have been at the forefront of medical-technological change since the late 1960s (Barley, 1995). Compared with other medical specialties, they have little patient contact (Francis, 2008), and they face fewer malpractice claims (Chockley and Emanuel, 2016), which allows them the high status of a technological-medical, yet clinical, sub-specialty.
The social sciences focus on radiology mainly as a medical sub-specialty that relies heavily on technology (Barley, 1986; Burri, 2008). Barley (1986) has shown that the introduction of a new technology into a radiology department in two hospitals may result in the restructuring of the division of labor between physicians-radiologists and technicians, and that the professional use of technology is a localized matter, not a universal one. Burri (2008) has shown that radiologists use visualization technology as a means to demonstrate professional skills and power, increase one's reputation, and renegotiate identity. This use, Burri argues, allows radiologists to achieve symbolic capital in their boundary work, namely their distinction from other medical professions. Rystedt et al. (2011) showed that radiological expertise is inherently practical, highly situated and domain-specific. Yu and Levy (2010) demonstrated how the power of radiological associations, and their ties to national institutions, is decisive in halting or submitting to the globalization ambitions of private companies. These accounts of radiological expertise, its situated and socially constructed nature, point to the central role played by radiologists and their professional institutions in shaping the use of AI in their profession.
Radiology is now at the forefront of AI in medicine (Pesapane et al., 2018), as most FDA-approved AI devices (over 70%) are targeted at radiology (Zhu et al., 2021). Recently, Lebovitz et al. (2022) have observed radiologists using AI devices in hospitals and showed that opaque AI turns routine radiology tasks non-routine, increases uncertainty and leads radiologists to use AI devices differently than their producers had anticipated, frequently dismissing their results. Lebovitz et al. (2021) also point to the tacit knowledge embedded in radiological expertise, missing from the labeled data used in AI devices. This missing knowledge results in AI's underperformance in clinical settings. Therefore, even in the face of AI institutional disruption, the agentic role played by the profession of radiology challenges the feasibility of complete automation of radiology tasks, in spite of AI experts’ rhetoric (Hinton, 2016) and managerial desire for algorithmic control (Kellogg et al., 2020). We therefore argue that despite engineering and managerial attempts to automate radiology tasks, as a medical profession, radiology is well-equipped to harness AI to its benefit and maintain its professional boundaries, privileges and power structures.
While ethnographic fieldwork provides us with on-the-ground assessments of technology-in-use (Bader and Kaiser, 2019; Brayne, 2017; Christin, 2017; Goto, 2022; Lange et al., 2019), the expertise-related reasons and justifications for the use (or dismissal), acceptance or rejection, of AI are mainly evident in the scientific discussions of the profession. As the ultimate legitimacy-generating mechanism in expert labor (Abbott, 1988), peer-reviewed scientific journals provide professions with thoroughly thought-out reasoning and justifications, backed by empirical research, data and theoretical reasoning, which direct professional conduct. These scientific discussions are then used by practitioners as abstractions to justify their daily practices (Stevens et al., 2018). We therefore turn to the scientific publications in radiology to study this medical profession’s reaction to AI.
Research design and methods
Our study of professional maintenance focused on published reactions of radiologists that appeared as articles in top radiology academic journals. As the venue of formal discussion in established professions, peer-reviewed academic articles are an institutionalized arena in which to investigate professional perspectives. Read widely by the members of the profession, these publications institutionalize new knowledge and perspectives regarding professional boundaries. In addition, top journals have a high level of quality assessment and peer-review selection procedures. This assured us that we could sample the reactions and perceptions of top professionals in the field of radiology. Thus, our study examines the radiology profession's complex reactions to AI as they appear in 60 academic publications in the field of radiology.
We used two databases to select our sample. First, we collected articles with Google Scholar using the search words: “machine learning” and “radiology.” A Google Scholar search provides a bird's-eye view of academic publications. Figure 1 below shows the significant increase in publications containing the search terms “machine learning” and “radiology” over the last decade.

Figure 1. Number of articles with the terms “machine learning” and “radiology” for 2010–2020.
At the time we searched for these data (December 2020), there were about 355,000 results for this search. From this list, we reviewed the first 250 results sorted by relevancy, focusing on articles published in leading radiology journals, such as American College of Radiology, British Institute of Radiology, Canadian Association of Radiologists Journal, European Radiology, Pediatric Radiology, Abdominal Radiology, Clinical Radiology, Current Problems in Diagnostic Radiology, Journal of Magnetic Resonance Imaging and more. This range of journals allowed us to sample radiology journals from different countries and various sub-specializations. Based on their abstracts, we distinguished between articles reporting only technological applications of AI in radiology, with no discussion of the justifications for choosing ML to perform radiological studies (n = 184), and articles discussing opinions and justifications (n = 66), with or without an empirical study to back these opinions. Of the articles discussing opinions, we randomly selected 30 articles as our first sample set.
Second, we consulted a librarian and the JCR (Journal Citation Report) ranking of academic journals to distinguish prominent venues of publication in radiology. We then focused on the journal Radiology (IF 29.16 for 2021), one of the highest-ranked radiology journals and a host of the ongoing professional debate on AI. There were 50 results for the search “machine learning” in this journal since 2016, and we randomly sampled another 30 articles and added them to the Google Scholar sample as a second sample set, giving the two databases, Google Scholar and JCR, an equal number of articles in our sample.
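The two-stage sampling procedure described above can be sketched in code. This is a minimal illustration only: the article lists are hypothetical placeholders, and the actual screening of abstracts was done manually.

```python
import random

# Hypothetical candidate pools standing in for the screened article lists:
# 66 opinion-discussing Google Scholar articles, 50 "machine learning" hits
# in the journal Radiology since 2016.
google_scholar_opinion_articles = [f"gs_article_{i}" for i in range(66)]
radiology_journal_articles = [f"radiology_article_{i}" for i in range(50)]

random.seed(2020)  # arbitrary seed, only to make the sketch reproducible

# First sample set: 30 of the 66 opinion-discussing articles.
sample_1 = random.sample(google_scholar_opinion_articles, 30)
# Second sample set: 30 of the 50 Radiology-journal articles.
sample_2 = random.sample(radiology_journal_articles, 30)

dataset = sample_1 + sample_2  # 60 articles in total
print(len(dataset))  # 60
```

`random.sample` draws without replacement, so each sample set contains 30 distinct articles, mirroring the equal representation of the two databases.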
We viewed our dataset of radiology articles discussing AI as a discursive arena where authors and their readers shape their views of AI. Following grounded theory principles and an interpretive approach (Creswell and Miller, 2000; Saunders et al., 2018; Strauss and Corbin, 1997), we engaged in close reading of the 60 articles, focused on 171 quotations, which are paragraph-level arguments radiologists make regarding AI, and identified categories in these arguments (Pope et al., 2000). We identified the main arguments independently and then compared our classifications to reach inter-judge reliability (Leiva et al., 2006). We labeled segments of articles first by their topic: for example, when radiologists discussed the interpretability of ML models, we labeled the segment “Explainability,” and when they discussed the reaction of patients to automated systems, we labeled it “Patients.” Next, we compared elements within general categories to discern the different types of discursively constructed relations between radiologists and AI. For example, in the category “Ethics” the phrase “AI is based on unethical data agreements and privacy breaches” was compared with “AI will allow for more ethical service to patients.” In the general category “Patients” the phrase “patients would object to an algorithm reviewing and diagnosing their imaging scans” was compared with “patients would behave as consumers, navigating between machine and human diagnosis and choosing their preferred option.” This type of comparison allowed us to discern different themes in the perspectives of radiologists regarding AI (Gioia et al., 2013). As relational perspectives (relations of radiologists to AI), these themes represent perceptions of both “us” (radiologists) and “them” (AI experts/AI technology), the relations (and power relations) between the two groups, and radiologists’ ideas regarding the appropriate responses of their profession to AI.
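Inter-judge reliability of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, with invented codings standing in for the judges' actual labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the coders labeled independently at their
    # respective marginal rates.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of eight quotations by two independent judges.
judge_1 = ["Ethics", "Patients", "Ethics", "Explainability",
           "Patients", "Ethics", "Explainability", "Patients"]
judge_2 = ["Ethics", "Patients", "Patients", "Explainability",
           "Patients", "Ethics", "Explainability", "Patients"]
print(round(cohens_kappa(judge_1, judge_2), 2))  # 0.81
```

Values above roughly 0.8 are conventionally read as strong agreement; disagreements (here, the third quotation) would then be resolved by discussion, as in the comparison procedure described above.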
The outline below of these institutionalized perspectives is by no means exhaustive; more perspectives may be forming in other venues, not covered by the data presented here, and new outlooks may also evolve over time, when radiologists are better acquainted with AI, and as AI develops and proves (or disproves) its worth and influence in radiology. These perspectives are also not completely distinct or mutually exclusive. Rather, they are a prism through which the different types of institutionalized reactions to AI can be observed (Oliver, 2004).
Results
Results reveal a complex web of perspectives of radiologists regarding AI. This complexity suggests that professional maintenance in the field of radiology entails a constant maneuvering between interrelated perspectives, even in a single article, by a single author. This rhetorical dance of argumentation suggests that radiologists’ maintenance work is not necessarily strategic or intended, as no author wishes to appear inconclusive. Rather, the complexity of arguments suggests that maintenance work is done through an ongoing discussion, that enhances professional boundaries. As in prior studies of maintenance work in professional fields, this complexity is typical (Brayne and Christin, 2021; Goto, 2022; Malsch and Gendron, 2013; Micelotta and Washington, 2013) and is aimed at preserving traditional power structures while allowing the field to enjoy the prestige and benefits of innovation and technology.
It is possible to crudely divide the approaches of radiologists into “positive” (regarding AI) versus “negative” (regarding AI) (Kim et al., 2021); “cooperate” versus “compete” (Anteby et al., 2016); or “connect” (with AI) versus “protect” (from AI) (Noordegraaf, 2020), as has been suggested in recent discussions in the sociology of professions. However, we offer a more nuanced understanding of radiologists’ positions regarding AI and show that in their institutional work, radiologists not only simultaneously perform boundary work and networking with the AI field, but also a dance-like maneuvering between relational perspectives. Thus, the results below show that interprofessional relations, like all social relations, are a prismatic dance, never merely a movement of protection/competition versus connection/cooperation (Alvehus et al., 2021).
Below are the four themes that emerged in our analysis of the data: (a) integrate—radiologists aim to work together with AI experts and technologies, viewing both specialties as essential and equal; (b) absorb—radiologists work with AI experts and technologies but view them as subordinate and as (yet another) limited technological tool; (c) fight-off and delegitimize—radiologists challenge the threat posed by AI, undermining its legitimacy and capabilities in radiology and strengthening professional boundaries; and (d) assimilate—radiologists consider incorporating the radiology profession into the field of AI.
Table 1. Examples of the different perspectives of radiologists regarding AI.
The integrate and fight-off perspectives were the most frequent, with 54 quotes each. We found 37 arguments for absorb and only 26 arguing for assimilate. Each argument entails explanations by its author(s) on how and why radiology should integrate with, absorb, fight-off or assimilate into the AI expertise. These varied justifications exemplify the sociological complexity of embedding AI in radiology, the distinctiveness of these two professional cultures, and the attraction/rejection dance radiologists perform with the AI technology. In the following sections, we discuss each perspective and the interactions between them.
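The theme frequencies reported above can be tallied as shares of the 171 coded quotations; a trivial consistency check, using the counts given in the text:

```python
# Counts of quotations per theme, as reported in the results.
theme_counts = {"integrate": 54, "fight-off": 54, "absorb": 37, "assimilate": 26}

total = sum(theme_counts.values())
print(total)  # 171, matching the number of coded quotations

# Percentage share of each theme, rounded to one decimal place.
shares = {theme: round(100 * count / total, 1)
          for theme, count in theme_counts.items()}
print(shares)  # integrate/fight-off 31.6% each, absorb 21.6%, assimilate 15.2%
```

The tally confirms that the four theme counts exhaust the 171 quotations described in the methods section, with the integrate and fight-off perspectives jointly accounting for almost two-thirds of the arguments.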
Integrate
Integration means the continued cooperation of two professional groups while each group preserves its core values, training, skills and interpretations of tasks and problems. In the case of AI in radiology, rather than closing off the boundaries of the profession, some radiologists seem to embrace the new technology and expertise, viewing them as exciting and beneficial in terms of knowledge production, standards, efficiency and service to patients. In this framework, AI in radiology is perceived as reducing workload (Wong et al., 2019), standardizing and promoting quantification in radiology (Cochon et al., 2019; Hricak, 2018). This opening of the profession's boundaries is done specifically by adjusting the knowledge base of radiology to include and assist the AI expertise. We will discuss here three aspects of this integration: (a) learning the language and methods of AI; (b) integrating medical knowledge into AI systems and (c) establishing new shared institutions for knowledge development.
Learning the language, methods and skills. In their academic discussions, radiologists make an effort to learn the language of AI, read into it, as their new “dance partner” (Ortmann and Sydow, 2018). They teach each other about AI methods. For example, Erickson et al. (2017), Moore et al. (2019) and Nichols et al. (2019) review definitions, terminology, approaches and methods in AI, in what seems a genuine effort of the radiology community to learn about AI. Even critical voices (e.g. Pesapane et al., 2018; Al’Aref et al., 2019) first present in their articles the definitions, terminologies, approaches and methods. This effort to learn points to the willingness of radiologists to integrate AI into their profession. However, this integration does not include an internalization of the AI community's ethos of machine superiority over human experts; radiologists hold their fortress of expertise in imaging.
Integrating medical knowledge into AI systems. A second form of integration effort is done by finding ways radiologists can contribute to the development of AI in radiology. Ideas about cooperation and integration are many. Radiologists can help AI become less opaque (Akselrod-Ballin et al., 2019), annotate images for the training of AI (Chan and Siegel, 2019), and help create databases for AI (Tsai et al., 2021). In cancer screening, radiologists offer to contribute their knowledge to the systems: Other groups have reported the inclusion of qualitative semantic features such as nodule location, cavitation, and calcification… Hence, additional work is required to integrate these radiologist-crafted features to analyze their importance in our cohort… clinical translation as a cancer screening tool will require careful planning to integrate the human and machine interpretations together in decision support mode. (Beig et al., 2019: 791–792)
Building shared institutions. A third form of integration is promoted by radiologists when new institutions are established at both the professional and the organizational levels. Shared professional associations for radiology and computational methods are being founded, while hospitals establish AI centers. One example of this institutional integration is the efforts of the European Society of Medical Imaging (EuSoMII), formally established in 2016, to “connect radiologists, radiology residents, data scientists and informatics experts” (EuSoMII website).
Institutional cooperation also includes the establishment of “data centers” in hospitals (Dreyer and Geis, 2017: 715). These new institutions reflect a field-level effort to establish hybrid organizational forms, where a space is created for new tasks and power structures to emerge. When radiologists willfully establish shared institutions with data scientists and other IT workers, and cooperate with them for the development of AI, they seem to create a new field of medical-radiological data science, radiomics, with its related identities and careers. However, the establishment of such organizational forms does not eliminate the tension between the two work cultures, which use different criteria to establish “ground truth” (Lebovitz et al. 2021, 2022).
Finally, efforts to integrate radiology and AI knowledge are done in shared innovative competitions, where radiologists create annotated databases and release them to the online AI community: RSNA and Society of Thoracic Radiology (STR) collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multi-national expert annotated COVID-19 imaging dataset. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. (Tsai et al., 2021: 204)
In sum, integration efforts with the AI community in the form of learning, knowledge exchange and shared institutions promote an opening of professional boundaries. At the same time, this integration is done with consistent distinction of the two scientific cultures.
Absorb
The absorb perspective means that radiologists view AI as a technology and expertise that will be incorporated into their profession, but not in a disruptive or revolutionary way. Rather, some radiologists view the incorporation of AI into radiology as a slow, gradual process, similar to the incorporation of previous technologies, and as subordinated to radiologists’ authority. Here, radiologists preserve their authoritative position by viewing AI as a narrow and limited tool. In this perspective, thus, AI is understood as a technology that should be supervised by radiologists. This subordination is done as follows:
Framing AI as narrow. First, radiologists point to AI's known Achilles’ heel, namely the absence of “general AI” and the fact that thus far computer scientists are only able to build “narrow AI.” In this vein, radiologists frame AI as performing partial and fragmentary tasks and criticize these limitations. The majority of (if not all) deep learning techniques developed to date are unitaskers. They address a single type of image or modality and a single disease entity. There are likely situations where this is perfectly appropriate and useful, but in the long run we cannot have a plethora of different independent schemes providing a multitude of “opinions” that the radiologist needs to somehow sift through to make sense of. (Krupinski, 2018: 854) Board games such as “Go” focus on a very “narrow” artificial intelligence task where a winning vs losing status can be assessed, whereas medical imaging is associated with far greater amount of ambiguity, and a larger variety of features, classifications, and outputs. It is also likely that thousands of “narrow” algorithms based on separate large, well-annotated databases will be required for a computer to begin to compete with a radiologist for comprehensive diagnostic assessment of even a single modality covering a single anatomical region of the body… Although possible in games with simple defined rules such as Go or Chess, analogous self-reinforcement learning is not so easily attainable in radiology, given the lack of a simple set of rules of the “radiology game” to allow this sort of self-play. (Chan and Siegel, 2019: 1–2)
Framing AI as good for games, and then ironically referring to the “radiology game” rhetorically contrasts the frivolous and immature tech culture with the serious and responsible medical profession. In this perspective, radiologists are open to integrating AI tools in their work, but view them as inferior, narrow, game-like tools.
Subordinating AI as a limited technology. When viewed as a tool, rather than an alternative expertise, radiologists welcome AI into their profession, mainly to assist in the simple cases. They aim to incorporate AI into their profession as yet another computer aided detection (CAD) limited tool (Chan and Siegel, 2019), and do not view it as an exciting, superior technology or a threat. Instead, radiologists subordinate AI by criticizing its performance. As this senior radiologist comments: Current data indicate that in general even the best machine learning systems do not yet perform at the level of a radiologist… The use of radiomics and machine learning to predict patient outcomes is still in its infancy. (Summers, 2019: 1987–1989)
Similarly, from an authoritative position, radiologists aim to learn AI in order to know its limitations and supervise it: As ML algorithms become more embedded in clinical practice, radiologists will need to expand their understanding of these methods and what functions the models were created to perform. More importantly, it will be imperative to understand the limitations of these tools and models in the patient care continuum. (Halabi et al., 2019: 503)
In addition to arguments about performance and expertise, radiologists preserve their authoritative position also in terms of ethical considerations. They argue that since they (and other physicians) are the responsible and liable agents, they must supervise AI: As machine learning enters state-of-the-art clinical practice, medicine thus has the immense obligation to ensure that this technology is harnessed for societal and individual good, fulfilling the ethical basis of the profession. (Darcy et al., 2016: 552)
Fight-off and delegitimize
While they allow AI into their profession, radiologists also directly challenge AI and “disrupt disrupters” (Hargrave and Van de Ven, 2009: 120). They protect the boundaries of their expertise, separate their core knowledge and skills from those of AI experts and undermine the legitimacy of AI in radiology. We will discuss here five major arguments radiologists make against the incorporation of AI in radiology:
Standardization. First, radiologists attack the lack of standardization of AI in radiology. For example: Because of the rapid growth of this area, numerous published radiomics investigations lack standardized evaluation of both the scientific integrity and the clinical relevance. (Pesapane et al., 2018: 5) While sensitivity, specificity and ROC comparisons between algorithms and clinicians on test data sets certainly add validity to algorithm performance, the gold standard for any new methodology applied in a clinical setting relies on comparable or superior performance in a randomized clinical trial. (Nichols et al., 2019)
The authors demand that AI be standardized according to the clinical-trial practice of medicine. The clinical trial gold standard stands in striking opposition to the above-mentioned open competition standards of the AI community: while randomized clinical trials require long-term evaluation of the sensitivity and specificity of tools on a well-defined, randomly assigned cohort of patients, open competitions use different accuracy measurements, mainly comparing algorithms to physicians on given labeled datasets, returning to the “ground truth” problem of labeling (Lebovitz et al., 2021), and without long-term clinical follow-up on patients. Thus, while participating in competitions, and even providing labeling for algorithms, radiologists cast a shadow on the validity of these endeavors in clinical settings.
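The test-set metrics at stake in this contrast can be made concrete. A minimal sketch of computing sensitivity and specificity from labels on a fixed dataset, as in the open-competition culture; the labels are toy values, purely illustrative:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    for binary labels, where 1 = finding present, 0 = finding absent."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy "ground truth" labels and algorithm predictions for ten scans.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and 5/6 ≈ 0.83
```

Note that both figures depend entirely on the given labels: the "ground truth" problem the authors raise means such numbers, unlike randomized-trial endpoints, inherit any errors or ambiguities in the labeling itself.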
Lack of explainability and a “black-box” technology. A second argument against AI is that it is a “black-box” technology, which does not produce explanations for its decision-making process: One of the biggest issues in deep learning is the so-called black box problem (i.e. in deep learning the algorithm finds the rules itself, but often without leaving an audit trail to explain its decisions)… Especially in medicine, where accountability is important and can have serious legal consequences, it is often insufficient simply to have a good prediction system. This system should be able to articulate itself in a certain way tracing a report to explain its decisions. (Sollini et al., 2018: 7)
This type of erratic and opaque behavior of AI, while viewed as a shortcoming in radiology and management literature (Lebovitz et al., 2022) and as a reason for radiologists not to clinically trust AI, is glorified in the AI literature (Breiman, 2001) and is seen as a positive, trans-human feature of this technology, not as a bug. This difference in the two professional-scientific cultures, although allegedly mitigated by attempts to produce “explainable AI,” is actually persistent, and represents a deep scientific divide between the creators of AI and the rest of the scientific community (Cox, 2001; Lange et al., 2019; Vesa and Tienari, 2020). Thus, the powerful capabilities of AI to perform expert labor, such as radiological diagnosis, are constantly challenged by its inherent opacity.
Ethics. Third, radiologists doubt some of the ethical standards guiding the implementation of AI in their profession. For example, the following excerpt presents doubts regarding the legal standing of database generation in one case involving DeepMind, a famous AI company owned by Google: Developing AI requires access to large volumes of data. In the UK, this has been challenging particularly within the NHS. The panel found the agreement between DeepMind and the Royal Free Hospital Trust to be illegal and contained “deficiencies” as it did not safeguard patient data. (Wong et al., 2019: 142) There is also a major ethical concern with the development and use of augmented intelligence systems. How should consent be obtained from patients for the use of their imaging data? Should permission be obtained from the patients at the time of imaging that their imaging data may be used to train an algorithm? Further, how should data privacy be addressed? Where should this data be stored, when data hacking is an ongoing problem for some of the most secure systems worldwide? (Jalal et al., 2019: 11)
Informed consent has been the basis of medical ethics since the Nuremberg trials after World War II; forsaking it in big data research is a concern among medical practitioners and ethicists (Rothstein, 2015). Other authors are worried about biased databases that will lead to biased AI: Machine learning, despite being data-driven, can often be riddled with traditional biases. While harmless in most commercial settings, these can become problematic in the high-stakes healthcare space, where precision is of the utmost importance. (Al’Aref et al., 2019: 1985)
The authors, well acquainted with medical research, raise the concern of unmitigated bias in AI. They point to sampling, observer and database bias, and to the alleged absence of procedures to mitigate these biases in AI, as opposed to medical clinical trials.
These ethical considerations are part of a wide concern in healthcare regarding the implementation of AI systems into work practices (Siala and Wang, 2022). Specifically, they point to a growing problem in the relations between the two professional cultures, medical and big data research, namely how to distinguish legitimate research practices from professional misconduct. Therefore, efforts to integrate AI tools and open the boundaries of the radiology profession to them are accompanied by a persistent delegitimization of their ethical compatibility with medical standards.
Statistical inadequacy of ML-based AI. When fighting-off AI, radiologists point specifically to the statistical inadequacy of many AI implementations. As one review article summarizes: Feature type, selection, and classifiers vary among studies, patient sample sizes tend to be relatively small, test/validation datasets are lacking in most cases, and the image types used to extract features are variable. Moreover, it is unclear which indices should be used, what they represent, and how they are related to the underlying biological mechanism. (Sollini et al., 2018: 7) Given any task, it is then merely a matter of experimentation to determine the optimal model with the greatest potential for generalizability. Theoretically, the lowest attainable error is known as the Bayes error rate. This ceiling on performance exists because most phenomena studied in the natural world is permeated by noise. Consider, for example, two patients characterized by identical clinical parameters; while it may be reasonable to assume that such individuals will, on average, experience similar clinical outcomes, it is impossible to make such a claim with absolute certainty because the system being approximated is probabilistic rather than deterministic. (Al’Aref et al., 2019: 1978)
Specifically, another known Achilles heel of AI technologies, namely the problem of overfitting a model to its training data, is mentioned by radiologists as a statistical limitation of this technology (Pesapane et al., 2018; Moore et al., 2019; Kahn, 2017). One review article even argues that the prevalent overfitting phenomenon in AI prevents it from producing reproducible results: Our results demonstrate performance inconsistency across the data sets and models, indicating that the high performance of deep learning models on one data set cannot be readily transferred to unseen external data sets. (Wang et al., 2020: 796)
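The overfitting pattern the radiologists describe, perfect performance on the data a model was fitted to and failure on unseen data, can be demonstrated with a classic minimal example of ours (the data points are hypothetical). A polynomial interpolated exactly through a few noisy points from a roughly linear trend "memorizes" the noise and extrapolates wildly:

```python
def lagrange_predict(train_x, train_y, x):
    """Evaluate the unique polynomial passing exactly through all training points
    (Lagrange interpolation) -- an extreme case of fitting the training data."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(train_x, train_y)):
        term = yi
        for j, xj in enumerate(train_x):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical training data: a noisy linear trend, roughly y = x.
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
train_y = [0.1, 0.9, 2.2, 2.8, 4.1, 5.0]

# Zero error on the training set: the polynomial reproduces every point...
assert all(abs(lagrange_predict(train_x, train_y, x) - y) < 1e-9
           for x, y in zip(train_x, train_y))

# ...but at the unseen point x = 6 the memorized noise dominates.
print(lagrange_predict(train_x, train_y, 6.0))  # about -3.2, far from the expected ~6
```

The "high performance" on the training set says nothing about the external point, which is the inconsistency across datasets that Wang et al. (2020) report at scale.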
Again, while participating in the development of AI in the integrate perspective, radiologists doubt the statistical validity of AI and point to the disconnect of this technology from biological mechanisms.
Cost-effectiveness and efficiency of AI. Fifth, radiologists question the cost-effectiveness of AI in radiology. Considering the attempts to de-professionalize medicine in contemporary hospitals and to transform it into a corporate profession (Ritzer and Walczak, 1988), this argument carries special importance; it means that physicians “go after” AI on grounds of cost-effectiveness rather than expertise. For example, radiologists urge administrative wariness of additional computing costs: … more complex tasks require more computing power. Uncertainty remains surrounding the degree of processing required to run these advanced programs and hospitals may not be prepared for additional network requirements. (Wong et al., 2019: 142–143) Physician skepticism of health care informatics is justifiable given the poor design of so many electronic hospital records systems, the additional clerical burden assumed by physicians, the barrier created between physicians and patients, and the contribution of EHRs to physician dissatisfaction and burnout. (Moore et al., 2019: 515)
These challenges to the ethics, expertise and effectiveness of AI point to the rhetorical dance and elaborate maintenance work of radiologists, based on distinct values of the two scientific cultures. Reliable standardization versus chaos; explainability versus opacity; ethical and responsible public service versus a market-driven, capitalist approach; trustworthiness versus overfitting; and modest saving versus technological extravagance are at the core of these distinction efforts, which are made while establishing shared associations and hospital data centers, participating in competitions, and providing labels for AI.
Assimilate
While in the larger part of their discussion, radiologists view their expertise as superior, powerful and enduring, and AI as an exciting partner or a limited tool, on the discourse margins, automation of radiology is considered a prospective option, a future that should be discussed. Assimilation means that radiologists are anticipating a profound change in their profession and expertise—in the way knowledge and practices are produced and in the professional identity of those who are “allowed” to practice image diagnosis. Although none of the radiologists viewed their expertise as redundant at the present time, when they discuss automation as a possibility, they acknowledge an upcoming change in radiological expertise in the face of AI's disruption. For example, some argue that the human expert may soon be replaced by an AI system: “The idea to dismiss mammograms that are categorized as very likely normal without any human reader interpretation is the logical next step” (Geras et al., 2019: 251). Affiliated with both radiology and data science centers, the authors embrace full automation as a viable option in radiology, yet point to AI's limitations at the end of their article and reserve clinical evaluation for physicians. These rhetorical maneuvers suggest a centrifugal force operating in the radiology profession (Montgomery and Oliver, 2007), initiating outward-directed networking with the AI community's set of practices, norms and beliefs of machine superiority, due to an understanding of the role of AI and its supporters in twenty-first century expertise. However, as Montgomery and Oliver (2007) show, these efforts of assimilation may actually reinforce domain claims, as the profession stretches its boundaries to include new and innovative approaches, techniques and technologies.
This is a challenging maneuver to make. Faced with the capabilities of AI, and the wealth, vigor and youth of the institutional forces driving it, some radiologists view “resistance as futile”: Machine learning technologies are now deeply embedded in our medical information systems. These methods will ultimately be pervasive in the digital realm of radiology. Resistance really is futile. Despite substantial impediments, there are also tremendous forces in its favor, including powerful and wealthy IT companies such as IBM and Alphabet, along with the great equalizing force of data clouds, high speed networks, and most of all, brilliant young minds born into the digital age. (Choyke, 2018: 139) ML and related data science initiatives for medical imaging will succeed with greater access to accurately curated and publicly available data sets. The broad availability of the data set allows individuals with different backgrounds to explore nontraditional solutions, accelerating discoveries at a pace that is more rapid than that of the traditional scientific method. (Halabi et al., 2019: 501)
Halabi et al. (2019) reported an imaging diagnosis competition organized by the Radiological Society of North America (RSNA). While they “read” the way knowledge production is organized in the AI expertise, they call for a change in the way scientific research in radiology is organized; instead of an individual researcher working alone on a database, they open radiological research to non-radiologists and non-experts following the machine learning community's preference for open competitions on open datasets. While these efforts have been mentioned above as integrative, they might carry a profound change in the professional identity, knowledge and skill, and daily practice of the persons performing research in medical imaging, and ultimately, change the way medical scans are diagnosed in clinical settings.
Discussion
The reaction of radiologists to AI's disruption reflects the complexity evident in other expert fields in reaction to AI (Brayne and Christin, 2021; Goto, 2022) and the well-documented complexity of change in professional fields (Malsch and Gendron, 2013; Murray, 2010). Our motivation was to explore the reaction of a powerful and territorial medical profession to the potential major disruption posed by AI expertise. Our results, not surprisingly, depict a range of reactions, all directed toward maintaining medical dominance, even in the face of an equally powerful expertise, such as AI. Our contribution to the literature of institutional maintenance in professional fields lies in the concept of rhetorical dance. Radiologists are able to contain AI's penetration into their profession through maneuvering between different, cooccurring perspectives, each with its own internal logic, but all part of a larger dance of maintenance work.
In the first perspective we found, integrate, radiologists argue for the integration of their profession with the AI expertise—they learn “to read” AI methods and terminology, integrate their knowledge into AI systems, and establish shared institutions with the AI community, in what seems a productive cooperation with an exciting and transformative new technology and expertise.
At the same time, radiologists make certain that AI technology is not seen as a set of automated procedures that may replace their expert labor or their discretion in complicated cases, as experts in other fields do (Goto, 2022; Christin, 2017). They demonstrate their professional power in the second perspective, absorb, when they frame AI as yet another technology, a mere tool for the expert radiologist, and not as disruptive or transformative. Advancing the technology while subordinating it supports the status of radiologists as technology-oriented physicians, masters of both medicine and technology.
Concurrently, radiologists also engage in a typical professional struggle over the imaging jurisdiction, with their fight-off and de-legitimize perspective. Framing AI as inaccurate, unstandardized and unreliable, opaque, unethical and not cost-effective, the radiologists ensure that they maintain their legitimate dominant position in the imaging jurisdiction. They clarify that no other party is able to judge AI's performance and be liable for its decisions, and that public opinion, state regulation and hospital management consider radiologists’ discretion to be more accurate, standardized, efficient, ethical and responsible than that of AI experts.
Out of the four rhetorical perspectives we describe in our study, the fourth perspective, assimilate, is of special interest in the context of professional maintenance as it depicts an allegedly lost battle of radiology as an expert profession facing AI. Here, radiologists open the boundaries of their profession to the AI community's logics of algorithmic abstractions and trust in machine judgment. This way, they can affiliate their specialty with wealthy IT companies and young, enthusiastic AI experts. Radiologists do so because they are aware of the benefits of joining these heterodox forces (Malsch and Gendron, 2013). In assimilating into the logics of the AI culture, radiology gains the image of an innovative profession, inside and outside the medical field, a coproducing partner in the development of advanced technology.
These four cooccurring perspectives allow the radiology profession to undergo a change in its knowledge base and expertise while aiming to maintain its current power structure (Micelotta and Washington, 2013; Montgomery and Oliver, 2007). Following Farjoun (2010), we see these four perspectives of radiologists as interdependent expressions, a rhetorical dance for professional maintenance work. Unlike other concepts related to institutional maintenance, we depict a rhetorical dance as a spectrum of relational perspectives that emerges when looking closely, through an interpretive prism, at the scientific discourse of the profession (Oliver, 2004). Rhetorical dance differs from previous concepts of institutional maintenance such as experimentation (Malsch and Gendron, 2013), which views institutional work as a fragile process, subject to trials and tests in experiments, while trying to extend the professional jurisdiction and apply institutional reproduction to enhance legitimacy. It differs also from the notion of conflictual hybrids (Murray, 2010), where hybrids emerge from conflict. Rhetorical dance differs as well from the conventional notion of boundary work, where only protection is used to maintain the resilience of professional logics, and from the concept of micro-level ambiguity in managerial logics, where ambiguity is performed on the ground but not formalized in scientific publishing (Currie and Spyridonidis, 2016). Finally, rhetorical dance differs from boundary-spanning activities and networking (Montgomery and Oliver, 2007), where professional boundary maintenance work is based on dual and intermittent centripetal and centrifugal forces.
In our paper, as others who use the dance metaphor (Ortmann and Sydow, 2018; Wallenburg et al., 2021), we show that radiologists are able to spin a range of cooccurring arguments, of both attraction to AI and rejection of it, and the moves in between, allowing them to decipher the boundary crossing of external experts who offer alternative expertise, and to contain the uncertainty and disruption caused by AI.
The introduction of AI into professions has been described as a vision of innovative automation of expertise. However, a close reflection on experts’ reaction to AI reveals the active and agentic role played by professionals across expert fields, their evaluations of technology and their maintenance efforts, as an important force in processes of innovation (Christin, 2017; Faulconbridge et al., 2021; Goto, 2022; Stevens et al., 2022). We hypothesize that this “one-step integrate, two-steps fight-off, spin and subordinate, turn and assimilate” dance that radiologists perform with AI will persist, and more so, that it is not unique to radiology. While AI experts are aiming to enter powerful professions, and specifically medicine, the battle to maintain the integrity of existing fields will continue.
Future research in the study of AI in expert labor should compare different expert fields’ reactions to AI's disruption and offer a systematic review of resistance strategies. In radiology, research should study the profession's perspectives over time, to examine whether radiologists continue to adhere to mixtures of perspectives or converge on one.
One limitation of this study is the lack of distinction between radiologists’ perspectives regarding the AI technology itself and the AI experts producing it. This lack of distinction stems from the absence of such distinction in the data; the radiologists do not make this distinction. However, analytically, there is a difference between the technological tools and the experts developing them. This difference should receive research attention. Another limitation is that we present a sample of reactions that appear in the academic journals of radiology. Obviously, additional arguments may be gained through interviews or observations of radiologists conducting their work. Such additional data sources should be applied in future research.
Acknowledgements
The first author would like to thank the Hebrew University of Jerusalem for hosting her as a postdoctoral fellow at the time of the research.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
