Abstract
In a multi-institutional National Science Foundation artificial intelligence grant application, we generated ideas for an institute that can leverage intelligent agents for cybersecurity. My own role, as co-principal investigator, concerned communication involving risk, safety, ethics, and inclusion (broadly, diversity, equity, inclusion, and accountability, or DEIA), drawing on my resilience scholarship and prior National Science Foundation funding in ethical development, ethical design, and the professional formation of engineers. This essay does not present the grant itself (which is being revised for resubmission) but outlines some considerations regarding risk, resilience, and ethics in artificial intelligence.
In early 2020, the world faced the emergence of the COVID-19 pandemic, with unknown origins and long-term consequences as well as questions about who would suffer most in health, relational, and economic terms. Millions died. Lockdowns eroded supply chains for consumer products and commerce. Isolation contributed to declines in individuals' mental and physical health, among other difficulties. U.S. COVID-deniers, anti-vaxxers, and those who prided themselves on self-reliance and personal control over health risks expressed their feelings on Twitter, whereas Chinese Weibo users tended to frame their responses more communally, provide more positive content, and follow collective rules, such as avoiding posts about contradictory political or personal accounts (Luo et al., 2021).
The case of the COVID-19 pandemic displays the many complexities and the evolving nature of risk and safety when conditions are disputed and/or ambiguous and when seemingly competing ethical dilemmas, such as the preservation of individual lives or of livelihoods, come to the forefront. Similarly, ChatGPT, the Generative Pretrained Transformer developed by OpenAI, presents cybersecurity and other opportunities to search for and remedy system vulnerabilities. But ChatGPT also poses as-yet-undetermined threats to community health, education, businesses, and other sectors of society (e.g. Biswas, 2023). In the COVID-19, ChatGPT, and other cases, greater understandings of risks and resilience are necessary. These understandings involve the need to delve into power dynamics and unequal access to resources and to have transparent procedures and processes with regard to artificial intelligence (AI) and safety concerns on micro, meso, and macro levels.
The call for greater understanding can be met by media and communication scholars in conjunction with other experts in specific contexts. To describe where priorities lie in understanding AI and other issues, my essay moves from (a) a discussion of how people understand, conceptualize, measure, and implement findings about risk, safety, and resilience in different contexts in which there are crises (Buzzanell, 2023); to (b) ideas concerning risk and resilience that are embedded in a funding proposal for a multidisciplinary AI institute on which I am co-principal investigator (co-PI) for the diversity, equity, inclusion, and accountability (DEIA) and broader impacts portions; and to (c) questions concerning ethics, AI, and surveillance. I conclude with strategies for fairness and inclusion in AI.
Understanding risk, safety, and resilience
Risk, safety, and resilience are socially constructed and locale-specific, meaning that they are socio-politically, economically, historically, and culturally situated. Risk is the "potential for loss" (e.g. deaths and financial losses during the pandemic; fears about unbridled AI autonomy, negatively affected interpersonal relationships, and military uses) that becomes actualized via triggers (e.g., virus spread, lockdowns, fear of vaccines; the introduction of ChatGPT), and it is through those same triggers that resilience, often considered an adaptation and capacity to cope with changes and threats, is activated (Tierney, 2014). Risk and resilience are two sides of the same phenomenon that become apparent during crises. In other words, risks are unknown until after crises erupt (Seeger et al., 1998). For example, the isolation associated with reduced human interactions and other challenges experienced by people worldwide during the pandemic exacerbated anxiety and depression, especially among women, leading to mental health crises globally and in China (Jiang et al., 2022).
When crises occur, they prompt action. They lead to studies of what can constitute risks and safety in particular contexts (e.g., supply chain disruptions might lead to food insecurity and inflated prices on scarce commodities). These investigations can result in new and revised laws, policies, oversight and standards, and reevaluations of and/or new procedures for preventing and handling crises. For example, pandemic-caused mental health difficulties led to interventions such as the use of Replika, an AI-powered chatbot companion that provided different forms of empathy and support to its users (Jiang et al., 2022). Chinese governmental interests in poverty reduction led to infrastructural policy changes and investments in fiber optic broadband capabilities so that those in the poorest counties in rural China could access tele-education and tele-healthcare as well as e-commerce (Gu et al., 2023). Ironically, these internet changes intended to improve living conditions also facilitated rural-to-urban migration (Gu et al., 2023).
Crises also expose the underbelly of societal infrastructures. Crises reveal groups, institutions, ideologies, and material structures that are vulnerable. For instance, there are people in need who cannot afford the costs of internet access, software, and hardware for Replika. Crises reveal multidimensional vulnerabilities in resources and power (Limantė and Tereškinas, 2022) but also whether and how stories about particular vulnerabilities are represented in the media. Sometimes researchers, media specialists, and policy makers fail to anticipate, or strategically omit, the groups that might be most affected by risk and crises (e.g. children isolated in homes with adults under extreme strain suffered violence during the pandemic; the elderly have been swayed more easily by scams and digital propaganda; ChatGPT and similar AI models have not been regulated sufficiently; Hacker et al., 2023; Lillywhite and Wolbring, 2023; Waymer, 2020). Furthermore, underresourced communities, such as those living in coastal areas affected by climate change, might not have personal funds or access to governmental grants to rebuild using the latest materials and technologies to withstand the next disaster. In other words, what might be codified as a risk by one stakeholder group might be a necessary or desirable lived condition and/or might be depoliticized for and/or by other groups. For example, tourists climb steep mountains as adventures in which presumed safety protocols mitigate the risk of injury or death, whereas Sherpas assess risk differently since climbing the Himalayas is an occupational necessity. In the global engineering design team I co-instructed, Ghanaian rural village residents drank contaminated groundwater because there often was no other water available. The risk of not drinking water outweighed the costs of illness and disease that might result from unsanitary conditions.
Finally, crises and risk assessments also display deep and entangled layers of problems in society, problems that become "wicked" insofar as they are linked with poverty, lack of education, electricity, and other resources, and disenfranchisement or other disparities, such as those faced by minority employees exposed to the COVID-19 virus in the early pre-vaccination period because they were designated as essential workers in service, manufacturing, and transportation industries (Buzzanell, 2023; NASEM, 2022). Similarly, those without AI and media literacy might not know how to assess the information they receive. Crises and risk assessments also are fraught with knotty ideological assumptions and socio-political tech development (e.g. depoliticizing AI ethics in "ethics washing" or exaggerating businesses' altruistic interest in ethical AI while such corporations preserve or increase returns on investment; van Maanen, 2022; Vesa and Tienari, 2022). In other words, risk, safety, and resilience in AI and other phenomena are constantly evolving, multi-level, and socially constructed and codified in ways that can cause additional issues or problems at other layers, meaning that what seem to be individual problems often are linked to macrostructures nationally and globally. These kinds of contestations and opportunities are built into human–AI interactions.
Designing a multidisciplinary AI institute
I have been co-PI on a U.S. National Science Foundation (NSF) $20 million multidisciplinary, multi-site proposal for an AI institute. Broadly speaking, this institute has been designed to make visible AI risk, safety, and resilience in cyber defense systems by synergistically integrating foundational and use-inspired AI (i.e. research and innovations mutually driving each other within and across contexts like health, media, education, and manufacturing). In this way, design ideation, prototyping, and implementation become virtuous cycles that can achieve goals. Issues with which I and others have been grappling are (a) DEIA considerations for AI and (b) translating and communicating AI–human interactions and impacts.
Engaging theoretically and practically with DEIA considerations for AI
In my work with engineers over the last couple of decades, I have been intrigued by how they ascertain what risk and safety mean in projects and when using criteria that appear objective, measurable, and professionally or legally determined. Risk and safety are not all-or-nothing but are measured by who is involved, what spatio-temporal and material matters are considered, and how AI can assist and/or hinder people and goals in a variety of processes. AI risks and opportunities are contested in areas of medicine and healthcare, disaster and security breach forecasting, organizational membership such as recruitment and hiring, smart city designs, agricultural equipment and forecasting, responsible teaching and learning, and much more. For AI, coverage of the social and cultural, particularly for marginalized populations and with respect to DEIA, is lacking (Lillywhite and Wolbring, 2023).
The AI institute proposal addresses safety, or hedging against risks associated with AI and other areas, in decision-making, security leaks, and other crises (see Braman, 2009; Gandy, 2021; NASEM, 2021) locally and transnationally (Bajaj et al., 2021). Working through different issues systematically can offer revisions in the ways individuals and collectivities enact resilience as anticipatory and/or reactive to disruptions and challenges as well as in how they can change or transform their lived experiences online and offline (Buzzanell, 2010, 2018). In considering how to harness risk and safety to improve conditions for people in diverse contexts, the AI institute plan integrates DEIA throughout, not as an add-on to the technical research but as thoroughly woven into every step. In doing so, attention is paid to the different disparities that can cause social and economic harm and that can delay or prevent meaningful decision-making and recovery for vulnerable groups as well as economies at large (Betts and Buzzanell, 2022).
Translating and communicating AI–human interactions and impacts
Besides engaging theoretically and practically with DEIA considerations for AI, researchers struggle with how to translate data into, and communicate, decisions and guidelines for broad impact. Translating and communicating AI–human interactions as well as the decisions that result requires that transparency, explainability, and ethics be embedded throughout the start-to-finish design and implementation processes. In 2017 and 2018, the Chinese government and the European Union released development plans that included the need for explainable AI so that all stakeholders, including potential users, affected people, and AI system developers, can understand how, why, and for whom AI decisions are being made (e.g. AI-enhanced medical diagnoses and treatment plans; Xu et al., 2019). Without efforts to bridge the social and the technical through transparency in the entire design process, the implementation or impact parts of AI institute plans cannot be achieved.
For impact, matters that are consequential include different kinds of data, metrics for assessment, and criteria explicitly earmarked for risk, crises, and resilience in different contexts (Buzzanell, 2023; NASEM, 2021). Metrics for assessment include breadth, utility, comprehensiveness, scientific merit, feasible use by relevant stakeholders, and so on (e.g. Johansen et al., 2017). Regarding AI, researchers expand these ideas for explainable, transparent, safe/secure, and trustworthy AI in computing, methods, and other design aspects (NASEM, 2021; Wing, 2021), knowing full well that without attention to the entire design process, there can be unanticipated harms such as “AI algorithms run amok” (Thomas and Uminsky, 2022: 1).
Productive design for guidelines and harm reduction requires thoughtful action. Thoughtful action demands inclusivity in stakeholders, such as the research teams themselves (Hattery et al., 2022; NASEM, 2021; Thomas and Uminsky, 2022), with attention to empathic resilience in human–AI interactions (Jiang et al., 2022). Communicating guidelines entails the construction of narratives from diverse perspectives and crises (NASEM, 2021, 2022; Thomas and Uminsky, 2022), expertise (Barley et al., 2022; Izumi et al., 2019), and constant questioning of conditions under which models and communication are developed (Leonardi et al., 2021).
Taking these points on translating and communicating AI–human interactions and impact together, the NSF grant on which I am co-PI involves different kinds of professionals and potential users for whom we would design our institute's educational and social media deliverables. These 28 experts from 14 different institutions are computer scientists, engineers, social and behavioral scientists, educational psychologists, and communication researchers who have constructed logic models for formative and evaluative assessments, among other tools. Future work would involve developing an external advisory board and conducting external algorithmic audits (Thomas and Uminsky, 2022). These practices can address evolving threats through continuous monitoring and anticipation of risks, safety, and resilience.
In summary, our institute aims to integrate human and non-human agents across the entire design process in the several contexts mentioned earlier (e.g. manufacturing, automotive) to examine socially aware AI against our different metrics and aims (i.e. explainable, transparent, proactive as well as reactive, inclusive). In doing so, we are bringing together foundational AI, in terms of basic research, and use-inspired AI, which would entail not only the application of AI but also the constant (re)generation or co-production of knowledge through diverse teams and inclusionary logics. In this reciprocal knowledge production, we anticipate direct impacts on society. We would proactively anticipate code that might inadvertently disenfranchise particular groups or use language that might discriminate against those groups. Moreover, we would develop algorithmic systems that would constantly regenerate for inclusionary design.
Questioning ethics, AI, and surveillance
Finally, I return to a basic question about whether it is even possible to engage in ethical AI and surveillance. Media and communication scholars presume that we can, but this assumption means that researchers and practitioners need to consider the opposite, namely, the impossibility of ethical AI in design and human–machine interactions. Throughout this essay, I have noted some ways to examine these ethical challenges, which I first pull forward; I then draw further ethical communication considerations surrounding AI from Gunkel's and Monahan's scholarship.
In pulling ethical challenges forward into this last section of my essay, our AI institute would incorporate several phases: embedding DEIA in all design aspects to develop algorithms that continuously learn how to combat algorithmic biases focusing on data, method, and human capabilities (Akter et al., 2021); auditing databases (algorithm creation, target parameters, objectives, and confounds; Chi et al., 2021; Imana et al., 2021); including broad competencies; and integrating DEIA into human–AI teams (Oliver and Elwell, 2018). These phases would mean that institute members engage in horizontal design holistically for knowledge co-production and implementation (Gallagher et al., 2021; McCabe et al., 2021; Schemmer et al., 2021). These phases incorporate constant surveillance of risk, safety, and resilience as both adaptive and transformative (Buzzanell, 2018) in meaning and in practice.
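To make the auditing phase above more concrete, the following is a minimal illustrative sketch in Python, not drawn from the proposal itself, of one common audit step: comparing selection rates across groups in an AI system's logged decisions. The group labels, the sample decisions, and the four-fifths threshold are hypothetical choices for demonstration only.

```python
# Illustrative sketch of a selection-rate audit (hypothetical data and threshold).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, selected_bool) pairs logged by a system."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest group's
    rate (the informal 'four-fifths' heuristic used here only as an example)."""
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Hypothetical audit of a screening system's outputs for two groups, A and B.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                          # per-group selection rates
print(disparate_impact_flags(rates))  # groups flagged for further review
```

A check of this kind is only a starting point; the phases described above also call for scrutiny of how the underlying data, target parameters, and confounds were constructed in the first place.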
Second, in examining ethical challenges around surveillance risks and AI more generally, for Gunkel (forthcoming), ethical considerations circulate around a series of questions. These questions involve the changing roles and functions of technology, the nature and identity of legitimate social subjects, and the consequences for humans and other entities, such as what happens after technologies are designed and delivered. In these areas, who and/or what are considered responsible or moral agents and subjects is negotiated. Questions with profound implications surround the assignment of culpability when communication is deceptive, when authorship is murky, and when people continue to impose binary moral categories, that is, human or thing, on experiences with AI.
Taking a different tack, Monahan (forthcoming; see also Noble, 2018) encourages examination of the code itself and of the agents that introduce bias into data and design when he defines and explicates algorithmic surveillance, referring to surveillance performed by and hidden in computer code. Furthermore, human–AI teams are prone to biases and errors (NASEM, 2021). Examining technological "glitches" and conducting risk-resilience analyses can reveal deep, persistent discriminations (Crawford, 2021; Tierney, 2014).
To conclude on the impossibility of ethical surveillance, Monahan reminds media and communication researchers both that "the world needs to remember that who codes matters, how we code matters, and that we can code a better future" (Algorithmic Justice League, 2022) and that "when [AI's] production and deployment include diverse perspectives and deliberate processes it can be beneficial" (Distributed Artificial Intelligence Research Institute, 2022). To these ends, current research (e.g. Birhane et al., 2022; Thomas and Uminsky, 2022) provides analytic schemes and metrics that emphasize concrete and potential harms or risks. Such work moves media and communication scholars, as well as our AI institute, from abstract discussions about fairness in AI, its potentially marginalized users, and debates about the impossibilities of surveillance and ethics, into spaces where safety and resilience are possible.
To conclude, this essay poses some ways in which media and communication scholars can constantly question risk and resilience. It also presents challenges for a proposed multidisciplinary AI institute, and for media and communication researchers in general, to design and implement research teams, study procedures, and impacts that incorporate DEIA and underrepresented stakeholders. Finally, the ethics of AI, human–technological interaction, and algorithmic justice in creating sustainable systems is discussed, not to present solutions that might be outdated even before this essay is published, but to provide guidelines and central ethical issues that underlie all the work that scholars, policy makers, and potential users must continuously question in the inaugural and subsequent issues of Emerging Media.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
