Abstract
Big Data and Artificial Intelligence have a symbiotic relationship. Artificial Intelligence needs to be trained on Big Data to be accurate, and Big Data's value is largely realized through its use by Artificial Intelligence. As a result, Big Data and Artificial Intelligence practices are tightly intertwined in real-life settings, as are their impacts on society. Unethical uses of Artificial Intelligence are therefore a Big Data problem, at least to some degree. Efforts to address this problem have been dominated by the documentation of Ethical Artificial Intelligence principles and the creation of technical tools that address specific aspects of those principles. However, there is mounting evidence that Ethical Artificial Intelligence principles and technical tools have little impact on the Artificial Intelligence that is created in practice, sometimes in very public ways. The goal of this commentary is to highlight four interconnected areas society can invest in to close this Ethical Artificial Intelligence publication-to-practice gap, maximizing the positive impact Artificial Intelligence and Big Data have on society. For Ethical Artificial Intelligence to become a reality, I argue that these areas need to be addressed holistically in a way that acknowledges their interdependencies. Progress will require iteration, compromise, and transdisciplinary collaboration, but the result of our investments will be the realization of Artificial Intelligence's and Big Data's tremendous potential for social good, in practice rather than just in our hopes and aspirations.
Calls for Ethical Artificial Intelligence (AI) are increasingly urgent. Yet even as a first wave of responses coalesced around Ethical AI principles (Hickok, 2021), a second wave cultivated Ethical AI technical tools, and a third wave now motivates litigation and advocacy (Kind, 2020), published Ethical AI principles and technical tools still have limited impact on the daily practices of AI users and producers (Schiff et al., 2021; Vakkuri et al., 2020). I propose that focusing research and resource investment in four key areas would meaningfully shrink this “Publication-to-Practice” gap.
Investment area 1: Incentivizing the dissemination and integration of technical tools
Many technical tools are available to help mitigate AI-related ethical challenges, but AI product teams report not having adequate access to them (Rakova et al., 2021; Schiff et al., 2021; Vakkuri et al., 2020). Technical tools are not sufficient to close the Ethical AI Publication-to-Practice gap, but they are one of the most scalable mechanisms for narrowing it. Therefore, it is critical that the Ethical AI community expand not only the set of technical tools it creates, but also the ways those tools are disseminated and integrated into real-world AI workflows. At present, academic incentives reward publishing novel methods over the documentation, maintenance, and hands-on support that make tools usable in practice, leaving structural hurdles between published tools and the product teams who need them.
One way to reduce the impact of these hurdles would be for professional AI organizations to create open-source platforms and funding opportunities for disseminating and citing practical Ethical AI tools that can be recognized in academic settings. Another approach would be to sponsor competitive opportunities for Ethical AI researchers to work directly with AI product teams to gain appreciation for implementation gaps and co-create strategies for bridging them. Yet another strategy would be to launch training programs for technical contributors who want to specialize in Ethical AI, so that more workforce members are adequately prepared to implement technical Ethical AI tools. Overall, the Ethical AI community needs ingenuity and commitment from both within and outside of research communities to bring to light the structural barriers that keep the Publication-to-Practice gap open, and eventually, to overcome them.
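To make “technical Ethical AI tools” concrete, below is a minimal, illustrative sketch of one such tool in use: the open-source Fairlearn library computing basic group-fairness diagnostics for a model's predictions. The labels, predictions, and group memberships here are synthetic placeholders, not drawn from any real system.

```python
# Illustrative only: auditing predictions with the open-source Fairlearn
# toolkit. All data below are randomly generated placeholders.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)      # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)      # a model's predictions
group = rng.choice(["A", "B"], size=1000)   # a sensitive attribute

# Gap between the groups' positive-prediction rates (0 means parity).
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.3f}")

# Accuracy broken down by group, to spot disparate performance.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true,
                    y_pred=y_pred, sensitive_features=group)
print(frame.by_group)
```

Even a check this small presupposes someone on the product team with the time, mandate, and training to run it and interpret the results, which is precisely what the investments described above are meant to secure.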
Investment area 2: Committed organizational leadership and resource development
Almost all people who contribute to AI creation do so through a parent organization, such as a corporation, university, hospital, or governmental agency. AI teams repeatedly report that their organizations’ financial requirements, expectations, and compensation structures are not compatible with the investment necessary to fully grapple with the ethical problems their AI products pose (Findlay and Seah, 2020; Rakova et al., 2021). In fact, Ethical AI work is often done on a volunteer basis, because there is no support for completing it as part of formal job descriptions (Rakova et al., 2021). Until the people who are creating, packaging, applying, scaling, and monitoring AI feel confident that the time Ethical AI requires is consistent with what their organizations want them to prioritize, ethical issues are likely to go largely unaddressed, regardless of what technical tools are available.
Thus, one part of this investment area should be to cultivate funding for research and experimentation on how organizational leaders committed to Ethical AI can be recruited and given a competitive advantage in today's social and financial ecosystem, which typically prioritizes shareholders (Harrison et al., 2020). One of the most helpful steps in this regard would be to develop metrics and external committees for assessing Ethical AI commitment and performance that could be incorporated into CEOs’ reviews and compensation contracts.
Another part of this investment area must be to build and disseminate evidence about how organizations and leaders who are genuinely committed to Ethical AI can proactively ensure their culture, processes, and structures facilitate ethical uses of AI. Translating Ethical AI principles into an organization's daily practice is a deceptively formidable challenge, beyond the perceived tension between ethical and financial bottom lines. One requirement is an organizational culture that fosters productive ethical deliberation (Buhmann and Fieseler, 2021). Deliberation is required when issues do not have obviously “right” answers or moral “owners,” as is often the case when implementing currently available Ethical AI tools (Lee and Singh, 2021). Ethical deliberation is most successful when it engages contributors with different backgrounds and organizational roles (Thompson, 2007). However, asking diverse people to debate ethical issues can be counterproductive and divisive if not approached skillfully (De Cremer and Moore, 2020). Trained facilitators can help, but it is unrealistic to require their involvement in all ethical product decisions, especially those implemented at the same cadence as agile software development. Even when facilitators are available, for the deliberations to be instructive, participants have to be given resources before, during, and after the process to help them develop interpersonal skills supporting introspection, openness to feedback, and participation in sometimes emotionally charged disagreements (Smith and Kouchaki, 2021). The success of such efforts is deeply affected by whether the participants feel psychologically safe and are empowered to take actions based on the results of the deliberations (Findlay and Seah, 2020; Smith and Kouchaki, 2021). These types of issues have been largely left out of Ethical AI conversations, allowing them to thwart Ethical AI Publication-to-Practice efforts (Rakova et al., 2021).
Moving forward, we need qualitative and quantitative research that maps out how organizational aspects of moral learning impact the translation of Ethical AI tools in production settings, and clarifies how those impacts should be managed. A good way to facilitate this would be to commission university corporate relations teams to match organizations looking for Ethical AI guidance with interdisciplinary academic researchers who can conduct rigorous research in organizational settings. Cultivating such collaborations would not only help organizations receive evidence-based guidance about how to meet their Ethical AI objectives, but also make it more likely that the lessons learned while doing so will be widely shared through publications.
Investment area 3: Career-long training in ethical problem-solving within social systems
One proposal for manifesting Ethical AI is to include trained ethicists in all AI teams (McLennan et al., 2020). While this “embedded ethics” approach is laudable, it may be cost-prohibitive. Further, many Ethical AI issues require technical knowledge to identify or address appropriately, such as when nuanced training data characteristics lead to biased AI models or when modeling choices make AIs particularly susceptible to adversarial privacy attacks. Given the pace at which AI products are created, it is unreasonable to expect ethicists to garner this type of deep technical understanding of every problem, even when they are integrated into AI teams. Such an expectation seems particularly unreasonable when third (fourth and fifth) parties become involved with AI products through AI-as-a-Service offerings, because relevant technical information will need to be integrated from highly siloed contributors who may not be used to communicating with non-technical collaborators. A more feasible model would allow those who create technology to play an active role in analyzing and addressing ethical problems as they arise, alongside embedded ethicists when they are available (Gogoll et al., 2020). However, most AI contributors are not prepared for this role because they have had few opportunities to analyze the social impact of a requested piece of technology and have had little exposure to analytical discussions about ethics (Borenstein and Howard, 2021).
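As a concrete, hypothetical illustration of the kind of technically subtle issue mentioned above, the sketch below shows how one unremarkable training-data characteristic, the underrepresentation of a group whose data follow a slightly different pattern, can produce a model with a markedly higher error rate for that group. The data-generating process is invented for this example; no real dataset or system is implied.

```python
# Hypothetical sketch: group imbalance in training data surfacing as
# a biased model, with no malicious intent anywhere in the pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Two features whose relationship to the label differs by group:
    # group B's (shift=1.0) decision threshold is offset from group A's.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift)
    return X, y.astype(int)

# Group A dominates the training sample; group B is underrepresented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# On fresh samples, the error rate is noticeably higher for group B,
# because the model mostly learned group A's decision boundary.
for name, shift in [("A", 0.0), ("B", 1.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"Group {name} error rate: {1 - model.score(Xt, yt):.3f}")
```

Spotting and diagnosing a problem like this requires fluency with the data and the modeling choices, which is exactly why the contributors who build AI need to be equipped to participate in ethical analysis themselves.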
To prepare AI teams, we must invest in cultivating frequent opportunities for AI contributors at all levels of seniority to practice and receive feedback about how they identify, navigate, and address the societal implications of AI products. This type of training needs to be incorporated into technical academic programs (Borenstein and Howard, 2021), but since most AI products are created by contributors who have already completed their formal training, it must be available on the job as well, including for managers and executive decision-makers. Ethical decision-making skills—such as identifying relevant moral issues, correctly analyzing the moral impact of possible courses of action, and implementing choices that are most consistent with praiseworthy values (Tanner and Christen, 2014)—should be one focus of these training exercises. Less appreciated, the training opportunities must also cover the interpersonal and communication skills, described earlier, that are needed to debate and introspect in interdisciplinary teams, as well as the practical problem-solving skills that allow teams to secure buy-in for ethically acceptable, but imperfect, decisions in real-life settings with conflicting pressures from interconnected stakeholders (Boyatzis et al., 2017). This type of proficiency requires emotional intelligence, knowledge about organizational constraints, and facility with “systems thinking” (Silva et al., 2018). Courses and workshops that teach these skill sets must be developed, and the most promising educational experiences should be made widely accessible. Simultaneously, organizations using AI should be coached to cultivate learning cultures that reward curiosity and growth, and to appoint “transformer” chief learning officers who work with AI, ethics, and business leaders to design, iterate, and track training experiences, tailored to each organization, that develop these skills at all levels of seniority (Lundberg and Westerman, 2020).
Investment area 4: Agile public policy and regulation
The initiatives described in the first three investment areas will not meaningfully motivate organizations or leaders who do not believe Ethical AI is a priority. Therefore, we must also invest in public policy mechanisms that provide at least minimum ethical guardrails on the AI products that are created. There is wide consensus that AI needs some kind of government regulation, even among leaders of well-resourced AI-heavy organizations like Microsoft, Google, and IBM (Kharpal, 2020). Nonetheless, like AI product teams, policymakers report that Ethical AI principles have had disappointingly little impact on the creation of policy in practice (Stix, 2021). Enforced Ethical AI public policy could reduce the Publication-to-Practice gap in at least three important ways. First, of course, it would reduce the likelihood that AI product teams make choices that are inconsistent with the societal goals of the policies. Second, it would lessen the extra bandwidth AI teams have to muster to address ethical issues by reducing the number of ethically laden judgments they have to make without guidance. Third, it would disincentivize “races to the bottom” in which teams cut ethical corners to release products faster in a market they want to control.
The dominant challenge with AI policymaking is that public policies and laws take a long time to enact, and there is a valid fear that incomplete or suboptimal policies will be difficult to correct. To make progress, then, we need to invest in nimble policy mechanisms that can be used to test and improve AI public policies before they are proposed through more permanent mechanisms. Examples of this kind of “Agile Policy” include the “regulatory sandboxes” Singapore created to allow autonomous vehicles to be tested on its streets without changes to national laws, and adaptive regulations that manifest time-limited decisions through instruments like “sunset clauses” (Bennear and Wiener, 2019). A different approach would be to sponsor experimental regulatory markets in which private corporations compete to provide high-quality regulatory AI services (Clark and Hadfield, 2019). If such markets were implemented, governments could focus on regulating the regulators instead of the entire AI industry, and AI regulation would benefit from the speed of private innovation. Agile regulations might seem objectionable from ethical perspectives that assert regulations should perfectly reflect normative moral principles, but society will make faster progress towards Ethical AI with imperfect AI public policies in place than with no Ethical AI policy incentives or regulations at all (Stix and Maas, 2021). Successful Ethical AI policy will not only make it more likely that AI product teams create AIs that align with our collective moral principles; it will also help society trust the AIs that can truly make our lives better.
Ethical AI progress will require transdisciplinary collaboration and iteration
AI contributors feel there are more barriers preventing them from addressing AI ethical concerns than there are resources to help them, despite the proliferation of Ethical AI principles and technology (Rakova et al., 2021; Vakkuri et al., 2020). Society will need to invest research, debate, and funding in all four of the areas highlighted here to break through the invisible barrier that separates published Ethical AI tools from the AI that affects people's lives. The challenges in these areas are interconnected, even though discussions of them are traditionally siloed. To overcome them successfully, researchers will need to transcend traditional disciplinary boundaries, work closely with nonscientific stakeholders, and develop strategies that address the four investment areas as an interdependent set (Pohl et al., 2017). The Publication-to-Practice bridges that are created through this process will almost certainly be imperfect and require iteration. We should be prepared for that imperfection to feel uncomfortable, and maybe even insufficient, at times. Nonetheless, the benefits of our progress will outweigh the costs of imperfection. AI's exciting potential for societal good will only be realized if AI is used ethically. Together, let's expedite our investments in the areas described here so that AI reflects our moral principles in reality, not just in published ideals and assumptions.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Templeton World Charity Foundation and Duke Bass Connections (grant number TWCF0321).
