Abstract
Artificial Intelligence (AI) is the notion of machines mimicking complex cognitive functions usually associated with humans, such as reasoning, predicting, planning, and problem-solving. With constantly growing repositories of data, improving algorithmic sophistication and faster computing resources, AI is becoming increasingly integrated into everyday use. In healthcare, AI represents an opportunity to increase safety, improve quality, and reduce the burden on increasingly overstretched systems. As applications expand, the need for responsible oversight and governance becomes even more important. Artificial intelligence in the delivery of healthcare carries new opportunities and challenges, including the need for greater transparency, the impact AI tools may have on a larger number of patients and families, and potential biases that may be introduced by the way an AI platform was developed and built. This study provides practical guidance in the development and implementation of AI applications in healthcare, with a focus on risk identification, management, and mitigation.
Introduction
Artificial Intelligence (AI) is the notion of machines mimicking complex cognitive functions such as reasoning, predicting, planning, and problem-solving. With growing data repositories, improving algorithmic sophistication, and faster computing resources, AI is becoming increasingly integrated into everyday use.
In Canada, healthcare spending as a percentage of Gross Domestic Product (GDP) has been rising for decades, 1 and AI represents a potential opportunity to reduce burdens on overstretched healthcare systems. In recent years, progress has advanced to the point where AI systems can exceed human performance in certain tasks. 2 In Canada, AI applications are being developed for clinical and non-clinical settings, with examples including AI-enhanced peer review of radiological images 3 and tools for improving supply chain management efficiency. 4 While AI may hold great promise, the actual number of sustained implementations in healthcare settings presently remains limited. 5 As applications of AI continue to develop, the need for careful risk management becomes increasingly important.
This study proposes a risk management framework intended for use by boards, senior leaders, and risk managers in Canadian healthcare organizations when adopting and implementing AI.
Risk management in AI
The introduction of new technologies, particularly in healthcare settings, requires careful planning and risk management. To support the development of the proposed framework, a keyword search for articles providing guidance and risk management advice for healthcare administrators was conducted across prominent Canadian healthcare journals. Canadian healthcare journals were included due to the unique nature of the Canadian healthcare legislative system and its accountabilities. A diagram outlining the search criteria is included in Figure 1. These articles were reviewed to understand the current state of risk management guidance for a Canadian healthcare audience.
Figure 1. Literature search parameters employed in this study from prominent Canadian journals.
Ethical risks
Artificial intelligence applications in healthcare must comply with evolving regulations and ethical guidelines. At the time of writing, Canada does not have a regulatory framework specifically for AI applications. 6 In November 2020, the Office of the Privacy Commissioner of Canada presented recommendations for amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA), intended to help enable the benefits of AI while maintaining individuals' right to privacy. 7 Recently, the Government of Canada has proposed Bill C-11, which would impact the use of data in automated decision-making. 8 Canadian healthcare organizations would also be required to comply with provincial legislation as appropriate.
Efforts to adopt AI should also address inequities that may create further gaps in the quality of, access to, and delivery of healthcare services. McCradden et al. argue that biases in health data may represent a significant threat to the ethical adoption of AI. 9 Therefore, a critical step is ensuring a meaningful problem of interest is being solved using appropriate and representative data. Furthermore, while AI systems can perform certain tasks faster and more accurately than humans, health leaders must carefully consider and define the boundaries that would be placed on these systems to protect against the introduction of biases.
Suresh and Guttag proposed a framework for the distinct sources of bias that can arise from the use of AI systems, including historical, representation, measurement, aggregation, learning, evaluation, and deployment bias. 10
Governance risks
Wiens et al. argue that comprehensive governance and stakeholder engagement are essential to the success of any AI initiative. 11 In AI initiatives, stakeholders may include internal and external parties from varied backgrounds and roles. Moreover, stakeholders in healthcare-based AI projects should include representatives of patients and families. Stakeholder groups in AI initiatives include knowledge experts, decision-makers, and users. 11
Successful implementation of AI systems requires collaboration among all three stakeholder groups.
Governance risk can be demonstrated by considering the scenario where an organization is developing an AI system to support clinical decision-making. The development of AI systems generally benefits from larger amounts of data; however, this organization only has a small number of patient records for the problem of interest. The organization could consider using only its internal data, entering into data sharing agreements with other organizations, or creating synthetic data. Synthetic data are created via algorithms rather than being created by actual processes or individuals and may help improve performance of certain AI systems. 12 Moreover, the organization is interested in partnering with AI knowledge experts for the development and deployment of the system. In pursuing this AI application, ownership of each aspect of the solution, including where decision-making, accountability, and liability will reside, must be carefully defined and agreed to.
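The synthetic-data option described above can be illustrated with a minimal sketch. This is not a production technique: the field names and distributions are hypothetical, correlations between fields are ignored, and real deployments would use purpose-built generators with formal privacy guarantees.

```python
import random
import statistics

def fit_and_sample(records, n_samples, seed=0):
    """Fit a simple per-field Gaussian to numeric records and draw
    synthetic rows. Illustrative only: real synthetic-data tools model
    correlations between fields and add formal privacy protections."""
    rng = random.Random(seed)
    fields = list(records[0].keys())
    # Estimate a mean and standard deviation for each field independently.
    params = {
        f: (statistics.mean(r[f] for r in records),
            statistics.stdev(r[f] for r in records))
        for f in fields
    }
    return [
        {f: rng.gauss(mu, sigma) for f, (mu, sigma) in params.items()}
        for _ in range(n_samples)
    ]

# Hypothetical patient records (age in years, length of stay in days).
real = [{"age": 62, "los": 4}, {"age": 71, "los": 6},
        {"age": 58, "los": 3}, {"age": 66, "los": 5}]
synthetic = fit_and_sample(real, n_samples=100)
```

Even this toy sketch makes the governance questions concrete: who owns the fitted parameters, who is accountable if the synthetic distribution misrepresents a patient subgroup, and under what agreement the real records were accessed in the first place.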
Performance risks
Effectively building an AI system requires the clear definition of a solution objective. Thomas and Uminsky highlight the importance of choosing appropriate performance metrics and the business risks that may arise from incorrect choices. 13 Consider an AI application built to interpret radiological images. If, hypothetically, abnormal findings are present in only 1% of the images, a system that always declared no abnormal findings would be wrong only 1% of the time, yielding a misleadingly high accuracy of 99% while detecting none of the abnormal cases. Moreover, despite significant advances in recent years, no AI model presently produces perfect results. 14 Artificial intelligence systems may produce false positive and false negative predictions, and health leaders must carefully consider these impacts and what performance thresholds are tolerable.
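The hypothetical example above can be sketched in a few lines, showing how accuracy alone masks a complete failure to detect abnormal cases (the counts below match the 1% prevalence in the text):

```python
def evaluate(tp, fp, tn, fn):
    """Compute accuracy and sensitivity (recall) from confusion counts:
    true positives, false positives, true negatives, false negatives."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, sensitivity

# A degenerate model that always predicts "no abnormal finding"
# on 1,000 images, of which 1% (10 images) are actually abnormal:
acc, sens = evaluate(tp=0, fp=0, tn=990, fn=10)
print(acc)   # 0.99 -- looks excellent
print(sens)  # 0.0  -- yet it misses every abnormal case
```

This is why performance thresholds are usually negotiated across several metrics (sensitivity, specificity, positive predictive value) rather than accuracy alone, with tolerances set by clinical stakeholders.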
Forcier, Khoury, and Vezina present three scenarios where organizations could be held liable for damages stemming from the performance of an AI system. The first is when damages are claimed by a patient against the company that created the AI system, the second is when damages are claimed by a patient against a hospital or healthcare organization, and the third is when damages are claimed by the physician or healthcare organization against the company that created the system. Moving forward, organizations will need to be aware of evolving Canadian case and civil law and their potential impacts on liability in the use of AI systems in all scenarios above. 15
Recently, deep learning has emerged as a technique capable of processing large volumes of data to generate predictions. However, these systems may contain millions of parameters that govern how they function, and it can be difficult to trace how a particular decision was made. Interpretability of AI systems, that is, the ability to clearly explain and trace their decisions, is critical for enabling their adoption, as well as for their continuous improvement both pre- and post-deployment. 16
Implementation risks
Since the 1960s, thousands of AI models have been developed; however, very few have been implemented in practice. Utsun argues that this may be due to
Interactions between humans and AI systems must also be carefully considered. The intent may be to free up more time for clinicians to spend with patients, but AI systems could lead to unintended consequences such as increased time spent in front of computers instead. Greater involvement of end users, patients, and families is a key enabler of the viability of AI systems.
In 2021, the European Commission published its proposed Artificial Intelligence Act, which classifies AI applications into categories based on their level of risk.
Organizations adopting AI would need to consider which category a proposed application belongs to and consult appropriately with subject matter experts and stakeholders to determine satisfactory performance thresholds or the boundary conditions in which the system would operate.
Security risks
Artificial intelligence has the capacity to impact many patients and may have been built using numerous data sources. Therefore, careful consideration must be given to protecting these systems against vulnerabilities, cyberattacks, and unauthorized access while maintaining the integrity and confidentiality of personal health information.
Artificial intelligence systems are highly dependent on the availability of high-quality, reliable datasets. Even slight perturbations to datasets provided to AI systems can significantly alter their predictions or recommendations. In a controlled environment, Su et al. demonstrated that modifying a single pixel in images ingested into an image recognition system greatly altered its classification output. 19 This example demonstrates the critical need for strong cybersecurity strategies to protect against external threats and breaches.
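The sensitivity to small perturbations can be illustrated with a toy sketch. This is not the one-pixel attack itself (which targets deep networks via an optimization procedure); it simply shows how a tiny input change can flip the output of a classifier whose input sits near a decision boundary. All weights and pixel values here are hypothetical:

```python
def predict(weights, pixels, bias=0.0):
    """Toy linear classifier: positive score -> 'abnormal', else 'normal'."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "abnormal" if score > 0 else "normal"

# A hypothetical 4-pixel "image" sitting close to the decision boundary.
weights = [0.5, -0.2, 0.3, 0.1]
image = [0.1, 0.4, 0.0, 0.2]    # score = 0.05 - 0.08 + 0.0 + 0.02 < 0

perturbed = list(image)
perturbed[0] += 0.05            # change a single "pixel" slightly
# score rises by 0.5 * 0.05 = 0.025, crossing zero -> the label flips

print(predict(weights, image))      # normal
print(predict(weights, perturbed))  # abnormal
```

Deep networks have far more complex decision boundaries, which is precisely what adversarial attacks exploit; defences typically combine input validation, access controls, and monitoring for anomalous inputs.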
While AI has the potential to produce innovative improvements, a careful and deliberate assessment of security risks must be conducted prior to the start of each new project and updated throughout the lifecycle of the initiative. Organizations can seek information and guidance on cybersecurity measures from several organizations, including the Canadian Centre for Cyber Security, which has issued over 2,000 resources since 1998. 20
Risk management guiding principles
The principles presented here are intended to support health leaders in risk management from concept development through to implementation and monitoring of AI systems. The research team developed this list using the Osborn method to identify as many recommended actions as possible. This method involves generating as many answers as possible to a question of interest; the list is then validated against known standards or guidance documents and refined to remove redundant items. The team consulted existing risk management guidance documents to validate the list of guiding principles proposed in this study. 18,21-23
Clearly define the value proposition of AI systems
Consult widely with stakeholders, including clinicians, patients, and families to develop meaningful questions to be answered. Consider applications aligning with one or more dimensions of quality, which include accessibility, appropriateness, effectiveness, efficiency, equity, integration, patient-centredness, a population health focus, and safety. 24
Define problems based on areas of need and then identify data requirements, as opposed to selecting problems based on available data. Obtain feedback from independent stakeholders on the utility of proposed applications. Consider frameworks including the Learning Health System for sustainably transforming processes with data and knowledge. 25
Establish comprehensive governance and oversight
Consider creating a standalone AI Steering Committee with oversight on initiation, planning, execution, and monitoring of AI projects. Steering Committee membership should include but not be limited to clinicians and other end-users, patient and family advisors, ethicists, risk managers, policy-makers, administrative leaders, and information technology professionals. Carefully evaluate user-system interactions to understand where processes and functions may change post-deployment. Apply project management tools including governance charts, terms of reference, and accountability agreements for all AI initiatives. Conduct assessments to identify potential unintended consequences from the use of AI systems.
Apply rigorous methods in building AI systems
Establish minimum data quality specifications for AI solutions. These specifications should consider accuracy, completeness, consistency, credibility, accessibility, compliance, confidentiality, efficiency, precision, traceability, understandability, availability, portability, and recoverability. 26
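A few of these specifications, such as completeness and plausible value ranges, can be checked programmatically. The field names and thresholds below are hypothetical; a real specification would be agreed upon with clinical and data governance stakeholders:

```python
def check_quality(records, required_fields, ranges):
    """Return simple data-quality findings: missing fields (completeness)
    and out-of-range values (accuracy/consistency)."""
    findings = {"missing": [], "out_of_range": []}
    for i, rec in enumerate(records):
        for field in required_fields:
            if rec.get(field) is None:
                findings["missing"].append((i, field))
        for field, (lo, hi) in ranges.items():
            value = rec.get(field)
            if value is not None and not lo <= value <= hi:
                findings["out_of_range"].append((i, field, value))
    return findings

# Hypothetical minimum specification for a patient dataset.
records = [
    {"age": 54, "systolic_bp": 128},
    {"age": None, "systolic_bp": 415},   # missing age, implausible BP
]
report = check_quality(records,
                       required_fields=["age", "systolic_bp"],
                       ranges={"age": (0, 120), "systolic_bp": (50, 260)})
print(report)
# {'missing': [(1, 'age')], 'out_of_range': [(1, 'systolic_bp', 415)]}
```

Automated checks of this kind catch only mechanical defects; dimensions such as credibility, traceability, and confidentiality still require governance processes rather than code.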
Define comprehensive use cases and acceptance plans pre-implementation. Pre-deployment, conduct validation trials with clinicians, experts, and other end-user groups. Create monitoring plans including outcome, process, and balancing metrics to ensure the system is performing as intended post-deployment. Develop strategies for the tracking and analysis of errors, near misses, and overrides post-deployment.
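As one illustration of such a monitoring plan, a rolling error-rate tracker might flag when the rate of errors or clinician overrides exceeds a tolerance agreed with stakeholders. The window size and threshold below are hypothetical:

```python
from collections import deque

class ErrorRateMonitor:
    """Track recent prediction outcomes and flag when the error rate in
    a rolling window exceeds an agreed tolerance."""
    def __init__(self, window=100, threshold=0.05):
        # deque with maxlen automatically discards the oldest outcome.
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, was_error):
        self.outcomes.append(bool(was_error))

    def error_rate(self):
        if not self.outcomes:
            return 0.0
        return sum(self.outcomes) / len(self.outcomes)

    def alert(self):
        return self.error_rate() > self.threshold

monitor = ErrorRateMonitor(window=50, threshold=0.05)
for _ in range(48):
    monitor.record(False)   # correct predictions
monitor.record(True)        # an error, e.g. a clinician override
monitor.record(True)
print(monitor.error_rate())  # 0.04
print(monitor.alert())       # False -- within tolerance
```

In practice, such outcome metrics would be paired with process metrics (e.g., how often the tool is consulted) and balancing metrics (e.g., time per encounter) to detect unintended consequences, not just raw error drift.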
Apply change management tools and processes
Develop comprehensive communication plans for stakeholder engagement. Create a training and communication plan outlining how the system functions, makes decisions, and generates recommendations. Develop feedback loops for monitoring performance and usability of any solution. Create dedicated functions to act on feedback loops and implement post-implementation improvements. Develop escalation plans and continuously update them via post-deployment feedback.
Create strict privacy and security protocols
Establish data security plans for AI initiatives, which at a minimum include: (a) an inventory of data assets; (b) access permissions and controls; (c) computing software and hardware controls; (d) protocols for transmitting, storing, and accessing data; and (e) data retention and destruction processes. Deliver frequent training and communication, focussing on threat identification and response plans. Establish data sharing and access agreements which document data access, usage, and ownership policies. Define business continuity and disaster recovery plans for the AI system, including end-to-end infrastructure and resiliency controls. Conduct regular simulations to ensure the appropriateness of continuity and recovery plans.
While AI holds potential to improve healthcare systems, its realization is dependent on the careful application of risk management principles to ensure sustainable and effective implementations.
Applied framework
Key questions to consider when establishing policies and procedures for risk management in healthcare organizations (abbreviation: AI, artificial intelligence).
Conclusion
The adoption of artificial intelligence is a complex undertaking that requires careful risk management and oversight, particularly in Canadian healthcare organizations. While AI holds potential to improve many aspects of healthcare service delivery, risks must be carefully mitigated to prevent unintended consequences. A combination of administrative and technological solutions must be employed by healthcare organizations, and even when those steps are taken, organizations must remain vigilant about keeping protective measures current and viable. In this study, we provide a summary of major risks and present a framework to serve as a starting point for risk management in the adoption and implementation of AI in Canadian healthcare organizations. The utility of this work is in supporting boards, senior leaders, and risk managers to develop appropriate internal processes and controls for managing risk in AI initiatives in which they participate.
