Abstract
Contemporary healthcare at all levels increasingly uses Artificial Intelligence (AI). However, since the various levels involve different tasks, different data needs, and different ethical obligations, the AIs that are used have to be differently structured. Moreover, healthcare construed as a commodity involves different ethical parameters from healthcare construed as a right, and different ethical systems entail logically distinct considerations; this, too, necessitates differently structured AIs. This column sketches how and why this is the case. It concludes with a brief look at why AIs programmed into quantum computers would not change this.
Introduction
Contemporary healthcare, whether delivered at the hands-on level or organized at the institutional or policy level, is data intensive and increasingly involves the use of Artificial Intelligence (AI). Given the differences in these spheres, it is not surprising that different data needs are involved and that this is reflected not merely in the construction and technical functioning of the AIs that are used but also in the ethical aspects of their operation. Thus, the fiduciary obligation that hands-on healthcare professionals have toward their patients conditions their data needs differently from the data needs that healthcare administrators have when organizing and running healthcare institutions, and the data needs and ethical considerations that are relevant at the policy level differ from both of these. This means that the types of AIs that are used by each of these involve somewhat different parameters. However, not even at the policy level does one size fit all, because there are ethically relevant differences between how different societies consider the ethics of healthcare and to what ethical systems they subscribe. What follows sketches some aspects of this with respect to the AIs that are used in these various ways.
A definitional preamble
However, before doing so, it may be appropriate to preface the discussion with a brief statement about how the term “AI” will be understood, because this also has ethical implications.
To put it first in general terms, the expression “AI” will be understood as referring to an electronic calculating device that is designed to access data and databases, and even to control the tools that are used by its operators, and that—if it contains appropriate subroutines—can computationally identify patterns, trends, and associations among the data that it handles. Thus, so-called deep learning AIs can distinguish complex patterns in pictures, text, sounds, and other data to yield accurate analyses and to make accurate predictions.1 AIs may also be designed to function independently of direct human input once they are deployed, or to modify their own subroutines in order to produce more accurate results as more data become available to them, or to optimize their internal functioning and achieve more desirable outcomes—where this, of course, would be determined by the values that have been programmed into them.2 They would then be self-modifying expert systems. This notwithstanding, they are issue-specific in their functioning and do not act independently of the operators who employ them on specific occasions or for specific types of tasks. While their operation may be momentary or temporally extended, ultimately they are nothing other than complex electronic calculating devices.
A further clarificatory preamble that may be appropriate concerns the use of the term “ethics.” It may be understood as referring—to mention but a few of the better-known types—to so-called virtue ethics, feminist ethics, religious ethics, agapistic ethics, teleological ethics, communitarian ethics, or deontological ethics. In what follows, however, it will be understood more generally: as referring to the system of moral principles and rules that govern a person’s conduct or activities, where these rules and principles are independent of individual or societal decision and have objective validity even when they are not recognized as such. Consequently, it immediately follows that when talking about the ethics of AI in healthcare, the epithets that are used in this connection do not—indeed, cannot—directly apply to the AIs themselves, because AIs are not persons but machines. The relevant terms directly apply only to their users and designers, and only derivatively to the AIs themselves.
It may also be worth mentioning at the outset that the term “person” is inherently ambiguous: it may refer either to what is traditionally called a natural person, such as a human being, or to a legal person, such as a corporation, society, or agency.3 The reason this is important is that ethical considerations directly apply to persons irrespective of whether they are natural or legal entities. Therefore, how AIs are used by healthcare institutions and committees is just as much subject to ethical consideration as how they are used by hands-on healthcare professionals.
AIs at the hands-on level
At the hands-on level—and speaking purely informatically—a patient presents to a healthcare professional as a totality of possibilities of distinctions. Which of these the professional acknowledges depends on their orientation, training, the instrumentation that they use, and why the patient seeks their services. This essentially limits the domain of patient data that hands-on healthcare professionals standardly access. This domain may be called their data need domain. However, a professional may have occasion to go beyond this: for instance, to investigate the possibility of a new diagnosis or a new method of treatment, and to correlate the patient’s data with the information that is contained in external databases. AIs are increasingly used to do so. This, however, raises privacy concerns, because combining several items of patient data may itself identify a patient. Consequently, ethically speaking, the AIs should contain safeguards against inappropriate informatic disclosure. Moreover, since data validity is an essential requisite for valid diagnosis and prognosis, the AIs should also be structured to prevent unauthorized data alteration or modification of their subroutines when they access external databases.
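The re-identification concern can be made concrete. One standard safeguard of the kind such an AI might run before releasing correlated records is a k-anonymity check: no combination of quasi-identifying fields may single out fewer than k patients. The following is a minimal illustrative sketch; the function name, field names, and threshold are assumptions for the example, not part of any particular system.

```python
# Hypothetical sketch of a k-anonymity safeguard: before disclosure, verify
# that every combination of quasi-identifier values (e.g. age band plus
# postcode) is shared by at least k records, so no combination identifies
# a single patient. All names and data here are illustrative.
from collections import Counter


def is_k_anonymous(records, quasi_identifiers, k=3):
    """Return True if every quasi-identifier combination covers >= k records."""
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return all(count >= k for count in combos.values())


records = [
    {"age_band": "40-49", "postcode": "NW1", "diagnosis": "asthma"},
    {"age_band": "40-49", "postcode": "NW1", "diagnosis": "diabetes"},
    {"age_band": "40-49", "postcode": "NW1", "diagnosis": "asthma"},
    {"age_band": "70-79", "postcode": "SE5", "diagnosis": "arthritis"},
]

# The lone 70-79/SE5 record is identifiable from the combination alone,
# so the check fails for the full set.
print(is_k_anonymous(records, ["age_band", "postcode"], k=3))  # False
print(is_k_anonymous(records[:3], ["age_band", "postcode"], k=3))  # True
```

The point of the sketch is only that such a safeguard is a computable pre-release check, not a matter of professional discretion alone; real de-identification regimes involve considerably more than this.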
AIs at the institutional level
The ethical issues that arise for health leaders derive from their responsibility to establish and manage the protocols of the institution they administer. Therefore, they must manage the types of services their institution provides, the number and types of professionals whom it employs, the number and types of patients it admits, and the technical modalities that will be available, as well as the support services and personnel that are necessary for effective institutional functioning. Moreover, they must incorporate all of this into an overall protocol framework. The ethical considerations that govern their activities therefore span both healthcare related issues as well as issues that fall more into the realm of business ethics.
This, in turn, means that any AIs they use must be able to handle not only healthcare, administrative, and financial data but also operational considerations that are grounded in the diverse mandates of the distinct types of individuals who work in the institution. While privacy and security issues also arise in all of these contexts, what these issues are and how they arise differs. Consequently, the AIs they use have to be differently structured to deal with these distinct types of concerns.
Furthermore, health leaders have to be able to assess the qualifications of the resident professionals and compare them with current objective standards, since this may affect staffing; they must also be able to consider the professionals’ practice profiles, since this may have legal implications when it comes to patient satisfaction. Consequently, any AIs they use have to be able to deal with administrative, financial, professional, and healthcare data, and not merely in an efficient but also in an ethically and legally acceptable manner.
Moreover, healthcare institutions that are situated in a society that construes healthcare as a right differ from institutions that are situated in a society that construes healthcare as a commodity. Therefore, their institutional allocation protocols will be ethically different. In the first case, access to institutional services will be on the basis of need, and allocation considerations will be in keeping with the fiduciary obligation that the institution incurs when a prospective patient enters its premises. In the second case, access and allocation protocols will essentially be on an ability-to-pay basis. This difference, in turn, would have to be reflected in the structure and functioning of the AIs that the administrators may use. At the same time, however, and irrespective of whether an institution is situated in a right- or a commodity-oriented setting, professional codes of ethics also have to be acknowledged in the institutional protocols. These may be distinct, which means that a logical conflict might arise when programming the subroutines of the AIs that are used by the administrators. Arguably, that can only be avoided if the AIs are structured in distinct segments and contain subroutines which, when activated by the administrators, switch their calculations to the relevant segments. That would also be necessary if the institution existed in a society that subscribed to a mixed right-commodity orientation.
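The "distinct segments" idea can be sketched in code. The point is structural: the right-based and commodity-based allocation rules live in separate routines that are never merged into one calculation, and a switch activated by the operator selects which segment applies. Everything below—the routine names, the fields, the need threshold—is an illustrative assumption, not a description of any actual allocation system.

```python
# Hypothetical sketch: logically incompatible allocation rules kept in
# separate segments, with an operator-selected switch between them.
# All names, fields, and thresholds are illustrative assumptions.

def allocate_right_based(patient):
    # Right-based segment: access is triaged by clinical need alone.
    return patient["need_score"] >= 5


def allocate_commodity_based(patient):
    # Commodity-based segment: access turns on ability to pay.
    return patient["can_pay"]


# The segments are kept distinct; they are selected, never combined.
SEGMENTS = {
    "right": allocate_right_based,
    "commodity": allocate_commodity_based,
}


def admit(patient, orientation):
    # The operator's setting determines which segment does the calculating.
    return SEGMENTS[orientation](patient)


patient = {"need_score": 8, "can_pay": False}
print(admit(patient, "right"))      # True: admitted on the basis of need
print(admit(patient, "commodity"))  # False: cannot pay
```

A mixed right-commodity orientation would add a further segment with its own rules rather than blending the two, since the same patient can be admissible under one segment and inadmissible under the other.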
AIs at the policy level
Health leaders who develop policy at the societal level develop the overall structure of healthcare delivery in their respective societies. The provisions they put in place determine whether healthcare will be construed as a right, as a commodity, or as a combination of both. In order to do so, their deliberations have to integrate a variety of distinct considerations, such as the financial capabilities of their society, as well as what proportion of the society’s resources to assign to the other types of services that the society is obliged to provide: infrastructure, defense, communication, and education are but some of the other services that are implicated. While to some degree this is paralleled by the issues that face an institutional administrator, societal policy-makers function on a much larger scale and their data need domain extends beyond the healthcare, legal, and financial fields. Therefore, any AIs they use have to be able to access and integrate healthcare, demographic, and economic data as well as defense, communication, and education data, and also data that project the impact of industrialization and research—all this while safeguarding relevant privacy rights and shielding their calculations from foreign intrusion and influence.
Moreover, their AIs have to be structured in accordance with the ethical perspective to which the society subscribes. However, different ethical principles are involved in distinct ethical systems. Consequently, the AIs that policy-makers in one society use cannot be used by policy-makers in a society that differs in this regard—to say nothing about the differences in the legal provisions of the societies in question. Moreover, all of this has to be consistent with how the ethics of healthcare itself is understood: whether on a right-, a commodity-, or a mixed right-commodity basis. Of course this issue also arises at the hands-on and institutional levels, but here it has different ethical dimensions.
Conclusion
Arguably, the emerging technology of quantum computing will effectively deal with many of these issues, because AIs that are programmed into quantum computers—computers that utilize qubits—will be able to accomplish in a fraction of the time what classical computers take far longer to do.
However, that is to misconstrue the issue, for what is at stake is not calculative speed but logical consistency. That is to say, the determining factor when engaging in ethical considerations is the nature of the ethical system that is used. However, the tenets of distinct ethical systems are logically incompatible. Therefore, logically distinct types of algorithms are required to reflect the logically distinct tenets of the different ethical systems. This means that distinct ethical systems cannot be integrated into a single operating system. The fact that qubits can be in a superposition of two states at once does not change this, because the necessity of logically distinct types of calculations still remains. The actual calculations would therefore require distinct subroutines, and it would be necessary for an AI to switch from one to another as the ethical parameters changed. Therefore, while the relevant programs might be located in one and the same quantum computer, that would be a matter of location, not of operation.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical approval
Institutional Review Board approval was not required.
