Abstract
This paper advocates a constructivist approach to symbiosis to restore human-centredness in the governance of Symbiotic Artificial Intelligence (SAI). Challenging rigid, deterministic foundational methods, it warns against the risk of reducing ethics to mere adherence to moral principles. Instead, it calls for a shift towards a distributed, contextual, relational, and dialectical structure to embody human-centredness. Through an analysis of the SAI landscape and the interplay between its social and technological factors, the paper argues for a reconceptualisation of theoretical foundation and human responsibility within the socio-technical perspective. Chapter 2 delves into foundational issues of SAI, questioning the application of biological categories and proposing patterns of SAI based on definitions of intelligent life. Chapter 3 explores the potential of a constructivist approach, emphasising flexibility and context awareness, and presents a framework for understanding and evaluating SAI systems as components of an evolving methodology.
Introduction
The European Commission (EC) has long advocated the certainty of a
Collectively, these documents underscore the European Union’s (EU) political agenda not only to coordinate an integrated approach for maximising AI opportunities and addressing associated challenges but, more importantly, to position itself as a global leader in the development and deployment of AI that is human-centric and trustworthy.
This approach underscores several pivotal strengths, notably encapsulated in the HLEGAI’s endeavour to leverage a dual normative function of fundamental rights. Indeed, these rights encompass both the legal protections enshrined in the constitutional frameworks of nations (actual legal rights) and the inherent rights of individuals grounded in the intrinsic moral status of human beings (ethical values that may not be legally binding but are pivotal for ensuring the trustworthiness of AI on a global scale).
Another salient strength is recognising that every normative orientation constitutes a structured, albeit not always systematic, array of moral assertions interlinked in diverse manners. In essence, all these assertions can be traced back to a primary moral principle (monistic ethics) or can stem from multiple principles (pluralistic ethics) [57]. The EU’s ethical framework for a “good AI society” [25], particularly as formulated within the HLEGAI, adopts a pluralistic ethical standpoint contingent upon interrelated principles for the betterment of society.
In this paper, we discuss the challenges posed by applying a human-centric normative orientation to SAI.
Against this backdrop, research at the University of Bari, conducted within the NRPP-funded Future AI Research (FAIR) project, focuses on designing new interaction paradigms to amplify human performance while ensuring system reliability, safety, and trustworthiness. Acceptability of SAI systems, particularly concerning value alignment between AI and humans, is a crucial focus of research within FAIR. This involves interdisciplinary studies to address issues such as privacy policies, security, and fundamental freedoms. Philosophical perspectives are also explored to understand the epistemological and ethical aspects of SAI.
In this paper – continuing the work presented at the workshop BEWARE@AIxIA 2023 [6] – we contribute to the understanding of human-centred symbiosis with AI by introducing a methodology for assessing the impact of SAI systems using a human-centred approach.
In Chapter 2, we address some foundational and philosophical challenges of SAI by analysing first the relationship between life and artefact (Section 2.1) and then the one between intelligence and symbiosis (Section 2.2).
Chapter 3 will explicitly address the potential of a constructivist approach in redefining the foundation of SAI and its alignment with human-centred ethics (Section 3.1), advocating for flexibility and context awareness. We will propose a constructivist framework for understanding (Section 3.2) and evaluating (Section 3.3) SAI systems, delineating the outline of an evolving methodology on which we are working in the project FAIR – constructivism is a social sciences theoretical perspective asserting that knowledge and meaning are constructed through social interactions and experiences, emphasising the role of culture and context in shaping human understanding [2, 53].
In Chapter 4, we conclude the paper with final remarks and outline directions for future research endeavours.
A last clarification before proceeding: We use the terms ‘human-centric’ and ‘human-centred’ synonymously, referring both to design and research approaches that prioritise human needs, behaviours, and experiences as the central focus, ensuring that the resulting systems or solutions are optimally aligned with user requirements and usability principles.
Foundational and philosophical challenges of SAI
The concept of ‘Symbiotic Artificial Intelligence’ introduces an additional layer of complexity to the already problematic notion of ‘Artificial Intelligence.’ Incorporating the adjective ‘symbiotic’ further complicates the scenario yet represents one of the newest boundaries in AI research. The term ‘symbiosis’ inherently suggests the coexistence of two or more entities that collaborate or, at the very least, derive mutual benefit from their association. Thus, a genuinely Symbiotic AI would foster an interaction between humans and machines that transcends the traditional dynamic of controller and controlled, evolving into a reciprocal relationship between two agents potentially equal in decision-making, capable of mutually influencing one another. These agents, though not identical, are both intelligent, with the former ‘controller’ – the human – entering a partnership where the machine may assume control by adjusting, amending, or directly intervening in decisions in real-time towards a shared objective, leveraging a vast and precise understanding of data and procedures necessary for the task. It is important to acknowledge that despite the potential asymmetry in this interaction between intelligent entities, a degree of control always remains, exercised mutually, ensuring the symbiotic relationship’s continuity.
A visionary example could be Elon Musk’s Neuralink. Let’s imagine that one day, some of us, for both therapeutic and creative purposes, could equip ourselves with brain implants designed to allow us to use our neural signals to control external technologies. Naturally, the AI guiding such implants would be fitted with an
Against this backdrop, envisioning SAI raises questions about the nature of the technology required and whether current forms of human-machine interaction hint at a future symbiotic relationship. Kai-Fu Lee and Chen Qiufan [37] speculate about a near future where smartphone applications, through real-time data on our actions and behaviours, could suggest changes to our decisions or behaviours based on an intricate network of app interactions. Is this merely a matter of data availability and technological capability, or is something more profound at play in achieving a symbiosis between natural and artificial intelligence?
To consider AI in this context is to imagine machines or software capable of not just ‘live’ learning and response, as seen with the IoT, but also of discerning behavioural regularities, intervening in real-time with a specific ‘authority’ to alter human conduct that would otherwise unfold differently. Symbiosis, in this sense, implies a learning-driven relationship between agents in which both entities coexist and learn from one another: the machine from observing human actions and the human from insights provided by the machine based on its observations. In a recent project attempting to use symbiosis between machine and human beings to build AI systems capable of understanding the nuances of the real world, precisely this point is emphasised: “While the computer’s role has shifted from being a passive ‘executor’ to an active learner, the role of the human is still that of an actively involved teacher, because it is the humans that create clean, consumable data for the computer to analyse and hence train itself. However, what if there was a way in which the computer’s role remains that of an active learner, but we humans can passively sit back and go on with our lives while the computer learns from our actions?” [52, p. 4].
The shift here is from instructing machines to learning from them, not just acquiring new information but receiving guidance on enhancing our actions to achieve desired outcomes. This form of HCI is widely discussed in scientific literature, but the accuracy and implications of applying ‘symbiosis’ to AI warrant further examination and pose foundational philosophical questions. This section addresses these questions, exploring the definition and feasibility of SAI and its compatibility with a human-centric approach.
Life + artefact: how is symbiosis possible for machines?
At the heart of considering Symbiotic Artificial Intelligence (SAI) from a foundational perspective is the interplay between organic life and manufactured artefacts, traditionally viewed as diametric opposites. The essence of a living organism differs markedly from that of an artificial construct, yet in the context of SAI, these distinct entities are envisioned to coalesce. The historical backdrop for differentiating natural beings from artificial ones dates back to Aristotle’s Physics.
According to this traditional paradigm of the life sciences, which runs from Aristotle up to modern biology, an artefact or a machine (an entity that is a product of technê, i.e., of human craft) cannot be considered alive, since the principle of its movement and organisation lies outside itself, in its maker.
The modern era, especially with the rise of the mechanistic model, will reject such a view of life by coming to conceive even living beings as machines whose movement is always the result of a complex chain of external impacts (think of Descartes, Hobbes, and Spinoza). But a revival of Aristotelian conceptuality will occur in the nineteenth century, with
This conception, whereby a living being is one that possesses an autonomous principle of movement, a self-finality (entelechy), by definition excludes artefacts from the domain of the living.
One might ask: What distinction would remain if an artefact (a device or technology devoid of organic components) shared all characteristics typical of organisms composed of organic material? Would they both be deemed living entities? [44, p. 55; p. 56]. Is it possible for an artefact to transform into an autopoietic machine? This avenue has been explored since the mid-20th century (for instance, by Von Neumann with his self-replicating machines) and later during the 1980s in the field of artificial life (A-life). A-life operates on the functionalist premise that life can also be digitally synthesised, meaning the composition of the substrate, whether atomic or digital (bits), is irrelevant. What matters is that this substrate demonstrates specific interrelations and characteristics, such as self-preservation, self-reproduction, and autonomous movement for its own sake, among others. Numerous examples exist (Conway’s Game of Life, Polyworld, RepRap, Slugbot, etc.). These advancements appear to blur the traditional line between life and artefact, a matter of great significance for the question of founding SAI. In this respect, SAI seems to be an AI that potentially engages in a ‘life-to-life’ relationship. But accordingly, are robots ‘living’ entities? Are artificial intelligences such as ChatGPT entities that, although not “material,” exhibit some of the characteristics of life?
Addressing robots, Lévy [38] suggests that, according to some broadly accepted biological benchmarks (such as those proposed by Koshland [34]), it is not entirely far-fetched to classify robots and, by extension, AI programs as life forms. De Collibus [13] expresses a contrary view regarding Large Language Models (LLMs). Several attributes typically associated with living beings are also identifiable in ChatGPT, like the ability to self-evolve and learn, a trait shared with humans, dogs, or fish. ChatGPT can capitalise on accumulated data like these organisms, enhancing its complexity and devising increasingly effective solutions over successive problem-solving attempts. However, these technologies lack a crucial aspect of life as defined by Spinoza’s concept of conatus: the intrinsic striving of each being to persevere in its own existence.
Intelligence and symbiosis: three patterns based on (dis)continuity
How do things stand with intelligent life? This question is essential for the foundation of an SAI since it is assumed that the two lives that should enter symbiosis are both intelligent lives. Indeed, in the case of the artificial symbiont, its life might coincide completely with its intelligence. SAI thus allows us to ask important questions about the relationship between life and intelligence.
The problem is tough since it depends on what kind of definition of intelligence we start with.
A spectrum of perspectives on defining intelligence, ranging from inclusive to exclusive, underpins discussions on the nature and potential of SAI. Starting with Schelling and Darwin, a minimal view emerges that intelligence essentially involves the capacity for organisation. This perspective posits that even simple biological entities, such as mushrooms or worms, demonstrate intelligence through their organised efforts to solve problems and adapt to their environment. This organisation principle extends to artificial intelligence, where software and programs exhibit a basic structure enabling them to reconfigure based on contextual inputs to meet specific goals [41, p. 21]. From this perspective, which could be called ‘homogeneous continuism’, an SAI is already at work. It has been for a long time [39]. Our devices, through which we interact with generative AI systems, are already in some sense forms of intelligence with which we live in symbiosis. Here we are dealing with a definition of intelligence that flattens discontinuities and reduces them to a mere difference in degree: it is enough for an entity to exhibit a specific computational strategy to be considered intelligent (from viruses to humans and AI systems). At this level, an SAI is conceivable as a form of integration between human beings and digital devices [35, 62], in a relationship that provides for their increasing autonomy, seamlessness and self-development. A perspective that has begun to be investigated in the
However, there are less inclusive definitions, such as those that restrict the field of intelligence to vertebrates with a minimum of brain activity [40]. In this case, calling an AI an ‘intelligent’ life form already becomes less plausible. Floridi [26], for example, contends that the intelligence of current generative AI systems falls short of even that of a sheepdog, likening their cognitive capabilities to those of a dishwasher (see also [14], in which Dennett argues that AI is the result of “a slow, mindless process”). Accepting this premise implies that SAI cannot be based on our current interactions with technologies like smartphones or ChatGPT. Instead, envisioning SAI requires imagining a relationship between humans and a robot whose artificial intelligence parallels at least that of an animal. Under this scenario, true SAI would manifest primarily in an exosymbiotic, rather than endosymbiotic, framework, potentially side-lining wearable or prosthetic AI technologies from the SAI paradigm. However, the problem is that the realisation of ‘animal’ level AI remains highly problematic.
In opposition to continuism, there are various versions of intense or absolute discontinuism, whereby intelligence is a uniquely human capability. To define intelligence as any ability to solve problems or to calculate, even in the way animals do, would be an equivocal way of talking about intelligence, because proper intelligence is only our own. Indicative of ‘proper’ intelligence would only be capabilities such as intentionality, universalisation, creativity, spontaneity, self-consciousness, emotionality, etc.: all characteristics that are the exclusive preserve of human beings. In this case, to talk about SAI, we would have to turn to science fiction, to post-humanism, or to perspectives very distant in time that contemplate the possibility of achieving an Artificial General Intelligence (AGI) and thus some “Singularity” [7].
Embracing constructivism: rethinking symbiotic AI’s foundation
Suppose we delve into the historical divide between organic life and artificial constructs, questioning whether machines can exhibit characteristics traditionally associated with living organisms. In that case, formulating a foundational hypothesis of SAI becomes very difficult. While some argue for a continuum of intelligence across biological and artificial entities, others contend that true symbiosis with machines necessitates AI with capabilities rivalling at least those of animals. Accordingly, in the kinds of human-machine interaction known today, it is hardly possible to find ‘symbiosis’ by reflecting on the nature of intelligence, the potential for symbiotic relationships between humans and machines, and the boundaries between living and non-living entities.
Based on these premises, in this part of the paper, we will argue in favour of a possible alternative pathway to the foundation of SAI. We will pursue a constructivist path.
From a deterministic and theoretically firm perspective, we know that a constructivist approach may not appear as the most rationally justified path to speak of ‘foundation’, as nothing is constructed without first founding something. However, given the highly sociotechnical nature of SAI, there is no possibility of a foundation other than the ‘weak’ foundation of a constructivism that maintains a flexible, adaptable, cautious, and context-aware direction of thought. This position is supported not only by the foundational dilemmas and doubts we have shown in Chapter 2 but also by the argument that a constructivist approach is the only possible one if we want to reconcile an explainable justification of SAIs with an ethics of AI based on the human-centred and trustworthy design of machines, as demanded by European institutions [21, 30] and significant international agencies [48, 63]. To demonstrate this, in Section 3.1, we will highlight the critical points of the opposition between the foundation of SAI and a human-centred ethics of AI.
Having highlighted this discrepancy, in Section 3.2 we will argue in favour of a constructivist approach as the only possible way to reconsider the relationship between SAI and human-centeredness in dialectical terms rather than in terms of opposition. Of this approach – which we are refining in the FAIR project – we present here its methodological outline, consisting of a theoretical onto-epistemological framework (Section 3.2) and a preliminary evaluation framework (Section 3.3) to identify the main sociotechnical characteristics of an SAI system and, from there, proceed to the subsequent steps – which we are still refining in our FAIR team – for the ethical assessment of the human-centeredness of SAI.
SAI foundation vs. human-centred ethics
As previously discussed, the intricate nature of the ‘symbiosis’ concept encompasses various definitions and foundational viewpoints regarding the complex relationship between beings in general, with particular emphasis on humans and artificial agents. This relationship poses a significant challenge due to its dual investigative nature, where symbiosis is interpreted as a connection between life forms at one level and as a relationship between forms of intelligence at another [28].
However, what implications arise from such foundational variety when transitioning from speculation to ethical evaluation and normative orientation? Do we genuinely necessitate an ethics of SAIs? And if so, given the absence of a unified foundation, what type of normative orientation should we adopt?
In ethics, and specifically in the normative ethics associated with AI [26], obligations are not necessarily imposed; rather, behavioural normative orientations are expressed. A normative orientation indicates a ‘should be’, reflecting a structured yet not always systematic framework containing justifiable and coherent moral normative statements [12]. However, it is essential not to misconstrue the ‘should be’ as forced adherence to a value or ideal; instead, adherence may be motivated by various factors, such as a consideration of consequences (consequentialist ethics), of duties and principles (deontological ethics), or of character (virtue ethics).
Therefore, the challenge lies in establishing a foundation for being and determining the normative orientation to govern the diverse aspects of moral ‘should-be’. Consequently, the question arises: to what normative orientation can SAI, already complicated by excessive variability at its foundation, be traced back?
The human-centric approach is the indispensable cornerstone of normative guidance within the FAIR project framework. It forms the bedrock of the European Commission’s ethos on AI ethics [21, 30] and resonates throughout various international normative doctrines [48, 63]. Yet, harmonising the intricate foundation of SAI with the imperative of AI human-centeredness presents formidable hurdles. Our inquiry reveals this reconciliation to be somewhat problematic. Upon scrutiny, a growing dichotomy emerges between these two paradigms, primarily due to irreconcilable disparities.
Beginning with SAI, the quandary lies in its excessive foundational variability, which defies attempts to embed it within the realm of human understanding. Consequently, its foundation lacks a distinctly human-centric focus.
A second challenge arises concerning the ethical dimension of AI human-centeredness, revealing inherent vulnerabilities. Even before delving into specific technology interactions, criticism may be levelled at the human-centric approach for its latent speciesism [60]. Critics contend that human superiority is a flawed concept, advocating instead for acknowledging intrinsic value in non-human entities. The application of anthropocentrism to AI ethics thus risks being perceived as a biased global guiding principle, prompting scrutiny over its relevance in matters of international and environmental justice [8–10].
These challenges underscore the apparent discordance between an SAI lacking an explicit foundation and a notion of human-centeredness that may verge on abstraction or even AI colonialism [10]. If left unresolved, this dichotomy is poised to escalate. On the one hand, “techno symbiosis” is on the rise, albeit not exclusively in a biological context but also in a metaphorical and meta-semantic sense [29]. On the other hand, substantive proposals to transcend mere principle-based definitions and actualise normative ideals into tangible, democratic social and political frameworks – such as navigating human existence in the algorithmic age – remain conspicuously absent [27].
Constructivism as an onto-epistemological framework of SAI
Could we transcend this dichotomy?
Symbiosis, in essence, does not exist. Nor is it merely constructed ex nihilo: rather, it operates as an ideal type, a conceptual condensation through which degrees of symbiosis between humans and machines can be observed, compared, and evaluated.
The richness of the constructivist approach to the foundation of SAI lies in the comprehensive and explanatory power of the ideal type. It is not a proper foundation because, as we have said, symbiosis per se does not exist. Instead, in the interaction between humans and machines, symbiosis manifests as a set of processes rather than as a brute fact. Therefore, theoretical attention should shift toward understanding the dynamism of the processes through which such condensations exhibit some degree of symbiosis [8, 9].
New approaches are emerging around similar needs, challenging the link between machines and ethics understood solely in normative terms (compliance with laws and ethical principles) or engineering terms (machine ethics). The problem does not lie in the ontological definition of the type of symbiosis with machines but in extracting rational and social value from these definitions, interrogating them both as technological devices and as metaphors of “socio-material practices” [49] or “techno-scientific practices” [53].
Finally, such a repositioning of symbiosis in the domain of possibility and dynamics would render dialectical the old opposition between conceptual foundation and ethical evaluation of human-centeredness. In doing so, the foundational excess of SAI would not be a conceptual problem but an epistemic opportunity, a potential strength that constructivism would help to understand by extracting value and insights from how it can be defined. In this regard, it is precisely,
Evaluating a socio-technical system is notably more challenging than assessing an individual technology due to numerous variables, contextual variations, scenarios, and changing user dynamics. In our ongoing research as part of the FAIR project, we are developing a methodology for the human-centred impact assessment of SAI, which comprises several steps, each executable through diverse techniques and approaches.
The initial step of this methodology entails adopting socio-technical constructivism, which we have delineated as the onto-epistemological foundation [53] upon which to ground our method. Naturally, this foundation is ‘non-foundational’, as it is determined by the levels of abstraction and participation implicated in techno-scientific practices.
Sociotechnical features for a preliminary ethical evaluation of human-centred SAI
How can we effectively evaluate the human-centredness of an SAI system? What criteria should we consider when identifying the level of symbiosis within the system?
Indeed, establishing an evaluation framework is the second step of the methodology we are working on. This framework scrutinises and weighs the ethical robustness concerning the design and implementation of SAI systems in various contexts. Its primary purpose is to describe the application of an SAI system deployed in a specific project and sociotechnical context. This step initialises the method and serves: (a) to map out the general socio-technical features of the SAI system under analysis; (b) to provide a set of dimensions to contextualise the application of the SAI system under examination; and (c), for each dimension, to offer a series of questions whose answers facilitate the screening of the respective SAI system.
To be succinct, this article will not delve into the questions but rather aims to account for the five dimensions along which screening occurs in our method.
On the other hand, it is identifying
Conclusions and future work
A constructivist approach to symbiosis presents itself as the most viable alternative for reinstating human-centredness in the governance of SAI. Should we persist in adhering to rigid, deterministic foundational methods, we risk preventing ethics from moving beyond mere adherence to moral principles (such as autonomy, beneficence, non-maleficence, justice, etc. [21, 30]) to become a compelling aspect of machine design and, more broadly, of shaping the digital society we aspire to inhabit. Indeed, the notion of sustainable, fair, and trustworthy algorithms – AI ethics – cannot be solely relegated to compliance with ethical principles. Ethics, in this vein, remains overly abstract and unnecessary [9], fails to serve stakeholders by providing clarity or guidance in development and commercialisation [46], and is at risk of being exploited maliciously [26] or even weaponised as a tool of biopower and colonisation [10].
We must relinquish the grip of determinism – the persistent desire for central control – and instead embrace a distributed, contextual, relational, and dialectical structure to truly embody the ecosystemic essence of human-centeredness. While a constructivist approach may initially appear to shift responsibility from humans to machines, it in fact addresses the issue of AI responsibility not by fixating on who holds control (humans or machines) but by methodically analysing and assessing the relational processes through which control and responsibility are distributed.
In Chapter 2, we highlighted the main issues concerning the possible foundation of SAI. Crucial philosophical questions were raised about the possibility of thinking about a form of AI capable of entering symbiosis with humans, starting from the specificity that such a technology should exhibit compared to more traditional AI. In Section 2.1, we questioned the application of biological categories – such as symbiosis – to artefacts. While robotics and AI challenge the conventional distinction between living and non-living based on the capacity of organisms to move autonomously, self-develop and reproduce, some differences remain that currently appear insurmountable. In Section 2.2, we then offered an overview of three patterns of SAI, each based on a different definition of intelligent life: the kind of SAI we can conceive of strictly depends on how we think about the continuity between organisation and intelligence. The less continuist our position, the less chance we have of laying a foundation of SAI in a strong sense, namely from a biological or ontological point of view.
Chapter 3 explicitly addresses the potential of a constructivist approach in redefining the foundation of SAI and its alignment with human-centred ethics. We advocate for adopting a constructivist perspective to navigate the intricate landscape of SAI, placing particular emphasis on attributes such as flexibility, adaptability, and context awareness. The discourse extensively explores the multifaceted challenges arising from the diverse foundational perspectives of SAI, proposing a constructivist framework aimed at comprehensively understanding and evaluating SAI systems. This section delineates the fundamental aspects of a theoretical (onto-epistemological) framework and an evolving evaluative methodology designed to assess the sociotechnical characteristics and human-centred attributes of SAI systems.
The method we are working on will require further refinement as the FAIR project progresses. For this reason, future research will involve jurists to address the theme of acceptability from both the ethical and legal viewpoints. The legal and ethical acceptability of SAI will then need to go through an operationalisation process to be of practical relevance for the design and implementation of SAI applications. High-level principles will therefore be turned into operational definitions that pave the way to technical solutions.
Acknowledgments
This work was partially supported by the project FAIR— Future AI Research (PE00000013), which is part of the NRRP MUR program funded by NextGenerationEU.
