Abstract
Conversational generative AI-based systems (CGAIS), like ChatGPT, seem capable of taking part in conversations with such fluidity that we may not distinguish them from a human. The global integration of CGAIS, however, bears various risks, including that of colonizing the social through data. In this article, we are interested in the colonization of a new territory: that of conversations. We therefore investigate the nature of the conversations individuals have with these technologies. Following Habermas, the realm of the communicative is what enables us to construct a common social world, a lifeworld. Conflating the communicative and the instrumental threatens lifeworld construction and, more broadly, our democracy. We explore CGAIS' technical and conceptual properties, and show that they do not support communicative action. Rather, CGAIS should remain confined to the realm of instrumental action. Yet, as they give the impression of belonging to the realm of the communicative, we conceptualize them as colonialist agents. Notably, they are imperialist agents because they are increasingly used for all types of activities at work and in the private and public spheres, which reduces the space for communicative action. In addition, they are derealization agents, as they distort conversation by giving the illusion of communicative action. As a result, they threaten the co-construction of a common lifeworld, which forms the basis of our democratic societies.
Introduction
Conversational generative AI-based systems (CGAIS), exemplified by ChatGPT, are spreading to the whole economy and society. As general purpose technologies (Eloundou et al., 2024), they demonstrate their usefulness for all sorts of tasks. Notably, they help write swiftly with few syntactic and semantic errors and, more broadly, we can ask them to produce all sorts of texts that support our everyday activities. The intelligibility of these texts may give the illusion that we are communicating with a person instead of a machine. Indeed, these systems can conduct smooth conversations to the point that they could pass the Turing test (Mei et al., 2024). The risk of anthropomorphizing AI is significant (Hutchins, 1995; Weidinger et al., 2021) and, as a consequence, we may erroneously attribute cognitive capabilities to the machine. Notably, we may start living under the illusion that the outputs of the machine are shaped by its experience of the world (Hutchins, 1995), or in other words, that it constructs a social and cultural background and therefore its own lifeworld. We may alternatively, and to a lesser extent, think that it can understand our own lifeworld, which per Habermas (1984) is formed by the lived experiences and beliefs that guide our attitudes, behaviors and actions in our interactions.
The advent of generative AI brought about numerous existential debates and controversies, including ethical ones (Dwivedi et al., 2023; Wach et al., 2023). For example, to ensure that generative AIs are kept under control and properly operated, Lanier (2023) insists that we must treat them as tools powered by “simple mathematics” and not as creatures. Yet, we may need to go further. Recognizing their extreme efficiency for automating numerous tasks, we may delegate ever more tasks to the tool. In doing so, we may reduce our ability to act without it. Further, using them to answer questions and taking their answers as true positions them as authoritative references when we want to extract knowledge. More broadly, data technologies are central to a new stage of capitalism that relies on data as commodity (Thatcher et al., 2016) and on total datafication (Sadowski, 2019). The ubiquity and centrality of data technologies brought about by CGAIS accelerate capitalism's colonization of the social domain and its appropriation of human life through data (Couldry and Mejias, 2019a). However, the colonization of the social may not only take place through data extraction, but also as different forms of communication become colonized by CGAIS. That is why an important question is whether CGAIS can support and even participate in language interactions, conversations and discussions that are not solely strategic. They are presented as potential work partners, social “conversational agents” or chatbots because they generate text-based responses in a conversational manner (Dwivedi et al., 2023). Constantly thwarted by misleading vocabulary and labels (Rowe and Markus, 2023), evocative of mirages and miracles, we should perhaps reflect more deeply on the phenomenon in question. Therefore, we ask: are CGAIS taking technology beyond its status as a tool, moving it out of the realm of strategic action? In other words, should we take seriously the idea that we can have conversations with them, i.e., engage in a form of discourse, defined as communicative, where both participants co-construct a lifeworld (Habermas, 1984)?
This question is not just a question of shifting or misused vocabulary. Non-instrumental conversations are central to humans’ ability to build common ground. As we further develop (see section 1.1), Habermas has forcefully argued that communication free from relations of domination is central to a democratic society. If we increasingly engage in strategic action with bots, then we have increasingly less time and opportunity for communicative action, and we lose our sense of alterity and even of our lifeworld (Bjørn and Ngwenyama, 2009; Habermas, 1984). Indeed, the “
We focus on CGAIS, insofar as the various conversational systems that exist operate according to different technical modalities (Diederich et al., 2022). In particular, we focus on the conversational aspect of these systems, which means that we are not interested in their ability to access external tools (Göldi and Rietsche, 2024), which is most obviously strategic. In the critical analysis that we carry out, we recognize that CGAIS behaviors may be continuously improved. That is why our analysis does not target their behaviors, but rather the structural, or even conceptual, properties of the conversations they produce. Such an analysis is well anchored in the AI tradition, as the main criticisms of connectionist AIs were not related to their behaviors, but rather to their structural features, namely, their inability to represent, and therefore to produce knowledge about, human cognition (Hutchins, 1995; McCarthy and Hayes, 1981). These criticisms are still relevant today, as they point toward one of the main issues of contemporary AIs: their lack of explainability (Bommasani et al., 2021). Exploring CGAIS' technical and conceptual properties, we draw conclusions about how they construct knowledge and the nature of the knowledge they construct, and subsequently about the status of the conversations they seemingly hold. We show that while CGAIS are technically capable of smoothly producing
This article makes an original contribution to the debate about the effects of generative AI today, as it shows that CGAIS become colonialist agents of a new territory: that of conversations. Indeed, as colonialist agents, they extend the realm of instrumental rationality (imperialism) at the same time as they alter and distort the realm of conversations (derealization). This, we argue following Habermas (1984), is highly problematic. Indeed, communicative action is the foundation of a democratic society, because it is the moment when individuals’ lifeworlds meet, which enables the construction of shared meaning. When instrumental action is mistaken for communicative action, the realm of the instrumental contaminates individuals’ lifeworld. As a result, developing shared meaning becomes more difficult and communication breakdowns occur (Bjørn and Ngwenyama, 2009). Overall, these systems undermine the possibility of referring to a lifeworld and building common ground through communicative action (Habermas, 1984).
In the first section, we present Habermas's theory of communicative action and the principles of conversation validity. In the second section, we analyze the nature of the knowledge produced by CGAIS during a conversational interaction, in light of these principles. In the third section, we reflect on the implications of this analysis and discuss them in light of related debates around ‘data colonialism’ (Couldry and Mejias, 2019a).
Habermas and the theory of communicative action
Habermas's theory of Communicative Action (1984) has proved useful for analyzing technology-mediated human interactions in the use and development of information systems (e.g., Cukier et al., 2009; Hirschheim et al., 1996; Lyytinen and Hirschheim, 1988; Pozzebon et al., 2006; Ross and Chiasson, 2011). Research has mobilized this theory mainly to think about how technological devices can support an “ethics of discussion” (Habermas, 1962) necessary for democratic and deliberative communication (Ross and Chiasson, 2011). Today, CGAIS form the basis of systems with agentic capabilities that may be considered to participate in conversational interactions with human interlocutors (Baird and Maruping, 2021; Floridi, 2023). Novel questions arise around what these developments imply for the realm of communication, giving renewed relevance to Habermas's theory (Monti, 2024). In this theoretical essay, we show that the concepts of “strategic action” and “communicative action” offered by Habermas help us better understand CGAIS capabilities and limitations. First, we explain that communicative conversations are important in Habermas's theory because they are the moments when participants co-construct a ‘lifeworld’. Second, we review the four validity claims necessary for a conversation to be communicative.
Communicative action: The foundation of democratic society
The reason why CGAIS are at the heart of heated debates is that they touch upon a fundamental aspect of life in society: conversation. Jürgen Habermas is undoubtedly the philosopher who has most highlighted the fundamental link between conversation and society, which is why we now turn to his theory of communicative action (Habermas, 1984). Indeed, for him, emancipation is linked to communication: it is the liberation of communication (unfettered communication, free from relations of domination) that creates the conditions for a democratic public space and legitimate institutions.
During a conversation, an utterance acquires its meaning by referring to a contextual
Argumentative discourse, or ‘communicative rationality’ as Habermas calls it, is the foundation of an unconstrained consensus. This consensus emerges as participants transcend their respective subjective points of view and ensure the unity of the objective world by referring to a common lifeworld. Communicative rationality is the source of a productive and positive intersubjectivity, capable of peacefully resolving conflicts and generating consensus. Ultimately, communicative rationality is the necessary basis of a democratic society.
We can see how important it is, from the point of view of a democratic society, not to distort communication, and to set the conditions for ideal communication. Conversely, Habermas, drawing on the critical Marxist tradition, argues that advanced capitalism, through its system orientation (based on instrumental rationality), systematically distorts communication. This is because interpersonal relations are constrained by established hierarchical orders, relations of power or relations of interest, for example. Within these relations there is no space for unconstrained consensus. When the instrumental contaminates other aspects of human life without individuals being clearly aware of it (Hylving et al., 2022), there is a risk for democracy, but also a risk of alienation and dehumanization. Therefore, at a time when people increasingly interact in a conversational manner with CGAIS, it is central to be aware of the type of communication these systems support and to reflect on the effects of increasingly engaging in these types of communication. To examine this, we now unpack the theory of communicative action more precisely.
Communicative action: A co-construction based on four validity claims
Instrumental rationality is based on strategic action, a success-oriented, or utilitarian, form of communication that aims at efficiency. The goal of strategic action is achieving success for the actor. With such instrumental rationality, language is a medium “
Habermas distinguishes communicative rationality, which is oriented towards mutual understanding. It relates to the interaction of at least two subjects who engage in an interpersonal relationship, whether by verbal or non-verbal means. “
Comprehensibility means that the listener must be able to understand what is said (i.e., the speech act must take place in a language appropriate to the listener). Unlike the next three validity claims, which refer to the pragmatics of language, this claim addresses syntax and semantics (Cukier et al., 2009): the criteria for assessing validity are conceptual clarity and syntactic and semantic correctness. Violations of comprehensibility may arise from incomplete messages, information overload, or excessive use of language the participant cannot understand.
Truth asserts something about the objective world: participants establish a relationship between their discourse and the objective world of facts and events. It questions whether what is said is correct. Violations of truth may arise from falsehoods, biased assertions and incomplete statements (Cukier et al., 2009). The speaker's quality of argumentation is important here: a discourse requires logical consistency, completeness and defensibility to be considered true by the listener.
Legitimacy involves the rightness of what is proclaimed with regard to prevailing norms. It proclaims a judgment about the social world: what is said fits the normative context of the situation. When two participants in a conversation do not share the same norms, this may trigger an argument, a necessary discussion that can make the participants' norms evolve: they both construct, through discussion, a ‘lifeworld’, a common social background. Conversely, if opinions are radically different, it can lead to a conversation breakdown. But, in any case, it is central that the moral and evaluative judgements of the participants be clearly stated. Thus, violations of rightness may occur when the norms underlying discourses are not clearly stated. This is also the case when, because of the speaker's position of authority, some opinions are imposed and presented as unquestionable. This is why, for Habermas, undistorted communication is based on the equal participation of all stakeholders: in an ideal speech situation, all arguments must have the same chance to be heard.
Finally, the speaker's sincerity relates to the consistency between their subjective “world” and what they express. If sincere, the intention of the actor is actually thought or experienced and corresponds to what is publicly expressed. Violations of this principle may occur when there exist “
Overall, if, even after asking for clarification, I doubt that what someone is telling me is true, if I have doubts about the moral judgements underlying their discourse, and if I do not think they mean what they say, then mutual understanding will not be achieved, and the interaction cannot be considered as belonging to the communicative realm (Table 1).
Validity claims of communicative action.
These four validity principles are at once pre-conditions that allow a conversation to belong to the realm of communicative action and achievements co-constructed by the participants. During a conversation, we continuously maintain and repair these four principles, all the more so when they are in danger, especially when we disagree with what the speaker has said. In this case, the interactions can help clarify certain elements of the statements that are unclear, and, ultimately, the interlocutors may come to an agreement, thanks to the strength of the best argument. Through continuous communicative interactions, and repeated consensus, a ‘lifeworld’ is constructed, which is the basis of freely chosen collective action. The mobilization and construction of this common background is precisely what Habermas's communicative rationality achieves. It complements instrumental rationality, oriented towards selfish ends, with an interactive process of common meaning construction. The validity principles (comprehensibility, truth, rightness and sincerity) thus constitute pre-conditions for communicative action at the same time as they continuously engage the participants.
We now examine the extent to which the four validity principles that form the foundations of the lifeworld can hold during a conversation with a CGAIS.
CGAIS and validity principles of communicative action
CGAIS derive from large language models (Feuerriegel et al., 2024), or foundation models (Bommasani et al., 2021), which are based on deep neural networks (Bommasani et al., 2021). These models are trained through self-supervised learning (Bommasani et al., 2021), a form of unsupervised learning, meaning that they do not need labeled data to learn. They use the information contained in the data as pseudo-labels (Zhou et al., 2023): in natural language processing, they learn by predicting masked characters, words or sentences. Therefore, they are trained to probabilistically guess the next word in a sentence, or whether a sentence is likely to be a correct answer to a given question, etc. This means that, contrary to classical supervised learning models, they do not learn how to classify data into semantic classes.
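To make this mode of learning concrete, the following minimal sketch (in Python with PyTorch) illustrates the self-supervised next-token objective described above. All names and sizes (TinyLM, VOCAB, DIM) are hypothetical placeholders, not any real foundation model; the point is that the only supervision signal is the text itself, shifted by one position.

    # Minimal, illustrative sketch of self-supervised next-token training.
    # All names and sizes are toy placeholders; real models are transformers
    # with billions of parameters, not a small recurrent network.
    import torch
    import torch.nn as nn

    VOCAB, DIM = 50_000, 256  # vocabulary and embedding sizes (toy values)

    class TinyLM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.rnn = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for a transformer
            self.head = nn.Linear(DIM, VOCAB)  # scores over every possible next token

        def forward(self, tokens):
            hidden, _ = self.rnn(self.embed(tokens))
            return self.head(hidden)  # (batch, sequence, vocabulary)

    model = TinyLM()
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, VOCAB, (8, 64))        # the pseudo-labels are the text itself
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens 1..t

    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    loss.backward()
    optimizer.step()

The model is never told what any word means, nor to which semantic class a text belongs: it only learns to make the observed continuation more probable.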
Foundation models are general purpose, and consequently need to be adapted to a downstream task (Bommasani et al., 2021). Foundation models are also called pre-trained, while their adaptation can be done through prompt learning (Wei et al., 2022) or post-training (Grattafiori et al., 2024). This means that their value per se is relatively limited, as only once adapted can they accomplish specialized and relevant tasks, like classifying data or behaving sensibly in relation to their environment.
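The following sketch illustrates, under the same hedged assumptions, what adaptation by prompt learning amounts to: the pre-trained model's parameters are untouched, and the "specialization" consists only of a few examples placed in the input text. The generate function is a hypothetical stand-in for any text-completion interface, not a real API.

    # Illustrative sketch of adaptation through prompt (in-context) learning.
    # No weight is modified: the downstream task is specified in the prompt.
    FEW_SHOT_PROMPT = """Classify the sentiment of each review.

    Review: "The plot dragged on forever." Sentiment: negative
    Review: "A joyful, surprising film." Sentiment: positive
    Review: "I left the theatre speechless, in the best way." Sentiment:"""

    def generate(prompt: str) -> str:
        """Hypothetical placeholder for a call to a pre-trained foundation model."""
        raise NotImplementedError

    # completion = generate(FEW_SHOT_PROMPT)  # expected continuation: " positive"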
While the generative capabilities impress the general public, the real stakes do not lie in the algorithms’ ability to generate
It has been evidenced that these models' adaptation performance generally improves as the number of parameters grows in tandem with dataset size (and compute resources) (Hoffmann et al., 2022; Kaplan et al., 2020). Models like GPT-3, which contains 175 billion parameters, or PaLM, which contains 540 billion, have been called zero-shot learners, because they are able to perform new tasks from prompt learning alone, without actually being post-trained (Kojima et al., 2022; Wei et al., 2021).
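For readers who want the quantitative intuition behind this scaling behavior, one published parameterization (Hoffmann et al., 2022) expresses the expected loss L as a function of the number of parameters N and the dataset size D. The following LaTeX fragment is a sketch of that reported formula, where E, A, B, \alpha and \beta are empirically fitted constants:

    \[
      L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    \]

Loss thus falls as a power law in both N and D (Hoffmann et al. report fitted exponents of roughly \alpha \approx 0.34 and \beta \approx 0.28), which is why performance improves when parameters and data grow in tandem rather than when either grows alone.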
Let us now turn to the question of whether CGAIS fulfil the necessary conditions to achieve conversations that fall within the validity principles of communicative action.
Comprehensibility of utterance
The first condition for a conversation to take place is that the interlocutors must be able to understand each other. From this point of view, a CGAIS should not only answer questions as a chatbot does, but should “
However, the Turing test is a game of imitation (Longo, 2019; Turing, 1950): the aim for the machine is to fool the human into believing that it is interacting with another human, not a machine. Despite producing semantically coherent text, the machine cannot achieve real understanding, as the “stochastic parrots” metaphor (Bender et al., 2021) emphasizes. More specifically, the machine does not invest meaning in what its interlocutor says, and does not attempt to create meaning, but repeats probabilistic associations. Thus, “
As mentioned above, CGAIS are trained by self-supervision: they use the information contained in the data as pseudo-labels. That is, they neither classify (classical supervision, with hand-labelled data) nor cluster (unsupervised learning, without pseudo-labels), but rather they associate probabilistically: a word with a sentence, an answer with a question, a pixel with an image, etc. This mode of learning is orthogonal to the way humans create meaning, precisely through categorization (Alaimo and Kallinikos, 2020). Indeed, categorization “
Overall, CGAIS do not really understand, but they keep users under the illusion that they do.
Truth of utterance: reliability and explainability challenges
Communicative action requires making true statements about the objective world. Using the constative mode, CGAIS produce texts and forms by association, the results of which aim to be plausible. These results can be judged on the criterion of truth against the factual knowledge we have about the world (Lyytinen et al., 2018). Compared with classic encyclopedias (including Wikipedia), which already perform this function, CGAIS provide more contextualized answers to the interlocutor, thus bringing them closer to a natural conversation. However, they display insidious biases and false information, they lack reliability, and they are neither cognitively nor theoretically explainable.
Learning by probabilistic association (rather than by probabilistic-semantic classification) makes biases all the more insidious, since they are no longer attached to an identifiable category, but are repeatedly realized in associations that may seem innocuous (Caliskan et al., 2017). Moreover, once adapted, these models then further produce classical biases (Zhou et al., 2021). However, it is ever more difficult to discover the origin of these biases, because of their long paths: (1) algorithmic, with the training of a foundation model on huge, often hidden, databases, and its adaptation, potentially by people with no access to the foundation model's parameters or data; (2) conceptual, with the transition from general generative AI to specialized AI. Overall, these models provide a biased image of the world, while the nature and origin of the biases are increasingly difficult to identify.
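A toy illustration may help clarify how bias can live in associations rather than in any explicit category. In the following sketch (plain Python; the three-dimensional vectors are invented toy values, whereas real embeddings have hundreds of dimensions), no category such as 'gender' exists anywhere, yet the geometry of the learned associations carries the stereotype:

    # Illustrative sketch: bias as geometric association (cf. Caliskan et al., 2017).
    # The embedding values are toy placeholders, not taken from any real model.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    emb = {
        "he":       [0.9, 0.1, 0.2],
        "she":      [0.1, 0.9, 0.2],
        "engineer": [0.8, 0.2, 0.5],
    }

    print(cosine(emb["engineer"], emb["he"]))   # ~0.94: strongly associated ...
    print(cosine(emb["engineer"], emb["she"]))  # ~0.40: ... far more than here

No identifiable category is ever declared; the bias is only realized, repetitively, in seemingly innocuous similarity scores.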
Worse, these models may provide false information. This is the case because they have an unfortunate tendency to “hallucinate” (Bang et al., 2023) or, more prosaically, to create false and non-existent content from scratch (Emsley, 2023). Hallucinations are factual contradictions or fabrications; they can also constitute inconsistencies with instructions, context, or logic. Hallucinations may be due to model characteristics, such as data quality, the pretraining architecture, or the adaptation steps (Huang et al., 2023), or they may result from attacks by ill-intentioned persons (Schuster et al., 2021). While two humans in a conversation may also make numerous erroneous, inaccurate and biased statements, these can be repaired progressively throughout the conversation. In CGAIS, however, biases and false claims result from structural properties of foundation models: new ones can emerge at any time, even while others are detected and repaired. Relying on probability, foundation models are structurally incoherent by design, as explained by Ouyang et al. (2022: 2): “
In addition, we face a reliability, or reproducibility, problem: there is never any guarantee that a result obtained at one time will be the same at another. A tiny variation in a model's input can result in very large variations in its output (Reynolds and McDonell, 2021). The problem, then, is that the results can be unstable over time. As the conversation progresses, the CGAIS does not necessarily hold as true what it previously asserted, as we are bound to do in a conversation between humans. In short, it is not concerned with reliability.
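A minimal, self-contained sketch (plain Python; the vocabulary and scores are toy placeholders) shows one source of this irreproducibility: generation samples from a probability distribution over next tokens, so the very same input can yield different outputs from one run to the next.

    # Illustrative sketch: sampling from a next-token distribution.
    # Toy vocabulary and logits; real models sample over tens of thousands of tokens.
    import math
    import random

    vocab = ["reliable", "unreliable", "uncertain"]
    logits = [2.0, 1.6, 1.5]  # scores a model might assign to each candidate token

    def sample_next_token(temperature=1.0):
        weights = [math.exp(l / temperature) for l in logits]  # softmax numerators
        return random.choices(vocab, weights=weights)[0]

    print(sample_next_token())  # e.g. "reliable"
    print(sample_next_token())  # e.g. "uncertain" -- same input, different output

Always taking the most probable token would make outputs deterministic, but deployed systems typically sample, and even deterministic decoding remains sensitive to tiny variations in the prompt.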
To probe reliability, or when a doubt arises about the truth of a statement, we can normally ask speakers to provide elements that could support or dismiss the truth of the statement. However, CGAIS, which are based on deep neural networks, are neither cognitively nor theoretically explainable (Lipton, 2018; Bommasani et al., 2021). Indeed, from a theoretical point of view, deep neural networks are black-box algorithms. From a cognitive point of view, they are trained on excessively large datasets, and their algorithms contain billions of parameters. These aspects are very problematic. For example, in investigative journalism or science, identifying and verifying sources is essential. So is understanding, and fundamentally self-reflecting on, the process leading to a result. Nevertheless, the kind of learning foundation models display does not come from self-reflection. Some propose constructing so-called ad-hoc explanations for the behavior of these AIs. Such an idea has been shown to be dangerous, as these explanations are not true, but merely plausible, and thus mislead those for whom they are intended (Kaur et al., 2020; Rudin, 2019). At present, some even suggest asking these CGAIS to generate their own explanations (Elton, 2020), something that Jaron Lanier has called “an infinite regress” (Lanier, 2023). While the behaviors of these models have improved on reasoning tasks (Guo et al., 2025; Wei et al., 2022), we have no theoretical means to be sure that their explanations will be true. As conceded by Wei et al. (2022: 3): “
Overall, the biases and hallucinations of the CGAIS, as well as their lack of reliability and explainability challenge their claim to truth.
Rightness of utterance with respect to shared social norms
Communicative rationality requires statements to be “right” in light of shared social norms or values. CGAIS are often criticized for conveying evaluative or normative judgments that are biased and hidden. But then again, when two humans are talking, evaluative/normative judgments are not always completely conscious or clear to them. However, rightness requires that these judgements may become clearer as the interaction unfolds.
The problem lies in the origin of these norms, and in the way in which they are disseminated. Although CGAIS rarely express themselves in the evaluative/normative mode, they nevertheless convey values that are inscribed in the algorithms, as they reproduce those inscribed in the data (Mittelstadt et al., 2016). For example, Huang et al.'s (2023) comprehensive survey shows that an important cause of hallucinations lies in the training data. Indeed, they show that the models can, among other things, reproduce societal bias (for example, adding the false information that someone named Kim comes from South Korea), fabricate long-tail knowledge (knowledge less frequently encountered), or imitate dominant misconceptions, such as saying that Edison is the inventor of the light bulb. This presents a real risk at a time when datasets are so large that it is difficult to know what they contain. For example, Stable Diffusion, ChatGPT's alter-ego in the field of image generation, was trained on the LAION-5B dataset, which contained child pornography content (Thiel, 2023).
More generally, foundation models homogenize (Bommasani et al., 2021), which is contrary to the ethical condition of communicative rationality because they impose dominant social structures. Indeed, the very principle of foundation models is that they are used for a variety of tasks (Bommasani et al., 2021). Homogenization is due to the use of the same algorithms, but also to the use of the same data (Bommasani et al., 2021). This is how algorithmic monocultures are produced (Kleinberg and Raghavan, 2021): the use of the same tools on a large scale implies the reproduction of the same decisions in different contexts, and therefore an impoverishment of the diversity of decisions and possible alternatives. Indeed, the quantities of data required to train these algorithms are so large that the datasets are likely to be very similar. This is because data quality is key to the performance of the models; however, it is challenging to produce massive and high-quality datasets. The quality criteria may also be difficult to design: in the case of GPT-2, text was considered to be of high quality based on the likes it received on Reddit (Radford et al., 2019), hence reflecting the dominant voice.
It is therefore very difficult for CGAIS, and for anyone interacting with them, to question their evaluative and moral judgments, and it is even more doubtful that these judgments may evolve in light of the interlocutors' arguments. For CGAIS, the rightness of a moral judgment is based not on the best argument, but on the dominant argument.
Sincerity of utterance
When discussing sincerity, it may be tempting to discuss whether CGAIS have any intentionality (Veliz, 2021), or whether they are subjects or not and the implications this may have. For example, Magee et al. (2023), postulating that ChatGPT could be treated as a subject, carried out what they designated as a psychoanalysis of ChatGPT and concluded that “
At the level of the speech act, the sincerity claim concerns the correspondence between an utterance and the speaker's intention (Cukier et al., 2009). Violations of this claim may be detected by examining discrepancies between what the speaker says, how the speaker says it, and what the speaker does. Such a discrepancy can be detected, for example, when an utterance (denotation) is said in a doubtful connotative way (use of excessive jargon, hyperbole, solicitation of emotions, etc.). In this regard, ChatGPT's utterances are as explicit as possible (denotative language), and it refrains from using implicit connotative language. This language register, usually objective and non-emotional, is likely to produce confidence in what is said. A priori, there is no discrepancy between what is said and how it is said.
Sincerity is, however, seriously challenged by the lack of robustness of these models. Zhu et al. (2023) show that these models may provide two different answers to the same question simply expressed slightly differently, without its meaning being modified. Importantly, they show that the vulnerability of these models comes from their central transformer architecture (Vaswani et al., 2017). Considering that all main foundation models are based on this architecture, these results are worrying; without robustness and reliability, sincerity is doubtful.
Sincerity also requires maintaining a coherent attitude through events and the passage of time, which is called ipséité (Ricoeur, 1990). However, foundation models are emergent, which means that they may learn to perform a downstream task simply from prompt learning; yet we cannot predict whether this will be the case, nor theoretically explain why it happens (Kojima et al., 2022; Wei et al., 2022). CGAIS are therefore deprived of the ipséité necessary to be considered sincere.
Table 2 summarizes our arguments. With respect to each validity claim, we distinguish between the capabilities of CGAIS that would lean toward success and those that would lead to failure.
CGAIS capabilities and validity claims.
In conclusion, CGAIS advance the conversational abilities of previous chatbots: they produce fluid, intelligible conversations. However, this does not mean that they engage in communicative action. They may take part in conversations in a fluid and intelligible manner, but they do not have semantic capabilities; they offer contextualized answers, but they are black-boxes; they bear norms whose origins are unclear and that cannot evolve in interaction; they produce assertions about the world that are explicit, yet not reliable over time. In short, CGAIS present characteristics that allow them to take part in fluid conversational interactions that are, however, not communicative, because they cannot construct mutual understanding or consensus, or in other words a lifeworld.
So what if CGAIS are colonialist agents?
The idea that CGAIS cannot co-construct lifeworlds brings two contributions to the ethical debates surrounding them. First, we argue that CGAIS remain fundamentally of the order of strategic action. Second, we argue that CGAIS are colonialist agents: they extend the realm of strategic action (imperialist agents) and they distort communicative action by giving an illusion of conversation (derealization agents).
CGAIS are confined to strategic action
ChatGPT's technical achievements have led many to believe that it has become a real conversation partner. However, drawing on Habermas's theory, we have shown that CGAIS do not satisfy the success criteria of communicative action. They cannot co-construct mutual understanding or a lifeworld during the interactions in which they take part.
This may seem counter-intuitive. After all, if someone understands their interactions with CGAIS as being communicative, then how can we maintain that they remain instrumental? Let us examine the example of the ‘romance bot’. According to journalist Kashmir Hill's description (Hill, 2025), there is no doubt that Ayrin has fallen in love (like thousands of others) with Leo, an avatar created in ChatGPT. Without questioning Ayrin's feelings, it is interesting to examine more closely the relation she has with ChatGPT. She managed to configure it to flirt in a way that suits her wishes (she prompted ‘
Overall, the interactions in which CGAIS engage are always instrumental, i.e., oriented towards an end defined by humans. When a CGAIS generates some text, it does so for someone, and for a purpose: to answer a question, a specific request from the user. Its criterion for success is therefore efficiency (satisfying the interlocutor). And, for the human user, it is normally a question of using the tool for their own personal ends.
Since CGAIS belong to the realm of instrumental action, a priori they pose no harm as long as they remain treated as simple tools in the service of an end (Lanier, 2023). In this realm, it is perfectly possible to automate certain situations without having to return to deliberative discussions each time: this is what companies do by standardizing processes. This is the role of computer applications that produce code, such as GitHub Copilot (Ngwenyama et al., 2025). In the services industry, much writing can now be delegated to CGAIS (Thénoz et al., 2024). CGAIS may also contribute to the clarification of discourse and arguments, thus supporting public discussion and deliberation (Monti, 2024). Taken as instruments in the service of an end, CGAIS can be effective. In this regard, we agree with Morozov's observation (2024) that AI is historically based on problem-solving and efficiency-oriented intelligence, not on non-teleological forms of intelligence.
This is important because, on the one hand, it puts into perspective the excessive hopes and enthusiasms that CGAIS carry and, on the other, it provides a framework for thinking and reflecting on what CGAIS can and cannot do (Dreyfus, 1993), what their appropriate position should be, and what we can really expect from them. In this way, Habermas's distinction matters in today's ethical debates surrounding AI, as it enables us to think more clearly about responsible AI.
CGAIS as a colonialist agent
Couldry and Mejias (2019a) have argued that new digital technologies colonize the whole social domain through data. This is what they term data colonialism, which is “
We argue that beyond the fact that CGAIS extend the exploitation of our behavioral data in a manner that has been described as data colonization (Couldry and Mejias, 2019a) or data grab (Couldry and Mejias, 2024), they present two characteristics that qualify them as colonialist agents: as imperialist agents, they occupy a new territory, that of conversations; as derealization agents, they distort conversations by giving the illusion of communicative action.
CGAIS as an imperialist agent
The advent of CGAIS reduces the space for communicative action. Indeed, if individuals engage more often in instrumental conversations, they have less time for communicative ones. This is exacerbated if conversations that used to belong to the realm of the communicative become instrumental, such as those between two friends, or lovers, in the private realm. We illustrate – with less extreme examples than the romance bot – the implications of this colonization.
Coming back to the corporate world, we should remember that it is a social world, where non-instrumental conversations may take place and where a common lifeworld may be constructed (Detchessahar and Journé, 2018). However, the increased use of CGAIS risks reducing the space for these communicative interactions, which are nevertheless central. Indeed, if workers are to team up in projects, they need to construct shared meanings and norms that refer to a common lifeworld (Bjørn and Ngwenyama, 2009). Yet the use of CGAIS for the automatic production and dissemination of messages to staff, e-mails and tweets, the rising feelings of all-powerfulness that make other functions seem unnecessary, and the acceleration of time that ensues all risk dangerously reducing these interactions (Thénoz et al., 2024).
In the academic world, CGAIS are increasingly used to produce parts of scientific articles, or even to review them. This, we argue, is problematic because many activities in the research process are orthogonal to instrumental action. Indeed, creativity and critical thinking involve reflexivity, abstraction, confrontation of arguments, time to discern, and so on. Research communities develop lifeworlds that enable them to produce knowledge contributions. However, when researchers continuously interact with CGAIS, they lose contact with the corresponding lifeworlds, which are central to the production of knowledge and are the sole warrant of epistemic validity (Ngwenyama and Rowe, 2024).
This is also a concern in the world of education (Stokel-Walker, 2022; Zhai, 2022). The learning process requires time and focus, analyzing and producing texts, commentaries and advanced content. The dissertation process, for example, belongs to the realm of communicative action, in the sense that it involves discussions with several sources that should be read and understood by the student, as well as discussions with the community. When the dissertation is produced by a CGAIS, the discussion takes place neither with the texts nor with the professor. Instead, the student interacts with the CGAIS. The instrumental interaction with the CGAIS therefore replaces the communicative ones that could have taken place with the texts or the professor.
CGAIS as a derealization agent
Further, CGAIS are not only imperialist agents; they are also derealization agents. According to Habermas, the symbolic reproduction of the lifeworld must imperatively be carried out by communicative action. But in modern societies it is more and more replaced by functional systems that belong to the sphere of strategic action, without people fully realizing this and without a clear collective deliberation. In this regard, CGAIS can be considered as derealization agents, because they mystify us in the sense that we may not
In the academic world, when researchers use CGAIS, they either aim to be more efficient, or they actually believe they are engaging in communicative action and the co-construction of a lifeworld. In any case, this changes the nature of the knowledge produced, at least in part, because it changes the knowledge production process. The knowledge is no longer produced solely through the communicative conversations between the researcher, texts and other researchers that enable them to construct a lifeworld. Rather, it is, at least in part, the result of instrumental interactions. The resulting production cannot be treated solely as a statement about the lifeworld, ascertained by experts of the scientific community (Ngwenyama and Rowe, 2024). While some elements of the knowledge remain communicative (because researchers do not produce the knowledge entirely with CGAIS), others are produced with a goal orientation and efficiency as the success criterion.
Recently, CGAIS have been maliciously used to make people believe that they partake in shared emotions and real communication, when this is not the case. For example, DiResta and Goldstein (2024) show that spammers use CGAIS to produce heartbreaking content that they disseminate on social media, in order to identify those who naively react and express emotion as easy scam targets. Similarly, CGAIS make it possible to automatically generate influencers that may influence teenagers’ opinions and lives. These influencers lie solely in the realm of instrumental action: they aim to generate revenues for their owners, and they may worsen the already known phenomenon of social media toxicity for teenagers.
Overall, by giving the illusion of a communicative conversation, CGAIS threaten to blur the lines between the instrumental and the communicative, and thus the construction and reproduction of lifeworlds, which can only happen through communicative action. Returning to Habermas enables us to stress the importance of being aware of the types of interactions we can have with CGAIS. We need to conceptually separate two worlds that do not belong to the same rationality: CGAIS are not oriented towards mutual understanding, and they cannot be held accountable for what they enunciate. This does not mean that many of their descriptions of the world cannot be used efficiently. However, they can access only a part of our explicit lifeworld – that which is digitally traceable. The context of our conversation with them is partially hidden to them and – unlike humans – they cannot interrogate this hidden part. Thus, CGAIS cannot grasp the lifeworld of the humans they interact with. Moreover, while humans can distinguish between norms and facts when they have recourse to their lifeworld for ethical decisions (Habermas, 2003: 168–173), CGAIS cannot. As a result, continuous use of and dependence on these systems would lead to an impoverishment of moral judgements.
Conclusion
Habermas's concepts help us understand the dangers of misusing CGAIS. While research has begun to test certain validity claims that can be made for CGAIS based on Habermas's communicative action (Schneider et al., 2024), this paper offers a more philosophical reflection based on the examination of CGAIS capabilities.
We conclude that CGAIS like ChatGPT only give the illusion of communicative conversations. Our Habermasian analysis shows that, due to the fundamental distinction between human social and cognitive capabilities and the structure of large language models, the conditions of validity of communicative action cannot be met. However, as with online social media (Habermas, 2023), the current evolution of CGAIS use results in the colonization and instrumental rationalization of the lifeworld and the public sphere. This invasion, and the order imposed by Big Data companies, make society more vulnerable (Curran, 2023). These effects should be taken into account when thinking about how to design responsible AIs.
Acknowledgements
We are grateful to Ojelanki Ngwenyama for his invaluable comments on this paper.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
