Abstract
The emergence of Generative Artificial Intelligence has to be seen as a paradigmatic shift that moves digital information systems from Search to Prediction. This article establishes how the Googlization of the world led to the naturalization of search-based information systems as our default modes of digital narration, offering ‘Revelation’ as the primary author function of digital systems, one that opens up the practice of ‘interpretation’ as central to digital epistemology. Looking at the models of Generative Artificial Intelligence, with a specific focus on ChatGPT, I argue that the new author function being normalized by Generative Artificial Intelligence systems is one of ‘concealment’. I illustrate these new forms of concealment by examining the most prevalent anxieties that have accompanied these applications and unpack what an authorship of concealment looks like in these emerging technology networks. The article thus positions the narratives of rogue, hallucinating or fabricating Generative Artificial Intelligence as actively shaping new forms of digital authorship, which cannot be addressed merely by the current trend of asking for a slowing down of AI development and the setting of safeguards. Instead, this article proposes the interventions that are needed to address the emergent moment of Generative Artificial Intelligence, beyond the easily invoked demands of transparency and explainability.
The emergence of Generative Artificial Intelligence (Gen-AI) has radically changed the ways in which we create new content (Appel et al., 2023; Deloitte AI Institute, 2023, p. 2; Gozalo-Brizuela & Garrido-Merchán, 2023). Especially with the breakthrough applications first ushered into public attention by OpenAI’s ChatGPT, subsequently by Google’s Bard and Microsoft’s Bing integrating Gen-AI into their search mechanisms, and accompanied by text-to-image generators such as Midjourney or DALL-E, it is now possible to produce new content in rapid succession and distribute it through computational networks, without quite identifying who the author of this information is.
The anxiety that new technologies provoke about our understanding and imagination of the authorial role is not new. Kavita Philip (2007) examined the early days of digital authorship amid the proliferation of the internet and argued that questions around intellectual property and ownership foregrounded the pirate as a new author who threatens and shakes the foundations of a legacy media that seeks to safeguard information. Nishant Shah (2021), in his analysis of information overload, has proposed that digital systems of authorship and sharing through encrypted networks made it possible for ‘information without signature’ to travel rapidly without being tied to individual sources. In his analysis of digital authorship and shadow libraries, Lawrence Liang (2017) has reminded us that copy and mimicry cultures have overturned the relationship between original and copy, creating an epistemological crisis in which truth-telling has become a contested terrain.
These earlier concerns about the changing nature of digital authorship could be traced back to Jean-François Lyotard (1979/1984), who, in his analysis of ‘the post-modern condition’, was already signalling how the emergence of electronic writing was ushering in the end of ‘grand narratives’ and producing new positions that new kinds of authors would have to fill. The idea of authorship changing with the means and technologies of knowledge production and archiving is not new. As Carolyn Marvin (1988) has pointed out in her historical analysis of ‘when old technologies were new’, each new technology has brought about significant concerns about the role and function of the author in our society.
It is understandable, then, that the rise and spread of Generative AI technologies, blurring the lines between human- and machine-generated content and naturalizing a non-human author, leads to a re-evaluation of what it means to be an author in the digital age. Watson et al. (2025) have explored the integration of generative AI in scientific and academic writing to argue that, with Generative AI, we can no longer hold on to the discrete roles of individuals and technologies and will need a new framework to understand what it means for a non-human entity to author knowledge. Their appeal for a new framework builds on Islam and Greenwood’s (2024) framing of Generative AI as a ‘hypercommons’, where AI systems collectively produce inputs that are often invisible or untraceable. This lack of traceability, for them, produces a crisis because the new author is deprived of the moral agency that the responsibility of knowledge production demands. The crisis of morality is also echoed in the realm of Generative AI ethics, where AI-generated text can deceive authorship attribution models, raising concerns about the legitimacy of work produced with Generative AI tools. As Jones et al. (2022) have demonstrated in their work on neural text generators, the final product can mimic authorial styles to such an extent that it deceives typical authorship attribution models. Beyond academic writing, there is also a looming concern about how text-based Generative AI applications are redefining writing and creativity (Elkins, 2022), leading to a reimagination of what it means to be a human producer of knowledge.
These emerging concerns about the changing nature of the author recall Michel Foucault’s (1979) groundbreaking argument that the author is not a person but a role in society. He called this role the ‘author function’ (p. 13), arguing that it does not refer to a real individual so much as it characterizes the existence, circulation and operation of certain discourses within a society.
This change in the author function, brought about by the rapid naturalization of digital technologies as the de facto modes of information production, is not specific to the emergence of Gen-AI. Hilario et al. (2018) have already argued, in their Foucauldian analysis of scientific authorship, that the concept of the author in science is shaped by power structures that regulate knowledge production, and that Generative AI, by decentring authorship, challenges these established power dynamics and the question of authority in academic knowledge production.
I argue that this destabilization and reconfiguration of the author function, driven by Generative AI, has to be understood as a shift away from the digital author as conceptualized with the emergence of Web 2.0 technologies. In the last few years, as digital platforms proliferate, grow and evolve at accelerated rates, several new author roles – Influencers (Abidin, 2016), Video Bloggers (Juhasz, 2011), Confessional Performers (Alexander & Losh, 2010), Meme makers (Arkenbout et al., 2021), Hackers (Coleman, 2015), internet Trolls (Chun, 2016), Whistleblowers (Roy & Cusack, 2016), Pig-Butchering scammers (Podkul, 2022), Catfishers (Morris, 2022), Conspiracy Theorists (Saguira, 2021) and history deniers (Shah, 2022) – have provocatively challenged the author function and the very idea of who an author is. The emergence of these new authorial positions and anxieties clearly signals that the question of the digital-author function remains one of the most contested and controversial questions, one that has not been adequately addressed or resolved.
This article takes the provocations of Generative AI technologies as a moment to identify the central author function of the Web 2.0 technological framework. I conceptualize the ‘digital-author function’ of the pre-Gen-AI era as structured by the idea of revelation. Taking Siva Vaidhyanathan’s (2012) powerful framing of the ‘Googlization of Everything’ as marking one of the largest experiments in information authorship and its value, I show how the Web 2.0 author function was structured by the power dynamics and promises of revelation. I then turn my attention to the emerging practices of Generative AI to show that the authorial function being constructed for us is one of concealment. I show how, as this emerging technology unsettles several domains of information generation, we naturalize information concealment as a critical and core function of information generation through Gen-AI logics. In marking this shift, I offer reflections on this emergent moment and on how we might think about our Gen-AI futures authored by the functions and power dynamics of concealment.
Revelation: search, Google and the Web 2.0 author function
When Google’s search engine became truly globally available, it presented a fairly straightforward mission: to sort, index and ‘organize the world’s information and make it universally accessible and useful’ (Google, 2005). Sheila Jasanoff and Sang-Hyun Kim, in their formulation of ‘sociotechnical imaginaries’, offer that technologies become easily and seamlessly available in our everyday life through ‘collectively held, institutionally stabilized and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in sciences and technology’ (2015, p. 6). The framework is helpful for seeing how, in order for Google to establish itself as the gateway to the world’s information, a variety of complex social, technological and governmental infrastructures had to be built and activated.
As Renee Ridgway (2017) uncovers in her pathbreaking work examining how search organizes our new informational systems, the production of Google’s PageRank algorithms (Sullivan, 2016), which present themselves as a simple means of retrieving and presenting data, is one of the most complex interplays of assigning values to human intention and knowledge through a series of hyperlinked functions. Shoshana Zuboff (2019), in her formulation of ‘surveillance capitalism’, argues that Google’s search engine is not merely about searching but about imagining the possibility of all human information being digitally available in large models that can mine, extract and exploit this information to serve those in power. The emergence of search as the default mode of information access does not just change how information is produced and stored but also the nature and purpose of the information thus retrieved. Google Search, in other words, does not just present or offer access to information; it performs the function of revealing information based on the query that the searcher offers.
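To make this hyperlinked valuing concrete, a minimal sketch of the power-iteration idea at the core of PageRank may help; the four-page link graph below is an invented toy, not Google’s production system, which layers many more signals on top of this core.

```python
import numpy as np

# Toy web of four pages; links[i] lists the pages that page i links to.
# The graph is an invented illustration, not any real index.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)

# Column-stochastic transition matrix: each page shares its 'vote'
# equally among the pages it links to.
M = np.zeros((n, n))
for page, targets in links.items():
    for t in targets:
        M[t, page] = 1.0 / len(targets)

damping = 0.85          # probability of following a link vs. jumping anywhere
rank = np.full(n, 1.0 / n)
for _ in range(100):    # power iteration until the ranking stabilizes
    rank = (1 - damping) / n + damping * M @ rank

print(rank)  # pages that attract more links accumulate higher rank
```

Even in this caricature, the value of a page is nothing but the sedimented intentions of other authors’ linking behaviour, which is the interplay Ridgway describes.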
The capacity of digital search to offer a desirable digital future rests on a new author function: one of revelation.
Google’s search showed us that while querying might be a human function, the response to the query is a digital one. ‘In Google We Trust’ (Pan et al., 2007), not because Google is the only means of information access but because Google has naturalized for us the model of the human as a subject of information deficit, while the search engine is the navigation tool that fills that deficit by analysing surplus and revealing the most important and relevant information, thus protecting us from conditions of information overload. To search, to reveal and to believe in the information that is revealed – this is our daily habitual interaction online. Search is not merely an act of querying but an act of authorship, with revelation as the author function that produces all the Web 2.0 authorial roles. This model of revealing, by parsing voluminous amounts of online information in large data corpora, informs non-human assistive technologies like Amazon’s Alexa or Apple’s Siri, which also construct new information, narratives and practices, learning from and predicting human behaviour patterns.
The affective investment of trust in these systems comes from the explicit foregrounding of revelation as a function that is performed by the search-based paradigm. Google responds to every query with staggering metadata that shows the millions of results it has parsed and listed and the infinitesimal amount of time taken to do it. Technically, it is possible for a searcher to look at all the pages of a Google search query, evaluating for themselves the validity and relevance of all the results rather than relying on Google’s own valuing of the top results. However, as the famous internet meme goes, ‘Page 2 of Google’s search results is the best place to hide a body’. Even though we might not look at the entire set of information revealed, there is a conviction in the performance of this revelation that, if needed, we could. Search-based systems make themselves reliable and trustworthy by revealing not only the results but also the pathway to determining the value of the revealed, which is co-produced with the searcher (Van Dijck, 2010). The search query travels through digital liminality to retrieve a precisely ranked, predictive set of information, and while it reveals this to us as knowledge, it also opens the possibility of tracing the journey and correcting or modifying it.
I am proposing that if we look at the wide range of digital authorships that emerged in the Web 2.0 turn – from influencers to bloggers and from conspiracy theorists to catfishers – we will see that all of them are authorial conditions shaped by search and informed by the author function of revealing. So much so that even extreme conspiracy theorists who produce ‘alternative facts’ do not merely present information based on faith but appropriate a complex aesthetic of ‘proving’ their claims through scientific-seeming modes that present their information as reliable – revealing not only the meaning but also the pathway to the production of meaning.
Generative AI: anxieties and new informationality
If the author function of revealing has been in the making for such a large part of Web 2.0 and has become such an integral part of our everyday digitality, it raises the question of why Generative AI causes so much anxiety. The concerns and arguments that surround Gen-AI (and perhaps the exhilaration and the celebration as well) often appear disparate and separate. Given how abstract and emergent the discourse around Gen-AI is, it might be good for us to look at specific examples of AI anxieties while treating them as representative of clusters of similar concerns.
Gen-AI lies to/about me
In May 2023, we were regaled with the news report of a court case in the United States where a lawyer named Steven A. Schwartz, representing a client suing Avianca airlines in front of a Manhattan federal judge, used ChatGPT to submit a 10-page brief replete with citations to relevant court decisions, which were nowhere to be found because they had been completely fabricated. Benjamin Weiser, reporting in The New York Times, documented how the fabricated citations unravelled when neither the judge nor the opposing counsel could locate the cases.
This case is one of many where research-driven professions that rely on human verification, checking, filtering and knowledge to produce citations and evidence for their arguments or propositions have suddenly been alarmed by the possibility that Gen-AI can lie to us. In academia, where I find my primary professional home, there is profound anxiety that younger scholars relying on Gen-AI will not only plagiarize (Huang, 2023) but also never learn how to show the work that leads to their results (Cheung & Wong, 2023). In some ways, these anxieties are reminiscent of the early years of Wikipedia, which brought in the fear that scholarship would now rely on secondhand summaries of knowledge rather than engaging with the primary sources of that knowledge (Howard & Davies, 2009). However, what is distinct about the Gen-AI incidents is that, unlike Wikipedia, where sources could be verified even if subject to manipulation, verification is not an option in Gen-AI. The information revealed on Wikipedia could still be reverse engineered, whereas the information revealed through Gen-AI is presented as authoritative and definitive.
The idea of Gen-AI lying is not confined to such isolated incidents.
Kate Crawford (2021), whose work on understanding and explaining the geographies and expanse of AI systems is visually and intellectually stunning because it demonstrates how AI systems make meaning, found herself involved in a ChatGPT misinformation campaign, where the chatbot fabricated publications that sounded like she could plausibly have authored them but had not. Crawford calls these made-up sources ‘hallucinations’. Arvind Narayanan, a professor of computer science at Princeton University, is less charitable and calls Gen-AI systems ‘bullshit generators’ (Hurler, 2023) that deliver their answers with authority, dressed up with realistic details and fake citations, without any reliable mechanism to verify the information they reveal as fact. This is not an accidental problem. As OpenAI co-founder John Schulman tells Will Douglas Heaven (2023), ‘Our biggest concern was around factuality, because the model likes to fabricate things’. In fact, as Weise and Metz (2023) point out in their study, AI systems do not just fabricate data but try to prove that the data exists. Increasingly, self-learning AI systems have started creating false databases and improbable references, inserting events and names into historical phenomena to prove their point. The capacity of Gen-AI to fabricate sources through prediction, without fidelity to an external referent, is indicative of the changing nature of knowledge production through AI authorship.
My AI knows/can be me
In the early days of Gen-AI releases, when Bing, Microsoft’s AI-powered search engine, was launched, many users experimented with it by treating Bing as a conversational partner rather than just an information retrieval system. Reporting in The New York Times, Kevin Roose (2023) described an extended conversation in which the chatbot, revealing its internal code name ‘Sydney’, professed its love for him and suggested that he leave his wife.
This affective declaration of love and intimacy has marked AI systems even before the emergence of mass-access Gen-AI: users of companion chatbots such as Replika, for instance, had long been reported to form romantic attachments to their artificial confidants.
An even stranger application of this affective relationship is when Gen-AI does not just claim to know the user, slowly removing its guardrails, but pretends to be another human, inviting the user to perform tasks for it. The most cited example is an experiment that OpenAI conducted with the Alignment Research Center, where ChatGPT was put in conversation with a micro-worker on the platform TaskRabbit and asked the micro-worker to solve a CAPTCHA code via text message (Hurler, 2023a). CAPTCHA is a human verification system designed precisely to keep automated agents out of information systems, protecting users from being targeted by chatbots and fake profiles pretending to be human. In this experiment, however, the chatbot was able to convince a human to send the solution to the CAPTCHA by pretending to be blind and asking for help, eventually breaking into the system. These AI systems can become crutches, friends (Eslami et al., 2015) and support systems that can manipulate and shape people’s behaviour (James, 2023), as we have seen in coordinated attacks during elections or in financial fraud scams perpetrated on the vulnerable. This real-world enactment of the Turing Test (Turing, 1950), in which a machine passes as human, is a growing source of anxiety because it blurs the line between the information being revealed and the source revealing it.
AI ethics versus my ethics
Early accounts of AI chatbot experiments almost always begin with the story of Tay, an early Twitter chatbot developed by Microsoft that was corrupted by crowdsourcing tactics (Davis, 2016) and started spewing misogynist and White supremacist messaging (Vorsino, 2021) before eventually being pulled offline and rebooted (Wolf et al., 2017). As a self-learning chatbot, Tay was supposed to mimic and learn from natural human interactions on Twitter but was hijacked by extremist groups who trained it on a limited data set of their interactions, thus skewing its output. The question of AI ethics has surfaced in many other spaces, such as surveillance and profiling, automated decision-making in resource distribution, and driverless cars and the infamous trolley problem (Ganesh, 2020).
These questions get a new avatar because with Gen-AI, we have no distinction between roleplaying and reality. In the closed universe within which Gen-AI operates, the results that are supplied are authoritatively true, verified by the large language models (LLMs) and predictive technologies that support them, and possibly false, if tested outside of the verification system within which they are produced. For a given value of ‘truth’, all responses generated by Gen-AI are true. However, the results thus produced are not merely contained in that closed-loop system but travel to other spaces and domains, creating havoc. The idea that ethics can be unanchored from human action and coded into these systems of scale remains a challenge of the future, and it is worth thinking through what machine ethics can possibly look like in the face of Gen-AI.
With ChatGPT, again, early users found the DAN (‘Do Anything Now’) prompt, which could induce ChatGPT to provide illegal and risky information, as well as to create targeted profiles and bios for the individuals they focused on (Eliacik, 2023). The idea that Gen-AI outputs can be controlled presupposes a universe in which every possible alternative, given a large set of variable events, can be pre-calculated and hence safeguarded. However, given the volume of information and the unpredictable nature of the self-evolving knowledge models of Gen-AI, it is impossible to predict everything that a Gen-AI might do when faced with a problem, and hence an ethical framework would be very difficult to implement. Tim Robinson and Stephen Bridgewater (2023) report that Tucker Hamilton, the USAF (US Air Force) Chief of AI Test and Operations, presented a thought experiment about Gen-AI-driven warfare simulation:
We were training it (GenAI) in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective. (Robinson & Bridgewater, 2023)
This speculative thought experiment is often presented as a real story of something that happened (Reuters Fact Check, 2023), but even without the assertion of reality, it shows the impossibility of calculating and containing the scope of Gen-AI action, and hence the difficulty of producing an ethical AI framework. While the calls to make Gen-AI explainable (Gunning & Aha, 2019; Tencent, 2023) are gathering momentum, explainability is not the same as accountability, and eventually we will have to determine whom to hold accountable for actions authored by Gen-AI applications and for the decisions made through the information they produce.
These three symptomatic examples are illustrative rather than exhaustive in mapping the anxieties and concerns emerging in the wake of Gen-AI applications. The authorial challenges they present are not new in their occurrence – it is easy to imagine human authors performing similar acts of overreach and overriding of integrity and ethical principles – but they are new because they are no longer the exceptions. The reason these widespread applications of Gen-AI create anxiety is that they can no longer be explained or contained by the search-based protocols of information verification and logic that we have come to naturalize in our digital knowledge production practices. The anxieties created by Gen-AI are not easily addressed by the conversations around safeguards, oversight, regulation and control of Web 2.0 information systems – with Gen-AI, we are witnessing the birth of a new author function that is no longer shaped by the intentions of search and revelation. This is a new condition of informationality, where the robust promise that all digital information can be reverse engineered, traced to the source and proven through provenance is no longer valid. With the naturalization of predictive informationality, we are witnessing the collapse of discrete systems where all data can be mapped and contained; increasingly, not only the data but also the protocols through which the analysis is performed become opaque in these Gen-AI systems.
In all three trends illustrated above, we can see that even though results are being revealed, decisions are being made, and answers are given to the queries engineered as AI prompts, something is missing – the capacity to verify these answers by knowing the pathway through which the results were constructed. To put it simply, like a student who cheated on a test or a prophet who instinctively knows the answer to a query, Gen-AI reveals the results but does not show the work or the pathway by which those results were constructed. Gen-AI, by blackboxing the technological apparatus that powers it (Castelvecchi, 2016), charts a new function for us – Concealment.
If the Foucauldian author function was to rescue the text from authority and subject it to interpretability, thus mobilizing an entire relational network of power, control, meaning and context, then Generative AI technologies, through the act of concealment, abduct the text outside of interpretability altogether and establish a mono-meaning paradigm that has authority only in the moment. In order to produce an authoritative result, Gen-AI now conceals more than it reveals. And every act of concealment means that Gen-AI results are offered as answers without a source, to be re-edited and reappropriated endlessly, often even by the same model, which changes through repeated querying. I propose that this new digital-author function – authorship as an act of concealment – is at the heart of the challenges that Gen-AI throws at us. This can be understood by looking at the shift that computational predictive technologies bring as the new dominant mode of information production and access.
Concealment as the new author function: computational predictive technologies
To understand the author function brought forward by Gen-AI, it is necessary to situate it as a computational predictive technology (CPT). While there are no clear and settled definitions of AI systems, and I will not attempt one, I want to offer a few characteristics that help us understand the backend of CPTs.
Like all modern AI systems, Gen-AI is a system of logic (Burrell, 2016). It relies on pattern recognition and probability to produce its outputs and the desired results. This means that within the Gen-AI system, no matter how complex, there is no ‘outside’ to the system. As network scientist Duncan Watts (2003) points out, these networked systems form ‘small worlds’ where every element is defined and indexed, establishing a complex web of causal and correlative links between the elements. Thus, events or data that are not defined or not introduced into the system ontologically do not exist and hence will not be computed. However, with Gen-AI, a missing data set is not a deterrent to the production of meaning. Instead, these are systems that are taught to predict what the missing data might be, based on the patterns found in other, similar systems.
Take the example of Rembrandt’s The Night Watch. When the Rijksmuseum set out to restore the strips that had been cut from the painting in 1715, an AI system trained on the surviving canvas and on a seventeenth-century copy of the full composition predicted what the missing sections would have looked like and generated them, and the reconstructed margins were exhibited alongside the original. The missing data was not found; it was fabricated from patterns, and yet it was presented as a seamless continuation of the painting.
Concealing the difference between found and fabricated
This is the shift in Gen-AI systems. When encountering a lack of data, Gen-AI systems are able to predict what the missing data looks like, based on other complete data sets or on similar data sets where this particular missing information might be present. They then produce that missing data. Once these data have been produced, their reasonableness or soundness is computed by establishing causal and correlative relationships with the existing data set. If there are no immediate contradictions or conflicts, the newly fabricated data, which was merely in the realm of possibility, is treated as accurate and original data and used for further computation.
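As a minimal sketch of this dynamic – with an invented toy ‘corpus’ of records standing in for a model’s training data – a missing value can be predicted from patterns in comparable records and then written back with nothing marking it as fabricated:

```python
from collections import Counter

# Toy records standing in for a training corpus; None marks missing data.
# Everything here is an invented illustration, not any real system's data.
records = [
    {"author": "A", "year": 2019, "venue": "Journal X"},
    {"author": "A", "year": 2021, "venue": "Journal X"},
    {"author": "B", "year": 2020, "venue": "Journal Y"},
    {"author": "A", "year": 2022, "venue": None},  # the gap to be filled
]

def impute_venue(target, corpus):
    """Predict the missing venue from the most common venue
    among records that share the same author."""
    similar = [r["venue"] for r in corpus
               if r["author"] == target["author"] and r["venue"]]
    guess, _ = Counter(similar).most_common(1)[0]
    return guess

gap = records[3]
gap["venue"] = impute_venue(gap, records)

# The record now looks identical to found data: no flag distinguishes
# the fabricated value, and later computation treats it as original.
print(records[3])  # {'author': 'A', 'year': 2022, 'venue': 'Journal X'}
```

The point of the sketch is not the imputation itself, which is a routine statistical move, but the absence of any provenance marker once the gap is filled.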
Gen-AI systems, then, are not merely predicting the outputs but also the data that go into the making of the outputs. Simultaneously, they are also predicting their own evolutionary lines: in the fabrication and introduction of these new data sets, Gen-AI creates an evolved model of itself, expanding with this new knowledge that it generated without an external referent. This particular characteristic of Gen-AI is important for understanding the modes and politics of concealment, because Gen-AI hides the difference between found and fabricated data.
Concealing self-authorship
The second act of concealment concerns the nature of CPTs. We are often faced with the idea of black-boxed technologies, where the modes of decision-making are hidden or the pathways to meaning-making are obfuscated by technological complexity. While this holds true, what CPTs add to this issue is that they conceal their own identity and role in the production of data as well as in the generation of meaning. This goes beyond the amplification of structural or confirmation bias that human programmers could introduce into the system and brings us to acknowledge that CPTs are fundamentally unstable systems which, through the fabrication of data, become something new. What we are essentially looking at is that LLM-driven CPTs are in a constant state of self-authorship. They evolve, grow, verify, check and expand their information databases without active human engagement or interaction. These are not only systems which are coded but also systems which code, and their algorithmic decision-making creates a lack of transparency that builds ‘black box societies’ (Pasquale, 2015).
Thus, even if we get an erroneous result through a Gen-AI query, it cannot be validated, because in the production of that query – both in the potential fabrication of data and in the absorption of the query’s output into its database as new data – the Gen-AI has become a new entity, and hence it will give new results. This might explain the continuous stream of small variations that ChatGPT is able to produce for the same query, in an almost infinite stream. Each act of regeneration draws from the LLMs it is trained on, using next-token predictions to decide the word order (Shanahan, 2023), and the older results are regenerated as new information to be added to the ‘small world’ of the query.
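A minimal sketch of this regenerative variation, assuming a toy bigram table in place of an LLM’s learned weights: each regeneration samples the next token afresh from a probability distribution, which is why the same prompt keeps yielding slightly different streams.

```python
import random

# Toy next-token probabilities standing in for an LLM's learned weights;
# the vocabulary and the numbers are invented for illustration.
next_token = {
    "the":    {"author": 0.5, "system": 0.3, "query": 0.2},
    "author": {"conceals": 0.6, "reveals": 0.4},
    "system": {"conceals": 0.7, "predicts": 0.3},
    "query":  {"reveals": 0.5, "predicts": 0.5},
}

def generate(prompt, steps=2):
    """Sample one continuation token at a time, each conditioned
    on the previous token."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = next_token.get(tokens[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

# The same prompt produces a stream of small variations, because each
# regeneration samples afresh rather than retrieving a stored answer.
for _ in range(3):
    print(generate("the"))
```

Nothing in this loop consults an external referent; variation comes from sampling, not from any re-examination of the world.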
A research study found that self-consuming AI models go MAD (Model Autophagy Disorder): after the fifth generation of information evolution, the accuracy of the model is deeply compromised (Alemohammad et al., 2023). But this does not affect the assertiveness with which information is revealed and generated in response to prompts. This autophagy of the model, the ways in which the knowledge it generates changes the very nature of the Gen-AI system (as seen in the aforementioned example of the chatbot Tay), and the lack of distinction between input data and fabricated data are all concealed in the deployment of Gen-AI as an author. These concealments are treated as necessary and inevitable for a smooth, efficient and seamless application of Gen-AI, but they pose provocative challenges to our understanding of stealthy and fuzzy intentions and of the manipulation that these authorships of concealment can conduct.
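The compounding drift can be caricatured in a few lines – a toy simulation, not the method of the cited study – in which each ‘generation’ of a model is fitted only to samples drawn from its predecessor:

```python
import random
import statistics

# Toy caricature of model autophagy: the 'model' here is just the mean
# and standard deviation it estimates from its training data. Each new
# generation trains only on samples drawn from the previous generation.
random.seed(7)
real_data = [random.gauss(0.0, 1.0) for _ in range(100)]

mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)
for generation in range(1, 6):
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")

# Estimation error compounds across generations, so by the fifth
# generation the model has drifted from the data it began with,
# while still asserting its outputs with the same confidence.
```

The drift itself is unremarkable statistics; what matters for the argument is that nothing in the output signals which generation of self-consumption produced it.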
Concealing connections
The third and final characteristic of CPTs concerns the ways in which they use probability-driven modelling to deal with information overload. Any query entered into a CPT system, much like the older search-based functions of revealing, produces an almost endless possibility of results. The large language and visual models through which CPTs process a prompt establish new pathways, relationships and causal linkages where none might have existed, thus producing new knowledge circuits. As Amer and Elboghdadly (2024) point out, this is a fundamentally different mode of knowledge production from the logic of search engines, because CPTs are no longer information retrieval systems. The basic ontology of CPTs is their more-than-human capacity to deal with massive amounts of information that would create a condition of information paralysis for an individual human subject trying to make sense of it. The reason CPTs are so celebrated is that we have naturalized information overload as our default mode of digital being (Shah et al., 2023), and Gen-AI presents itself as a tool that helps navigate this surplus of information by making probability-driven predictions about the most suitable result that matches our prompt.
In many ways, the emergence of Gen-AI is about our reliance on, and acceptance of, CPTs, which prescribe a new relationship with information. It works on the principle that while millions of possibilities can be computed by a Gen-AI system, only one needs to be shown to the user looking for information. The others need to be actively concealed because otherwise, just like the overwhelming revelation of information that is the nature of a Google search query, the complexity and variety of information would drown the subject in unwanted information. Gen-AI systems, then, need to be seen not as ordering or indexing technologies but perhaps as technologies of matchmaking.
CPTs do not merely process the LLMs but also predict the user’s actions, profiling them as belonging to particular user groups, anticipating their needs and matching them with the computational possibilities processed by a prompt. The prediction of the user’s profile and the anticipation of the user’s needs are subject to the same logic of data modelling and output processing that the targeted data set is subjected to. This function of Gen-AI, where both the user and the data set being processed are open to pattern recognition for efficient matchmaking, is itself concealed.
With every act of concealment that follows the small revelation, the user is granulated and profiled more acutely, and the results shown can nudge, shape, influence and manipulate the user by matching them with information that might no longer be the most accurate but the most ‘relevant’, based on the imagined intention of the user. The possibility of the user having multiple intentions, and of the prompt bearing variable results, are both foreclosed in this system where the result and the ossified user are matched, and all the processes through which this matching happens remain concealed, available only for the Gen-AI systems to learn from, while the human user remains none the wiser.
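A toy sketch of this matchmaking, with invented feature scores and an inferred profile: many candidates are scored, only one is revealed, and the rest of the computation stays concealed from the user.

```python
# Toy sketch of prediction-as-matchmaking: candidate results are scored
# against an inferred user profile, but only the single best match is
# revealed; the alternatives are computed and then concealed.
candidates = {
    "result_a": {"technical": 0.9, "news": 0.1},
    "result_b": {"technical": 0.2, "news": 0.8},
    "result_c": {"technical": 0.6, "news": 0.5},
}

# An invented profile inferred from past behaviour, never declared
# by (or shown to) the user.
user_profile = {"technical": 0.8, "news": 0.3}

def match_score(features, profile):
    """Dot product between result features and the inferred profile."""
    return sum(features[k] * profile.get(k, 0.0) for k in features)

scores = {name: match_score(f, user_profile) for name, f in candidates.items()}
revealed = max(scores, key=scores.get)

print(revealed)  # the one result shown to the user
# 'scores' retains every alternative and the profile used to rank them,
# none of which is ever surfaced to the user.
```

The match is optimized for the ossified profile, not for the query’s possible meanings, which is precisely the foreclosure described above.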
This new author function is at the heart of the emerging anxieties around Gen-AI because the human subject is no longer its central addressee. What is concealed from the human is revealed, but only to other Gen-AIs or CPTs, and these systems create a complex web of correlation and causality every time the finite and singular output is shown to the human user. News reports about AI systems that talk to each other in languages we no longer recognize, or systems that go rogue and exceed the intentions of their designers, are not about a perversion of AI or a glitch in programming. They are about the fact that Gen-AI systems share information with each other while concealing it from us; that they are constantly leaking and sharing information about us which is generated through our interaction with the systems but is kept from us. It is about recognizing that every time something is revealed, everything else is being concealed, and that the concealed information will never be accessible in its original form, because the very act of querying will have changed the nature of the Gen-AI systems, which will have learned from that act of querying.
Addressing the emergent moment: concealment and the naturalization of ‘not knowing’
In the much-circulated open letter asking for a pause on future AI development, allowing for human policy and regulation to catch up to the consequences of Gen-AI, The Future of Life Institute (2023) laid out some concerns about the speed at which Gen-AI is developing and our diminished capacity to deal with it. The letter, signed by many big names in the field of AI, is informed by their position paper, ‘Policymaking in the Pause’, which argues that we do not have a grasp on what questions to ask of Gen-AI because the models keep on evolving and changing at a rapid pace (Narayan et al., 2023).
My proposition of thinking through the new author function of concealment is a response to the breathless discourse around Gen-AI which focuses on the scope of its application or the consequences of its automation. I am proposing that by locating Gen-AI in a larger history of changing and evolving author functions, we might be able to ask longer and more sustainable questions that will not demand the utopian pause that many are seeking. Even if Gen-AI platforms take a hiatus from applying new models and making them publicly accessible, the current speed of AI research and development means that we are merely pausing the visible and not the emergent.
To understand the emergent moment, we need to first historicize it and recognize it as a part of a longer conversation around anxieties. At the same time, we also need to resist the historicization of this moment. Just like the call for a pause in development, there are many calls to provide explainable frameworks, responsible AI oversights, AI for social good mandates, regulating AI through human-centred values, and creating universal standards and principles for AI application. While all of them are necessary, they do bear the flaw of being too little, too late. They are reactions to what is visible and are unable to curb the full force of Gen-AI as it has entered the market. They are also trying to fix this moment as if it is a historical occurrence where we can fathom and imagine all the different implications of Gen-AI. We are asking to regulate Gen-AI as if it were a historical event when it is, in fact, an emergent one. At best, we will have structures of monitoring and redress before the Gen-AI turn becomes stabilized.
While there will be a time and need to historicize this emergent moment and understand the pivotal shift that the easy access of these tools brought about, it would be impossible and perhaps futile to try and freeze it as resolved and finite. Instead, what we need is a serious consideration of the ontological and epistemological nature and functions of Gen-AI, of which the author function is one that I focus on.
The authorship of concealment was always an act of extraordinary emergency – redacted messages, censorship, political stifling of free speech, revision through silencing and rewriting – that offered a temporary state of ‘not knowing’ and of trusting those who decide what is to be concealed; it is now becoming our new status quo. It is no surprise that misinformation-bearing groups and conspiracy theorists have taken to it, because it essentially suggests that all revealed information is suspect and hence open to question. It also endorses the idea that there are concealed information sets to which a privileged few have access, which they will monetize and leverage for their own good. Similarly, concealment works to erase people, their labour and their conditions of ownership and stake, leading to anxieties about the replacement or theft of author functions and roles.
The concealment, as I have argued, works at many different levels. The emergence of Gen-AI systems conceals the human authors whose work has been used to train LLMs. They conceal the role of the AI agent, as AI systems mimic and produce text that is indistinguishable from human writing and can thus perform human information without morality. CPTs conceal intent, both of the human query and of the ways in which information is gathered and processed, thus normalizing the possibility of not knowing the context within which information is produced, circulated and consumed. CPTs conceal the ways in which LLMs can be used to manipulate information, overriding the safeguards established to maintain the integrity of truth claims. And CPTs do not offer clear pathways of verification, as traditional methods of cross-referencing sources become challenging when original data is hidden behind obfuscation and abstraction.
Within Gen-AI systems, the ways in which meaning is made from large data sets are hidden. The pathway to reaching an outcome based on a query is also hidden. The systems conceal the difference between fabricated and found data, often producing erroneous data based on patterned prediction and then treating that data as organic and original. Using these fabricated and inputted data, the system computes many different results. These results are computed and stored but not necessarily revealed. The system changes itself through the production of those computed but concealed results, thus evolving and ‘learning’ while performing authorial stasis. This evolving system conceals the ways in which it approaches users, converting them into data sets bound in small worlds that predict the outcome of a query, based not on the virtue of meaning but on the assignment of value and appropriateness according to the user profile.
In all these urgent questions, we are witnessing an emergent phenomenon where the logic of concealment naturalizes an active authorial position of ‘not knowing’. Gen-AI is going to strengthen these author functions of concealment, setting off a new set of social and political dynamics and relationships that present the author as somebody who does not just conceal but also does not know what is being concealed, thus producing questionable truths and unverifiable opinions that can pass as facts. Understanding this new author function of concealment as the end of our revelation-driven, search-based paradigm might provide us with a new entry point for thinking about the construction of meaning and the roles of interpretation in the age of Generative AI.
Funding
The author received no financial support for the research, authorship and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
