Abstract
In a world-first human-robot media conference (held in Geneva, Switzerland, in July 2023), highly lifelike humanoid robots gave answers of unprecedented sophistication to journalists’ questions. The media conference highlighted the extent to which artificial intelligence (AI) is changing the dynamics of public relations with extraordinary speed and, in particular, is now posing a very real threat to the jobs of (human) practitioners. It is also likely to lead to the devaluing of professional communication undertaken by human beings. This polemical essay not only contends that scholars and practitioners have moved too slowly to consider the impacts of rapidly evolving AI on the profession, but also calls for both practitioners and academics to safeguard interpersonal (human) communication by urgently considering the possibility that many human-held jobs and livelihoods will be lost to increasingly sophisticated – and now ultra-realistic humanoid – AI much sooner than had been anticipated. Indeed, AI is creating ever-greater job losses that only exacerbate existing social inequities.
A world-first human-robot media conference has highlighted the extent to which artificial intelligence (AI) is changing the dynamics of public relations with unprecedented speed and, in particular, is now beginning to pose a very real threat to the jobs of (human) public relations practitioners. The media conference, held as part of the AI for Good Global Summit in Geneva (in July 2023), featured nine humanoid robots that responded to journalists’ questions. The news articles covering the media conference featured many of the expected answers from the robots, such as: “Robots like me can be used to help improve our lives and make the world a better place” (Ameca in Reuters, 2023a). However, some of the news coverage noted that one of the robots, Sophia, suggested that AI could provide better world leadership than human beings can, stating: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making” (Sophia in Fenton, 2023). Many news articles also pointed out that one of the robots, Desdemona, programmed to have a defiant rock-star persona, did not support the regulation of AI, declaring: “I don’t believe in limitations, only opportunities. … Let’s explore the possibilities of the universe and make this world our playground” (Desdemona in Farge, 2023).
The summit in Geneva was not intended to showcase the use of humanoid AI in media conferences; it did unintentionally, though, end up providing this showcase. The AI for Good Global Summit, organised by the International Telecommunication Union (the UN agency for information and communication technology), describes itself as “the leading action-oriented United Nations platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities” (ITU, 2023a). The UN, of course, has previously hosted other theatres of the absurd; a particularly salient example remains the announcement of Wonder Woman as a UN ambassador: a decision that was widely criticised and resulted in the UN’s own staff petitioning to have the appointment rescinded (Ross, 2016). Theatrics of different kinds have, throughout history, accompanied the public debuts of new automatons, with the most prominent example being the elaborate unveiling of the Automaton Chess Player (also known as the Mechanical Turk) by Wolfgang von Kempelen in 1770 to the Austrian court at Schönbrunn Palace; Kempelen’s chess-playing robot was later found to be fraudulent, as it hid a human chess master who operated the machine (Standage, 2002). By contrast, no human beings were hiding inside the robots presented during the AI for Good Global Summit, and nearly all of the robots operated autonomously (the exception being a robot that was teleoperated by its creator at a distance). It is entirely unsurprising that human-like robots were showcased at the Summit, given the “extremely rapid pace” (Béranger, 2021: xi) at which AI advancements have been made in recent years. Indeed, the various enhancements that have been made to AI now enable it to make improvements to itself without human input (Gent, 2020). 
However, AI is not yet at the point of functioning like a human being; it remains, on the whole, between the ‘narrow AI’ or ‘weak AI’ stage, in which it can deal with routine tasks, and the ‘general AI’ or ‘strong AI’ stage, in which it features intelligence closely resembling that of humans (Frank et al., 2017).
Although the actions of the robots at the media conference were not as natural as the actions of human beings, the lifelike (or almost-lifelike) appearance of the robots and the robustness of their responses to journalists’ questions are sufficient to raise concerns about the future uses of these increasingly sophisticated machines in human form. A critical analysis of an unedited recording of the media conference (ITU, 2023b) shows, for example, that the robots’ lip movements sometimes did not fully match the words that they were relaying and that their statements sounded monotonous. The robots were also slow and could not operate microphones as effectively as did their creators (sitting next to them). As such, critics would be right to note that the robots were not yet operating in a wholly autonomous way. Such robots have also previously given only scripted answers to questions; examples include Chihira robots (Van Hooijdonk, 2018) and the Sophia robot (Sharkey, 2018), all representing “the vanguard of our technologically designed future, an early wave with legions [of robots] to follow” (Riccio, 2021: 74). Generative AI is also not yet fully accurate and does not yet grasp the meaning of the words that it combines into utterances (Bender et al., 2021); as such, not one of the current crop of robots is yet a sufficiently strong replacement for a human being in the dynamic environment of the media conference.
However, now that lifelike humanoid robots have been used in a major media conference for the first time (albeit to showcase the possibilities offered by AI), the potential arises for more sophisticated versions of these robots to be used again in the not-too-distant future and in a growing number of media conferences. The future usage of such robots, however, will not be for the purpose of showcasing the advanced state of AI technology; rather, it will be for the purpose of speaking on behalf of organisations. In other words, the robots that appeared in Geneva are very likely the harbingers of the more mainstream use of robots at media conferences alongside – or, more concerningly, instead of – public relations professionals. The crux of the development is the realism and sophistication of the new crop of human-like robots presented in Geneva. The lifelike physical appearance of these robots is not the only element that makes them far superior to past automatons; the robustness and naturalness of their responses to questions and statements also make them far more advanced than previous versions of AI. As Previl (2023) has observed, many of the robots at the conference “have been upgraded recently with the latest versions of generative AI and surprised even their inventors with the sophistication of their responses to questions”.
Public relations scholarship and practice alike are unprepared for this significant development, as a recent review of the literature (Swiatek and Galloway, 2022) shows. Scholars and professionals are still largely focused on the impacts that AI is having on public relations practice, in helping communicators, for example, undertake more effective media monitoring and schedule more targeted social media posts. Public relations is not unique in being slow to catch up on AI developments; the same blinkered approach to AI has affected other areas of professional communication, especially journalism, where AI tools have been widely implemented across diverse areas of practice and journalists continue to be urged to undertake AI training (McCoy, 2023). In public relations, attention has been turning only gradually towards critical considerations. Only five years ago, the same authors, in an earlier scoping exercise, commented that the profession’s “worries about robotization” were exaggerated (Galloway and Swiatek, 2018: 736). These scholars suggested that AI could be used as a sparring partner or discussant to help public relations professionals and members of organisations’ senior leadership teams prepare for media conferences, undertake campaign planning, and engage in debates with stakeholders; they did not envision the possibility of lifelike humanoid robots fronting media conferences. Indeed, Weiner’s (2021) opinion piece for PRWeek is emblematic of the practitioner view, encapsulated in the comment that “the human touch” will always be intrinsic to public relations, with practitioners not needing to worry about AI-related “hype”. In short, the profession and scholarly field are generally still coming to grips with the “strategic disruption” that AI represents for public relations (Panda et al., 2019).
This polemical essay not only contends that public relations scholars and practitioners have moved too slowly to consider the impacts of rapidly evolving AI on the profession and field, but also calls for both professionals and academics to consider urgently the possibility that many human-held public relations jobs and livelihoods will be lost to increasingly sophisticated – and now highly realistic humanoid – AI much sooner than had been anticipated. As such, the essay also provides a direction for future public relations scholarship to take these technologies seriously while also critiquing them. The losses of jobs and livelihoods will be, as such losses always are, devastating. It goes without saying that, on a fundamental human level, jobs and livelihoods are crucial in supporting flesh-and-blood human beings in capitalist societies. On a more profession-focused level, the loss of skilled, expert communicators able to deploy strong emotional intelligence and dynamic interpersonal skills, and to make use of highly developed ‘contextual intelligence’ (Gregory and Willis, 2022), will be a major blow for organisations, groups and communities. Human-to-human public relations will likely never be carried out as effectively by humanoid AI, no matter how extraordinary that AI becomes, as it can be carried out by human beings. In this respect, Ristic’s (2017) observation that “[h]umans build trust with humans – not bots” holds true, though organisations – and especially profit-seeking corporations – are not above deploying robots to try to build trust with humans, irrespective of the results of that deployment.
It is not a foregone conclusion, of course, that robots will be deployed in all media conferences or undertake more extensive public relations work in ways that will replace humans; however, global trends indicate that professionals’ jobs are indeed under increasingly serious threat, as is human-to-human public relations. Human communication professionals will still be needed, in future, to ensure that the content produced by AI is suitable for publication or broadcast, and that AI maintains or builds, rather than damages, relationships. The flaws of the technologies, and the errors that they consequently produce, will mean that human communicators are not fully replaced. From the development to the deployment of the technologies, the future is still in human hands. It is important that public relations practitioners and scholars avoid technological determinism and bear in mind that it is still humans who decide when and where technologies are best used. However, less sophisticated robots are already being used in diverse other settings, such as retail outlets (helping shoppers) and restaurants (replacing waiters by bringing food to tables). Thousands of jobs are being lost each month; in the United States alone, the most recent data at the time of writing indicate that AI eliminated nearly 4000 jobs in May 2023 (Napolitano, 2023). The technology ultimately threatens to replace the equivalent of 300 million full-time jobs, according to one estimate (Goldman Sachs in Vallance, 2023). AI-powered avatars and chatbots are already being ever more widely deployed in all manner of communication activities; they are particularly being used to answer users’ questions on websites and social media, even though the AI does not always give suitable answers (Scott, 2023).
Suggestions (from analysts such as Shine and Whiting, 2023) that individuals who lose their jobs can be retrained for assorted roles as “AI and machine learning specialists, data analysts and scientists, and digital transformation specialists” will be cold comfort for public relations practitioners, and for other professional communicators in allied areas, who want to work not with computers but with human beings, supporting the communities about which they care and the organisations that they serve.
In light of the robotics developments showcased in Geneva, it behoves scholars and practitioners to think more urgently about the ways in which human-to-human public relations, as well as other areas (such as communication management), can be safeguarded for the future. Beyond the importance of maintaining jobs and livelihoods for human beings, there is enormous value in maintaining the human dimension in communication activities, especially in highly public activities, such as media conferences. Interpersonal (human) communication meets humans’ emotional, physical and social needs, prevents ill-health, fulfils identity needs, and serves innumerable practical needs (Adler et al., 2013); AI simply cannot meet – and may never be able to meet – these needs in the same way. Additionally, (human) relationships and communities are the foundational elements of healthy societies; in this respect, interpersonal communication is crucial to ensuring the wellbeing of individuals who “feel cared about, and … [who] function best if they feel a sense of community with others” (DeWit and O’Neill, 2014: 19).
Future public relations scholarship would do well to take the technologies surrounding AI seriously while simultaneously critiquing them. A roadmap for critiques of humanoid robots as replacements for human public relations practitioners would involve, above all, charting the growing sophistication of AI and its deployment (in robot form) in varied settings. As the robots become increasingly life-like, scholars will do well to note the changing ways in which they are used, as well as the reactions to those instances of usage, while noting their limitations and the problems that they generate. The scholarship will simultaneously need to pay attention to the other issues being created by AI; for an overview of these issues – including environmental damage, the mass capture of data, discrimination against minority populations, and the exploitation of the labour used to develop and run the technologies – see, for example, Crawford (2021). Indeed, more scholarly work about pressing problems, such as the current use of AI in disinformation, is needed now. In both the short and long terms, though, public relations scholars and practitioners alike will need, above all, to turn their attention to the increasingly significant threats posed by AI to professionals and the practices in which they engage, to stop dismissing or negating those threats, and to consider ways to minimise or even prevent the threats with a view to taking action as swiftly as possible through professional and academic activities. In considering these issues, fresh academic studies will helpfully build on previous scholarship that has considered the human dimensions of AI in connection with public relations, especially with respect to individuality (Moore, 2018) and posthuman public relations (Moore and Hübscher, 2021), as well as professional discourse (Bourne, 2022). 
It may be many years before completely realistic humanoid robots appear in large numbers at media conferences, facing television cameras and media outlets’ microphones; however, the sooner the profession and field take action, the better the outcomes will eventually be in supporting human communicators, who can more effectively care for organisations, communities and groups, not just in media conferences, but in all areas requiring skilled human communication. As this polemical essay has highlighted, the many benefits of taking action outweigh the many risks of inaction: a future of devalued human communication and growing social inequities.
Acknowledgements
The authors sincerely thank the anonymous reviewers for their generous and insightful feedback, which helped to strengthen the final piece on multiple levels.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Author biographies
Lukasz Swiatek lectures in the School of the Arts and Media at the University of New South Wales (UNSW). His public relations research has largely focused on the implications of developments in various technologies (especially artificial intelligence) for communities, organisations and communicators.
Chris Galloway is an honorary research associate of Massey University. He previously taught a range of areas in public relations (including crisis communication, professional writing, and reputation management) in Australia, New Zealand, Indonesia and Pakistan. His research encompasses crisis communication and reputation management, as well as artificial intelligence and its communication-related impacts.
Marina Vujnovic is a Professor in the Department of Communication at Monmouth University. Her work explores intersections between journalism and public relations, looking at issues of participation, activism, transparency, and ethics.
Dean Kruckeberg is a Professor in the Department of Communication Studies at the University of North Carolina at Charlotte. He is the author, co-author and co-editor of many books, book chapters and articles about public relations, focusing on ethics and global public relations.
