Abstract
Since their introduction in late 2022, generative AI applications have proliferated as Big Tech companies seek to encourage widespread adoption from the public. This article reports on the findings from exploratory qualitative research conducted in mid-2025 with Australian adults about their knowledge, everyday practices and imaginaries related to generative AI. Nearly all participants, regardless of their age, gender, ethnicity or geographical location, had experimented with generative AI applications, and many had incorporated them into their quotidian routines. However, far from being enchanted by these technologies, these Australians saw them as little more than mundane software that was now pervasive and therefore unavoidable. Generative AI was described as offering useful tools or helpers for achieving better efficiency, time-saving and productivity in accomplishing routine tasks at home and work. Most participants were aware that the tools frequently generated incorrect information, and therefore required checking, but seemed largely untroubled by this. They expressed concerns about the possible impacts of fake information, scams and data privacy issues, and the loss of learning or critical thinking that generative AI use could cause. However, participants also expressed feelings of powerlessness over what they could do to avoid using generative AI in the face of the determination by Big Tech – and in some cases, employers and educational institutions – to promote its use. More profound negative impacts were mostly recounted as abstract or as potential problems in a future world if generative AI development by Big Tech was allowed to progress unchecked.
Introduction
With the introduction in late 2022 of public-facing generative AI software such as OpenAI's ChatGPT, Google's Bard/Gemini and Microsoft Copilot, a range of predictions have been made about its impacts on society and the extent to which people may adopt or resist using it. OpenAI, founded by Sam Altman, Elon Musk and others, was the first company to release chatbots to the general public that used large language models to present information in a ‘chatty’ conversational style. The company continues to promote itself with the techno-utopian ‘mission to ensure that artificial general intelligence – AI systems that are generally smarter than humans – benefits all of humanity’ (OpenAI, 2025). Since then, other Big Tech companies, including xAI, Anthropic, Amazon, Meta, Apple, Google and Microsoft, have offered their own generative AI products to consumers, promoting them as helping users achieve better productivity and efficiency and save time in performing everyday tasks.
The concept of ‘personal AI’ is used in many of these promotional discourses in an attempt to encourage user familiarity and acceptance of what has often been portrayed as a disquieting alien technology (Hanna and Bender, 2024). For example, Meta advertises its generative AI software as ‘personal AI for your life’, offering superior voice and text conversations to make it easier to use and be ‘helpful throughout your day’ (Meta, 2025b). Its website features images of people preparing a meal, working or driving a car while chatting to the Meta AI app. Meta claims that Meta AI is ‘shaping an AI-driven future’ and that ‘Our goal is to build AI responsibly, for everyone’ (Meta, 2025a). This promotional language has been reflected in other industry texts and news coverage of generative AI, which have often focused on representing AI chatbots anthropomorphically as ‘intelligent’, ‘collaborators’ or ‘assistants’, while erasing the role of humans who create, train and moderate the data and models underpinning the technologies (Guest et al., 2025; Hanna and Bender, 2024).
Generative AI technologies have sparked controversy since their release, but criticisms are now mounting as they become more pervasive through Big Tech's attempts to profit from them (Guest et al., 2025; Nelson, 2025). A growing literature identifies Big Tech's extractivist and exploitative ethos as well as significant problems related to the use of intellectual property and data privacy and security (Hogan, 2024; Ruschemeier, 2025; Schaake, 2024; Tacheva and Ramasubramanian, 2023). AI companies have been criticised for rushing their new technologies to market without due testing or consideration of potential harms in public use. The mainstream news media have reported on a series of scandals, including allegations that generative AI use has led to suicides, health accidents and wrongful imprisonment due to failures in the software and poor design and deployment choices (Nelson, 2025). The major environmental health impacts on local ecosystems and residents and the competition for energy and water sources caused by Big Tech's expansion in AI infrastructures such as data centres have also received sustained critical attention (Hogan, 2024; Simpson, 2025).
Recent analyses have commented on the current impacts of the rush to incorporate these technologies into people's everyday lives. Far from repeating technological dystopian narratives suggesting that future developments will result in super-intelligent AI systems replacing or even destroying humanity, such analyses argue that generative AI services offer little of worth to targeted users (Bender and Hanna, 2025; Guest et al., 2025; Hicks et al., 2024). Continual hyperbolic claims by the ‘Empires of AI’ (Hao, 2024) about the benefits to users afforded by their generative AI products in educational settings, the workplace, healthcare and other domains of people's lives have been strongly contested by critics. They have drawn attention to what they see as the ‘AI con’ (Bender and Hanna, 2025), the errors (often termed ‘hallucinations’) that are frequently found in content generated by this software (Bender et al., 2025; Hanna and Bender, 2024) and the impacts on learning when it is introduced into educational settings (Guest et al., 2025).
These critiques therefore focus on the more mundane dimensions of the hazards of generative AI and argue for resisting the automatic incorporation of these technologies into everyday life. As Emily Bender and Alex Hanna (2025) write, it is important to fight the hype language of Big Tech (both utopian and dystopian), to bring criticism to the ground level of how people are actually using generative AI and, based on this experience, to examine what users see as its benefits and harms. This includes attention to the politics of generative AI and how people's experiences are structured through their social group membership or geographical location. As such, these analyses of generative AI technologies represent the latest in a longstanding body of social research that has drawn attention to the hype and significant social and ethical problems surrounding the profit-driven development and marketing of ‘smart’ devices, the Internet of Things (Goulden and Cameron, 2025; Lupton, 2020), urban AI applications (Luusua et al., 2023) and cloud computing and other infrastructures supporting data harvesting and processing (Kitchin, 2021; Mejias and Couldry, 2024). Research investigating people's practices in relation to mundane software such as mobile apps (Clark and Lupton, 2023; Morris and Elkins, 2015), everyday automation (Pink et al., 2022), personal data privacy (Draper et al., 2024) and algorithmic folk theories (Ytre-Arne and Moe, 2021) offers a substantial basis for investigating how the public is responding to generative AI applications.
Since the release of ChatGPT and other generative AI applications, an expanding body of social research, to date mostly using quantitative surveys, has sought to examine public understandings, uses and imaginaries of these technologies. Building on this literature, in mid-2025, we conducted five online focus groups with 32 Australians from diverse backgrounds about (i) their knowledge and practices related to generative AI technologies; (ii) their perceptions of the benefits and harms of generative AI related to the natural world and environmental crises; and (iii) their imaginaries about how generative AI might develop in the future. Following an overview of related research and an outline of methods, the current article reports on the findings from the discussion groups concerning their knowledge, practices and imaginaries related to generative AI services. (Participants’ knowledge of and reactions to the environmental impacts of these technologies are the focus of a second forthcoming article.)
Related research
Several studies conducted internationally have attempted to discover to what extent members of the public are using generative AI technologies and the benefits or drawbacks they discern in such use. Recent research by the US National Bureau of Economic Research in collaboration with OpenAI using its data on ChatGPT use (Chatterji et al., 2025) estimated that by July 2025, close to 10% of the world's adult population had used it, with rapid growth in low- and middle-income countries. Early adopters were mostly men, but the gender gap had narrowed dramatically, so that just over half of users were women. Work-related conversations had grown steadily since the chatbot was released, but non-work-related messages grew even faster, accounting for 72% of all usage by mid-2025. ChatGPT was used principally for practical purposes such as getting tasks done, seeking information and help with writing (particularly work documents) rather than for self-expression, creative or playful activities.
An earlier survey conducted by the National Bureau of Economic Research (Blandin et al., 2024) found that by late 2024, nearly 40% of the US population in the age group 18–64 years had used generative AI. Almost a quarter (23%) of employed respondents were using it for work at least once in the previous week, and 9% used it every workday. One third of respondents reported using generative AI applications outside of work, with 27% using it at least once in the previous week. Pew Research Center surveyed more than 5000 American adults at a similar time (August 2024) (McClain et al., 2025). Key findings were that respondents were more concerned than excited about the possibilities of AI, with 43% thinking that its increased use was more likely to harm than benefit them, and one-third were unsure. In terms of their use of AI, 27% said that they interacted with it several times a day, and a further 30% used it about once a day or several times a week.
Looking beyond the USA, a national survey undertaken in the second quarter of 2024 investigated the use of AI chatbots in Spain over the previous six months (Suárez and García-Mariñoso, 2025). It found that older and less well-educated people, together with women, were less likely to use chatbots, while heavier users of other digital technologies were more likely to use them, as were students compared with full-time employees. A UK study conducted by the Alan Turing Institute in late 2024 involved children aged 8–12 years, their parents or carers, and teachers (Hashem et al., 2025). Findings showed that more than half of households were using generative AI technologies, with those of higher socioeconomic status and those in England being more likely to use them. Nearly a quarter of the children included in the survey reported using generative AI, with ChatGPT the most commonly used application. Three-quarters of parents, carers and teachers reported high levels of concern over the negative impact of children's generative AI use on critical thinking skills. Yet two-thirds of teachers admitted to using these applications, predominantly ChatGPT, in their work for lesson planning, research, providing student feedback and designing assignments.
To date, limited research on Australians’ generative AI use has been published. A survey of adult Australians conducted in early 2024 (Notley et al., 2024) found that 39% of respondents had experience using text-based generative AI services such as ChatGPT or Bard, although only a minority of this group (13%) were using these services regularly. A further 29% of respondents knew of these services but had not used them, while 26% responded that they were not at all familiar with them. Of the 59% who had not used these services at all, almost half had no interest in doing so. Younger respondents and those with a high level of education and a high household income were much more likely to be regularly using generative AI. The respondents demonstrated largely negative attitudes toward generative AI. Just over half thought that it was being developed too quickly and 40% agreed that generative AI will harm Australian society, while only 16% disagreed and the remainder (44%) were unsure. Almost three-quarters (74%) believed laws and regulations are needed to manage risks associated with generative AI.
Another study surveyed Australian (in October 2023) and British respondents (in January 2024) about their awareness of AI embedded in everyday digital technologies. In both samples, traditionally digitally excluded groups (women, those of lower educational attainment, older people) demonstrated significantly lower awareness and held more negative views about AI than did members of other social groups (Bentley and Evans, 2025). A cross-sectional survey administered across four Australian universities (Henderson et al., 2025) showed that half the student respondents had sought feedback on their work from generative AI, valuing the ease and timeliness of obtaining such feedback, but were aware that it was less trustworthy than feedback provided by their teachers. A report published by the Australian Government's statutory body Jobs and Skills Australia used a meta-analysis to estimate that 21%–27% of Australians, mostly in white-collar occupations, were using generative AI at work without their managers’ knowledge, hiding this use because they feared being seen as cheating, lazy or less competent by colleagues (Jobs and Skills Australia, 2025). This report was published at the same time as the Australian Productivity Commission and the Federal Treasurer were urging citizens to use generative AI more actively in the workplace to achieve better efficiency and productivity.
Departing from the large-scale survey approach, a small number of studies have used qualitative methods to investigate people's ideas and beliefs about generative AI in greater depth. In research directed at eliciting Americans’ perceptions of chatbots (Cheng et al., 2025), 12,000 participants were asked to provide open-ended metaphors reflecting their mental models of these technologies. Data were collected over a 1-year period following the mainstream adoption of ChatGPT (May 2023). Findings showed that participants generally thought of AI as a helpful tool and a powerful search engine or robot, but also in anthropomorphic ways as warm and competent. Another qualitative project eliciting metaphors used three in-person workshops with students, academics and support staff at a business school in an Australian university, inviting them to articulate their own and their peers’ viewpoints on generative AI (Vallis et al., 2025). Four categories of metaphor were identified across the workshops: conceptualising generative AI in terms of its tasks and practical capabilities as tools; portraying it in human-like relationships, roles and social positions; representing it as mysterious, unreliable, uncontrollable or magical; and portraying it as having volition, intention or agency, and as competitors with humans.
Adopting a different perspective, between March and May 2023, researchers interviewed 20 older American adults (aged 65 years and over) about their perceptions of AI chatbots in an attempt to identify their folk theories concerning how these applications worked and factors influencing their use. Participants interacted with ChatGPT as they were being interviewed (Enam et al., 2025). Interviewees’ responses tended to anthropomorphise the chatbots and see them as trustworthy providers of information, but exhibited uncertainty about how ChatGPT found information and generated its answers to queries. They appreciated the fast responses provided, the simple process by which queries could be entered and the ‘personal touch’ of the friendly conversational style used by the chatbot. The interviewees did, however, hold concerns about data privacy, the security of their information and its accuracy. Other research on AI folk theories involved online focus group discussions with Brazilian university students in mid-2024 (Eder and Lhamby, 2025). Five folk theories were identified: AI is a constant duality; AI is explainable, just not in detail; AI is inevitable and inescapable but sometimes unnoticeable; AI is about power; and AI is what we make out of it.
In this article, we build on and extend these studies, discussing findings from our qualitative research to provide some in-depth insights into how Australians are responding in a fast-changing environment in which a growing range of constantly updated generative AI applications are both highly promoted and regularly criticised. The qualitative method of focus groups was chosen as a way to facilitate interactions and discussions between participants. In the Methods section, we explain the reasons for this choice and how we developed an innovative approach to conduct these discussions online. The Findings section outlines the topical themes identified across the group discussions, followed by a discussion of the findings and how they relate to previous research on mundane AI use.
Methods
The focus group method is often chosen for in-depth exploration of topics about which little is known, used to generate researchers’ understanding of attitudes, norms and practices rather than seeking generalisable insights from a population. Focus groups were first used in therapeutic settings and for market research but have been employed in academic social research since the mid-20th century. Group members are encouraged to build on or debate each other's viewpoints to deepen and extend the discussions (Parker and Tritter, 2006). While group discussions have drawbacks, including allowing participants less time to go into detail in their responses and the possibility that participants withhold controversial opinions or highly personal information, this approach also offers benefits over individual interviews. Applied to novel media and digital technologies, focus group discussions can be productive in drawing out tensions or disputed issues as well as inspiring participants to engage in conversations with each other about issues they may otherwise not have thought to mention (Cheong and Nyaupane, 2022).
For our study, we modified the standard focus group discussion method by using a collaborative digital whiteboard to present the questions asked of the group and a set of images to which participants were invited to respond, and by incorporating two creative prompts at the end of the sessions, which invited participants to think together to imagine the future applications and impacts of generative AI. All focus groups were conducted online using Microsoft Teams to facilitate accessibility and inclusion for participants living in different parts of Australia, including rural regions. The discussions were recorded and auto-transcribed using the built-in Teams software. Ethics approval was provided by the UNSW Sydney Human Research Ethics Committee, and all participants provided signed consent via PDFs attached to an email to the research team before their participation in the focus group. People were informed ahead of time that the discussions would be about generative AI and its environmental impacts but were assured that they did not need to know about or use these services to take part.
A research company that specialises in hard-to-reach populations was commissioned for recruitment. They were asked to recruit people aged 18 years and over from a broad range of ages, with a balanced mix of genders and a mix of metropolitan and rural regions. Once eligible members of the research company's panels expressed interest, they were asked to sign up for one of the focus groups and were provided with the login details for the online session. Following participation, they were provided with an AUD100 digital gift card as thanks for their time.
Table 1 provides details for each participant (all names are pseudonyms). Eighteen participants were women and 14 were men. They were located in five different states/territories across Australia, with 24 people living in a metropolitan area and eight in a rural region. Three participants were in the 18–24 years age group, five in the 25–34 age group, five were aged 35–44, 16 were aged 45–64 and three were aged 65 years or over. The participants came from diverse ethnic/cultural heritage backgrounds, reflecting the multicultural Australian population. There were three people with Indigenous heritage, five with South-East Asian/Indian subcontinental heritage, one with Middle Eastern heritage, 10 with continental European heritage, seven with Anglo-Celtic heritage and six who described themselves simply as ‘born in Australia/Australian’. In terms of educational attainment, just over half (17 people) had completed university-level qualifications at the Bachelor's level or higher.
Participant sociodemographic details.
Each focus group was one hour long, and all were designed and conducted by the authors. We used FigJam, the Figma collaborative digital whiteboard tool, as a visual aid to facilitate the conversations among the group participants. Each question we asked verbally during the sessions was also displayed as text on the whiteboard, and the group facilitator (the second author) used the sticky notes function throughout the sessions to write notes of participants’ views as they were expressing them. This meant that the whole group could see the notes to help generate further discussion and reflections.
Following the provision of an overview of the project and group member introductions, we began with the question ‘What is generative AI?’ as a way of investigating initial ideas about what the technologies are. To identify actual practices of use and attitudes towards the technologies, the group was then asked, ‘Who here has used generative AI tools such as ChatGPT? What are some examples of how you use them? For those who haven’t tried these tools – please explain why?’. The second part of the focus group addressed the environmental impacts of generative AI technologies. Participants were first asked, ‘Can you think of any ways in which generative AI might have good or bad impacts on the natural environment?’. Before moving on, assuming that some participants might not be familiar with the infrastructure supporting generative AI, we used the FigJam board to show an image of the outside and inside of a data centre, a diagram made by one of the authors to show in simple terms how data centres have an impact on the environment, and a photo of electronic waste. After displaying these images and explaining what they show, the facilitator asked the group: ‘What thoughts come to mind when you see these images? How do you feel about them?’.
The group discussions ended with two creative prompts designed to inspire thinking about the future, as follows:
1. Imagine a future Australia in 50 years’ time. What do you think chatbots like ChatGPT will be used for in relation to helping combat climate change, loss of biodiversity and extreme weather events? In what ways might generative AI be harming the environment in this future Australia?

2. If you were an AI developer and could create any kind of AI you like that would help solve the problems facing the environment, or the impacts of problems such as air and water pollution on humans, what would it do?
Before ending the sessions, participants were given the option of making any further comments, expressing their thoughts and feelings about generative AI.
The analysis adopted a reflexive iterative thematic approach, aligning with what Braun and Clarke (2023) describe as ‘big Q’ qualitative research. Our research materials were the notes written by the discussion leader on the FigJam whiteboard during the group conversations, together with the auto-generated verbatim transcripts of the focus groups (first checked and corrected by the authors), which we combined to develop a set of fieldnotes for each focus group. Both authors developed an iterative approach drawing on repeated reading of the fieldnotes and transcripts, in which we individually conducted thematic analyses and selected illustrative quotations to support the topical themes. We passed our insights back and forth with each other, building a shared analysis.
Findings
The findings are grouped under the topical themes identified across the group discussions: definitions of generative AI; everyday uses; errors, disinformation, fake content and data privacy; generative AI not human enough or dehumanising; and imagining generative AI futures.
Definitions of generative AI
For the most part, participants defined generative AI in terms of what they thought it could do. They noted that it is a ‘computer program’ (Jess) that can construct various types of content like text, audio, and video, by drawing in information that is available online: ‘it's fed off all the information that sits publicly available on the internet’ (Michael). Some people described generative AI as the latest version of a search engine. As Farhad put it: ‘It's kind of like a more advanced stage of Google – I guess like a Google on steroids where you can ask it a more specific question and it gives you a more thorough answer’.
Generative AI applications were portrayed as a new way to create content from information that was already available on the internet, sourcing relevant material and returning it to users. Participants emphasised that while the data themselves were pre-existing, the most significant function of generative AI technologies was in recombining the digital information or summarising it in new ways: ‘Generative AI solutions are the ones that search through huge amounts of data available on internet and all other sources and try to give you the output in a way that it seems original’ (Vihaan). Participants also made reference to the diversity and range of generative AI tools: ‘There's many, many things that it can do. People use it for assignments and people use it to generate video content. It goes and gathers all this information all over the world […] you actually type in what you want and whatever the content has been fed to it, it will go and peruse that content and it will bring it back to you’ (Mila).
People described generative AI's role in assisting with efficiency and productivity: ‘it just generates information a lot quicker than if you try to find it yourself’ (Laura). These technologies were positioned as providing answers and assistance in various tasks and quickly creating documents and websites as well as providing summaries of information. As Bianca put it: ‘It's like a whole accumulation of data. It can construct anything, like a letter or an email’. For a few participants, the creative affordances of generative AI were key to how they defined it: ‘It allows you to create videos and photos with your imagination […] and it's so realistic’ (Suresh).
Several people suggested that it was the human-like conversational affordances offered by generative AI that most distinguished it from other software. Tracey commented that: ‘I’d say if I was describing it to someone, I’d say it's a robot trained to act like a human’. For Vihaan, ‘it's just like you’re chatting with a friend’, while Shelley said, ‘I look at it like it's an advisor that you can turn to. You can ask it absolutely anything and it will give you the information’. Michael noted that technologies such as his laptop with voice recognition software helped him to converse with generative AI: ‘it's like talking to someone and they can tell you anything you want. You can ask them by voice’.
Some people commented on how the software has the capacity to learn. Terry described it as ‘learning from itself’, while Jess described it as ‘like a computer program that can be taught. And I guess the more input that it has, the more it can like, the more it can learn what's right and what's wrong’. Others noted that some effort was required on the part of the user to ensure the correct query was made: ‘You tell it “I want to create this”, and if it doesn’t spit out what you want, you literally go, “No, I don’t want that – I want this instead’’’ (Brooke).
Everyday uses
Nearly all participants had tried generative AI applications to some extent. ChatGPT and Copilot were the most commonly used applications, deployed for various tasks such as emails, content creation, social media posts, and presentations. Many participants noted that generative AI could now be found embedded in other applications, giving examples such as Google Search, Samsung phones, Facebook, messenger apps, Instagram, LinkedIn and Outlook. Some people had only experimented a little. For example, Michael said, ‘I’ve only kind of like teased it a bit, just mucked around with it, see how it works’. Fiona was the only person to say she hadn’t yet tried these technologies but went on to comment that she would be learning how to do it in the near future: ‘my daughter recently told me about ChatGPT. And she said it was amazing, and she was going to show me how to use it next week’.
Some people expressed frustration at the ubiquity of generative AI. As Kelly put it: ‘It seems to be in everything. Because you just switch on to everything and there it is. It's like you haven’t even been given permission to see if you want to use it or not’. Chloe, who described herself as ‘not a huge fan of AI’, went on to say: I try and avoid it where I can. It really irritates me that it seems to be in every single app ever now. Every app has a search with AI function and on some message apps. Like you can like rework your message with AI to make it funnier or make it friendlier or make it more supportive.
However, Chloe was unusual in her direct intention to avoid or resist the use of generative AI. Most other people were enthusiastic adopters, describing numerous everyday uses of the latest AI applications. Bianca said that she uses ChatGPT daily – mostly to compile travel itineraries. Suresh also used Copilot for travel planning: ‘if I need to go somewhere I ask it, “OK, what's the weather there, what's the best time to go there?”’. Suzanne said she was using generative AI to create images of how her house might look with interior decoration or renovation changes, while Nicole planned her garden layout with it. Kelly described how she had recently had a medical procedure and had used generative AI to help her understand what the doctors were advising. Brooke gave the example of using generative AI to find information on the internet and then format it in the way she needed: Let's say you choose to contact a real estate agent. I want to see how many [in her residential area] real estate agents there are. I want their contact number, I want their address, I want their e-mail address, I want all their names. And it will generate and populate a list for you that you can also copy and paste into a Google format or Google templates and stuff like that as well.
Beyond domestic settings, another common use recounted by participants was for work purposes. People said that they found generative AI applications beneficial for saving time and enhancing productivity at work. Abasi said that he used ChatGPT ‘a lot’, going on to mention ‘emails, producing content, social media posts, Facebook and things like that. I use it in presentations. I use it in Excel formulas. It's very good and saves lots of time’. Jacqui described similar uses at work: If I’m writing an email or sending an SMS to somebody, I can literally just press a button. It just gathers all the information from the screen on their file. I just skim over it and go, ‘yeah, that sounds good’. And then press send.
Suzanne, a small business owner, uses generative AI to create social media content to publicise her business. Suresh, a business analyst, described how he used Copilot to help him write business cases: ‘It helps me to present the information a little bit more professionally’. Maja described how she used generative AI applications when applying for jobs: You can just put your resume in together with a job description and get it to fix up your resume, your cover letter and then LinkedIn invites within minutes. Things that used to take hours, it's now down to two minutes.
Maja also used these applications at work to render complex documents into much simpler language, improving access for people with low levels of literacy. Krish, who works in health and safety inspection, found generative AI useful for quickly formatting the reports he must write on the investigations and audits he carries out. Similarly, Laura used these applications for generating policy and procedure documents and making posters.
Carlos, a university student, commented that ‘a positive effect of generative AI is that it can be used as a teaching tool’, going on to give examples of it providing structures for essays and assignments. His lecturers had encouraged him to use ChatGPT for his studies. He also noted that it could be used to ‘cheat on tests’ by providing answers. Ben also described generative AI as ‘a great research tool’ that could be used for work or study purposes.
Errors, disinformation, fake content and data privacy
While many participants found generative AI applications to be useful, they also expressed several concerns and anxieties about these technologies. The drawback participants cited most often was the frequency of errors in the content returned from their queries. Participants emphasised the need to use generative AI with caution and to double-check its outputs constantly, given the risk of incorrect or missing information. As Abasi noted: ‘It's not always 100% accurate. You have to read carefully the content that is generated by AI. And especially if you know the subject that is generated, you feel sometimes there's something wrong here’. Ben pointed out that: ‘It gives answers as if it's 100% correct, but yeah, it generally won’t be. It’ll say something as if it's a fact’.
Some people pointed out that given these continual errors, generative AI should not be equated with human intelligence. For example, Suresh observed that: ‘With GenAI, even though we say artificial intelligence, it provides content based on patterns. It's not thinking like humans are. It may not be the right content’. Paul wanted to make the point that the language used to describe generative AI technologies, such as ‘teaching’, ‘learning’ and ‘intelligence’, was misleading: ‘It worries me when we use the word teach [in relation to AI], because we’re not actually teaching. Generative AI is not intelligent. So to be able to teach somebody something, there needs to be a level of intelligence there’.
Even more concerning than continual errors in generative AI content was the idea that the software could be used by others to deliberately mislead people or spread malicious disinformation. Jelena expressed her worry that with the wholesale use of generative AI in news reporting and social media posts, it was difficult to trust the information presented in such content: ‘we are not told about it and that's what scares me. I want to know what's real’. She also brought up the issue of how generative AI could be used for the purposes of political disinformation: ‘creating conflict and fake news, etc, and just causing confusion’. Fiona gave examples of how fake images were being created and shared on news sites and social media platforms such as Facebook: Well, I mean, there's so much misinformation out there. You can use AI to create images that are just completely false. So there should be some sort of checks and balances, since, you know, you can’t really trust a lot of the news these days too […] It's also on Facebook, you get these AI generated images that are very, very deceptive.
Chloe pointed out that there were few provisions to control the AI-generated content that was added to the internet. She was particularly concerned about the deep fakes used to create explicit sexual content, leading to image-based abuse and domestic violence: ‘especially as we see lots and lots of younger people using AI, it really just makes me quite concerned in terms of an online safety and a privacy lens’. Using examples he had seen in India, Krish also referred to how fake content could have serious ramifications by inciting violence: ‘there have been people who have used AI to make videos or audio clips and spread it out in social media, which has caused community violence in India and people have died’.
A related set of concerns expressed by participants referred to the ways that generative AI applications harvested and processed users’ personal information. For example, Tracey recounted her experience of an AI application apparently knowing where she lived, even though she had not directly told it: AI has definitely lied to me. It said something about [the place where I live …]. I’ve never mentioned where I was from at all […] And when I tried to drill down about ‘How do you know where I live?’, it said, ‘Oh, just a guess’. And I said, ‘You’re lying to me’. It said, ‘I’m not programmed to lie’, but it didn’t say ‘I didn’t lie’.
Responding to Tracey’s comment, Maja replied: ‘That's confirming all of our worst fears, isn't it? That it starts stalking you’. She went on to express worries about AI software reading and using people's personal information without their knowledge or consent. Maja had noticed at work that generative AI is ‘even on Outlook. When you start typing your emails, it's always present there’. This worried her, as she wondered whether the software was ‘going to read all my information and use everything that I’m saying and doing freely’. She did not feel that she could do anything to remove the function from her Outlook. In a different group, Nicole commented that scammers were using these technologies, ‘so it's a bit scary that they’re finding newer ways to use AI to scam people’. Alicia, who works for a government department, noted that: ‘We’re very cautious of things like data breaches, so we’re very concerned with using these sorts of things because you never really know who can get access to them eventually’.
Generative AI not human enough or dehumanising
Several people expressed their concern that generative AI tools were diminishing critical thinking skills now that people were beginning to rely on them for everything. As Vihaan put it: ‘it's making us lazy, ‘cause now I instead of proofreading my emails, I just put everything in Copilot’. For his part, Jake had found a negative aspect of using generative AI was having to proofread the content it created to ensure that the language used reflected his own style and identity: I find that if I’m for example putting together like a cover letter or an e-mail or a report, like the language used doesn’t reflect like who I am. So I do need to spend some time going through the information and making it a little bit more personable and not robotic.
Jacqui observed that her clients, particularly those who were older or from a less educated background, disliked dealing with a chatbot: ‘They don’t want to talk to a computer. They want to talk to an actual real person’. She went on to note that she felt ‘torn between both ways because I like person to person interaction, but I can also see how in the future it's going to make life so much easier. But at the same time so many people are getting left behind’.
Numerous participants noted that in educational settings, generative AI tools were causing difficulties. Some people pointed out that schoolteachers face significant challenges now that students use AI for homework and assignments, ‘and they’re not actually learning anything, because they’ve just gone to the internet feeding in their worksheet and then getting their answers done for them and then they’re not actually reading it all’ (Alicia). Vanessa, who has sons in high school and university, expressed her concern that their use of generative AI was ‘taking away their creative thinking and they’re just relying on it too much for doing the research or creative writing’. Nicole, who was herself a secondary schoolteacher, said she uses ChatGPT in her teaching. However, she noted that she drew the line at students’ requests to use content created with generative AI as part of their portfolio of creative work: ‘It's only as good as we can make it. So yeah, I’ve told them, no, you can’t’.
For some people, the use of generative AI technologies was viewed as dehumanising. As Alicia put it, using this software means ‘losing your humanity a little bit’. It is for this reason, said Chloe, that she has made a deliberate decision to avoid using these technologies: If I can get ChatGPT to write me an email or to write this paragraph in an assignment that I have to do, or write me a recipe – like, I can see that it would be very, very easy to start using it for absolutely everything. And I’m scared of losing who I am as a person.
Maja said that she worried about how children and young people may be developing relationships with chatbots and thereby isolating themselves from other people: ‘we’re already seeing how AI is being promoted as a kind of “new best friend” for kids. Instead of going out and forming real, face-to-face connections, they’re engaging with robots online’.
Imagining generative AI futures
When considering the futures of generative AI in their lives, participants articulated both hopeful and pessimistic imaginaries. Some people imagined future uses of generative AI that would benefit human lives in more profound ways. For example, possibilities for further developments of generative AI applied to health and medicine were suggested by Krish: ‘You could rule out human errors that might happen during surgery. Especially with major medical procedures or for people suffering from serious illnesses’. In the same group, Jelena responded to Krish's comment by suggesting: ‘Why can’t we ask AI how to alleviate dementia and some of these diseases? I mean, how to be healthier and how to live a healthier long life so we can experience the benefits that a healthy old age can provide’. In a different group, Tracey also gave the example of future generative AI potentially providing help to older people to communicate with others, receive medical care or entertain themselves. She drew on the example of her own mother, who is almost 100 years old, to think about how these technologies could potentially help her if she reached that age. However, Tracey also acknowledged that ‘in my heart I think that generative AI will probably be used by the powers that be for profit and no thought to the future’.
Some people raised concerns about generative AI or other novel technologies such as robots taking people's jobs. This issue was noted by Suresh: ‘it will take away jobs from like a certain industries or certain types of work’. Suresh's comments led to Zainal in the same group mentioning people in the service industry having their jobs taken over by robots, and Michael describing a case he had heard about where a young man had committed suicide after becoming attached to a chatbot. In a different group, Shelley was worried that ‘humans need to be able to keep their brain functioning’ and Paul raised the example of children using generative AI as therapists and becoming dependent on it for emotional support. Jelena then reflected that if people began to rely on chatbots for social relationships, this would lead to greater feelings of loneliness and social isolation.
This group continued to debate whether greater use of chatbots for mundane tasks would free up more time for humans to socialise with each other more, or whether it would lead to loss of cognitive function and reliance on robots and chatbots for company. Paul took these ideas further by suggesting that: ‘Chatbots won’t exist in 50 years’ time. It’ll have moved to something totally different that we can’t even imagine. You’re basically going have a silicon chip in your head that just, infiltrates your thoughts or whatever it may be’.
In other groups, discussions about the future of AI led to participants reflecting on the need for better regulations and governance to prevent it from becoming too powerful and potentially harmful. The difficulties of achieving strong regulation in Australia were pointed out by Chloe: ‘There are so many different AI platforms. If you regulate one of them, people are just going to go to the next one. And they’re going to find a different way to do whatever they want to do with AI’. Farhad gave the example of YouTube and how difficult it had been to prevent harmful content being delivered to children. For him, regulating generative AI was an even bigger problem: ‘It definitely needs to have some form of control from the government to make sure that children are protected as well, ‘cause it's going to change the world’. Responding to Farhad's comments, Ben conjured up an even more dystopian future by drawing on narratives from the
In the face of what seems to be the overwhelming power of Big Tech, several participants commented on their lack of agency to challenge this at the individual level, particularly given its pervasiveness in many software applications. For example, Maja emphasised that: We have a history of embracing new technologies as exciting and innovative, only to discover their negative effects years or decades later. And by then, it's often too late, no one wants to give up the convenience or ease of their lifestyle. You can’t put the cat back in the bag once it's out.
There was some optimism from a few participants that generative AI applications could potentially improve through further refinements. Farhad observed that he had compared the latest iteration of ChatGPT with the earliest version and had noticed that it had become more accurate and ‘smarter’ in its content, which he attributed to high use: ‘it's learning every day with the inputs that it gets from so many people’. Kabir also wanted to acknowledge that generative AI was still in the early stages of development and that it was difficult to predict where future developments might go. He found this ‘scary’ and ‘exciting at the same time’. Kabir went on to point to the need to think carefully about the ethical implications of generative AI use, weighing the time-saving benefits against the harmful impacts: ‘we have got to ensure that we have a moral compass that we can continue to operate within. And not start to use these sorts of technological advancements for destruction rather than more for making life easier’.
Discussion
Across the focus group discussions, we identified several key themes related to participants’ experiences, understandings and imaginaries of generative AI technologies. Compared with earlier Australian research (Bentley and Evans, 2025; Notley et al., 2024), our participants had accepted these technologies more completely. Nearly all participants, regardless of their age, gender, ethnicity or geographical location, had experimented with generative AI applications, and many had begun to incorporate them into their quotidian routines. This degree of adoption perhaps reflects greater acceptance of AI and a degree of routinisation through the gradual introduction over time into many domains of life, including the workplace (Jobs and Skills Australia, 2025) but also personal life (Chatterji et al., 2025).
There was little evidence among our participants of what Goulden and Cameron (2025) refer to as ‘mundane resistance’ to applications such as ChatGPT or Copilot. Rather, findings demonstrated what we might call ‘mundane adoption’ of these applications in ways that many users found helpful rather than spectacular. Far from being enchanted by the possibilities of these technologies, these Australians saw them as little more than handy software that was now pervasive and therefore unavoidable. Generative AI was portrayed as an accessible, automated way of finding and compiling information from the internet. Reflecting Big Tech's ‘personal AI’ promotional discourses, these technologies were mostly deployed for routine tasks at home and work, portrayed as useful tools or helpers for achieving better efficiency, time-saving and productivity (cf. Chatterji et al., 2025; Cheng and Wu, 2024; Enam et al., 2025; Vallis et al., 2025).
Notably, beyond some playful speculation about a science-fiction version of the futures of generative AI, participants’ imaginaries did not echo the more far-fetched visions of promoters of AI hype or their dystopian detractors (Hanna and Bender, 2024). Unlike other Australian findings (Vallis et al., 2025), there was little suggestion from participants that AI itself would become a powerful, super-intelligent agent capable of controlling or replacing humans. Instead, and perhaps because their everyday experiences had amply demonstrated to them the failings and lack of ‘intelligence’ of contemporary AI, our participants’ concerns were grounded, for some, in the realisation that over-use of these applications could make them lazy or delimit their capacities to learn by doing. Most participants were aware from their own experience that the software they used frequently generated incorrect information, and therefore required checking, but seemed largely untroubled by this. While there was some degree of anthropomorphising in describing the services as ‘a friend’ or ‘an assistant’ – or more negatively as ‘lying’ to or ‘stalking’ people – this was not consistent. Other uses of language represented the services less as human and more as automated machines. Several people challenged the use of the term ‘intelligence’ to describe the software or emphasised that human intention lay behind its misuse. Similar findings were evident in qualitative Brazilian research on folk theories about AI (Eder and Lhamby, 2025).
Recognising that generative AI was now ubiquitous in many digital devices, software and social media platforms, participants for the most part seemed resigned to adopting them and making the best use they could of them. In this way, generative AI was positioned as just another harmless, practical and convenient app: another piece of ‘mundane software’ contributing to the ways that everyday life has become ‘appified’ and automated (Clark and Lupton, 2023; Morris and Elkins, 2015; Pink et al., 2022). There are also resonances here with previous research identifying ‘digital resignation’, or lack of user resistance in response to the introduction of digital applications and systems into everyday life by corporations or governments, and ‘privacy cynicism’ or ‘apathy’, the recognition that once datafication and digitisation have spread into most corners of society, there is little that can be done by ordinary people to protect their personal information from third parties and scammers (Draper et al., 2024). As one of our participants put it, ‘you can’t put the cat back in the bag once it's out’.
Cheong and Nyaupane (2022) refer to the ‘dialectical tensions’ in focus groups they held with US university students about ‘smart’ technologies and the Internet of Things. Likewise, we could discern some dialectical tensions in the ways that our participants described generative AI applications. On the one hand, this software was mundane and practical; on the other hand, it could deskill users or be deployed for criminal or abusive purposes. Generative AI was viewed as both time saving and ridden with errors that required extra time to check. It created novel content but did so using information that was already available rather than generating new data. It could be used in educational ways but also detracted from learning. It was human-like in its friendly conversational interface but did not possess human intelligence and could potentially dehumanise and socially isolate its users.
These tensions in our participants’ views on generative AI reflect a broader local and global context in which fierce debates are being carried out about the value of these technologies, to what extent they should be incorporated into education and workplaces, and the extent to which the AI Empires should or can be regulated by governments (Bender and Hanna, 2025; Guest et al., 2025; Hao, 2024; Hicks et al., 2024). While most of our participants did not mount strong criticisms of Big Tech for their exploitative and profit-driven motives, they did express feelings of powerlessness over what they could do to avoid using generative AI in the face of the determination by Big Tech – and in some cases, their employers and educational institutions – to encourage its use (cf. Eder and Lhamby, 2025). Indeed, many were unwilling to give these technologies up for the convenience they offered for everyday tasks. When imagining the future, some participants did express concern about the challenges to human thinking, social relations and identity caused by regular generative AI use. There were worries expressed by some people about the impacts of disinformation or fake information, data privacy issues, scams or social isolation. However, for the most part, participants did not directly relate these risks or harms to their own lives in the present day. Notably, the participants demonstrated little knowledge of the hidden human labour behind generative AI technologies or awareness of the exploitation of people working in this industry (Hogan, 2024; Ruschemeier, 2025; Schaake, 2024; Tacheva and Ramasubramanian, 2023).
Conclusion
Our Australian study builds on previous research by using a qualitative approach that provides in-depth insights into recent public perceptions and practices related to generative AI. Our research goes some way to demonstrating how a method such as online focus groups investigating current understandings and practices can inspire lively conversations among participants, in some cases, pushing people to examine their assumptions or misperceptions. Including a heterogeneous group of people in each discussion group meant that diverse experiences and ideas could be expressed in a context in which people could respond to others’ comments, validating or extending them in different directions. Including creative prompts to encourage speculative thinking helped participants to think beyond the present day to consider how current generative AI applications may develop into the future. In such a dynamic technological landscape, there is a need for further research that investigates how Australians’ everyday use and social imaginaries may change as the services themselves are developed and marketed in the future.
Acknowledgements
Thank you to the participants for sharing their thoughts and ideas with us.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Australian Research Council (grant number CE200100005).
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
