Abstract
Artificial Intelligence (AI) is becoming increasingly prevalent and its applications increasingly sophisticated. Concerns about AI's vulnerabilities and future threats have, however, long been debated among scientists and the tech community. By integrating Beck's theory of risk society with an audience-centered sense-making approach, we seek to understand the effects of AI on the general public's daily lives and their concerns when adopting AI technology. Five focus groups with 36 participants from Singapore, a technologically advanced country, were conducted to investigate their risk perceptions of AI and AI-powered technology. We found that participants were not passive consumers of the content appearing on their news feeds; indeed, some participants attempted to outsmart the algorithms. They were aware that tech companies often tweak algorithms to personalize content and drive consumers into rabbit holes. Nevertheless, despite a certain level of awareness and sporadic attempts to “outsmart” the system, many users might still be influenced by these algorithms, underscoring the extent to which consumers are often manipulated by smart tech powered by AI. We discuss the theoretical and policy implications of our findings by examining the contextual factors.
Artificial Intelligence (AI) has grown in significance in recent years, so much so that it now permeates many aspects of our daily lives, from curating content on social media to correcting expressions for work-related matters. Nevertheless, despite AI's relative novelty, spanning just two decades, concerns about its vulnerabilities and the future threats it poses have been extensively debated among scientists and the tech community (Bostrom, 2003; Johnson, 2019). Research on AI and its social, economic, and political consequences has highlighted several interlinked issues – from data privacy concerns (Brevini and Pasquale, 2020; Crawford, 2021) to enhanced corporate power (Flensburg and Lai, 2022) to jobless growth and economic inequality (Schwabe and Castellacci, 2020). AI-based systems have also been associated with algorithmic bias, perpetuated by the overgeneralization of AI algorithms across subgroups and ethnicities, leading to socially undesirable outcomes (Parikh et al., 2019). According to Crawford (2021: 116), the “risk profile of AI is rapidly changing as its tools become more invasive and as researchers are increasingly able to access data without interacting with their subjects.” Correspondingly, online users are increasingly dehumanized and reduced to mere data. Furthermore, AI embodies an instrumentalist view of human actors, with its technology suited more for “strategic action” that manipulates behaviors than for “communicative action” that leads to informed dialogue (cf. Habermas, 1979: 209).
The notion of risk associated with technology was well articulated and conceptualized by the late German sociologist Ulrich Beck, who argued that we are living in an emergent world risk society (Beck, 2009a), a condition that raises deep questions about how people perceive and adopt technology. He argued that modernity is increasingly characterized by global risks, from pollution, genetically modified food, and nuclear disaster (Beck, 1992), to climate change, terrorism, and global financial crises (Beck, 2009a). While Beck died in 2015, before the most recent advances in machine learning, his warnings concerning technological advancement and its potential dangers for society may also apply to AI. Thus, drawing upon Beck's notion of risk society, we try to understand how people perceive risk in relation to AI technology.
Several studies have been conducted over the years to understand public opinion toward AI. Some of these are surveys initiated by corporations (Carrasco et al., 2019; Pega, 2017), while others are quantitative scholarly studies (Horowitz and Kahn, 2021; Zhang and Dafoe, 2019). However, few studies adopt a qualitative perspective, although there are some notable exceptions (cf. Aitken et al., 2020; McCradden et al., 2020). Furthermore, the scope of these studies is often confined to what is profitable for companies or governments. We argue that a qualitative approach, specifically focus groups, is necessary to develop a current understanding beyond what is politically and commercially salient. As noted by Metzger et al. (2010), focus groups are advantageous because (a) they can explore ideas that are not easily quantifiable, (b) participants are encouraged to draw on experiences that are personally relevant and meaningful, and (c) discussion amongst group members can elicit ideas that might not emerge in one-on-one interviews. Thus, focus groups present an ideal setting for an in-depth investigation of AI's impact on everyday experiences.
In order to gain a more comprehensive understanding of public risk perceptions regarding AI and AI-powered technologies, we conducted five focus groups with participants from Singapore. The backdrop provided by this technologically advanced nation is particularly relevant due to its position as an early adopter of emerging digital technologies, making it an insightful indicator of the general public's attitude toward these advancements. Singapore's proactive approach is notably evident through its significant investments to promote AI innovation in recent years (Wu, 2023). As a result, Singapore has emerged as a global leader in digital adoption. Supporting this claim, the International Institute for Management Development (IMD) ranked Singapore 3rd among 63 countries in its 2022 World Digital Competitiveness Rankings (IMD World Competitiveness Ranking, 2022). However, public sentiment remains mixed: a recent survey by Ipsos revealed that while the general public largely expressed excitement over AI, nearly half also registered nervousness, particularly with respect to digital privacy and job security (Ipsos, 2023). Hence, our study seeks to further unpack these concerns, exploring individuals’ personal experiences with AI and its implications for their everyday lives. Our findings show that while participants see many benefits of AI in everyday life, they associate the emergence of AI-powered technology with several types of risks. Our findings also reveal that the risk perception of AI is shaped not only by personal worries but also by broader concerns for the collective and society.
Literature review
Trust and the risk perception of AI
Before analyzing the literature on the risk perception of AI, it is pertinent to briefly discuss how AI has been defined. Despite the growing deployment of AI in various aspects of human lives, there is no unanimity in how AI is defined; its definition varies depending on who is asked. To experts, AI is deeply intertwined with the fields of data science and machine learning, which involve computational methods to interpret datasets and develop programming models based on these deductions (Kotu and Deshpande, 2019). Meanwhile, others argue for a more sociotechnical perspective, suggesting that AI should also be viewed in light of its social and cultural implications (Bostrom, 2014; Crawford and Joler, 2018).
The lay public often envisages AI as intelligent machines that replicate human cognition and behavior (Kersting, 2018) or imagines it through specific applications like Apple's Siri and Google's search engine (Crawford, 2021). Indeed, AI is a type of “floating signifier” in that the term seemingly has no fixed meaning but instead means different things to different people (Chandler, 2017: 90). It is therefore important to be mindful of the context of AI's specific application in any given situation. These diverse perceptions and the rich scholarly discourse around the definition of AI play a crucial role in shaping debates about AI's potential risks and rewards.
The context into which AI is emerging is that of a world risk society. According to Beck (2009a), society has become increasingly preoccupied with global risks—ecological, terrorist, military, financial, biomedical, and informational—that cannot be easily controlled by individuals or even cooperating nation-states. There is, according to Beck (2009b), a deep “manufactured uncertainty” in society due to “omnipresent” risks, which are met with “denial, apathy, and transformation” (Beck, 2009b: 291), the latter of which refers to attempts to transform the perceptions, living conditions, and institutions of modern societies to prevent major catastrophes. However, Beck (2009a: 54) warned that political institutions have limited effectiveness in dealing with the substantial risks produced by advanced industrial societies, leading individuals to lose faith in the rational promises made by these institutions. As a result, individuals are compelled to rely on themselves: being detached without being re-anchored is the bitter-sweet consequence of individualization in a global risk society.
The world risk society framework developed by Beck (2009a, 2009b) thus has considerable relevance for understanding how Singaporeans might perceive AI and its potential risks. Beck (2009b: 298) argued that “modern risks are social constructs in which collective perceptions of the future are envisioned,” but these risks are experienced both individually and collectively. However, social inequalities in society mean that an individual's “risk-class” (cf. Curran, 2018) shapes how well equipped they are to deal with certain risks, such as financial crises, environmental degradation, or joblessness caused by AI. Furthermore, his theories underscore how rapid social and technological change in society has resulted in “new contradictions, tensions, and decisions (as well as) increasing anxiety, insecurity, and uncertainty” (Woodman et al., 2015: 1121). Indeed, the implications of Beck's theories of risk and individualization (cf. Beck and Beck-Gernsheim, 1995, 2002) suggest that trust is a coveted commodity, influencing how new technologies are perceived, adopted, and potentially regulated.
Trust, then, is central to the adoption and acceptance of new technology. For example, trust in government has been associated with greater support for the use of AI by governments (Carrasco et al., 2019), a relationship shaped by the socio-cultural context. Apart from trust between humans and institutions, there is a need to consider trust between humans and technology. Unlike trust in human-human interactions, human-technology trust involves both the specific technology and the technology provider (Siau and Wang, 2018). Thus, trust in technology considers the characteristics of the user, the environment, as well as the performance, process, and purpose of the technology (Siau and Wang, 2018). Other studies have measured trust in technology using human-like or system-like criteria, with the choice of criteria dependent on how human-like the technology is (Lankton et al., 2015). Trust in AI is also heavily influenced by one's perception of the risks involved, regardless of whether these perceptions are valid or accurate (Sanders et al., 2011). This emphasis on technological risk in trusting AI leads us to our discussion of the literature on technological risk perception.
Perceived risk “refers to people's judgements and evaluations of hazards they … are or might be exposed to” (Rohrmann and Renn, 2000: 14–15). It is the result of subjective judgements by the individual and may or may not correspond to actual risk. The concept has been used to study health crises (Dryhurst et al., 2020), food consumption (Phillips and Hallman, 2013), and buying behavior (Zhang and Yu, 2020); in these studies, risk perceptions influenced the behaviors adopted. Risk perception is therefore also valuable for studying technological risk, which is defined as “the processing of physical signals and/or information about a potentially harmful impact of using technology and the formation of a judgment about seriousness, likelihood, and acceptability of the respective technology” (Renn and Benighaus, 2013: 293). For new technologies, technological risk perception directly affects adoption of the technology in question. Hence, in this study, we seek to answer the question: What are the perceived risks of AI and AI-powered technology?
Stuck and Walker (2019) explored five types of risk relating to 23 commonly used technologies: financial, performance, physical, psychological, and social. Based on their study, they called for a nuanced view of perceived risks because each of the five types necessitated a unique mitigation strategy to encourage adoption. Skirpan et al. (2018) distinguished between experts and non-experts to assess their evaluations of the risks of emerging data-driven technologies such as filter bubbles. They argue that the disparity between public risk perception and actual risk often arises because non-experts may not fully grasp the intricacies of the risks involved. It is not that risk perception is entirely divorced from actual risk, but rather that its understanding can be skewed among non-experts due to a lack of comprehensive knowledge. As such, even though the public voluntarily adopts a certain technology, any harm experienced might be involuntary. This underscores the importance of understanding AI-related risks from the user's perspective.
Neri and Cozman (2020) found that most of the chatter on Twitter regarding the risk perception of AI related to an existential crisis in which AI, it was feared, would replace humans. A survey commissioned by The Royal Society (2017) identified four risks associated with machine learning: (1) harm to self and others, (2) replacement by technology, (3) depersonalization of self and personal experiences, and (4) restrictions on individual freedom. Given that public perception of the risks involved in using AI is context specific (Neudert et al., 2020), these broad risk perceptions are further investigated in our study. Digging deep into the experiences of Singaporeans might thus produce insights unlike those of studies conducted in other parts of the world.
Audience-centered sense-making approach
Sense-making is “the cyclical process of taking action, extracting information from stimuli resulting from that action, and incorporating information and stimuli from that action into the mental frameworks that guide further action” (Seligman, 2006: 109). As such, sense-making is an ongoing, interactive process. The key to this approach lies in how it makes explicit circumstances that tend to be taken for granted (Weick et al., 2005). The sense-making approach has been used to understand how news audiences make sense of the high-choice media environment (Edgerly, 2022) and how educators in higher educational institutions integrated new technology into classrooms (Fairchild et al., 2016). Hsiao et al. (2008) also used it to examine the adoption of a then-novel GPS-enabled taxi-dispatch system in Singapore. One function of sense-making is to anticipate the future: according to Klein et al. (2006: 72), sense-making “helps us muster resources, anticipate difficulties, notice problems, and realize concerns.” Through the cyclical process described above, sense-making puts into perspective the possibilities emanating from the mainstreaming of AI technology, and the results of this process affect subsequent decisions and attitudes toward AI technology. The audience-centered sense-making approach is thus a particularly useful lens for investigating the perceived risks of AI among non-experts.
First popularized in the age of mass media, audience-centered research remains valuable in the age of new media and technology (Press and Livingstone, 2006). As Livingstone (2007: 174) shows in a study investigating youth interaction with content online, an audience-centered approach helps us “acknowledge the interaction between empirical reader and the social contexts of everyday life.” Focusing on the context of everyday life is crucial to our investigation. While prior research on public attitudes toward AI has focused on the profitable and the politically salient, by incorporating the sense-making approach with an audience-centered focus, we can observe the ongoing negotiations that people engage in during their interactions with AI technology. According to Mathieu and Hartley-Møller (2021: 3), the “notion of negotiation is important as media effects are not conceived as direct and linear, but always mediated by the interpretative resources and contexts that are brought to bear on media consumption.” This approach helped Mathieu and Hartley-Møller (2021) understand audience trust in datafied media in the context of their daily lives. Similarly, an audience-centered sense-making approach leads us to an understanding of AI-related risks that is deeply rooted in the everyday lives of the participants. With the above theoretical discussion, we propose the following research question: What are the perceived risks of AI and AI-powered technology?
Methods
Participant recruitment and demographics
Singaporeans aged 21 years and older were recruited on Telegram, a cloud-based instant messaging service popular in Singapore. We posted a recruitment notice on public Telegram channels specifically created to share survey links. Members of these channels regularly receive notifications about new research participation opportunities. These avenues for convenience sampling are becoming increasingly popular and have been extensively utilized by the local scholarly community (Wong and Wu, 2023). Forty shortlisted participants then completed a demographics survey, collecting information on their age, education level, and household income (see Supplementary Material A1). The recruitment process achieved a balanced 1:1 ratio for both gender (20 males, 20 females) and age (20 aged 40 and younger, 20 aged 41 and older). These participants were divided into five groups of eight based on age: two groups comprised those aged 21 to 40 years, another two those aged 41 to 70 years, and the final group mixed ages 21 to 70 years. This assignment was designed to achieve maximum variation in opinions to better ascertain whether there are any age-based differences in attitudes toward AI; the mixed group was meant to observe whether an inter-generational group of participants would produce different insights from an age-homogeneous group. Four participants did not turn up due to unforeseen circumstances. Hence, a final pool of 36 people participated, each of whom received SGD 40 as reimbursement. This study received IRB approval at the National University of Singapore.
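For concreteness, the following is a minimal, purely illustrative sketch of how such an age-stratified assignment into five groups of eight could be scripted. It is not the authors' actual procedure or code; the recruit records, random seed, and group labels are hypothetical.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical recruit pool mirroring the paper's 1:1 age ratio:
# 20 recruits aged 21-40 and 20 aged 41-70.
younger = [{"id": i, "age": random.randint(21, 40)} for i in range(20)]
older = [{"id": i + 20, "age": random.randint(41, 70)} for i in range(20)]
random.shuffle(younger)
random.shuffle(older)

groups = {
    "Group 1 (21-40)": younger[:8],
    "Group 2 (21-40)": younger[8:16],
    "Group 3 (41-70)": older[:8],
    "Group 4 (41-70)": older[8:16],
    # The mixed group takes the remaining four recruits from each age band.
    "Group 5 (21-70)": younger[16:] + older[16:],
}

for name, members in groups.items():
    print(name, sorted(m["age"] for m in members))
```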
Focus group design
Each focus group was conducted by two researchers—one moderated while the other took notes—over the video-conferencing platform, Zoom. The five focus groups lasted between 1 h 30 min and 1 h 50 min. Participants were required to turn on their cameras to promote participation. All participants provided informed consent to be audio-recorded prior to the start of the focus group; all audio recordings were destroyed after they were transcribed in full. To anonymize participants, they were assigned code numbers and renamed accordingly on Zoom. Any identifiable research data was also coded at the earliest possible stage of the research. We adopted a semi-structured approach to allow participants to share their experiences and highlight the issues that mattered most to them. The focus group guide was split into four central areas: AI in everyday life, AI for public good, the misuse of AI, and AI in Singapore (see Supplementary Material A2). In the first phase, participants defined AI and shared their experience with AI technologies, zooming in on specifics such as AI-powered smartphone applications, social media algorithms, and bots. In the second phase, participants discussed positive and negative consequences of AI, as well as the domains in which they believe AI recommendations should be applied. In the third phase, participants discussed how AI can be misused by bad actors, their encounters with manipulated audio-visual content, and deepfake technology. In the final phase, participants answered personally relevant questions, such as how they think AI will benefit Singapore compared to other countries, and what form of regulation should take place in Singapore.
Each of these areas is broad and complex, and by splitting the discussion into these four categories, participants can explore each area in greater depth. This approach contrasts with organizing the discussion around specific applications of AI; participants may have different levels of familiarity or interest in these specific applications and, as a result, may not contribute to the discussion in equally meaningful ways. In contrast, organizing the focus group discussion around broader areas of interest related to AI allows for a more comprehensive exploration of the topic. Participants can substantiate their points with personal examples, fostering a more inclusive platform for sharing perspectives.
Data processing and analysis
The collated data were fully transcribed and qualitatively coded following Corbin and Strauss's (1990) grounded theory approach. Transcripts were analyzed line by line to identify relevant codes, with each line compared to previous ones to determine whether it fell under an established code or constituted a new one. We identified 1289 references forming 37 codes during the initial open coding stage. We then merged related codes into larger categories by identifying patterns in the data; some codes unrelated to AI risks (e.g., benefits of AI) were excluded. This second stage of axial coding yielded six relevant categories. Finally, selective coding unified related categories into overarching themes, thereby categorizing findings into individual (44 references), societal (73 references), and national (22 references) risks. While there are numerous ways of contextualizing the results, we settled on these based on their level of impact—to the individual, society, and nation—since these categories were broad enough to encapsulate the wide range of concerns raised by the participants, while still being specific enough to provide meaningful insights into the potential implications of AI on different levels.
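To make the three-stage tally concrete, below is an illustrative sketch (not the authors' analysis code) of how coded references roll up from open codes to axial categories to selective themes. The fine-grained code labels and their counts are hypothetical; only the resulting category totals (27 privacy, 17 consumption) and theme total (44 individual-level references) match those reported above.

```python
from collections import Counter

# Open coding: each coded transcript excerpt carries one fine-grained code.
open_codes = Counter({
    "eavesdropping_fear": 15, "targeted_ads": 12,   # hypothetical labels
    "rabbit_holes": 9, "impulse_buying": 8,         # hypothetical labels
})

# Axial coding: related codes merge into broader categories.
axial_map = {
    "eavesdropping_fear": "privacy risk", "targeted_ads": "privacy risk",
    "rabbit_holes": "consumption risk", "impulse_buying": "consumption risk",
}
categories = Counter()
for code, n in open_codes.items():
    categories[axial_map[code]] += n

# Selective coding: categories roll up into themes by level of impact.
selective_map = {"privacy risk": "individual", "consumption risk": "individual"}
themes = Counter()
for category, n in categories.items():
    themes[selective_map[category]] += n

print(categories)  # Counter({'privacy risk': 27, 'consumption risk': 17})
print(themes)      # Counter({'individual': 44})
```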
Results and discussion
In this section, we analyze three broad areas of AI-related risks: risks to the individual, society, and nation. Aside from risks posed to themselves as individuals, participants also suggested risks that might harm society and the nation, even though they did not necessarily consider themselves to be personally affected. This is in line with the study of attitudes toward AI in banking by Aitken et al. (2020), which found that participants often contemplated AI usage beyond the individual. Table 1 outlines the most frequently discussed types of risk at each level. All our findings are substantiated by quotes from participants and discussed in turn. Although some types of risk may seem to fit into more than one category, we carefully considered each risk's characteristics to identify the most appropriate category and explain our reasoning below.
Table 1. Summary of the most frequently discussed types of risk.

Level of risk   Type of risk                 References
Individual      Consumption risk             17
Individual      Privacy risk                 27
Society         Misinformation risk          28
Society         Operational risk             26
Society         Economic inequality risk     19
Nation          National security risk       22
Risk to the individual
The possible risks at the individual level range from physical ramifications of excessive consumption to privacy intrusion. Moreover, these risks are not confined to a specific age group or demographic, as anyone can become a potential victim. Our data analysis revealed 17 references for consumption risk and 27 for privacy risk, emphasizing the growing awareness of these concerns in the digital era.
Consumption risk
Personalized algorithms form a crucial aspect of the online experience, and participants often appreciated the content recommended to them by AI. However, they also felt that excessive and targeted recommendations may affect their lives adversely by driving them into rabbit holes. Beyond the short-term impact of wasting time on unnecessary content and spending money on products they do not need, participants also considered the longer-term impact of problematic smartphone usage caused by reading addictive content recommended by AI-based algorithms. Participants mentioned that the most common type of content recommended by AI related to product purchases. P16 believed that this resulted in an unhealthy relationship with consumerism: I think it's unhealthy to be surrounded by, especially when it comes to purchases, it's not the healthiest thing to be surrounded by material goods all the time. And to have that part of your brain continuously exposed to the item.
Despite the vast number of choices available online, participants paradoxically felt that personalized algorithms reduced their ability to make choices because they are constantly fed with what they are predicted to need. P1 shared his dilemma regarding receiving recommendations for health supplements that are helpful, yet not necessarily warranted: “Is it beneficial to me? Yes, but do I necessarily want to always be on manufactured desire?”
The above discussion is indicative not only of the paradox of choice phenomenon (Schwartz, 2016) but also of the ethical issues discussed by Aytekin et al. (2021), whose survey participants felt that their online activity was constantly monitored and that they were being directed to make purchases. Amongst the numerous AI functions, personalized algorithms, mentioned 17 times across the five focus groups, were perceived to pose risks to participants individually by encouraging excessive consumerism. This supports the theory of the risk society and its consequences (Beck, 2009a; Beck and Beck-Gernsheim, 1995, 2002), as it collectively shows how individuals reflect on and negotiate the risks posed by existing AI. The processes of individualization are present in algorithmically personalized content and its affective triggers, which in some cases generated varying degrees of anxiety regarding social status, consumption, and one's use of time in what has been labelled the “attention economy.” In other cases, the so-called “choices” generated by the algorithms in fact led to indecision and a degree of unhappiness, a phenomenon common to consumer societies that psychologists have labelled the emotional outcome of the paradox of choice (Schwartz, 2016). The availability of numerous options not only leads to poor choices but also decreases satisfaction among consumers.
Privacy risk
Participants were also worried that their AI-powered devices might be invading their privacy. While there is no evidence to show that smart devices can listen in on conversations, such speculation was rife in our focus group discussions. P2 believed that the reason he received very specific advertisements shortly after discussing baby planning with his wife was because his device had been eavesdropping on his private conversation: I had never gone to any of these websites or anything, so it was very clear my audio, my location, whatever it is, they were all eavesdropping on what I was talking just half an hour to one hour back.
The perception that AI is being deployed by mobile apps and social media platforms for listening to private conversations was shared by participants across all five focus groups. Aside from blatant eavesdropping, participants also feared that smart devices have access to their private conversations on instant messaging platforms. Claimed P20: I think WhatsApp reads what we are talking about. I have a friend who was a drug addict, and I was WhatsApping another friend about this ex-drug addict friend that I was quite concerned about. Two days later, my Quora subscription starts suggesting articles to me about someone who has a very bad past. I’m like okay, really out of nowhere I don’t get the Quora thing coming out. Then suddenly I start reading the Quora said something like “I used to be a drug addict, should I tell my interviewer about this?” And then I quickly unsubscribed from Quora. I don’t know if my brain is making it up, but it is very bizarre. So, I feel like my smartphone is doing something.
Participants who feared that technology might be used to spy on them went so far as to stay away from AI-powered technology. P16 chose to avoid smart home devices completely: “That's the reason why I will not use it, because it is recording technically everything I say.” Hence, it is obvious that personal privacy weighed heavily on the minds of some participants and directly affected their likelihood of utilizing an AI-powered device. Such statements show an awareness of the diffused and covert forms of surveillance utilized by big tech companies in their pursuit of profits and market share. They are also indicative of what Bauman and Lyon (2013) label “liquid surveillance”: the mobile and flexible methods of surveillance that come from government agencies, corporations, social networks, partners, parents, and even employers in the period of “industrial modernity” (Beck, 2009a). In the risk society, AI can be mobilized as a “big brother”, and the research participants showed a critical reflexivity regarding the potential for the misuse of this power. While privacy risks may have broader societal implications, our findings revealed that discussions of privacy infringement by AI were often grounded at the individual level. Furthermore, it is ultimately the individual who is most directly affected by the exposure of their personal information or activities. Hence, we chose to categorize privacy risks as a threat to the individual.
Risk to society
The perceived risks to society include misinformation, operational risk, and risk of economic inequality. Our data analysis revealed 28 references for misinformation risk, 26 for operational risk, and 19 for economic inequality.
Misinformation risk
AI might endanger society through its effect on the flow of information. Participants were aware of misinformation as a societal ill and felt that personalized algorithms, again, had a role in exacerbating the problem. P18 discussed how filter bubbles can worsen vaccine misinformation: People are anti-vax right, then they keep getting anti-vax messages, which are not true. Then their belief will be more entrenched, and it is harder to get them to change their views if it's all based on false recommendations that were continuously recommended to them.
What is interesting in the discussion of misinformation risk is the stark contrast between participants of different age groups. On one hand, participants from the younger age groups were worried about older members of society whom they felt were more susceptible to misinformation. Participants from the younger focus groups (aged 21–40) cited examples from their personal lives—as expected from an audience-centered approach. These younger participants observed older family members who were trapped in filter bubbles on social media and believed that older adults were at greater risk of falling for falsehoods because they lacked understanding about how online recommendations are powered by AI. Declared P21, “I think it's the older generation that is affected to a large extent.” While misinformation can be seen as a risk to the individual because of the potential harm to personal beliefs and decisions, participants’ discussion focused more on the potential ramifications for others rather than themselves. Hence, we chose to classify misinformation as a risk to society.
On the other hand, participants from the older age groups (aged 41 to 70 years) were concerned about misinformation's impact on society, but they included themselves in the discussion of the potential ramifications. Older participants acknowledged that they, along with others around them, might fall prey to misinformation. Thus, they have adopted active precautions against misinformation. For example, P28 discussed how she tries to guard herself against misinformation: There are too many fake videos and sometimes we do not know which is real. I only trust one or two, don’t go to so many social media platforms and confuse ourselves, distort our mind, real become unreal, and we believe the untruth to be real. It makes a lot of mistakes in life. My take is to keep to a few social media platforms that we trust, that's all.
While participants from the older age groups did not admit to being more vulnerable to misinformation than other members of society, they did admit to being fearful of it, showing that older participants were hardly oblivious to the dangers of misinformation online. For example, P9, a participant from the older age group, was aware that AI is not a neutral technology but is created by people and organizations with their own interests. According to P9, the manipulation of online information flow is worsened by the involvement of organizations with vested interests: AI is not your fair playing ground, because a lot of organizations they tweak their algorithms such that to feed their needs you see. So you read a newsfeed, it seems like they sway the direction in a certain way, or to a direction where the algorithm wants you to go.
It is interesting to note that most participants were aware that algorithms could be manipulated by businesses and vested interests to serve their respective agendas. Even if it is not misinformation per se, a concentration of the same type of information also hampers the online experience. According to P19, content meant to cater to users’ likings could end up doing more harm than good: It's more of an issue where the information becomes too concentrated because initially when I go in, I want to learn a lot more but I think when the algorithm picks up what interests you and they start promoting too much of a specific thing. And then you end up getting a part of the information, or part of videos that cater to only one aspect. So that is frustrating. Then you have to outsmart the algorithm by searching for other things so that they will recommend you other videos.
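P19's account maps onto a simple feedback loop: engagement raises a topic's weight, which raises its chance of being recommended, which invites further engagement. The toy model below is a purely hypothetical sketch of this loop (not any platform's actual system), including the “searching for other things” counter-move; all topic names, weights, and parameters are invented for illustration.

```python
import random

random.seed(1)
weights = {"news": 1.0, "cooking": 1.0, "fitness": 1.0}  # hypothetical topics

def recommend():
    # Sample a topic in proportion to its current weight.
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

def click(topic, boost=1.0):
    # Engagement feeds back into the weights: the filter-bubble loop.
    weights[topic] += boost

# A user who clicks whatever is recommended tends to drift toward one topic,
# because early random clicks compound into a concentrated feed.
for _ in range(30):
    click(recommend())
print("after passive use:", weights)

# P19's counter-move: deliberately engage with the other topics to
# redistribute the weights and diversify future recommendations.
for other in ("cooking", "fitness"):
    for _ in range(10):
        click(other)
print("after 'outsmarting':", weights)
```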
Within the fifth group of participants, aged 21 to 70, participants were equally concerned about misinformation posing a societal risk. For example, P35 felt that misinformation “can turn the world upside down.” However, in contrast to younger participants in other groups, younger participants within this group did not talk about older members being more susceptible to misinformation. It could be that this batch of younger participants did not hold the same stereotypes and assumptions observed in the other groups. It is also possible that the composition of the group prevented younger participants from freely speaking their minds. Based on the findings from the first four focus groups, we maintain that there are some age-based differences in perceptions of misinformation as a societal risk.
Operational risk
Risks under this category refer to the potential negative consequences that arise when AI does not function seamlessly. This could include errors or malfunctions in the AI system itself, or problems that arise from the integration of AI into existing processes. We classified operational risk as a societal risk because it affects the day-to-day operations of various sectors within a society, such as healthcare and immigration. A concern that participants had about integrating AI systems with existing procedures is the replication of human biases in algorithms. P18 describes the danger of relying on AI to process immigration applications: If the AI auto-filters all the applications, you don’t really know what are the processes and specific things that they are looking for. It could be very disadvantageous to you if you are unaware of all these inherent biases in the system.
Attribution of responsibility came to mind when discussing operational risks. It is often difficult to anticipate all possible scenarios in which an AI system may malfunction or produce undesirable outcomes. When something goes wrong with an AI system, it can be difficult to determine who should be held responsible for the consequences. Considering the abusive and harmful content that circulates on platforms that supply AI recommendations, P9 questioned who should be responsible for showing such content to users: These are AI, they are not human beings. So whatever recommendations they give or whatever they feed us, are they responsible for what they say or do? …I’m concerned that young people, they are constantly fed by junk type of things. What effect will it have? What's the impact and who is responsible for all these types of things? Because I’m a healthcare student, so for example when we go out to work as a new staff, we use AI because we ourselves are not experienced enough so we will rely on the AI. If the AI makes a mistake, we may not be able to identify the mistake, which can jeopardize the patient's safety.
Clearly, participants were aware that bad decisions can be made using AI. In such cases, it is less obvious who should be responsible for these negative outcomes and whether bad decisions made by AI fare better than those made by humans. This also raises the legal question, debated among scholars, of liability for wrong decisions made by an AI-based algorithm (Giuffrida, 2019). This is important because AI is not legally recognized as a person under the law, as it lacks sentience and self-awareness; it is a narrow, computer-enabled simulacrum of human intelligence that makes decisions by leveraging the information contained in the system.
Economic inequality risk
Because AI can mimic human cognition and behavior, participants perceived the risk of AI replacing humans in the workforce. This fear of AI-caused unemployment was also observed by Aytekin et al. (2021) and The Royal Society (2017). Participants predicted that specific groups would be disadvantaged, such as low-income, less educated, and low-skilled workers—groups who may struggle to find alternative employment. Reiterated P19: “I think the people who are most affected are actually those who are from the lower income background or those who may not have that much education or skill sets. I do think that a lot more support is needed for these people.” The concern regarding economic inequality is framed here not as an individual problem but as something that will affect society at large. Participants also contextualized the impact of AI on the workforce by drawing links to Singapore's unique labor situation of depending on workers from foreign countries. Expressed P24: At one stage you want to have foreign talents. And now you want to bring in AI. So if you bring in foreign talents and AI, then what about our locals? Where do our locals stand? Unless if there is a balance between foreign talents, AI, and our citizens, so everyone is taken care of.
Risk to the nation
Another major perceived risk emanating from the widespread deployment of AI in various fields (Schmidt et al., 2021) relates to national security. Our participants appeared to be very concerned about this, with our analysis revealing 22 references to national security concerns.
National security
Although participants were worried about the risks posed by AI to national security, they also had faith in the government's ability and capacity to deal with any such risks without compromising individual freedom. Participants saw the rise of deepfake technology as the greatest threat to national security. P14 believed that deepfakes can greatly endanger the social fabric in Singapore: What happens if there is some sensitive news, especially for religion or race, especially for a country like us. Like somebody spreading something with a voiceover and a face of someone very prominent. If you are not careful, it can create a very big problem. I think it's scary because now it's machine learning. It's still very clunky…but it's only going to get even more refined and advanced. And it's a game of whack-a-mole because you take down one particular deepfake, another two or three spring up. It is almost like the Hydra, you chop off one head, two heads come up. And at this point where we talk about fake news and all of that the verification process. How do you begin to verify a video, especially when it's consumed on the fly, when it's shared and spread viral and then again talking about the corrections being issued not being so effective anymore? So, I think we are only right now at the cusp of what it is to be in the fake news era.
These opinions were shaped by recent observations of cheap or shallow fakes on social media, such as those of Singapore politicians on TikTok. Although the aforementioned discussion could also be seen as a societal problem, we chose to classify it as a national risk because of the outward-looking mentions of Singapore in reference to other countries.
Participants also drew from their observations of political happenings overseas, since there have been fewer observations of the use of AI in the local political scene. Even without first-hand experience, participants were still extremely wary of how the nation could be affected by AI. For example, P26 questioned whether the Cambridge Analytica political scandal could be replicated in Singapore: Maybe the Cambridge Analytica issue may not be the only one, but it has just been blown out. Maybe all the other political parties already have it, and we don't know. We are going on the assumption that none of the other political parties even in Singapore are doing it. Who is to say that there isn’t? We don’t know yet, unless there is a whistleblower in there, right? We don't know.
Conclusion
Risk perceptions are essential to understanding people's attitudes toward accepting and adopting new technologies. This study was based in Singapore, whose citizens' tech-savviness was reflected in our findings: participants—none of whom were from tech or AI-related industries—displayed keen discernment of the risks of AI. Using an audience-centered sense-making approach, we found that risks to the individual, society, and nation are prominent considerations for AI acceptance. Despite recognizing the benefits of AI and automation, participants highlighted potential societal risks, such as increased economic inequality and concerns about privacy and misinformation. This echoes the concept of a “risk society” as described by sociologist Ulrich Beck, in which societal progress also engenders new risks (Beck, 2009a, 2009b). Modern risks, as Beck described, are socially constructed, experienced both individually and collectively, and influenced by a person's risk-class. Rapid social and technological changes exacerbate uncertainties and anxieties and underscore the importance of trust in the perception, adoption, and regulation of new technologies like AI.
To encourage adoption, innovative ways to address each of the perceived risks—such as by allaying unfounded fears or curtailing actual risks—are necessary. Our participants framed their concerns about AI's risks in relation to society at large while simultaneously revealing how they have been subjected to constant monitoring and targeting by customized algorithms both inside and outside social media. The findings suggest a dichotomy in users’ perceptions of personalized algorithms. While these AI-based algorithms enhance user experience through tailored content, they also cause unintended consequences, such as promoting consumerism and potentially addictive behavior. These issues point to concerns about digital well-being and the potential for algorithmic manipulation (Mittelstadt et al., 2016). Although the internet seemingly offers limitless choices, the continuous influx of personalized content may paradoxically confine users’ sense of agency. This mirrors the concept of “choice architecture” and highlights its potential downsides (Thaler and Sunstein, 2008): people's decisions are often influenced by how choices are presented.
Participants also expressed concerns about potential privacy breaches by AI-powered devices in the form of eavesdropping leading to targeted advertising. Such fears, whether valid or not, caused some participants to avoid AI technologies altogether. These sentiments are consistent with the broader discourse on privacy and surveillance in the context of smart devices and AI (Sadowski, 2020; Zuboff, 2019). Research suggests that social media users might not be fully aware of the information that they are revealing about themselves; if they were, they might not consent to their information being studied or used for other purposes (Boyd and Crawford, 2011). In fact, users have described how “powerless” they felt in the face of the harvesting of their personal information (Andrejevic, 2014). This highlights the ethical implications of accessing user data without proper consent.
Our study suggests an interesting age-based contrast in perceptions of AI-driven misinformation risks. Younger participants perceive older adults as more susceptible to misinformation due to a presumed lack of understanding about AI-driven online recommendations. These participants are more concerned about the societal impact of misinformation, particularly on the older generation, rather than its impact on themselves. In contrast, older participants acknowledge their own vulnerability to misinformation and discuss adopting active precautions against it. They express fear of misinformation, demonstrating an awareness of the risk it poses, which contradicts the stereotype held by the younger participants.
The findings also show that participants, regardless of their age, recognize the manipulative potential of AI and the vested interests of those who control it, reflecting a critical understanding of AI. Rather than being passive consumers of content shown on news feeds, participants were often aware that tech companies tweak algorithms to personalize content and drive them into rabbit holes. To counter personalized content, some participants attempted to outsmart the algorithms, although only a few showed interest in actively trying to tweak them. This lack of widespread engagement in actively managing algorithms may indeed suggest a subtle form of manipulation by smart tech powered by AI. Despite a certain level of awareness and sporadic attempts to “outsmart” the system, most users may remain subject to the influence of these algorithms, potentially confirming the ways participants are often manipulated by smart tech powered by AI (Sadowski, 2020).
Our findings could be better understood by considering the context. Singapore has a high rate of digital access, with over 90% of households having access to digital devices (Infocomm Media Development Authority, 2022a), and ranks first globally for digital inclusiveness. Some studies have found that Singaporeans are highly digitally literate (Kusumastuti and Nuryani, 2020), while others have suggested that digital literacy rates are not as hopeful as imagined (Ho, 2020). Still, national digital literacy programs have been rolled out across schools and communities in Singapore (Infocomm Media Development Authority, 2022b) to bridge the existing digital divide. As Singaporeans gain digital literacy, it might be assumed that AI will be welcomed with little resistance. Yet, this study has shown that more must be done to increase public trust in AI and encourage the adoption of new technologies. For instance, Walter (2022) has suggested that certifications can be awarded to trustworthy technology by agencies that are deemed to uphold the public good. Public confidence in the Singaporean government has remained consistently high throughout the years (Ho, 2021), presenting a unique opportunity to mitigate unwarranted perceptions of AI-related risks: the government can award certifications to deserving products. In addition, efforts could be made to improve algorithmic literacy among the populace, which reflects an awareness of how algorithms, as a new form of infrastructure, shape digital experiences (Gran et al., 2021). Algorithmic literacy can give users greater agency and enable them to feel in control of AI, rather than controlled by it. Together, these strategies can build trust in AI systems and encourage adoption of new AI technologies.
That said, our study has a few limitations. First, focus groups cannot replace direct observation, surveys, or laboratory experiments. While we tried to recruit a balanced sample in terms of age and gender distribution, the small sample size means that the findings might not be extrapolatable to the larger population. Second, the quality of findings from focus groups is contingent on participants’ ability to recall and articulate their decision-making processes. As such, our findings reflect thematic trends and are not an exhaustive list of all plausible AI-related risks. Third, given that the recruitment process utilized Telegram channels and specifically mentioned the study's focus on AI, our sample might be affected by self-selection bias. Individuals who responded to our recruitment messages might possess a heightened interest in the research topic and be more tech-savvy, affecting the generalizability of our findings to the broader Singaporean population. Nonetheless, we are acutely aware of these potential biases and accounted for them in our interpretation of the results. Furthermore, as Supplementary Material A1 shows, our participants are evenly distributed in terms of education level and household income, potentially mitigating some of these concerns.
This study also suggests other avenues for future research. Being a small state, Singapore is reliant on external resources and has therefore maintained close ties with its neighboring countries. Singapore's unique global standing presents an interesting opportunity for cross-country comparisons in future research: specifically, combining survey research with focus groups involving participants from neighboring countries to investigate how AI has affected their way of life, in addition to measuring public attitudes toward AI. Such a comparative perspective can help identify factors that are specific to a country or part of a wider regional trend. Since the ramifications of AI are a global phenomenon, a close examination of this transnational process in different contexts can help Singapore pre-empt imminent problems. Furthermore, this perspective can also spotlight ideas about AI solutions that are transportable: by identifying countries with AI-related concerns similar to Singapore's, we can identify exportable ideas that are part of existing solutions in Singapore, thereby solidifying Singapore's position as a thought leader in the AI realm.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Defence Science and Technology Agency, Singapore grant [grant number DST000ECI21000711].
Supplemental material
Supplemental material for this article is available online.
