Abstract
The rapid adoption of generative AI (GenAI) has intensified public discourse on its risks and benefits. However, research remains limited on how users perceive these risks and benefits across varying psychological distances and how they balance these perceptions in their adoption decisions. Drawing on construal level theory and regulatory focus theory, we conducted in-depth interviews with GenAI users (N = 30). Findings reveal that users perceive GenAI's risks and benefits across proximal and distal dimensions concurrently. In their adoption decisions, they demonstrate either promotion-focused orientations (i.e. risk downplaying and strength prioritization) to emphasize GenAI's benefits or prevention-focused orientations (i.e. privacy protection, output scrutinization, and reliance abstinence) to mitigate its risks. This study provides theoretical and practical implications for AI adoption and risk communication, contributing to a deeper understanding of how users navigate the complexities of emerging AI technologies.
Introduction
The rapid integration of artificial intelligence (AI) into people's everyday lives has sparked significant scholarly attention and public discourse. Generative AI (GenAI), which produces textual, visual, and auditory content in response to user prompts (Baum, 2025), is reshaping domains ranging from creative arts to education and organizational management (Epstein et al., 2023). The widespread adoption of GenAI is evident globally, with ChatGPT alone attracting approximately 200 million weekly active users in 2024 (Reuters, 2024) and the number of users in highly populated countries such as China exceeding 230 million (Xinhua News Agency, 2024). However, as GenAI technologies become increasingly embedded in daily life, public attitudes toward their use continue to exhibit complexity, polarization, and diversity.
On one hand, GenAI is well-acknowledged for its potential to enhance creativity, streamline workflows, and democratize access to information (Doshi and Hauser, 2024; Holmström and Carroll, 2024). On the other hand, significant concerns about misinformation, ethical dilemmas, and existential risks, such as the hypothesized “LLMs takeover catastrophe,” in which AI systems could surpass human control, persist in public discourse (Baum, 2025). This duality of promise and peril underscores a complexity: individuals rarely evaluate GenAI in a balanced way (Nelson et al., 2020; Schwarz and Unselt, 2024) but instead engage in a process of self-negotiation, weighing opportunities against risks in ways that can tilt toward either heightened threat sensitivity or benefit optimism (Singer and Schensul, 2011). Yet existing research has largely overlooked these dynamics, focusing instead on the statistical associations between risk/benefit perceptions and adoption intentions (Fu et al., 2024; Hsu and Silalahi, 2024; Lee et al., 2024). What remains underexplored is how users actively interpret and reconcile such trade-offs, and how these interpretations can be better understood through psychological dynamics in their decision-making process related to GenAI usage.
Risk–benefit perceptions constitute a central mechanism through which trust is established, adoption choices are shaped, and societal support for AI governance is cultivated (Brauner et al., 2025; Fu et al., 2024). Understanding their asymmetry also necessitates a multidimensional perspective, as perceptions of novel technologies’ risks and benefits unfold across both individual and societal levels (Ho et al., 2023). Construal level theory (CLT; Trope and Liberman, 2010) offers a valuable framework for examining how psychological distance shapes people's mental representations of technology, thereby influencing their risk–benefit assessments and ultimate adoption decisions (Lermer et al., 2016; Schandl et al., 2024). Additionally, consistent with regulatory focus theory (RFT) and the human tendency to reduce cognitive dissonance, individuals are inherently motivated to seek gains and avoid losses (Brockner and Higgins, 2001; Higgins, 2000) while striving for consistency between cognition and behavior (Festinger, 1957). This drives them to develop cognitive and behavioral orientations that help reconcile the perceived risks and rewards of GenAI. While prior studies have examined risk–benefit trade-offs in AI-enabled technologies like autonomous vehicles and chatbots (Habib et al., 2025), little research has explored how users perceptually and behaviorally negotiate their asymmetric perceptions to make informed GenAI adoption decisions.
To address these gaps, this study draws upon CLT (Trope and Liberman, 2010) and RFT (Brockner and Higgins, 2001; Higgins, 2000) to: (a) map users’ perceptions of GenAI risks and benefits across varying psychological distances and (b) investigate the orientations that emerge as users balance these perceptions in their GenAI adoption process. Theoretically, drawing on in-depth interviews, this study proposes a conceptual framework that combines CLT and RFT to capture the multidimensional nature of users’ risk–benefit perceptions toward GenAI and the dynamic processes of their self-negotiation, offering insights that quantitative studies alone cannot yield. Practically, it provides critical insights for stakeholders in technology development, policy-making, and public communication, offering a deeper understanding of how individuals navigate the uncertainties and promises of GenAI in shaping their adoption behaviors.
Literature review
Multifaceted public perception of GenAI
Risk and benefit perceptions are central to how individuals evaluate and engage with innovations (Alhakami and Slovic, 1994). Risk perception refers to assessments of potential negative outcomes (Slovic, 2000), whereas benefit perception reflects beliefs about positive consequences (Alhakami and Slovic, 1994). These judgments are shaped by personal experiences and values (Siegrist and Hartmann, 2020) and strongly influence the acceptance of novel technologies (Bearth and Siegrist, 2016). Prior research has shown that such perceptions are multidimensional, spanning technical, social, psychological, and regulatory concerns across domains including finance, transportation, and food technology (Ho and Tan, 2023; Ho et al., 2023). This highlights the importance of examining similarly multifaceted risk–benefit perceptions in the emerging context of GenAI.
Recent studies examining public responses to GenAI have identified risk and benefit perceptions as the most salient factors shaping GenAI adoption (Krieger et al., 2024). A recurring finding is the risk–benefit paradox, where users simultaneously acknowledge GenAI's advantages while remaining wary of potential harms (Fu et al., 2024; Hsu and Silalahi, 2024; Ivanov et al., 2024). Social media discourse reflects this duality: on platforms like YouTube and X, positive and neutral frames dominate alongside balanced risk narratives (Schwarz and Unselt, 2024), while ChatGPT discussions emphasize both perceived benefits (e.g. relative advantage and compatibility) and ethical concerns (Zou et al., 2025). Examining these multidimensional perceptions within a risk–benefit framework is essential for two reasons. First, risk management and governance frameworks struggle to keep pace with rapid AI development (White and Lidskog, 2022), making public perceptions critical for informing policy. Second, perceived benefits strongly shape adoption orientations (Ivanov et al., 2024). However, existing literature treats risks and benefits as broad categories, overlooking how individuals construe them at different levels of psychological distance. To address this gap, we draw on CLT to distinguish varying mental representations of GenAI's risks and benefits.
According to CLT (Trope and Liberman, 2010), human beings mentally represent objects at different psychological distances. Mental representations, also referred to as mental construals, differ in their levels of abstraction. Abstract (high-level) construals focus on overarching and central features, whereas concrete (low-level) construals emphasize specific, subordinate, and contextual details (Trope and Liberman, 2010). Psychological distance is defined as the subjective perception of the gap between “near” and “far” across four dimensions: temporal, spatial, social, and hypothetical distance (Trope and Liberman, 2010). CLT further posits that as psychological distance increases, concrete details diminish in mental construals and exert less influence on evaluations, while abstract features gain prominence; conversely, as distance decreases, concrete aspects become more salient (Trope and Liberman, 2010). This framework suggests that risk and benefit perceptions vary with psychological distance (Lermer et al., 2016). Specifically, abstract thinking diminishes emotional responses and lowers risk assessments, while concrete thinking heightens emotions and amplifies perceived risks (Lermer et al., 2016). Individuals may thus perceive GenAI's risks and benefits dynamically, depending on their level of mental construal.
Empirical evidence supports the categorization of risk and benefit perceptions along dimensions of psychological distance, including personal vs. societal, proximal vs. distal, and present vs. future perspectives (Ho et al., 2023; Nan, 2007; Schandl et al., 2024). In line with the concept of psychological distance and the multidimensional nature of risk and benefit perceptions identified in previous studies (Ho et al., 2023; Nan, 2007), this study conceptualizes proximal risks and benefits as those perceived to be more immediate, concrete, and personally relevant, while distal risks and benefits are viewed as future-oriented, abstract, and broadly influential at the societal level. For example, privacy violation constitutes a proximal risk because it directly threatens individuals’ personal data and sense of control in the immediate present. By contrast, unemployment caused by AI development represents a distal risk as this concern is future-oriented and abstract, reflecting broader societal transformations that may unfold over time. According to CLT, individuals’ psychological distance from an event or object shapes how concretely or abstractly they perceive its risks and benefits (Lermer et al., 2016). This pattern is especially salient for novel technologies, where people often distinguish between personal and societal implications. Research on emerging domains such as novel foods and digital medical applications shows that individuals simultaneously evaluate personal- and societal-level risks and benefits, often revealing tensions between the two (Ho et al., 2023; Nelson et al., 2020).
Scholars have emphasized that GenAI presents both advantages and challenges to its users at individual and societal levels (e.g. Doshi and Hauser, 2024). For example, while GenAI can enhance students’ learning through its knowledge provision and low cost, users are also aware of its ethical threats (e.g. breaking school rules, deception in task completion) in educational settings (Zhu et al., 2024). Moreover, while using GenAI for content creation enhances individual creativity, it may also diminish collective novelty (Doshi and Hauser, 2024). Given the intricate role GenAI plays and the tendency for individuals to assess the risks and benefits of emerging technologies asymmetrically (Sartori and Bocca, 2023), influenced by psychological distance and personal or societal considerations (Doshi and Hauser, 2024), it is essential to understand how they form varying risk and benefit perceptions of GenAI across both proximal and distal dimensions. Although this understanding is indispensable for advancing the theorization of human assessment of emerging AI technologies, extant literature lacks a comprehensive exploration of how the public mentally construes the risks and benefits of GenAI across proximal and distal psychological distances. Therefore, this study proposes two research questions:
RQ1: How do users perceive the proximal and distal risks of GenAI?
RQ2: How do users perceive the proximal and distal benefits of GenAI?
Risk–benefit self-negotiation in GenAI usage
The way people perceive risks can also influence how they perceive benefits, and vice versa. If a technology is seen as highly risky, its perceived benefits may be downplayed (Bearth and Siegrist, 2016). This aligns with the notion that people do not assess risks and benefits separately; rather, their view toward a technology is a result of how they weigh its risks against its benefits (Wilson and Crouch, 2001). The risk–benefit trade-off and weighing process typically occur when individuals evaluate the potential advantages and disadvantages of a behavior. For example, severe risks associated with a particular action may be considered acceptable if the behavior provides significant benefits, especially when there are no practical means to mitigate these risks (Lee et al., 2024). Conversely, even minimal risks might be overestimated if the associated benefits are negligible, particularly when the risks cannot be easily mitigated (National Research Council, 2009; Wilson and Crouch, 2001). Hence, this study conceptualizes the risk–benefit self-negotiation as the process through which individuals exhibit orientations to minimize risks while maintaining or maximizing benefits in order to align with their diverse goals (Singer and Schensul, 2011).
Regarding the negotiation of risks and benefits in GenAI usage, users also actively seek ways to enhance its pleasurable effects while simultaneously minimizing potential harms (Fu et al., 2024; Hsu and Silalahi, 2024). GenAI users may accept privacy violation and ethical risks while using GenAI (Chan and Hu, 2023; Zhu et al., 2024) because this technology offers significant benefits in terms of usability and enjoyment (Kim et al., 2024). However, existing literature has not delineated the orientations individuals adopt, and the entanglements they navigate, in the risk–benefit negotiation process that determines their use of GenAI. Understanding these orientations and related usage patterns is essential for advancing research on AI adoption behaviors and informing the development of responsible AI governance and user-centered design.
Regulatory focus theory
The risk–benefit negotiation process in GenAI usage aligns with the propositions of RFT (Higgins, 2000). RFT posits that individuals regulate their behavior through two distinct orientations to align with personal goals and maintain a cohesive self-concept. A promotion-focused system centers on nurturance and growth, driving individuals to pursue their ideal selves by emphasizing potential gains (i.e. benefits) and striving for positive outcomes. In contrast, a prevention-focused system prioritizes safety and obligation, focusing on the avoidance of losses (i.e. risks) and maintaining the status quo through caution and vigilance (Brockner and Higgins, 2001). One key distinction between these two systems lies in the strategy prioritized during goal pursuit: an eager orientation, which aligns with a promotion focus and emphasizes advancement and achieving gains, versus a vigilant orientation, which corresponds to a prevention focus and prioritizes caution and loss avoidance (Higgins, 2000).
According to RFT, individuals naturally adapt their mindset and behavior to align with their regulatory orientation (Higgins, 2000), shaping how they weigh risks and benefits in their decision-making processes of GenAI adoption, such as proactive or vigilant adoption. This aligns with the tenets of cognitive dissonance theory (Festinger, 1957), which posits that humans experience uncomfortable psychological tension when holding two inconsistent cognitions. To reduce this dissonance, individuals may either adjust their behaviors or eliminate conflicting cognitions. Cognitively, the duality of GenAI usage, offering both promise and peril, creates internal tension that individuals must resolve to preserve a coherent self-concept and make cognition-consistent decisions. To manage these competing appraisals, adopters engage in risk–benefit negotiation by assessing, reinterpreting, and reweighting risks and benefits of GenAI to reduce ambivalence and orient themselves toward ideal application. Behaviorally, using GenAI entails concrete choices, such as whether to apply it to sensitive personal tasks, professional writing, or exploratory learning, each of which requires trade-offs: downplaying risks may foster efficiency and creativity, while overemphasizing them can lead to cautious underuse. In this way, behavioral negotiation reflects the motivation to balance safety with opportunity, enabling adopters to align GenAI use with their goals and sustain continued engagement. Hence, RFT resonates with the core of self-negotiation: while self-negotiation captures what individuals do when reconciling perceived risks and benefits, RFT explains how their underlying orientations can be reflected in this process. In this sense, self-negotiation describes the overall process of balancing risks and benefits, while RFT explains the motivational orientations that shape how this process occurs, making the two frameworks conceptually distinct yet mutually reinforcing.
Prior research has applied RFT to examine promotion- versus prevention-focused orientations in commercial contexts (Brockner et al., 2004). RFT likewise offers insights into users’ self-regulation and risk–benefit negotiation in adopting GenAI. Promotion-focused individuals, attuned to gains and ideals, tend to “approach” GenAI through eager strategies such as devoting time to mastering tools or placing trust in AI systems, showing greater willingness to engage for potential benefits. In contrast, prevention-focused individuals, more sensitive to losses and obligations (Higgins, 2000), are inclined to “avoid” GenAI through vigilant strategies like verifying outputs and limiting dependence (Brockner and Higgins, 2001; Chang et al., 2024). Thus, users flexibly balance GenAI's risks and benefits according to their regulatory orientation.
Scholarship has stressed the significant influence of regulatory focus orientation and the risk–benefit negotiation process on technology adoption experiences (Fu et al., 2024; Haider et al., 2024; Hsu and Silalahi, 2024). For instance, consumers with a promotion-oriented mindset tend to trust blockchain technology for its transparency and cost-effectiveness, whereas prevention-oriented consumers focus on risk mitigation and thorough verification (Swazan and Youn, 2025). Regarding the adoption of GenAI, prevention-focused ChatGPT users tend to actively seek clarification or peer input to reduce uncertainty (Chang et al., 2024). Likewise, conversational AI users with a prevention focus usually adopt avoidance strategies, such as reducing use or switching to alternatives, to mitigate potential risks (Habib et al., 2025). While prior studies (Fu et al., 2024; Hsu and Silalahi, 2024) have revealed the coexistence of GenAI users’ risk and benefit perceptions, few have examined how individuals weigh and balance these perceptions in their usage of GenAI. Hence, drawing on RFT, the current study explores users’ orientations and the complex processes through which they negotiate risks and benefits when making decisions related to GenAI usage. We ask:
RQ3: What orientations (prevention-focused or promotion-focused) are reflected in users’ negotiations of GenAI's risks and benefits?
Methods
Data collection
To answer the research questions, we employed a qualitative research method through semi-structured, in-depth interviews. As the adoption of GenAI is still a nascent field, we selected this method as the main approach to address our research questions. In-depth interviews offered an effective exploratory framework, enabling the collection of comprehensive and nuanced data to better understand how individuals perceive the risks and benefits of GenAI and how these perceptions influence their adoption strategies (Tracy, 2013). The semi-structured format provided a balance between structured inquiry for addressing theoretical questions deductively and flexibility for inductive analysis, fostering the development of new theoretical insights.
After obtaining ethical approval from a university, participants were recruited using purposive and snowball sampling methods through social media and personal relationships. Eligibility requirements included availability, willingness to participate, and experience with GenAI tools. Recruited participants also referred others to expand the recruitment pool. Purposive sampling allowed us to target individuals directly relevant to the research objectives, while snowball sampling extended the participant group to include individuals from diverse fields who actively used GenAI. To further ensure diverse perspectives, we maintained gender balance and equal representation of participants from STEM and Arts disciplines.
A total of 30 participants were interviewed between October and November 2023, including 14 males and 16 females, aged between 19 and 35 years (Mage = 23.77, SDage = 4.18). Half of the participants came from STEM fields (e.g. computer science, civil engineering, mathematics, and physics), while the other half were from Arts disciplines (e.g. social sciences, humanities, communication). Participants reported using a variety of GenAI platforms, such as WenXinYiYan (Baidu), ChatGPT (OpenAI), Bard AI (now Gemini), and Claude. Nearly all participants engaged with GenAI for over 3 hours per week, utilizing it for various tasks (e.g. translation, brainstorming, text and code generation, information retrieval, writing improvement, data analysis, and companionship).
The interviews were conducted by two research assistants using a semi-structured guide developed in alignment with the study objectives and theoretical frameworks. The guide focused on participants’ risk and benefit perceptions of GenAI at the personal and societal levels, as well as their attitudes toward and acceptance of the technology. Participants provided informed consent before the interviews, and the interviewers fostered an open and comfortable environment by showing empathy, asking probing questions for clarity, and remaining impartial. Each participant received a CNY 50 monetary incentive for their time. The interviews, which lasted approximately 1 hour, were conducted via Tencent Meeting, recorded, and transcribed verbatim. The transcriptions were then translated from Chinese to English by research assistants, with all identifying information replaced by anonymized alphanumeric codes (e.g. P1 for Participant 1) to protect confidentiality.
Data analysis
Two research assistants, after undergoing extensive training, independently transcribed and coded the data. To ensure rigor, transparency, and reliability, the analysis followed established qualitative research standards, including credibility, dependability, transferability, and confirmability (Ou et al., 2024). A hybrid coding strategy was employed, combining open coding, axial coding, and selective coding techniques (Tracy, 2013). Our analytical approach combined both inductive and deductive thematic coding and analysis (Terry et al., 2017). This allowed participants to freely articulate their perceptions and sense-making processes regarding the risks and benefits of GenAI, while also enabling us to interpret their narratives within established theoretical frameworks. This structured and theory-informed process facilitated the identification of key themes and insights into participants’ perceptions of the benefits and risks of GenAI from varied psychological distances, as well as their negotiation processes and dynamic orientations in adopting these technologies.
Results
Proximal and distal risk perceptions of GenAI
Regarding RQ1, our findings identified four proximal risk perceptions associated with GenAI: ability degradation, privacy violation, misinformation exposure, and social withdrawal. We also identified several distal risk perceptions among users, namely AI misuse, inherent biases, AI divide, and AI-induced unemployment. The findings of the current study are outlined in Figure 1.

Figure 1. The conceptual framework of proximal and distal risk and benefit perceptions and self-negotiation orientations among users.
Proximal risks
Ability degradation
Several participants (e.g. P13, P19, P22, P30) worried that over-reliance on GenAI weakened their problem-solving and programming skills, as they no longer needed to search for related information or understand solutions in depth. For example, P19 noted a decline in problem-solving skills, while P14 highlighted a reduced capacity to ask in-depth questions and think creatively. A computer science student (P13) explained that while GenAI improved efficiency, it diminished his coding skills because he no longer needed to carefully read, analyze, and process the code output on his own.
Privacy violation
Interviewees expressed concerns over GenAI's information security and associated risks, particularly privacy issues (P3, P10, P15, P18, P20, P21, P22, P24). P3, an algorithm development intern, shared: “I once copied my code in its entirety to GPT and asked it to correct me. Later, I asked it the same question under a different account and realized it had used my code completely.” Others worried about biometric data misuse due to the multi-modal content generation capabilities of GenAI. P15 stated: “When I upload personal information, such as my recorded videos or photos, I worry AI might use my face or voice to create something inappropriate, infringing on my privacy.”
Misinformation exposure
Interviewees (P1, P7, P8, P21, P24) also expressed concerns about the reliability of AI-generated information. P24, a medical professional, noted: “When I ask academic-related questions, it promptly provides plain conclusions. However, I believe the accuracy of these conclusions is questionable, and it's challenging to trace the source of its information.” Others warned that AI often presents misinformation in a polished manner. P8 cautioned: “If someone explores unfamiliar fields without the ability to evaluate the information, it could lead to hallucination problems, resulting in comprehension biases.” P1 added: “There may be erroneous concepts in what it explains, which can affect our learning outcomes.”
Social withdrawal
Social withdrawal refers to the avoidance of social activities with peers (Rubin et al., 2009) and the reduction of physical interactions in favor of increased engagement with technology (Chow et al., 2017). Some interviewees described a “trade-off” between AI interactions and real-world social connections. P9 noted: “I know some people genuinely regard these large language models as friends. Personally, I rely on this technology daily for virtual interactions. However, such interactions might negatively impact face-to-face communication.” P14, a university student, added: “Many people choose to ask AI instead of engaging with those around them, reducing face-to-face communication and narrowing social interactions.” Others raised concerns about AI's impact on relationships and social cohesion. P20 speculated: “If AI-assisted robots become widespread, people may customize them for interaction, replacing traditional relationships like marriage and family.”
Distal risks
AI misuse
The moral system in societies encompasses a set of principles, rules, and norms that guide individuals in distinguishing right from wrong and desirable from undesirable AI-related behaviors (Gonzalez Fabre et al., 2021). Interviewees were skeptical about GenAI's role in promoting fairness and justice, raising concerns about its misuse for illegal activities (P13, P22, P23, P25). P23 noted, “There are open-source tools that unlock all features of GenAI, including immoral, pornographic, illegal, and violent content. This could be problematic.” Others mentioned intellectual property disputes, as AI-generated content ownership remains unclear. P14 and P23 questioned, “Does this intellectual property belong to the language model, the big dataset, or the content creator themselves?”
Inherent biases
Another distal risk observed by our participants concerns the inherent biases in the algorithmic system, which typically arise from the training dataset and are reflected in GenAI's output. Several participants (P1, P16, P28, P30) raised concerns about AI bias embedded within language models. For instance, they recounted instances where generated content reflected stereotypes related to gender and race, underscoring the potential for AI to perpetuate or even amplify societal biases. P1 stated: “Programmers’ own judgements may be reflected in these large language models.” P28 echoed: “If the corpus contains misinformation, racist remarks, or other biased content, they may surface in the output, leading to biased results.”
AI divide
Interviewees identified the AI divide, defined as discrepancies in AI-related access, capability, or outcomes among users (Wang et al., 2024), as a major societal issue that further exacerbates disparities in accessibility and literacy. Some noted that advanced AI models pose high barriers, requiring users to bypass China's firewalls, pay for premium access, or master complex prompts (P14, P25). Concerns about social stratification also emerged. P9 observed, “Knowledge acquisition outcomes can differ among AI users; this information divide can be attributed to their digital divide.” P14 added, “Adopters with high intelligence literacy and resources can leverage AI to expand their networks, widening the gap with others.”
AI-induced unemployment
AI-induced unemployment refers to AI replacing human workers, especially in routine or mechanical tasks. Nearly all interviewees expressed concerns about job loss and the risk of being replaced due to AI's rapid advancement (Wissing and Reinhard, 2018), particularly for employees in logistical and content creation roles. P3 noted that future AI, like GPT-5.0, could fully automate repetitive work. Others voiced concerns about risks for copywriters, illustrators, and coders as GenAI improves. Some also feared broader AI dominance (Baum, 2025): P10 warned it might no longer follow human commands, and P4 speculated that AI could evolve its own intelligence, stating: “It now has very powerful functions and can do almost anything. It may even evolve a new kind of intelligence.”
Proximal and distal benefit perceptions of GenAI
Regarding RQ2, our findings identified three proximal benefit perceptions associated with GenAI: technological gratification, hedonic gratification, and utilitarian gratification. We also identified several distal benefit perceptions, namely social inclusion, economic transformation, and health promotion.
Proximal benefits
Technological gratification
Technological gratification refers to the competence, sensitivity, and intelligence users perceive in technology (Gao, 2023). In the context of GenAI, users highlighted its advanced content generation, which creates multi-modal outputs based on their prompts or questions, such as explanations of terminology, reports, code, pictures, and emails. Another form involves identifying and transforming user-provided content (e.g. text, pictures, audio) into revised and improved versions. Several participants (P3, P6, P15, P19) mentioned using GenAI for tasks such as correcting code and grammatical errors, adding subtitles to recorded videos, and proofreading and summarizing articles. P6 specifically noted, “Every time I feed it an article in PDF format, it quickly recognizes the content I input and efficiently extracts the key points from it.”
Hedonic gratification
Gratification for hedonic and entertaining purposes usually refers to the level of enjoyment users experience through engaging in certain activities, experiences, or technologies (Gao, 2023). Some participants credited the conversational affordance of GenAI, noting that it is sometimes used for entertaining interactions and communication. This capability was attributed to GenAI's advancements in human-like and natural expression enabled by large language models. P28 expressed the view that the hedonic gratification provided by GenAI serves as its third major function, following thought inspiration and content improvement. Similarly, P22 elaborated: “AI role-playing has become very popular recently. With AI, you can create a character, such as a friend, and I think it can actually provide emotional value. We can communicate with it like we would with an online friend in daily exchanges.”
Utilitarian gratification
Participants primarily viewed GenAI as a practical tool for achieving their goals, leveraging its functionalities for learning, work, and information-seeking (P6, P9, P11, P14, P20, P23). This aligns with utilitarian gratification, which refers to individuals’ progress and achievement facilitated by technology (Gao, 2023). It includes the ability to solve complex and personalized problems through algorithms and data processing, providing accurate, timely, and consistent solutions to meet user needs (Yuan et al., 2022). For example, P9 used ChatGPT to understand statistical models, while P14 used it to expand on class topics. Beyond information retrieval, some found AI valuable for idea generation. P3 credited it with enhancing programming efficiency, and P6 highlighted its role in stimulating deeper philosophical thinking. This reflects users’ trust in AI's ability to inspire and support intellectual exploration.
Distal benefits
Social inclusion
Interviewees also emphasized GenAI's potential to assist vulnerable groups, such as individuals with disabilities or impairments and the elderly, in both learning and daily activities, thereby reducing societal inequalities and increasing digital inclusion. This vision of broad, long-term improvements for marginalized populations reflects a distal benefit, as it involves abstract, high-level mental construals about structural societal change rather than immediate, personal outcomes. They noted that the technology's ability to perform cross-modality information transformation enhances communication and information access for disadvantaged groups (P15). Furthermore, the provision of customized information allows individuals with varying capabilities to achieve learning goals tailored to their specific needs and standards (P6, P9).
Economic transformation
Participants recognized GenAI's role in driving technological advancement and viewed it as a facilitator of industrial creativity, workforce support, and economic transformation, suggesting its capacity to advance societal development and future economic prosperity. Such expectations of large-scale structural change and long-term economic growth also represent a distal benefit. Many (P3, P4, P5, P13, P29) noted that wider AI adoption could lower labor expenses by replacing mechanical tasks. They also highlighted AI's role in economic growth through job creation. P3 remarked, “The artificial intelligence sector is a huge market, and there are many jobs related to GenAI available right now.” Additionally, participants anticipated that AI-driven companies would enhance efficiency and free humans for higher-order tasks. They anticipated its integration into daily life, including AI-assisted learning, autonomous vehicles, advanced medical equipment, and “smart city management plans with the assistance of GenAI” (P26). In the virtual realm, they emphasized its support for content creation, with P2 noting its potential foundational role in constructing the “metaverse.”
Health promotion
A notable distal benefit perceived by our participants is AI's potential to democratize access to health and well-being services, particularly in public healthcare and telemedicine systems, by bridging gaps in affordability and availability. This benefit is distal in nature, as it envisions abstract, long-term, systemic improvements in collective health outcomes enabled by AI technologies. Participants (P24, P25) highlighted GenAI's capability to provide basic clinical diagnoses and nursing expertise, especially through its ability to analyze multi-modal content, such as X-rays. Additionally, GenAI's conversational capabilities were recognized for offering emotional support and facilitating psychological interventions, contributing to broader social well-being and fostering psychological resilience among the public (P18, P22).
Orientations for self-negotiating risks and benefits of GenAI
RQ3 examines how users negotiate the risks and benefits of GenAI in their adoption decisions. We found that GenAI users typically showed two types of orientations: promotion-focused and prevention-focused. The promotion-focused orientation downplays potential risks while emphasizing the benefits of GenAI, whereas the prevention-focused orientation prioritizes underlying risks and adopts a cautious approach toward GenAI.
Promotion-focused orientations
We found that users expressed a promotion-focused orientation to balance their risk and benefit perceptions of GenAI when making adoption decisions. Specifically, they maximized GenAI's benefits by embracing its opportunities while downplaying potential drawbacks. The common purposes among this group of users are to improve their paperwork (P1, P14), increase work efficiency (P5, P8), cultivate personal intelligence literacy (P2) and acquire more information across various domains (P9), aligning with a promotion-focused approach in their usage.
Risk downplaying
Risk downplaying is a cognitive strategy where individuals minimize or dismiss potential threats. Regarding risks like privacy violations, misinformation, and unemployment, participants (P1, P2, P3, P5, P8, P9, P10, P14, P27) often viewed these concerns as overstated. P9 also downplayed privacy issues, stating, “I don’t really perceive this problem to a great extent because it's possible that the text itself I gave it is not particularly private.” P8 rationalized personal data exposure, saying, “Personal information leakage is actually not that serious… Everyone is running nakedly with a transparent identity.”
Misinformation concerns and unemployment were also minimized. P20 remarked, “Even without AI, the internet is awash in fake news, and this is not caused by AI.” Some downplayed unemployment risks, acknowledging job displacement but emphasizing AI adaptation. P2 noted, “It does cause some of these people to lose their jobs… But in a way, I think it's also a boost to motivate people… to empower themselves with more technological skills.” These findings suggest participants cognitively justify GenAI use by downplaying potential risks.
Strength prioritization
Users with a promotion focus usually prioritize GenAI's benefits over its risks, valuing its advantages despite potential drawbacks. Regarding AI reliance and language skill degradation, P14 noted, “Using it (GenAI) to embellish your English writing doesn’t necessarily improve your English a great deal, but it does read FANCY… I’ll probably still go back to it when I need it again.” This reflects a pragmatic approach, accepting short-term gains over long-term effects. On privacy issues and terms of use, participants expressed resigned acceptance. P5 stated, “Whether I read it or not, I have to agree… once I choose not to agree, I can’t use it anymore.” P10 added, “I noticed that the record of our conversations seems to be kept in the cloud… In fact, I don’t care too much about this; it can take my information and better train it because… it provides me with a lot of convenience.” Overall, participants exhibited a calculated compromise, prioritizing AI's utility over concerns like privacy and skill development.
Prevention-focused orientations
Our study also identified prevention-focused orientations, through which users prioritized risk mitigation by protecting their privacy, scrutinizing AI outputs, and abstaining from over-reliance on GenAI.
Privacy protection
Participants adopted various strategies to protect their personal data and mitigate privacy violation risks. Many participants (P9, P10, P11, P13, P15) selectively provided only necessary, anonymized, and fragmented information. P9, a graduate student, stressed: “I never input my research results to GPT. If these findings are plagiarized by others, my study will be in trouble!” Others (P20, P22, P23) deleted prompt records and avoided linking AI accounts to other social media to reduce traceability. Some (P21, P30) limited AI use to non-sensitive tasks, refusing to handle confidential work. These strategies reflect growing awareness and efforts to balance AI's utility with data protection.
Output scrutinization
Participants were skeptical about AI-generated information and employed verification strategies to avoid misinformation. Many (P1, P10, P23, P24) cross-checked content, with P24 stating, “Although the current large language model is more advanced than previous versions, the accuracy of the information it provides still needs to be scrutinized by myself.” Some used search engines like Google and Baidu, while others relied on academic sources, fact-checking sites, or interpersonal networks for validation. To ensure data consistency, participants (P22, P25) engaged dynamically with AI by providing feedback, re-testing outputs, and requesting self-evaluations of its outputs. These findings reflect a critical yet proactive approach to maintaining information accuracy.
Reliance abstinence
Despite AI's integration into daily life, participants acknowledged the need to prevent excessive reliance, which could lead to cognitive laziness and plagiarism risks. This deliberate effort, termed reliance abstinence (Nassen et al., 2023), aims to reduce dependency on digital tools. Participants emphasized maintaining face-to-face interactions (P3, P9) and shared strategies for limiting AI reliance. P3 stated, “I should try to reduce my dependence on GPT… I now use GPT mainly to write simple codes, but important functions are written by me personally.” P10 added, “Tasks like writing code are usually something I deal with myself first… rather than letting it solve it for me in the first place.” These reflections showed users’ growing awareness of AI over-reliance, emphasizing a balance between leveraging AI and preserving personal initiative.
Our findings also revealed that regulatory focus is not necessarily fixed but can shift depending on the context of GenAI use. Several participants (e.g. P1, P3, P9, P20) demonstrated hybrid orientations, downplaying certain risks while simultaneously scrutinizing others. For instance, when engaging in serious or sensitive tasks such as drafting personal files, participants adopted a prevention-focused orientation, emphasizing caution and risk mitigation. In contrast, when using GenAI for less private, exploratory purposes (e.g. gaining domain knowledge or searching for general information), they exhibited a more promotion-focused orientation, highlighting potential benefits and opportunities. These mixed approaches suggest that regulatory focus is context-dependent and fluid, offering a more nuanced understanding of how individuals negotiate the risks and benefits of GenAI.
Discussion
Guided by CLT (Trope and Liberman, 2010) and RFT (Higgins, 2000), this study used in-depth interviews to examine public perceptions of GenAI's risks and benefits across psychological distances, and how individuals negotiate these trade-offs. Our findings show that users distinguish risks and benefits in psychologically distinct ways. Consistent with previous research (Ho and Tan, 2023), this first qualitative exploration of GenAI risk perceptions reveals a differentiation between proximal and distal risks. Proximal risks such as privacy violation, misinformation, and social withdrawal relate to personal information integrity and human–AI interaction, whereas distal risks (e.g. AI-induced unemployment, inherent biases, AI divide) are perceived as future-oriented and societal in scope, consistent with CLT (Trope and Liberman, 2010). Aligning with prior AI literature on deployment challenges and ethical dilemmas (Habib et al., 2025), our study reaffirms these concerns while extending theoretical understanding of risk perception through a psychological distance lens, highlighting its multi-layered and context-dependent nature.
Despite the proximal and distal risks users perceived, evidence from the current study revealed an overall receptive stance among participants. Consistent with findings from Nelson et al. (2020) and Schwarz and Unselt (2024), our study found that the public credits GenAI for its technological intelligence and creative affordances, particularly its ability to generate multi-modal content based on user prompts. From an individual and practical perspective, participants valued GenAI's capacity to address their needs in learning, work, and social activities, aligning with the notion of uses and gratifications in AI adoption. Interestingly, participants also expressed optimism about GenAI's potential in a future AI-enabled society (i.e. distal benefits), envisioning its impact on economic and workforce transformation, social equity, and collective well-being. They highlighted possibilities such as expanding telemedicine, enhancing access to remote education, and providing support for mental health services, examples that illustrate how GenAI could reshape key social sectors. These perspectives resonate with recent work on AI-enabled co-creation, cooperation, and learning, which emphasizes how AI can facilitate collaborative knowledge production and organizational managerial innovation at scale (Doshi and Hauser, 2024; Holmström and Carroll, 2024). Taken together, these findings suggest a nuanced public perspective, one that balances awareness of risks with recognition of the long-term benefits and opportunities offered by GenAI.
More importantly, this study identifies an asymmetry between GenAI's actual performance and public perceptions (Sartori and Bocca, 2023). Although evidence suggests GenAI can enhance creativity (Doshi and Hauser, 2024), many perceive it as limiting personal development in thinking and writing. Conversely, users express optimism about joint human–AI performance in areas such as learning and psychological support, contrasting with literature questioning such partnerships (Vaccaro et al., 2024). These discrepancies call for further research on factors shaping these perceptions and their implications for AI-related attitudes and decisions.
Another contribution of this study is the categorization of orientations in users’ risk–benefit negotiation. Promotion-focused users emphasize GenAI's advantages, downplaying potential harms through strategies like risk minimization and strength prioritization. In contrast, prevention-focused users prioritize loss avoidance by protecting privacy, verifying outputs, and limiting reliance, echoing prior findings on aversion to AI (Habib et al., 2025). These insights advance understanding of how risk–benefit perceptions shape GenAI engagement, revealing a dynamic interplay between opportunity seeking and risk mitigation.
It is also important to recognize the pitfalls of both strategic orientations: promotion-focused users may underplay risks, inadvertently expose sensitive data, or become overly reliant on GenAI, while prevention-focused users may emphasize caution to the extent that they miss opportunities for creativity and productivity gains. To move beyond these extremes, future research should quantitatively examine our framework by developing and validating measures of proximal and distal risk–benefit perceptions in the AI context and testing relationships that link these constructs to actual GenAI usage and interaction behaviors.
Our findings also suggest that users’ perceptions and orientations toward GenAI usage are shaped not only by individual evaluation but also by social and collective influences. Several participants reported consulting colleagues or online communities to cross-validate AI outputs, reflecting a form of social amplification of risk (Kasperson et al., 1988) in which peer discussions either heighten or attenuate perceived risks of GenAI. On the one hand, such collective AI–human interaction processes can increase confidence in AI use by diffusing uncertainty across trusted networks. On the other hand, participants also expressed concerns that reliance on GenAI could reduce the quality of social interaction, reinforcing anxieties about diminished human connection. These patterns illustrate how interpersonal and collective contexts play a critical role in shaping both risk sensitivity and benefit orientation, underscoring the importance of situating AI adoption and interaction within broader social dynamics.
Theoretical and practical implications
Drawing on the theoretical perspectives of CLT and RFT, this study is among the first to explore the multidimensional nature of individuals’ risk and benefit perceptions toward the novel technology of GenAI. Additionally, it examines the cognitive processes through which individuals navigate and negotiate these perceptions in their decision-making regarding GenAI adoption. The findings support the development of a conceptual framework that integrates multifaceted risk and benefit perceptions with psychological mechanisms of motivational orientation, drawing on CLT and RFT. CLT helps explain why users sometimes construe GenAI risks and benefits as proximal, immediate, and concrete, while at other times viewing them as distal, abstract, and future-oriented. RFT complements this by delineating users’ orientations (i.e. prevention-focused and promotion-focused) underlying their risk and benefit self-negotiations. Together, these perspectives clarify how individuals engage in self-negotiation and balancing processes that reflect their orientations during GenAI usage. In doing so, the framework advances theorization of user adoption and risk management in emerging AI technologies, contributing to broader research on risk perception, digital trust, and human–AI interaction.
This study shows that individuals exhibit different regulatory foci (promotion vs. prevention) and strategically emphasize or downplay GenAI's risks and benefits to meet their goals. These findings extend RFT by revealing how motivation shapes technology adoption through dynamic risk–benefit assessments. The study also highlights the multifaceted framework of risk–benefit perceptions across proximal and distal distances, enriching CLT by showing how perceptions of emerging technologies vary with psychological distance.
This framework also provides practical insights. AI communicators could align their communication strategies with users’ regulatory focus. For promotion-focused individuals, messages should prioritize innovation, efficiency, and potential benefits. For prevention-focused individuals, messaging should emphasize privacy safeguards, misinformation prevention, and AI governance to build trust and mitigate concerns. For AI developers, one practical recommendation is to enable customizable settings that accommodate different user orientations. Prevention-focused users may prefer versions with detailed explanations and strict privacy controls, whereas promotion-focused users may opt for more advanced features with fewer restrictions. In addition, developers could design collaborative functions, such as shared prompt logs or peer-review workflows that allow users to collectively validate outputs and learn from one another, thereby enhancing both trust and safety. For AI policymakers, differentiated risk management strategies are also needed. Proximal risks (e.g. misinformation and privacy violations) require immediate safeguards, while distal risks (e.g. unemployment and the AI divide) call for long-term regulatory frameworks. At the same time, policymakers could foster community forums or best-practice networks (e.g. human-in-the-loop practices) where users exchange experiences and responsible-use strategies.
Limitations and future research directions
This study has several limitations. First, the qualitative interview approach limits the ability to conduct statistical testing and establish empirical evidence for the relationship between differential motivations (promotion vs. prevention) and the negotiation between risks and benefits. The subjectivity inherent in in-depth interviews relies heavily on the interpretation of coders and researchers, which may limit the generalizability of the findings. Future studies could adopt quantitative methods to investigate the dynamic nature of GenAI risk and benefit perceptions and examine how individuals’ motivations drive their balancing orientations.

Second, although we collected demographic information such as age, gender, academic discipline, and weekly GenAI usage, our analysis did not reveal systematic differences across these groups. A possible reason is that most participants represent first-stage adopters, who are still in an exploratory phase and thus tend to engage with GenAI in similar ways regardless of demographic background (Ibrahim et al., 2025). We acknowledge this as a limitation and suggest that future research consider how, as GenAI adoption matures, demographic and experiential differences may become more salient in shaping how risks and benefits are perceived. Relatedly, the study was conducted during a period when ChatGPT-3.5 and -4 were prevalent, which may not fully capture people's perceptions of more advanced GenAI tools, such as Google Gemini and DeepSeek. Future research could adopt a temporal or longitudinal approach to assess whether public perceptions of GenAI evolve as the technology advances.

Third, the sampling strategies (snowball and purposive sampling), along with the limited geographical scope of this study, constrain its generalizability to broader populations and regions. Specifically, cultural norms around data privacy and public attitudes toward technological advancement may influence how GenAI is perceived. For instance, a cross-cultural study by Brauner et al. (2025) found that Chinese participants demonstrated more balanced views of AI risks and benefits, whereas German participants emphasized risks more strongly. This may reflect broader cultural values, such as individualism and techno-skepticism in Western societies, which are linked to heightened privacy concerns (Barnes et al., 2024). In contrast, China's collectivist orientation and state-led AI development agenda may encourage more optimistic and pragmatic views of AI as a tool for societal and economic progress (Sindermann et al., 2022). Additionally, as our sample primarily comprised digital natives aged 19 to 35, their relative comfort with emerging technologies may further skew perceptions toward greater acceptance of GenAI and may not capture attitudes held by older or less technologically engaged populations. Future research should include more diverse samples and expand beyond the Chinese cultural context to examine whether these findings hold across different cultural and geographical settings.

Lastly, although we used CLT and RFT to conceptually distinguish between people's perceptions of proximal and distal risks and benefits of GenAI and their associated negotiation orientations, this categorization inevitably involves a degree of subjective interpretation. While coders adhered to shared definitions during the coding process, the distinction is not always clear-cut. Moreover, as grounded theory encourages theoretical abstraction, there is a risk that participants’ actual, conscious thoughts and behaviors may not be directly captured or systematically represented. Future research could adopt more inductive and observational designs to explore these processes in greater depth and generate richer, participant-driven insights.
Acknowledgments
The authors sincerely appreciate the invaluable support from Ms. Chuyue Zhang, Ms. Xin Wang, Ms. Baijue Li, and Mr Haozheng Shi during data collection and coding.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Supplemental material
Supplemental material for this article is available online.