Abstract
The rapid advancement of generative artificial intelligence (AI) presents organizations with unique opportunities and challenges. While companies are increasingly adopting this technology to enhance productivity and gain a competitive edge, understanding the key factors influencing organization members' use of generative AI remains critical yet underexplored. This study addresses this gap by investigating the determinants of generative AI usage within organizations, focusing on the interplay of trust, perceived risk, and habit, and by examining how organization members' perceptions of trust and risk associated with generative AI influence their usage decisions. Furthermore, it clarifies the role of generative AI usage habit in those usage decisions. The research model was tested using data collected from 214 organization members, and partial least squares (PLS) was utilized for the analysis. While trust in generative AI did not significantly affect generative AI use, perceived risk was found to negatively affect both generative AI use and trust. The study also confirmed the significant role of habit in facilitating organization members' use of generative AI. These findings provide several theoretical and practical implications for encouraging organization members' use of generative AI by inducing positive experiences and repeated use of the technology.
Introduction
In recent years, artificial intelligence (AI) technology has made remarkable advancements, and generative AI, in particular, has gained significant attention for its distinct characteristics and potential applications compared to traditional information systems (IS). Generative AI, based on large language models, can perform various natural language processing tasks such as text generation, translation, summarization, and question answering. Additionally, its application scope is expanding beyond individual use to organizations (Banh & Strobel, 2023). The adoption of this technology has become a core driver of organizational competitiveness and innovation, moving generative AI beyond a simple work automation tool (Tran & Murphy, 2023). Consequently, companies are actively integrating generative AI into their operations, and organization members are increasingly utilizing this technology to increase work efficiency and creativity (AL-khatib, 2023; Ardeliya et al., 2024). In South Korea, major corporations such as Samsung and POSCO are developing proprietary generative AI systems tailored to their specific operational needs and security standards. Samsung’s ‘Samsung Gauss’ enables employees to summarize documents, translate emails, and interact with internal knowledge systems (Korea JoongAng Daily, 2023). Similarly, POSCO E&C’s domain-specific ‘Quality AI’ supports on-site decision-making by referencing updated construction codes and manuals (The Korea Economic Daily, 2023). Thus, generative AI is emerging as a core technology for strengthening corporate competitiveness, surpassing its initial role as merely a productivity enhancement tool. However, despite the rapid adoption of generative AI by many organizations, the understanding of how organization members actually use it remains in its early stages (Gkinko & Elbanna, 2023; Nguyen et al., 2025).
In particular, research examining how organization members perceive generative AI and the factors influencing their usage decisions remains insufficient. In this context, this study aims to examine the factors affecting organization members' use of generative AI and provide implications for promoting its adoption within organizations.
Most studies on generative AI have focused on the use of generative AI technology by general users, analyzing usage behavior based on personal interest or efficiency (e.g., Aiolfi, 2023; Baek & Kim, 2023). In contrast, within organizations, generative AI is utilized as a work support tool, suggesting that more complex factors such as trust in the technology, perceived usefulness, perceived enjoyment, and perceived risk are likely to influence the formation of usage behavior (Kelley, 2022; Li et al., 2024). When utilizing generative AI for work, organization members may face serious risks such as personal information and privacy breaches, cybersecurity threats, intellectual property infringement, and the hallucination problem caused by incorrect information generation (Demir & Demir, 2023; Gkinko & Elbanna, 2023; N. Wang et al., 2025). For example, in 2025, OpenAI's upgraded ChatGPT models, o3 and o4-mini, exhibited even more severe hallucination despite their enhanced performance relative to previous generations. In April 2025, TechCrunch, a US IT media outlet, reported that on OpenAI's own benchmark test, “PersonQA,” the o3 and o4-mini models showed hallucination rates of 33% and 48%, respectively (TechCrunch, 2025). This result implies that even if organization members consider generative AI useful, anxieties, including those related to possible errors or copyright issues, may hinder trust formation. In this context, this study analyzes in depth how organization members' trust and perceived risk toward generative AI affect their usage decisions.
This study posits perceived usefulness and perceived enjoyment as key facilitators of organization members' use of generative AI. Perceived usefulness is defined as the extent to which an organization member believes that using generative AI would effectively enhance their job performance, and it acts as an extrinsic motivator (Gkinko & Elbanna, 2023; Topsakal, 2024). According to a McKinsey & Company survey, utilizing generative AI in work can generate 15% to 40% more added value compared to traditional work methods (McKinsey Digital, 2023). Likewise, a Harvard study involving 758 consultants at the Boston Consulting Group found that, on average, those using AI completed 12.2% more tasks and completed them 25.1% faster. As the use of generative AI in work increases, many companies across various industries are competing to adopt and use it. Additionally, the intrinsic pleasure or enjoyment experienced by organization members when using generative AI also acts as a key factor promoting usage behavior (J. S. Kim & Baek, 2024; Sohn & Kwon, 2020). By automating tedious and repetitive tasks such as document summarization and data management, generative AI allows organization members to focus on more interesting and valuable tasks and even to form new ideas. This enjoyment derived from using generative AI is also expected to increase trust in, and frequency of use of, generative AI.
As organization members repeatedly utilize generative AI for job tasks, a usage habit regarding generative AI is formed. As organization members consistently use generative AI, the probability of experiencing concrete benefits, such as increased work efficiency or the generation of new ideas, increases (Dabbous et al., 2022; Venkatesh et al., 2023). This direct experience makes them recognize the actual value of generative AI much more strongly than abstract expectations, and this positive usage experience plays a crucial role in forming a favorable attitude toward generative AI (Wu et al., 2025). Moreover, trust can be further strengthened as the stability and predictability of generative AI are confirmed through repeated use. In this context, this study explores the influence of habit on perceived usefulness, perceived enjoyment, trust, perceived risk, and usage to elucidate the role of organization members' usage habits regarding generative AI.
This study aims to identify the antecedents influencing organization members' use of generative AI. Specifically, it addresses the following two research questions:
In establishing strategies for the adoption and utilization of generative AI, companies must both promote its use among organization members and manage the associated risks. Based on the findings of this study, we propose ways to encourage organization members' voluntary participation in and continuous utilization of generative AI by inducing positive usage experiences and repeated use of the technology.
Literature Review and Research Model
Trust Related to Generative AI
Generative AI based on large language models (LLMs) refers to artificial intelligence models that learn from vast amounts of text data to perform natural language processing tasks (Jiang et al., 2025). These models operate on the transformer architecture, which excels at analyzing relationships between words in a sentence and understanding context. Utilizing this architecture, they can perform various tasks such as text generation, context understanding, question answering, translation, and summarization. Representative models include OpenAI's GPT (Generative Pre-trained Transformer), Google's BERT (Bidirectional Encoder Representations from Transformers), and Meta's Llama (Large Language Model Meta AI).
ChatGPT, developed by OpenAI, is based on a transformer language model that generates text through an autoregressive method (Ray, 2023). This model generates natural-sounding text by predicting the next word based on the preceding words in a given sentence. Having undergone pre-training on a large dataset, it exhibits high performance even with small amounts of data and is particularly strong in dialogue and text generation tasks. However, this model has limitations in its ability to deeply understand bidirectional context and may face difficulties in processing long sentences or complex contexts. BERT, developed by Google AI Research, is trained through masked language modeling (MLM) and next sentence prediction (NSP). MLM enhances contextual understanding by hiding some words in a sentence and then having the model predict them, while NSP focuses on understanding the sequential relationship between two sentences. Based on this bidirectional learning method, BERT demonstrates excellent performance in various natural language processing tasks such as sentence classification and question answering (Devlin et al., 2019). Yet, disadvantages remain, such as requiring significant resources for bidirectional processing and having sentence length limitations. Llama, developed by Meta, is a large language model that uses an autoregressive method similar to GPT (Touvron et al., 2023). This model is designed to achieve high performance with relatively fewer resources and offers various model sizes, ranging from 7 billion to 65 billion parameters, providing users with options based on their needs. It also has the advantage of being open source, allowing anyone to use it freely. However, its performance may be limited compared to very large models like GPT in complex generation tasks. Nevertheless, since it was designed for research purposes, it can be effectively utilized in specific environments.
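The autoregressive decoding loop described above (the GPT- and Llama-style approach of predicting each next word from the preceding words) can be illustrated with a toy sketch. The tiny corpus and bigram-count model below are illustrative stand-ins: real LLMs predict subword tokens with a transformer network, not word-frequency tables, and use sampling rather than purely greedy selection.

```python
from collections import Counter, defaultdict

# Toy illustration of autoregressive generation: emit the next word
# conditioned on the preceding word. Only the decoding loop resembles
# what an LLM does; the "model" here is just bigram counts.
corpus = (
    "generative ai can summarize documents and "
    "generative ai can translate documents quickly"
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=4):
    """Greedily append the most frequent continuation, word by word."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:  # no observed continuation: stop early
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("generative"))  # prints: generative ai can summarize documents
```

The sketch also makes the hallucination discussion below concrete: the model outputs whatever continuation is statistically most likely given its training data, with no mechanism for checking factual accuracy.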
However, despite its numerous advantages, large language model-based generative AI also carries several serious risks. The first is related to personal information and privacy. Generative AI may contain personal information acquired in the process of learning from large datasets. This poses a risk of unintentionally generating or exposing sensitive personal or corporate information (Aiolfi, 2023; Huynh, 2024). Additionally, if data is not properly anonymized or is collected incorrectly, personal identification may be possible. Even anonymized data can lead to the re-identification of individuals through specific patterns using AI's analytical capabilities. Second is the cybersecurity threat associated with the potential misuse of generative AI for cyberattacks (Kim, Kim, et al., 2023). Notably, it can be used to generate sophisticated malware or scripts or to modify existing malware to make it difficult to detect. AI-based systems are also at risk of adversarial attacks or model hacking, and cloud-based AI can experience data breaches due to hacking or failed access management. Next, the possibility of intellectual property infringement must be considered. Generative AI carries the risk of copyrighted material being included in its training data without authorization. For example, content generated based on copyrighted papers, images, or financial reports may lead to copyright issues. Users may find it difficult to verify whether the material created by generative AI infringes on copyright, potentially leading to legal disputes. Finally, there is the hallucination problem caused by the generation of incorrect information. This involves generative AI creating inaccurate information or virtual data that is not based on its training data and presenting it as factual (Hasan et al., 2021; McLean & Osei-Frimpong, 2019). This can confuse users or lead them to make incorrect decisions. This problem can become serious, especially when generating complex information that is difficult to verify. Additionally, deepfake technology can be used to forge the voices or images of specific people, enabling the spread of false information.
Prior Research on Organization Members' Use of Generative AI
Generative AI is not just an “automation tool” but is establishing itself as a core technology that fundamentally enhances corporate competitiveness and innovation capabilities (Marimon et al., 2025; Pillai et al., 2024). For organization members to work efficiently and create higher value, the adoption of generative AI is becoming an essential choice. According to a survey examining companies in the Asia-Pacific region regarding the adoption of generative AI conducted by International Data Corporation Korea, an IT market analysis and consulting firm, 59.5% stated that they had already adopted and utilized generative AI in their businesses (Maeil Business Newspaper, 2024). In particular, the adoption rate of generative AI by Korean companies was 72%, which is 12.5 percentage points higher than the average adoption rate in the Asia-Pacific region. Furthermore, 78% of Korean companies evaluated that their work productivity had improved due to the adoption of generative AI. Recently, companies have been developing their own generative AI to reflect their specific requirements and for use in specialized industries. For example, LG Electronics introduced a generative AI system that can generate SQL code, allowing employees in charge of product planning and development to analyze large data sets without specialized IT knowledge. Hanwha developed its own generative AI that learned legal commentaries and regulations to improve the accuracy of searches for construction-related legal provisions. In this context, understanding the antecedents that can promote the use of generative AI by organization members is important for both industry and academia to improve inefficient work patterns and enhance work efficiency.
Most research on the adoption or use of generative AI has often focused on general users (e.g., Aiolfi, 2023; Baek & Kim, 2023). General users tend to utilize generative AI for personal needs, such as changing images to a Ghibli style or searching for specific information. In other words, because they voluntarily decide to use generative AI based on their interest in new technology or personal needs, the enjoyment or satisfaction it provides serves as a major cue for usage. However, the decision to adopt generative AI by individuals within a company differs significantly from the adoption decisions of general users. Unlike the IT adoption of general users, employees utilize IT services provided by the company for work, so the extent to which the technology helps their work plays a crucial role in its adoption. Additionally, trust in the provided functions acts as a facilitating factor that leads to the use of generative AI. Kelley (2022) explored the essential functions or facilitating factors that companies must consider to effectively introduce AI into their organizations. Pillai et al. (2024) suggested that generative AI has unique characteristics that distinguish it from traditional enterprise software. They argued that generative AI can achieve situational awareness through interactive communication with organization members and provide more personalized results through machine learning. If a company introduces an innovative system, organization members may feel inconvenienced when using it or hesitate to use it due to concerns about personal data protection. Chen et al. (2024) examined the effects of beneficial and hindering factors in organization members' decisions to adopt AI-based chatbots for their work. Chang et al. (2024) also argued that companies should comprehensively consider the facilitating and hindering factors associated with adopting generative AI when introducing it. Manresa et al. (2025) applied the stimulus-organism-response (S-O-R) model to explain employees' adoption and use of generative AI in work environments.
Research Model
This study examines the decision to use generative AI based on organization members' trust in and perceived risk associated with generative AI. Perceived usefulness and perceived enjoyment provided by generative AI are considered major facilitators. In addition, to analyze the impact of organization members' repeated use of generative AI, generative AI usage habit is considered a major antecedent factor. The proposed research model is presented in Figure 1.

Figure 1. Research model.
Trust
Trust refers to an individual's belief or confidence that a particular technology or IS will operate as intended, provide predictable outcomes, and not harm the user (Mittendorf, 2018). Several studies on IS have revealed that trust in technology serves as a key factor in the formation of users' usage intentions (Hasan et al., 2021; B. Kim et al., 2025). Mostafa and Kasamani (2022) argued that trust formation plays an important role in the process of interacting with and forming relationships with AI-based technologies. They found that as trust in technologies such as chatbots increases, users develop positive emotions toward the technologies, ultimately leading to increased engagement. Additionally, Asan et al. (2020) empirically showed that the uncertainty or risks associated with the results provided by AI affect trust and consequently influence usage intentions. Hasan et al. (2021) found that in intelligent voice assistant services, as users' trust in the service increases, brand loyalty also increases. Kim et al. (2023) empirically demonstrated that when users were exposed to ChatGPT quality issues, their satisfaction and intention to use the associated travel recommendations significantly decreased.
The hallucination phenomenon and ethical issues associated with generative AI affect organization members' trust in it (Asan et al., 2020; Qin et al., 2020). Machine learning-based AI relies on previously collected data, and when these data have problems or biases, it may present inaccurate predictions or conclusions (Manresa et al., 2025; Zhang et al., 2023). Moreover, because AI utilizes complex algorithms, it can be difficult to understand why or how certain conclusions were reached. Recently, explainable AI analysis methodologies have been developed to alleviate these concerns, but it remains difficult to clearly explain the reasons for predictions or outcomes. Therefore, organization members' trust in the results provided by generative AI acts as a key factor in the formation of their willingness to use it. Baek and Kim (2023) analyzed the characteristics provided by generative AI based on the U&G model. They showed that work efficiency and personalization affect trust, which ultimately positively affects continuous usage intention. Demir and Demir (2023) indicated that concerns about the accuracy and ethical issues associated with the information provided by generative AI can weaken trust. Gkinko and Elbanna (2023) stated that organization members' trust in AI is the most important factor affecting companies' successful adoption of AI in their organizations. Marimon et al. (2025) also found that optimizing trust in generative AI is crucial for enhancing organization members' engagement and performance. In this study, it is likewise expected that the greater the trust an organization member has in generative AI, the more frequent and prolonged his or her use would be.
Perceived Risk
Privacy risk associated with technology is defined as the extent to which technology collects information about individuals beyond their scope of control or is used without their consent, leading to threats to personal privacy (Kim, Chen, et al., 2023; McLean & Osei-Frimpong, 2019). George (2004) defined perceived risk as encompassing consumers' overall concerns about the improper collection of personal information, information misuse, and privacy breaches, which act as a hindering factor that negatively affects customers' decision-making. Im et al. (2008) empirically demonstrated that when users decide to use technology, perceived risk arises from concerns about the outcomes generated by the technology and from uncertainty about the technology itself, with both weakening the intention to use the technology. Min and Kim (2015) found that social media customers make sharing decisions by assessing the benefits they can gain from sharing personal information, such as photos and videos, against the risks of privacy breaches. Hasan et al. (2021) empirically showed that the perceived risk users feel from unauthorized data use hinders the use of intelligent voice assistants. In particular, they indicated that most users' concerns about risks when sharing personal information are amplified because they do not know what information companies collect and how they use it.
Regarding generative AI, organization members tend to provide various information and grant broad permissions in the process of using it. They are especially likely to share sensitive information about companies and themselves because they provide information to receive work assistance (Cheatham et al., 2019; McLean & Osei-Frimpong, 2019). Furthermore, because it is difficult to know the algorithms by which generative AI produces specific results, organization members find it difficult to know whether there are problems with the generated results or whether their personal information has been used without their consent. Aiolfi (2023) also found that the perceived privacy invasion risk associated with using AI-based speakers can reduce the usefulness and enjoyment of the service. Huynh (2024) suggested that concerns about the use of sensitive personal information increase the risk of using generative AI and act as a factor hindering trust and usage. Therefore, this study also expects that the risk of generative AI perceived by organization members would act as a barrier that reduces their use of generative AI.
Perceived Usefulness
Perceived usefulness is defined as the degree to which a user believes that using a particular IS will effectively enhance their job performance (Davis, 1989). It is considered a key factor explaining usage intention related to using new technologies or innovative services. In the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT), perceived usefulness is considered an extrinsic motivator for technology acceptance and has been applied in various adoption contexts (Davis, 1989; Venkatesh, 2021). Even in the generative AI adoption context, generative AI's usefulness would play a crucial role in forming positive attitudes among organization members. Generative AI can enhance the efficiency of organization members by automating and supporting various work processes within the organization. In particular, it can reduce repetitive tasks, promote creative work, and support decision-making. For example, AI can automatically generate drafts of reports, emails, and contracts, reducing document-related work time, and can quickly identify trends in reports and markets. The more organization members perceive that they can efficiently accomplish their tasks by using generative AI, the more their trust in and use of the service will increase. Gkinko and Elbanna (2023) found that trust in AI increases if it provides accurate answers to organization members' questions and helps improve their work. If organization members can efficiently perform their tasks using generative AI, high trust in it will develop, and they will try to form a continuous relationship with generative AI.
Topsakal (2024) showed that perceived usefulness significantly affects the intention to use AI-based support systems when customers plan trips. Therefore, perceived usefulness of generative AI among organization members would play a key role in building trust and increasing usage.
Perceived Enjoyment
Perceived enjoyment refers to the extent to which the activity of using an information system is perceived as enjoyable in its own right, apart from any performance consequences that may be anticipated (Davis et al., 1992). Self-determination theory suggests that people's intrinsic motivation is a key factor driving their behavior (Deci et al., 2017). Furthermore, to continuously induce system usage, a pleasant system usage experience must be provided. Van der Heijden (2004) suggested that hedonic factors, such as pleasure or fun derived from information systems, play a major role in forming the intention to use information systems. B. Kim et al. (2009) found that as experience with a service increases, the effect of perceived enjoyment on behavioral intention also increases. Aiolfi (2023) found that users can feel enjoyment while interacting with AI-based smart speakers and that these hedonic factors increase usage intention.
In the organizational environment, an enjoyable generative AI use experience will also play a crucial role in forming usage intention. Generative AI helps organization members focus on more interesting and creative tasks by assisting with repetitive tasks such as writing reports and emails. It can analyze individual organization members' areas of expertise or interests and recommend customized content or reports. Interaction with generative AI provides organization members with an interesting experience beyond its function to improve work efficiency. Sohn and Kwon (2020) showed that in the decision to purchase AI-based products, users' perceived enjoyment explains a significant portion of the variance in usage intention relative to usefulness, ease of use, and value. J. S. Kim and Baek (2024) found that the hedonic value provided by generative AI, such as pleasure, playfulness, and fun, plays a key role in forming continuous usage intention. Li et al. (2024) found that the more organization members experience enjoyment while interacting with generative AI during work tasks, the more their trust in generative AI increases, along with their intention to use the service. In this study, it is also expected that the user experience formed through interaction with generative AI evokes fun and positive emotions, ultimately leading to increased trust in and intention to use generative AI.
Generative AI Usage Habit
Habit refers to repetitive behavior patterns that occur automatically rather than through conscious awareness (Triandis, 1971). When a specific behavior is frequently performed, habitual behavior tends to appear in similar future situations (Aarts et al., 1998; B. Kim, 2017). Several habit studies have shown that behaviors that people have frequently repeated increase the probability of positive attitudes and continuous behavioral intentions (B. Kim, 2012; Liao et al., 2006). B. Kim (2012) found that usage habits explain a significant portion of customers' usage behavior in continuously used information services. Liao et al. (2006) found that repeated use of websites leads to the formation of habits, which ultimately increases usage intention. When users frequently use information systems, their hesitancy in using them decreases, and they can efficiently utilize the provided functions. Gefen (2003) empirically demonstrated that the repeated use of information systems enriches knowledge about the technology through habit formation, which enhances the perceived usefulness of the technology. Furthermore, when people repeatedly interact with a specific object, they can feel enjoyment from the interaction (Klimmt et al., 2006). Yen and Wu (2016) suggested that even among information services used for practical purposes, service usage habit enhances perceived usefulness and perceived enjoyment, ultimately increasing continuous usage. Similarly, Nikolopoulou et al. (2021) found that service usage habit played a key role in teachers' intention to use mobile internet for education.
In the generative AI environment, organization members are more likely to ask questions that can elicit accurate answers when using generative AI repeatedly rather than initially. If organization members can achieve their goals faster and more accurately through generative AI, their concerns about using the technology will decrease, and they will be able to derive enjoyment from its use. Polites and Karahanna (2013) found that organization members form usage habits when they repeatedly use enterprise systems over a long period. They empirically showed that usage habits associated with organizational systems increase work efficiency and ultimately improve corporate performance. Dabbous et al. (2022) found that when innovative technologies such as AI are introduced in companies, organization members' usage habits regarding the technology significantly affect their usage intentions. Venkatesh et al. (2023) conducted a longitudinal study on new technology use by organization members in seven organizations and found that usage habit explains a significant portion of actual usage variance. Once a habit is formed through the repeated use of generative AI, organization members will unconsciously utilize generative AI for their work. Therefore, although the initial use of generative AI may seem inconvenient and raise privacy concerns, repeated use would lead to the perception that using generative AI is more beneficial and enjoyable. Accordingly, the following hypotheses were established.
Research Methodology
Measurement Items and Scales
This study employed a cross-sectional survey to investigate the factors affecting organization members' use of generative AI. To ensure the content validity and reliability of the questionnaire items, measurement items developed in prior research in the fields of information systems and marketing were utilized. These measurement items were then modified to fit the context of organization members' use of generative AI within an organizational environment. The Institutional Review Board exempted the study from review because the risk posed to research participants and the public was deemed minimal. The survey was structured into three parts. The first part explained the purpose of the study. The second part assessed whether respondents had experience using generative AI and inquired about the generative AI they primarily used. Employees without experience using generative AI for work were excluded from the research model analysis. This section also included the survey questions for the constructs proposed in the research model. The final part gathered demographic information about the respondents. All responses were treated anonymously, and consent for statistical use was obtained from the participants.
Survey questions regarding the use of generative AI were derived from Durcikova et al. (2011). Items for trust were adapted from Mittendorf (2018). Survey items for perceived risk utilized those proposed by Hansen et al. (2018). Measurement items for perceived usefulness and perceived enjoyment were adapted from B. Kim (2012). Survey items concerning generative AI usage habit utilized those proposed by Limayem et al. (2007). Two researchers in the field of information systems reviewed the survey questions, and minor modifications were made to the format and wording of the scales to derive the final survey instrument. All survey items were measured using a 7-point Likert-type scale, ranging from 1 (
List of Model Constructs and Items.
Survey Administration and Sample
Our primary target participants were organization members who possess sufficient knowledge of generative AI. Therefore, the survey was conducted among employees working in Seoul or the Seoul metropolitan area who use generative AI across various industries. Participants were sourced from the researchers' existing personal and professional connections, including academic peers, research support staff, and other colleagues. The individuals were contacted directly, at which time the study's objectives and timeline were detailed. Any queries regarding the data collection process were also addressed. Involvement was entirely optional, with participants receiving guarantees of both confidentiality and anonymity. We utilized the G*Power tool to perform both a priori and post hoc power analyses, confirming the suitability of our sample size. Following the recommendations of Cohen (1988) and Benitez et al. (2020), the a priori calculation was based on a medium effect size (f² = .15).
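An a priori calculation of this kind can also be reproduced without G*Power by solving for the smallest sample size at which a multiple-regression F test reaches the target power. The sketch below is illustrative, assuming Cohen's (1988) conventions (f² = .15, α = .05, power = .80) and five predictors of generative AI use (trust, perceived risk, perceived usefulness, perceived enjoyment, and habit); the exact inputs of the original analysis are not reported here.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, k=5, alpha=0.05, target=0.80):
    """Smallest sample size N at which an F test for a linear multiple
    regression with k predictors reaches the target power (a priori,
    G*Power-style). Noncentrality is lambda = f^2 * N (Cohen, 1988)."""
    n = k + 5                                   # ensure df_denom = n - k - 1 > 0
    while True:
        df2 = n - k - 1
        crit = f_dist.ppf(1 - alpha, k, df2)    # critical F under H0
        power = 1 - ncf.cdf(crit, k, df2, f2 * n)
        if power >= target:
            return n, power
        n += 1
```

Under these assumptions the required sample size is well below the 214 responses collected, consistent with the paper's conclusion that the sample is adequate.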
Sample’s Profile.
Research Results
Measurement Model
In the measurement model, convergent validity, reliability, discriminant validity, and common method bias were assessed. Convergent validity was evaluated by examining the factor loadings of each construct, which should exceed .70 (Hair et al., 1998). As shown in Table 3, all factor loadings were greater than .70, indicating that convergent validity was established. Next, reliability was assessed using composite reliability (CR) and average variance extracted (AVE), as suggested by Fornell and Larcker (1981). According to these criteria, reliability is considered acceptable when the CR exceeds .70 and the AVE exceeds .50. As presented in Table 3, the CR and AVE values for all constructs in the proposed research model exceeded these thresholds, indicating satisfactory reliability. Third, discriminant validity was evaluated by comparing the square root of the AVE for each construct with the correlations between that construct and the others (Fornell & Larcker, 1981). Table 4 shows the results of the discriminant validity analysis. The square roots of the AVEs, which appear on the diagonal, were greater than the inter-construct correlations, demonstrating adequate discriminant validity. Finally, since all responses were collected through self-report surveys at a single point in time, we examined whether common method bias was present. Using Harman's single-factor test (Harman, 1967), we found that a single factor accounted for 41.65% of the total variance. Because this value is below the commonly accepted threshold of 50%, common method bias is not likely to be a significant concern.
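The reliability and discriminant-validity criteria above reduce to simple arithmetic on the standardized loadings and construct correlations. A minimal sketch follows; the loadings and correlation values are hypothetical illustrations, not the study's data.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability: (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum() ** 2
    return s / (s + (1.0 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of the squared loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def fornell_larcker_ok(ave_values, corr):
    """Discriminant validity holds when sqrt(AVE) of each construct
    exceeds its correlations with every other construct."""
    root = np.sqrt(np.asarray(ave_values, dtype=float))
    off = np.abs(np.asarray(corr, dtype=float)) - np.eye(len(root))  # zero diagonal
    return all(root[i] > off[i].max() for i in range(len(root)))

# Hypothetical three-item construct:
loadings = [0.82, 0.88, 0.91]
cr, v = composite_reliability(loadings), ave(loadings)
```

For these illustrative loadings, both the CR > .70 and AVE > .50 thresholds are met, and the Fornell-Larcker check simply compares each diagonal square root against the largest off-diagonal correlation in its row.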
Scale Reliabilities.
Correlation Matrix and Discriminant Assessment.
In summary, the measurement model satisfied the criteria for convergent validity, reliability, discriminant validity, and common method bias, allowing for further analysis of the structural model in the next section.
Research Model Analysis
The results of the research model analysis are presented in Figure 2. Contrary to expectations, trust did not have a significant effect on generative AI use. However, as anticipated, perceived risk had a significant negative effect on both generative AI use and trust. Perceived usefulness had a significant positive effect on generative AI use but did not significantly affect trust. Additionally, as expected, perceived enjoyment was found to significantly affect both generative AI use and trust. Finally, generative AI usage habit had a significant positive effect on generative AI use, trust, perceived usefulness, and perceived enjoyment, but did not significantly affect perceived risk. The proposed research model explained 44.5% of the variance in generative AI use and 48.6% of the variance in trust. A summary of the analysis results is presented in Table 5.

Analysis results.
Summary of the Results.
Conclusion
Summary of Results
This study explored key factors affecting organization members' use of generative AI. Our research model incorporated the idea that the decision regarding whether organization members use generative AI involves a complex consideration of trust in generative AI and the potential risks associated with its use. We considered perceived usefulness and perceived enjoyment as motivational factors that encourage organization members' use of generative AI. Furthermore, we examined the effects of established habits, formed through repeated use of generative AI, on the decision to use it. The proposed research model explained 44.5% of the variance in generative AI use, indicating that it effectively explains the mechanism affecting generative AI usage among organization members. Notably, this is the first study to explore the role of habit in employees' generative AI usage.
Theoretical and Practical Implications
This study provides several theoretical and practical implications. First, trust did not significantly influence generative AI use. This contrasts with previous studies suggesting that trust in new technologies increases users' or organization members' intention to use them. For example, Gibbard et al. (2025) identified increasing trust in workplace generative AI as a key factor in promoting employee usage. Generally, greater trust in innovative technologies tends to increase user satisfaction, usage intention, and actual use. However, our findings highlight the need to understand the context and environment of generative AI use among organization members. Several studies on information systems have shown that enterprise IS use is influenced by organization members' autonomy (Durcikova et al., 2011; Nguyen et al., 2025). If an organization encourages the use of generative AI or integrates it into specific tasks, members may be obligated to use it regardless of their trust in the technology. Alternatively, the influence of other factors might have been so strong that it overshadowed the effect of trust. Thus, for organization members, the perceived usefulness and effectiveness of generative AI in performing their tasks may be much stronger motivators than trust. Companies adopting generative AI, especially during early stages when the technology's reliability may not be fully established, could consider utilizing it for repetitive or less critical tasks. In other words, in the initial adoption phase, it is difficult to trust the results generated by generative AI or the processes by which it generates them. Therefore, it is necessary to accumulate experience using generative AI for support purposes. Since trust is a factor that builds over time, it is expected that as trust grows, generative AI can be utilized not only for support functions but also for innovative and creative tasks.
Second, perceived risk was found to negatively affect both generative AI use and trust. This implies that various risk factors, such as generative AI's output inaccuracy or concerns about privacy infringement, weaken trust among organization members and ultimately reduce their use. A 2024 Deloitte survey on generative AI adoption, covering 2,770 leaders across 14 countries, including the US, UK, India, and Japan, revealed that 60% of leaders were concerned about the use of sensitive data and privacy breaches (Deloitte, 2024). Prior studies on generative AI have reported similar findings. X. Wang et al. (2024) found that organization members' concerns about privacy invasion or risks associated with AI-based chatbots are major obstacles to adoption. Pillai et al. (2024) also found that when companies adopt AI-based chatbots, addressing organization members' lack of trust and concerns about privacy is crucial for improving adoption intentions. Therefore, companies introducing generative AI must understand that the more their organization members worry about sensitive data being used for generative AI training or being leaked externally, the less they will use it. For example, to alleviate concerns about external data leakage, adopting companies can develop company-specific generative AI. Additionally, they must inform organization members about technical safeguards against data leakage, such as data encryption, access control management, and enhanced network security. By addressing the concerns that organization members have regarding generative AI usage, its usage can be increased.
Third, perceived usefulness had a significant positive effect on generative AI use but did not significantly affect trust. Our findings highlight that perceived usefulness is a key driver in promoting generative AI usage. This means that if organization members perceive generative AI as helping them perform their tasks efficiently, its use will increase. While similar to prior research on generative AI, this study is significant in that it examined the usefulness of generative AI for organization members within a corporate environment, rather than for general users (Gkinko & Elbanna, 2023; Topsakal, 2024). B. Kim et al. (2025) showed that perceived usability significantly influences user satisfaction and continued use of ChatGPT. A 2023 study by the Nielsen Norman Group, which conducted three experiments on the effects of generative AI on organizational productivity, showed that generative AI significantly reduced organization members' task completion time and increased productivity by an average of 66%, while also improving the quality of work outcomes (Nielsen, 2023). Notably, workers with lower proficiency levels experienced a significant increase in productivity with the support of generative AI. However, perceived usefulness of generative AI did not significantly affect trust. Although organization members found generative AI helpful in their work, this usefulness may not necessarily translate into trust in the technology itself. In particular, organization members may question the accuracy or consistency of the outputs provided by generative AI due to hallucination issues. Therefore, organizations adopting generative AI must emphasize tangible benefits such as improved work efficiency, time savings, and increased productivity during the initial adoption phase to encourage organization members to use AI. However, if organization members lack trust in generative AI, they will hesitate to utilize it for critical tasks or decision-making processes.
Therefore, introducing generative AI in a phased manner according to the difficulty and importance of each task will increase its overall utilization by organization members.
Perceived enjoyment of generative AI had a significant positive effect on both trust and generative AI usage. In line with our findings, B. Kim et al. (2025) found that perceived enjoyment is pivotal in developing users' continuance intention toward generative AI. When organization members experience fun and enjoyment in the process of applying generative AI to their work, the experience is perceived more positively, which acts as an important factor in increasing its use. Recently, generative AI has gone beyond simply supporting repetitive tasks and now provides creative results and new ideas. For example, NVIDIA's generative AI platform, "BioNeMo," is driving creative innovation in the drug development field (Clifford, 2023). It analyzes protein structures, designs molecules, and generates novel chemical structures through generative AI, enabling the rapid and efficient selection of new drug candidates. This allows companies to reduce the effort and cost associated with identifying new drug candidates while generating more creative and innovative outcomes. Organization members can experience the joy of satisfying curiosity and acquiring new knowledge through the results provided by generative AI, and in this process, perceived enjoyment acts as a key motivator that strengthens trust and usage behavior. Companies adopting generative AI should strive to make organization members perceive generative AI not only as a work tool or problem-solving method but also as a medium that provides enjoyable experiences. In other words, companies must establish environments where employees can creatively and autonomously experiment with and utilize AI, rather than simply mandating productivity improvement as the sole goal of generative AI adoption. Thus, it is crucial to understand that the enjoyment that organization members feel during the generative AI learning and usage process is central to its successful adoption.
Generative AI usage habits were found to play a crucial role in the formation of organization members' generative AI usage, trust, perceived usefulness, and perceived enjoyment. Our analysis results were consistent with those of other studies (Gefen, 2003; Yoo & Cho, 2018), indicating that generative AI usage habits have a significant positive effect on usage, perceived usefulness, perceived enjoyment, and trust. When organization members frequently use generative AI in their work, they tend to use it naturally without conscious deliberation. Additionally, when they have to conduct similar tasks in the future, they will frequently utilize generative AI with less hesitation or doubt. In other words, generative AI usage habits increase the amount and frequency of use. Regarding usefulness, the more frequently organization members use generative AI, the greater their understanding of prompts and interfaces becomes. Additionally, organization members increasingly become proficient in utilizing the various functions provided, and work efficiency increases over time. Therefore, when organization members accumulate positive usage experiences, such as improved work efficiency and the generation of new ideas through generative AI, not only does their trust in generative AI increase, but also their enjoyment derived from its use. Thus, generative AI usage habit plays an important role in the formation of trust, perceived usefulness, and perceived enjoyment. Companies adopting generative AI must provide educational programs or specific usage scenarios during the early stages to help organization members more easily utilize generative AI in their work. For example, providing specific usage scenarios and templates for utilizing generative AI for each job within the organization will increase the likelihood that organization members will frequently use it in their work.
When these positive usage experiences are repeated, habits will form, and organization members will naturally utilize generative AI in various tasks and situations within the organization.
Limitations and Future Research Directions
This study has the following limitations. First, it utilized a cross-sectional survey, which cannot capture the dynamic effects of the antecedents of generative AI use. As organization members accumulate experience with generative AI, the effects of these antecedents may change during the adoption process. Therefore, future research should conduct longitudinal studies examining the dynamic effects of the antecedents affecting generative AI use. Second, the scope of this study was limited to organization members in Seoul or the Seoul metropolitan area, thus restricting the generalizability of the findings. Seoul has a developed IT and service industry, and organization members there tend to be familiar with new IS services. However, it is important to understand that organization members' perceptions of generative AI may vary depending on region and their familiarity with IT services. Future research should re-verify the proposed research model by considering various regions and levels of IT service utilization. Finally, organizational culture, such as work autonomy and work type, may affect the extent to which generative AI is utilized in work. Therefore, future research exploring how the utilization and perception of generative AI change according to organizational culture would also be meaningful.
Ethical Considerations
This study is exempt from IRB review under Article 13, Paragraph 2 of the Enforcement Rules of the Bioethics and Safety Act of the Republic of Korea, as it qualifies as “research that does not collect or record personally identifiable information, even if the research involves direct interaction with subjects, provided that the subjects are unspecified and no sensitive information as defined by Article 23 of the Personal Information Protection Act is collected or recorded.” All procedures performed in the study were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Consent to Participate
Written informed consent was obtained from all participants during the survey period. Participation was wholly voluntary, posed no risks, and did not involve any form of compensation.
Author Contributions
All authors jointly supervised and contributed to this work.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a research grant from Seoul Women's University (2025-0276).
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
