Abstract
Artificial intelligence (AI) has a growing presence in Australian workplaces. While early assessments focused on job automation and productivity gains, a mounting body of evidence points to AI affecting workplace relationships, worker autonomy and psychosocial well-being. This paper examines the relational risks of AI in Australian workplaces, drawing on national and international literature. Australian businesses adopt AI technologies for data entry automation, document processing and fraud detection, and increasingly deploy Generative AI tools. While promising operational efficiency, these innovations also introduce risks of algorithmic management, the erosion of tacit knowledge, digital incivility and the devaluation of human labour. Current governance frameworks fail to sufficiently address these relational harms. This paper makes three contributions. First, it identifies AI relational risks affecting workplace dynamics and worker agency. Second, it identifies gaps in Australia's policy response, particularly in the integration of AI-related risks into Work Health and Safety (WHS) regulations. Third, it proposes a framework for managing relational risks grounded in job crafting, participatory oversight and expanded WHS definitions. In doing so, it positions the worker not as a passive recipient of AI impacts but as a co-designer of workplace transformation.
Introduction
Advanced technologies are no longer aspirational add-ons in Australian workplaces: they are embedded, and their adoption is accelerating. Recent advances in artificial intelligence (AI) have added impetus to the deployment of highly complex technologies into offices and factories, involving computer systems that can perform tasks typically requiring human intelligence, including pattern recognition, decision-making and prediction (Russell and Norvig, 2020). These technologies are deployed in workplaces at increasing scale and are reaching into new industries (e.g. hospitality, cf. Lance, 2025). Most recently, Generative AI (GenAI), which creates new content such as text, images or code, and Large Language Models (LLMs), which use vast text datasets to understand and generate human-like language, have added further layers of AI capability and usability. AI applications now extend from operating and connecting machinery and processing data files, such as customer records, to managing, supervising and watching people (Baiocco et al., 2022). For this algorithmic management, organisations use software to assign tasks, monitor performance, evaluate outcomes and make decisions about work processes and workers without human intermediaries (Mateescu and Nguyen, 2019).
The projected further surge in AI adoption in the workplace remains framed primarily in instrumental or economic terms, as a potential accelerator of productivity, efficiency or employment. However, AI adoption is driven by cost-saving motives as well as (or instead of) strategies to augment human labour (AiGroup, 2024; Caldwell et al., 2025). Acemoglu and Restrepo (2020) have termed this the ‘wrong kind of AI’: systems that displace labour rather than complement or enrich it. Without augmentation strategies, such as process innovation, market expansion or workforce upskilling, AI adoption risks triggering job loss, role fragmentation and structural dislocation.
The literature on labour market impacts reflects this tension as it discusses the risks and opportunities arising from AI (Autor et al., 2020; Pezzinelli et al., 2023). However, much of this discourse remains anchored in macroeconomic forecasts and structural modelling, leaving unexplored the relational risks AI poses inside organisations. Relational risks of AI refer to the ways AI systems disrupt, mediate or transform workplace relationships, social dynamics and power structures (Cebulla et al., 2023). These risks encompass changes to worker-supervisor relationships, peer interactions, worker autonomy and the social fabric of organisational life. Unlike technical or economic risks, relational risks focus on how AI affects the human experience of work, including trust, dignity, authority, communication, social cohesion and well-being in workplace settings (e.g. Ehrhardt and Ragins, 2019; Hanc et al., 2024). Relational risks are therefore not merely about how many jobs AI may destroy or create, and for whom, but about how AI tools alter how tasks are coordinated, how authority is exercised, and how workers relate to one another and interact with the technology. These transformations are often subtle yet profound, with consequences for worker well-being, workplace cohesion and organisational culture.
Relational risks are, or should be, central to Work Health and Safety (WHS). They are the focus of this contribution, which has less to say about the future shape of the Australian industrial relations (IR) system or the role of trade unions therein as they may adapt to technological change (e.g. Mpedi and Tshilidzi, 2025) or risk being consumed by it (e.g. Wiggin, 2025). This is not to disregard the sociopolitical setting in which the propagated transition to an AI-driven economy takes place (e.g. Verdegem, 2024). In Australia, as elsewhere, strong ideological proponents and commercial interests herald AI adoption as the solution to stagnant productivity (BCA, 2025), including in public services (Jobberns and Guihot, 2024). Such promotion is often driven by hope and expectation rather than firm evidence, a clear conceptual pathway towards achieving that objective (Coyle, 2025) or a notion of how its benefits might be shared (Stanford, 2025).
It is nonetheless hard to imagine an outcome other than a further acceleration of AI adoption. Australia is thus at a critical juncture. Despite the proliferation of ethical frameworks, public consultations and voluntary safety standards, regulatory oversight of AI's impact on workplace relationships is lacking and, where emerging, is fragmented.
This paper explores how Australian policy and workplace practices can evolve to manage the relational risks posed by AI in ways that centre worker agency and ensure organisational accountability. This paper offers three core contributions:
- Conceptual: It maps relational risks of AI, such as algorithmic management, erosion of autonomy and incivility, linking these to known stressors in workplace relations.
- Analytical: It evaluates Australia's current regulatory responses, highlighting institutional gaps and blind spots that prevent a more holistic risk response.
- Prescriptive: It proposes a three-pronged policy framework: codifying relational risks into WHS regulations, institutionalising job crafting as a form of workplace resilience, and expanding the legal definition of ‘safe work’ to account for AI-induced psychosocial dynamics.
The following sections first present an overview of the literature on the take-up of AI, its impact on labour markets and workplaces, and the associated AI ethics debate. Thereafter, relational risks associated with AI are mapped, illustrating the potential for disrupting and reshaping established relationships, revaluing labour and creating new divisions. The fourth section turns to exploring the current policy environment, including the role and positioning of WHS. The penultimate section proposes a framework for the responsible use of AI in the workplace before the final section concludes, noting the role of social institutions in managing the risks of workplace AI.
Background and literature review
AI's entry into the workplace has unfolded with remarkable speed and with multiple applications (Rashid and Kausik, 2024) but uneven governance. In Australia, while early industry surveys reported minimal adoption, the landscape has shifted dramatically. According to the Australian Bureau of Statistics (ABS, 2023), by 2022 over 80% of Australian businesses reported using some form of Information and Communication Technology, including cybersecurity software (63%), cloud platforms (59%) and public digital interfaces (38%). By September 2024, one-third of small- and medium-sized businesses in Australia reported adopting AI technologies for tasks ranging from document processing and fraud detection to marketing analytics and the deployment of GenAI assistants (DISR, 2024a). These figures, although impressive, likely reveal only the tip of an emerging iceberg, given Australia's historical lag in adopting frontier technologies relative to global peers (Cebulla, 2024; Nguyen and Hambur, 2023).
AI's rise has coincided with an evolving debate about its labour market effects. While early projections feared mass unemployment concentrated among low-skilled workers, subsequent evidence shows that high-skilled and mid-skilled jobs are equally at risk (Filippi et al., 2023; Georgieff, 2024; Lassébie and Quintini, 2022). Most recently, GenAI has been found to have the potential to further re-stratify occupational status, disproportionately benefiting those already situated at the upper end of the earnings spectrum (Eloundou et al., 2024; Gmyrek et al., 2025; Zarifhonarvar, 2024).
A concomitant erosion of occupational boundaries has a destabilising effect on how labour is organised and rewarded. Jerman et al. (2020) and Mason et al. (2022) observe that workers are increasingly required to adapt to new technological roles that span traditional domains: factory staff handling digital systems or office employees engaging in automated workflows. In such hybrid workplaces, the distinction between an office job and a factory job blurs, as staff assigned to one increasingly use technologies associated with the other, crossing those functional boundaries.
Moreover, AI's effects are not distributed evenly. Demographic disparities are already evident in exposure to displacement. Women, older workers and those with fewer formal qualifications face heightened vulnerability, not just to job loss but also to long-term earnings suppression and skill redundancy (Lane, 2024; Peetz and Murray, 2019; Petersen et al., 2022). These risks affect not just employment status but also an individual's sense of control, competence and belonging in the workplace.
At the policy level, regulatory proposals have emerged in response but remain mostly aspirational. Australia's AI Ethics Framework (DISR, 2019) sets out high-level principles such as fairness, transparency and accountability, but lacks enforcement mechanisms. This is true not just for Australia but, with few exceptions, globally (Corrêa et al., 2023; Prem, 2023). Even when frameworks do exist, they often define AI risks narrowly, centring bias, data leakage or algorithmic unfairness while sidelining the relational transformations AI engenders, which include:
- the restructuring of managerial authority through algorithmic decision-making (Dupuis, 2025; Krzywdzinski et al., 2025);
- the reduction of informal, spontaneous workplace interactions in favour of data-driven oversight (Jarrahi et al., 2021); and
- the loss of agency as workers are expected to follow system-generated cues, deadlines and feedback loops (Darr, 2018; Malone et al., 2025).
These dynamics define how AI is experienced by workers on a day-to-day basis. As Oosthuizen (2019) observes, AI integration may boost operational efficiency but simultaneously foster job insecurity and psychological strain. Prunkl (2024) goes further, suggesting that AI's control structures could pose existential threats to worker autonomy. AI deployed to optimise and accelerate work processes risks leaving workers without control over tasks and schedules, which come to be dictated by AI tools.
Importantly, the argument here is not just that AI is psychologically stressful or ethically ambiguous. It is that AI is relationally transformative, that is, a system that rewires the social dynamics of work. It alters who holds authority, who is visible or invisible, and how performance is judged and contested. This point is reinforced by Schafheitle et al. (2021), who describe the emergence of ‘two-leader’ dynamics, conflicts between AI tools and human supervisors that confuse accountability and dilute supervisory trust. Supervisors become system navigators instead of being mentors or overseers, eroding relational authority and flattening professional interactions.
Despite mounting evidence, relational risks remain largely invisible in public policy and organisational design. Governmental and industry strategies emphasise digital skills, reskilling initiatives and cybersecurity protocols, while overlooking how AI tools mediate interaction, affect worker identity and introduce new forms of organisational friction. Taken together, this literature points to a pressing need for a revised lens on AI in the workplace: one that moves beyond macroeconomic modelling or ethics checklists, and towards an integrated understanding of how AI reshapes human relationships at work.
Industrial relations scholarship and AI in the workplace
IR scholarship has been slow to engage with AI's workplace implications. A systematic literature review by Pereira et al. (2023) found only one article on AI workplace impacts published in a dedicated human resource management journal and none in an IR journal; most articles appeared in information management journals or in journals concerned with AI ethics and corporate social responsibility. With regard to employee and labour relations, the authors note a total absence of ‘individual and team levels…as units of analysis’ (Pereira et al., 2023: 14) and a focus on organisational processes instead. Most studies (42%) were, in fact, concerned with training and development, that is, with preparing workforces for the further embedding of AI in workplaces. Likewise, a systematic review by Bankins et al. (2024) found that prominent themes in the literature included human–AI collaboration, worker attitudes towards AI and algorithmic management of platform-based work, but identified no cluster exploring intra-workplace relations.
A research gap is also discernible in WHS contexts, where the integration of AI-specific risks remains minimal. To the extent that occupational research has addressed AI, the focus has typically been on the potential for AI to reduce accident risks in workplaces (Huber et al., 2025; Shah and Mishra, 2024). Whilst insightful, such a narrow focus becomes problematic as AI tools reshape the social dynamics of workplaces as well as their physical designs.
Relational risks of AI in workplaces
This section looks more closely at how those social dynamics might be reshaped. It maps five prominent risks of AI tools and explores how they reshape relations in the workplace: algorithmic management, the devaluation of tacit knowledge, the revaluation of labour, technology-facilitated incivility and the fragmentation of supervisory relationships.
Algorithmic management and the erosion of autonomy
AI tools increasingly take over core managerial tasks such as performance monitoring, scheduling and task delegation. These systems promise efficiency, but often at the cost of worker autonomy and discretion. Workers may find their tasks dictated by AI tools, with AI micro-management creating high-pressure working environments and reduced discretion.
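To make the mechanism concrete, the following minimal sketch, written in Python purely for illustration, shows how an algorithmic scheduler of the kind described above might couple task allocation and evaluation to a single logged metric. The names, threshold and logic are assumptions for exposition, not a depiction of any actual vendor system.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    tasks_per_hour: float  # productivity metric logged by the system
    assigned: list = field(default_factory=list)

def assign_and_flag(workers, tasks, target_rate=12.0):
    """Hypothetical algorithmic-management loop: the system alone decides
    who gets which task and who is flagged, with no human intermediary."""
    # Tasks flow to whoever the logged metric currently favours.
    ranked = sorted(workers, key=lambda w: w.tasks_per_hour, reverse=True)
    for i, task in enumerate(tasks):
        ranked[i % len(ranked)].assigned.append(task)
    # Workers below a system-set threshold are flagged automatically;
    # the threshold is opaque and non-negotiable from the worker's side.
    return [w.name for w in workers if w.tasks_per_hour < target_rate]

staff = [Worker("A", 14.2), Worker("B", 10.8)]
flagged = assign_and_flag(staff, ["pick-001", "pick-002", "pick-003"])
print(flagged)  # ['B'] -- evaluation happens without human review
```

Even in this toy form, the relational point is visible: a worker's standing is determined by a metric and a threshold they neither see nor negotiate.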
As Dupuis (2025) shows for the manufacturing sector, algorithmic control can intensify conflict, undermining union power and reconfiguring workplace regimes. While real-time data insights can streamline operations, they also risk imposing unrealistic performance targets, often without transparency or negotiation (Kim and Lee, 2024; Lițan, 2025). A recent parliamentary inquiry into workplace surveillance in Victoria, Australia, revealed that AI tools were being used to collect biometric data, log keystrokes and assess voice tone – all without clear guidelines for worker consent (Parliament of Victoria, 2025).
These technologies tend to favour rigid optimisation over contextual judgement. As Boyd and Andalibi (2023) find, tools that appear emotion-aware or responsive are often poorly calibrated, operate without nuanced understanding of human states, and ultimately impose additional emotional labour on workers. The result is a subtle, persistent undermining of worker agency, where employees are held to account by metrics they do not control and evaluated by systems they may not understand.
Devaluation of tacit knowledge
AI not only replaces routine tasks but also encroaches on domains once considered uniquely human, notably intuition, improvisation and context-sensitive decision-making. Historically, tacit knowledge served as a form of informal workplace authority. As AI gets more deeply integrated into workplaces and work processes, the critical issue may not be whether biology prevails over technology (Stewart, 2025), but whether those operating technology believe in and act upon this distinction.
Research in Industry 4.0 contexts shows that as AI is trained on large, contextual datasets, it can approximate or simulate tacit judgement (Fenoglio et al., 2022; Zaoui Seghroucheni et al., 2025). This creates a perception that human insight is no longer necessary or, worse, a liability. The displacement is not always immediate, but it is often perceptual: workers lose informal status and their experiential knowledge is excluded from organisational decision-making.
This erosion of tacit knowledge is especially damaging in environments reliant on collaborative expertise. Schultz et al. (2025) warn of ‘algorithmic dehumanization’, where overreliance on machine outputs diminishes the recognition of human judgement, empathy and context. The worker becomes a bystander in the process of production or service delivery.
Revaluation of labour
AI technologies reconfigure value hierarchies inside organisations. As systems become central to workflows, new forms of labour, such as data curation, systems navigation or algorithm interpretation, gain prominence. Workers in roles not easily integrated with AI tools, by contrast, may see their value diminished. Some occupations may be at risk of becoming redundant, whilst others will grow in numbers and proportions, and new jobs emerge.
Australia's updated Occupation Standard Classification (ABS, 2024) already reflects this shift, with formal recognition of roles such as data analyst, architect and cybersecurity officer. However, historical research on technological innovations cautions that those retrenched by automation rarely transition into such new roles. Feigenbaum and Gross (2024) show that the mechanisation of telephone operations in the U.S. created job booms, but for new entrants, not those displaced. Similarly, Schneider (2025) finds that the destruction of hand-spinning occupations in Britain created structural dislocation that lasted generations.

The psychological and social effects of this revaluation are complex. Workers may retain their jobs but lose influence, recognition or identity (Selenko et al., 2022). Revaluing labour in AI-adopting workplaces benefits some but not others (cf. The Adaptivist Group, 2025), causing or exacerbating divides between the job-secure and the job-insecure. It may also create insider-outsider divisions between those operating AI tools and those subjected to those operations, or between those accepting AI and others more sceptical (e.g. Xu et al., 2025).
Technology-facilitated workplace incivility
AI also introduces novel forms of incivility – machine-mediated, human-triggered or both. This includes the use of surveillance tools for micro-harassment, algorithmic bias in promotions, or even generative technologies to impersonate or embarrass colleagues. The use of deepfakes does not stop outside the office door or factory gate.
Koukopoulos et al. (2025) frame this phenomenon as ‘technology-facilitated abuse’, a rising but poorly defined category of workplace harm. O’Keeffe et al. (2024) identify patterns of harmful or hazardous behaviour enabled by individuals (managers, workers, customers) and by system settings: organisational culture, physical working environments and technology. Cutting across individual and system settings is the use of technology by individuals to harass co-workers (Flynn et al., 2024).
Incivility in AI-driven workplaces is not always dramatic. It may involve subtler dynamics: alienation from human contact, the absence of empathy in system feedback, or the quiet resignation to opaque procedures. These experiences can degrade organisational trust and mental health, even in the absence of explicit misconduct. Such systems amplify bias and erode workplace culture as a result of diminishing human interaction.
Fragmentation of supervisory relationships
One insidious relational risk of AI integration is the restructuring of traditional supervisory relationships. Supervisors, once mentors or conflict brokers, are being redefined as algorithm interpreters. In the new world of AI-driven offices and factories, supervisors lean on dashboards and automated prompts rather than informal check-ins or nuanced feedback, spending more time interpreting AI outputs, resolving algorithmic-human conflicts and managing exceptions.
This ‘two-leader’ phenomenon, where the AI tool and the human supervisor issue parallel or conflicting guidance, creates confusion and conflict (Schafheitle et al., 2021). Employees must negotiate trust among three actors: their supervisor, the algorithm and themselves. In many cases, AI tools are treated as infallible, rendering human discretion suspect.
Jarrahi et al. (2021) describe this shift as redistributing authority, where the locus of control moves away from human relationships towards impersonal systems. This erosion of supervisory authority destabilises traditional structures of mentorship, performance management and conflict resolution. Managers no longer own decisions; they facilitate them on behalf of systems they may not fully understand.
Policy landscape and regulatory gaps
The five relational risks are not mutually exclusive. They interact and compound one another, producing workplaces that are more efficient on paper but potentially more alienating in practice. The accelerating integration of AI into Australian workplaces has outpaced the development of binding, workplace-specific regulatory responses. While national conversations about AI governance have grown in sophistication, focusing on algorithmic transparency, fairness and cybersecurity, these frameworks often neglect the relational, psychosocial and labour-process impacts of AI adoption.
Voluntary ethics and fragmented governance
Australia's flagship AI governance document, the AI Ethics Principles (DISR, 2019), sets out eight high-level values: fairness, transparency, privacy protection and accountability among them. These principles have informed a number of sector-specific frameworks, including those for schools (Commonwealth of Australia, 2023) and the public service (Australian Government, 2024), but remain non-binding. Governance of AI in business settings has so far remained voluntary and reliant on managerial awareness and competency.
This voluntarism persists in other frameworks. The Voluntary AI Safety Standard (NAIC, 2024) provides practical advice on responsible innovation and AI use, asking businesses to consider: ‘What is our risk appetite for AI use? Have we updated our risk appetite statement?’ (AICD, 2024: 28). These prompts reflect a risk-management approach, but they do not explicitly account for relational dynamics or psychosocial risks arising in AI-mediated environments.
Crucially, this soft governance approach does not require meaningful workforce consultation. Caldwell et al. (2025) stress the importance of engaging workers in discussions around AI risks, yet most firms in Australia have not institutionalised such engagement. In fact, a recent study found that Australian organisations, on average, adopt only 12 of 38 recognised ‘responsible AI’ practices, and that 67% of businesses surveyed were unaware of the existence of the AI Ethics Principles (Fifth Quadrant, 2024). This demonstrates a profound awareness and capacity gap, particularly in small and medium enterprises, which represent the bulk of new AI adopters (DISR, 2024b).
Inadequate integration with WHS systems
The Work Health and Safety Act 2011 requires persons conducting a business or undertaking (PCBUs) to eliminate or minimise risks to health and safety. These obligations encompass not only physical safety but also psychosocial hazards, including stress, harassment and poor work design.
However, the unpredictable nature of newly emerging AI risks poses challenges to employee relations, exacerbated by the opacity typical of much AI-driven automation. On the one hand, we have solid evidence of the adverse effects of ‘digital Taylorism’ (Bowles and O’Hanlon, 2025; Noponen et al., 2024) on worker wellbeing through work intensification that forces greater speed into workplaces (OECD, 2023) and onto gig workers (Vignola et al., 2023). The 2024 dispute at the Australian retailer Woolworths over automated warehouse management systems illustrates such an instance: workers, pushed to work faster by systems they could not influence, resorted to industrial action (Barnes, 2024).
On the other hand, we have emerging paradoxes that may not yet be fully understood: counterproductive work behaviour as communication with colleagues is replaced by AI interactions (Meng et al., 2025); automation that, contrary to intention, increases workloads (e.g. Gallani, 2024); or ‘AI paternalism’, which in one documented example (Almyranti et al., 2024: 16) relates to an over-reliance on ‘AI to the detriment of patients' lived experiences and clinical judgment’.
The WHS system does not currently treat algorithmic oversight, digital surveillance or loss of worker autonomy as codified hazards. Although guidance is available from Safe Work Australia (SWA, n.d.), it has not kept pace with the relational complexities introduced by AI tools.
The 2025 report from the House of Representatives Standing Committee on Employment, Education and Training on the Future of Work recommends a corrective: ‘that the Australian Government work with Safe Work Australia to develop a Code of Practice that identifies and addresses specific work health and safety risks associated with AI and [automated decision-making]’ (Parliament of Australia, 2025: xv).
The AI WHS Scorecard (Cebulla et al., 2023), developed to support workplace self-assessment, may aid in this process. It identifies hazards across AI design, implementation and operation phases; it encourages relational awareness, highlighting risks to communication, worker autonomy and trust. But as its authors note, it is designed to be a living document that will need updating as AI develops and encroaches further into workplaces. The Scorecard also lacks recommendations for mitigating or eliminating the hazards and relational risks it might help to identify.
Limited institutional channels for worker voice
One mitigating tool is workforce consultation on AI adoption, but it has remained ad hoc and management-dependent (Wilkinson et al., 2022). While relational risks are increasingly acknowledged in ethical AI discourse (Haipeter et al., 2024), Australian organisations have not yet embedded these concerns into formal consultation requirements or industrial agreements.
Good employee relations are guided by principles of fairness and equity; transparent communication; trust and respect; and participation and voice in decision-making through union representation or direct consultation. The lack of robust consultation mechanisms is especially problematic in the face of the increasing opacity of AI decision-making. As LLMs and predictive systems become more autonomous, their operations grow less explainable even to those who deploy them (see especially NIST, 2023: 38–39). In such a setting, worker trust and procedural transparency become essential governance elements, but are often absent.
International comparisons: the EU AI Act
In contrast, the European Union has moved toward binding regulation. The AI Act, adopted in 2024, classifies workplace-related AI systems (e.g. emotion recognition, algorithmic hiring) as ‘high-risk’ and mandates risk assessment, documentation and human oversight (Official Journal of the European Union, 2024). It also prohibits certain uses outright, including ‘emotion recognition in workplaces and education institutions’. The Act (Art 26.7) also requires that ‘[b]efore putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system’.
While the EU's regime is not without limitations – particularly in ensuring post-deployment monitoring and worker participation – it represents a significant step beyond Australia's current policy posture. Where Australia offers guidelines, the EU mandates compliance; where Australia encourages transparency, the EU enforces it.
The EU's approach could provide a template for Australia, particularly in linking AI risk classification with labour market impact assessments and sectoral regulation. A more explicit focus on human oversight, contestability and consultation would help bridge the current gap between ethical aspiration and institutional implementation in Australian workplaces.
Towards a responsible AI workplace framework
While Australia has taken commendable first steps towards responsible AI governance, its current regulatory apparatus lacks the legally binding, workplace-specific and relationally sensitive mechanisms necessary to mitigate emerging risks. Bridging this gap requires a shift in how AI is conceptualised, not just as a technical tool or economic input, but as a social actor with the power to shape working relationships, identities and hierarchies.
This section proposes a three-pronged framework for embedding relational risk governance into Australia's workplace systems. The framework is intended to be anticipatory, empowering both institutions and individuals to co-manage AI adoption through participatory, accountable and adaptive mechanisms.
Codify relational AI risks into Work Health and Safety standards
The first step is to formally incorporate relational AI risks into WHS legislation and guidance. While the WHS Act 2011 already covers psychosocial hazards, it does not explicitly address those introduced by algorithmic control, digital surveillance or the erosion of supervisory authority. An AI Code of Practice, as recommended by the House of Representatives Standing Committee on Employment, Education and Training (Parliament of Australia, 2025), should:
- Define AI-specific psychosocial risks such as job insecurity due to opaque decision-making, deskilling from automation or emotional distress from algorithmic evaluation.
- Require relational risk assessments during system procurement and rollout, including workforce consultation and mitigation planning (a sketch of what such an assessment might record follows this list).
- Mandate review protocols for AI tools affecting work design, staffing decisions or worker performance metrics.
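By way of illustration only, the relational risk assessment envisaged above could be recorded in a structured register. In the following minimal Python sketch, the schema and field names are assumptions for exposition, not an existing SWA instrument or a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class RelationalRiskEntry:
    """One entry in a hypothetical relational risk register for an AI tool."""
    system: str                        # the AI tool being procured or rolled out
    lifecycle_phase: str               # 'procurement', 'rollout' or 'operation'
    hazard: str                        # e.g. opaque evaluation, loss of discretion
    relationships_affected: list[str]  # e.g. worker-supervisor, peer-to-peer
    workers_consulted: bool            # the consultation requirement
    mitigation: str
    review_due: str                    # date of the mandated review

entry = RelationalRiskEntry(
    system="rostering optimiser",
    lifecycle_phase="procurement",
    hazard="schedules set without negotiation or explanation",
    relationships_affected=["worker-supervisor"],
    workers_consulted=False,  # would trigger consultation before go-live
    mitigation="joint review committee sign-off before deployment",
    review_due="2026-01-01",
)
```

The value of such a register lies less in the data structure itself than in what it forces into the open: the affected relationships, the consultation status and a dated commitment to review.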
SWA already requires PCBUs to identify, assess and control workplace hazards. These existing structures can be adapted to recognise algorithm-induced stressors just as they do chemical or ergonomic ones. International precedents, such as the EU's ban on workplace emotion recognition (Official Journal of the European Union, 2024), suggest that such boundaries are both feasible and politically defensible.
Institutionalise worker-led job crafting and AI co-governance
Relational risks are not just external threats; they are internal to the dynamics of power and control in the workplace. One underexplored remedy lies in empowering workers to shape their interaction with AI systems, a process known as job crafting.
Job crafting refers to the self-initiated changes workers make to their task boundaries, relationships or cognitive framing to enhance meaning, satisfaction and alignment with their strengths. Workers are thereby given opportunities to adapt to the presence of technologies in ways that extract maximum value for themselves. Building on Li et al. (2024) and Perez et al. (2022), this framework proposes that job crafting be institutionalised in the AI adoption process through:
- AI integration workshops: participatory sessions where employees and technical teams jointly map out task changes, risk perceptions and creative adaptations.
- Workplace AI review committees: cross-functional bodies with authority to approve, contest or revise AI deployment plans based on relational impacts.
- Crafting audits: periodic surveys or performance reviews to monitor how AI is shaping job roles, worker autonomy and informal authority structures.
These mechanisms treat the workforce as co-designers, not mere end-users, of AI integration. They also build on existing IR infrastructure, including union representation and safety committees. When job crafting is legitimised and supported, it enables workers to transform potential threats into sources of meaning and resilience.
Moreover, job crafting can help balance asymmetries of control. If AI tools optimise for organisational goals (efficiency, compliance), job crafting optimises for worker values (dignity, purpose, agency). The goal is not to eliminate AI's role, but to co-produce a workplace that balances operational goals with accountability to its workers.
Expand the legal and normative definition of safe work
The final component of the framework addresses a foundational problem: the conceptual narrowness of what constitutes ‘safe work’. Under current regulations, safe work is generally interpreted as freedom from physical injury, extreme stress or overt harassment. But AI-mediated work introduces new risks that are less visible but no less real: dehumanisation, exclusion from decision-making and loss of role identity.
Therefore, this framework calls for a broadened understanding of workplace safety that includes:
- Dignity and autonomy as protected workplace conditions (Bal, 2017).
- Human oversight as a non-negotiable element in high-risk AI systems (aligning with Kochan et al., 2024; NAIC, 2024).
- AI contestability rights, allowing employees to challenge algorithmic decisions affecting their status, pay or roles (ETUC, 2025).
This expansion would future-proof WHS standards by recognising that emerging technologies do not merely automate tasks but reshape the very conditions under which labour occurs. Such an approach complements technical governance (e.g. AI audits, documentation) with sociotechnical oversight, ensuring that human values remain central in the AI era.
Collectively, the three pillars of this policy model – codified WHS standards, job crafting mechanisms and an expanded definition of safe work – offer a comprehensive response to the relational risks posed by AI. They embed accountability, co-governance and adaptability into AI deployment processes, transforming risk management from a compliance task into a relational design challenge. In doing so, they reposition the worker not as a passive subject of technological change but as an active stakeholder in shaping the future of work.
Conclusion
AI is no longer a peripheral concern for Australian workplaces. It is a structuring force, shaping not only how work is done, but how it is felt, valued and governed. This paper argues that the most pressing risks introduced by AI are not merely technical or economic, but relational. They manifest in the erosion of worker autonomy, the displacement of tacit knowledge, the fragmentation of supervisory relationships and the emergence of new forms of technology-facilitated incivility. These effects are deeply social, often subtle, and frequently overlooked in both policy design and organisational strategy.
Australia's current AI governance framework is insufficiently equipped to address relational risks. Ethical principles, whilst valuable, are voluntary and lack enforcement. The WHS system, though broad in mandate, has not yet integrated AI-specific relational hazards into its regulatory schema. Meanwhile, most businesses remain unaware of or unprepared for the social consequences of AI adoption – let alone equipped to manage them through inclusive, participatory processes.
This paper proposes a three-pronged framework for embedding relational risk governance into Australian workplace policy, namely (i) codifying AI relational risks into WHS standards, (ii) institutionalising worker-led job crafting and co-governance and (iii) expanding the legal and normative definition of safe work to include dignity, autonomy and procedural fairness in AI-mediated decisions.
This framework addresses a core insight: AI tools do not merely automate, they reconfigure. They change how decisions are made, who holds authority, how performance is interpreted, and what kinds of labour are seen as legitimate. As such, they must be governed not only through audits and algorithms but through social institutions, norms and participatory mechanisms that foreground the human experience of work.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
