Abstract
The development and diffusion of artificial intelligence (AI) technologies in workplaces are transforming the nature of work practices and their constituent skill requirements. This dual transformation challenges workers, organisations and societies, which must develop new skills and enhance existing ones to succeed in increasingly AI-mediated work settings. Although the literature has recognised skills as a key factor in the development and uptake of AI technologies, there has been a paucity of empirical research on the precise nature of skill requirements in AI-mediated workplaces. This commentary argues that to advance our understanding of skill requirements in AI-mediated workplaces, an integrative, multidisciplinary, multimethod and multistakeholder approach is required. The commentary proposes an agenda for future research in this societally important but poorly understood area.
Introduction
On 11 January 2023, the marketing agency Codeword announced the recruitment of ‘the world's first AI interns’. 1 Named Aiden and Aiko, the artificial intelligence (AI) ‘interns’ will be full members of the company's 106-strong staff: Aiko will join the design team, and Aiden the editorial team. Their tasks will include creating slides, writing entries for the company's blog and social media, and analysing trends and news. The performance of the AI ‘interns’ will be regularly evaluated as they progress towards a potential full-time role. Vendors’ promotion of AI tools under the guise of ‘digital employees’ or ‘colleagues’ is well documented in the literature. Scholars have critiqued the destabilising effect of such nomenclature on human dignity in the workplace, evidencing the non-trivial amount of human input that goes into training, validating and calibrating such tools to keep them running (Newlands, 2021). Codeword's announcement is silent on what would be required of Aiden's and Aiko's human co-workers, managers and clients to work with these AI ‘interns’.
This example is one of a myriad of recent transformations in work practices brought about by the development and diffusion of AI in organisations. Economists have shown that technological advancements require specialised, ‘higher-level’ knowledge and abilities leading to new skill demands, a pattern termed ‘skill-based technological change’ (e.g. Autor and Dorn, 2013). AI technologies fit this pattern, generating skill gaps and potentially requiring new, AI-specific skills. This technological transformation challenges workers, organisations and societies. At the individual level, workers are faced with the need to enhance their skills to succeed in today's AI-mediated workplaces and the labour market of the future, where AI is set to play an ever more central role (McKinsey, 2021). At the organisational level, the embedding of AI has been uneven, with the availability of skills to develop, implement and use AI recognised as a key barrier to its uptake (Brynjolfsson and Mitchell, 2017; Frank et al., 2019). At the societal level, educational institutions and policymakers have been unable to keep up with these transformations adequately, generating knee-jerk, piecemeal, technologically determinist responses poorly substantiated by evidence (Cedefop, 2018).
Whilst the literature has recognised the importance of skills as a key factor in the development and adoption of AI, there has been a paucity of empirical research on the nature of skill requirements in AI-mediated workplaces. Recent publications, including in this journal, have highlighted the skill gaps that arise when humans interact with AI, for example, when workers seek to make sense of AI-generated predictions (Borch and Min, 2022) or when they validate AI's decisions without understanding the basis of those decisions (Anthony, 2021). The lack of understanding of skill requirements, particularly the scarcity of micro-level data such as AI-focused skill taxonomies, has been recognised as a key barrier to advancing our understanding and measurement of the impact of AI on the future of work more broadly (Frank et al., 2019). Furthermore, the absence of an empirically grounded understanding of exactly what skills are required in AI-mediated workplaces hinders the development of effective interventions to enhance the workforce's requisite skills.
Conceptually, the present discourse on AI and skills has at least two limitations. First, the term ‘skill’ is often used too loosely. For example, sometimes ‘skill’ is used as a catch-all term to denote occupations/fields such as ‘data science’ or ‘programming’, which are aggregates of skills, knowledge and qualifications rather than a single skill (e.g. Stephany, 2021). Furthermore, the discussion tends to be limited to technical skills required to develop and implement AI solutions such as machine learning [sic], NLP or data visualisation techniques (e.g. Marr, 2022). Often, the term ‘skill’ is used interchangeably with ‘competencies’ or ‘capabilities’, which are broader concepts comprising but not limited to skills (Long et al., 2020). The variation in and inconsistency of use is partly due to the notion of skill having been historically studied within diverse disciplines including economics, anthropology, psychology, education and sociology, each of which has produced its own conceptualisations, definitions and typologies of skills, often in a non-generative manner. The conceptual imprecision and inconsistency, far from being purely of academic concern, are a practical hindrance, for they can lead to ineffectual solutions and poor policy. Whilst a detailed discussion of definitions of skills is beyond the scope of this commentary, to clarify, here I use the term ‘skills’ to mean the ability to apply knowledge to work tasks (Cedefop, 2014), comprising core skills (the technical know-how of a particular occupation, e.g. for a data scientist the ability to identify and apply correct techniques and software to analyse big data), transversal skills (such as communication, teamwork and problem-solving) and dispositions (such as self-efficacy, creativity, and emotional intelligence).
The more nuanced way of thinking about skills afforded by a broader typology such as this one helps expand the current policy and practice discourse beyond the limited – and limiting – focus on technical skills, by emphasising also the uniquely human, interactional, creative and emotional capabilities that will be relevant in AI-mediated workplaces.
Second, the present discourse on AI and skills is focused on generic ‘digital skills’, a notion which originated in the earlier waves of technological development (e.g. Vuorikari et al., 2022). However, generic digital skills are only partially relevant because AI technologies are hypothesised to necessitate new, AI-specific skills such as ‘intelligent interrogation’ (the ability to correctly formulate questions to AI) or ‘fusion skills’ (the integration of human skills and machine capabilities within a business process to generate results superior to what humans or machines could produce alone) (Daugherty et al., 2019).
This commentary argues that to address these limitations and to develop an empirically grounded understanding of the transformation of skills brought about by the development and diffusion of AI technologies, an integrative approach is required. By ‘integrative’, I mean an approach that combines the perspectives of different actors/stakeholders and scientific disciplines using multimethod research designs. In doing so, the commentary responds to Jarrahi et al. (2022), who in this journal called for more research on the complex ways in which AI and humans mutually augment one another across contexts and who highlighted AI skills as an important aspect to investigate.
Research agenda
I propose that at least four dimensions of integration are central to future research in this area: (a) integrative exploration of work practices and associated skill requirements across the AI production chain, comprising curation of big data underpinning AI, development of AI, and end-use of AI; (b) integrating insights from social sciences, humanities and engineering/computer science; (c) integrating quantitative and qualitative methodologies; and (d) integrating perspectives of different stakeholders within a participatory co-design framework.
Dimension 1: Integrative exploration of work practices and skill requirements across the AI production chain
Future research should examine the differential skill requirements of different types of frontline actors across the AI production chain (Hoffmann and Nurski, 2021), comprising three stages and corresponding roles: production of big data for AI (roles: e.g. data labellers/verifiers), design and development of AI (roles: e.g. machine learning engineers and data scientists) and end-use of AI (roles: e.g. workers using AI-based solutions to carry out their work tasks or managers/leaders making decisions about organisational implementation of AI). How do skill requirements vary between these stages and roles in the AI production chain? Regarding the first stage, AI's reliance on big data has led to the expansion of work practices focused on data production: validation, labelling and verification (Newlands, 2021). Although data production is sometimes partially done internally by AI developers and companies, often it is outsourced to online platforms such as Clickworker (Tubaro et al., 2020). Importantly, workers undertaking AI data production on these platforms occupy a dual position: they are producers (trainers) of AI and, at the same time, end-users of AI, because their platform work itself is algorithmically supervised (Jarrahi et al., 2021). Data production work is fundamental to the quality of AI solutions, but its reliance on outsourcing to online platforms renders the actual work practices, the workers and their skill requirements opaque and poorly understood (Newlands, 2021). In terms of the second stage of the AI production chain, the implementation of AI technologies requires organisations to have specific software development capabilities, as there are sometimes no ‘off-the-shelf’ solutions for organisations to use.
Organisations then must either create an in-house AI team or procure AI expertise whilst building an understanding of the associated ethical, legal and technical issues applicable to their own organisational context (McKinsey, 2021). The skill requirements of these intra-company actors are also poorly understood but are likely to span technical, organisational, managerial, human and ethical dimensions. As these examples illustrate, because the different actors in the AI production chain experience different work practices in relation to AI, it is plausible that they will face different and distinct skill gaps and requirements. Furthermore, different types of AI technologies may necessitate different skills; for example, in generative AI, formulating instructions (prompts) is emerging as a key skill, but it may be less relevant in non-communicative forms of AI; future research should produce differentiated analyses of such AI-specific skills.
Dimension 2: Integrating insights from social sciences, humanities and computer science/engineering
Research on AI in the computer science/engineering, social sciences and humanities has proceeded largely in parallel, with little generative knowledge building across the disciplines (Dwivedi et al., 2019). Future research on skill requirements in AI-mediated workplaces must be multidisciplinary because no discipline has the required conceptual, theoretical and methodological instrumentation to address this problem alone. In particular, future research on AI and skills should seek to bring together the emergent findings, insights and theoretical perspectives from different disciplines. There are several ways in which such multidisciplinary integration can be operationalised (Bergmann et al., 2012). For example, future research could focus on the development of integrative theoretical frameworks and models bringing together the relevant concepts, constructs and theories from across computer and social sciences and humanities. Multidisciplinary integration could be fostered through joint formulation of hypotheses or research questions, with input from non-academic stakeholders. A key mechanism could be the development of research infrastructures and processes that are supportive of integration, for example, multidisciplinary SocSci/CompSci and intersectoral R&D consortia and research centres.
Dimension 3: Integrating quantitative and qualitative methodologies
Literature on skill measurement has recognised that the ‘reliance on single-type research design could lead to seriously distorted conclusions concerning the skill phenomenon’ (Spenner, 1990: p. 8). Therefore, to enable an accurate analysis of the transformation of skills in AI-mediated workplaces, future research should integrate cross-sectional designs within and across units with longitudinal, panel and time-series studies using mixed methods. In addition, causal designs that could help shed light on the causes and consequences of skill transformation for the uptake of AI in the workplace would be required (Rahwan et al., 2019).
Extant methods of skill analysis span direct measures, for example, those based on expert ratings and self-report data (e.g. interview and survey), and indirect measures, such as extrapolation of skill data from wage data and data on educational attainment or secondary analysis of expert-based occupational classification schemes such as O*NET (Spenner, 1990). Newer, indirect methods for measuring AI-related skills have emerged within computational social science, for example, data mining from online job profiles and curricula vitae (CVs) (Cedefop, 2021), online labour platforms (Stephany, 2021) or the use of machine learning [sic] to identify AI skill gaps (Whiting, 2023). Whilst indirect measures can be informative, future research on AI and skills must use more direct measures to collect data from frontline actors across the AI production chain. To this end, in situ, ethnographic and interview-based research designs could be particularly helpful in developing a nuanced, holistic and contextualised understanding of the impact of AI on skills (Grigoropoulou and Small, 2022). Examples of such in-depth, immersive studies in AI-mediated workplaces have been published in this journal and elsewhere (e.g. Borch and Min, 2022; Borg, 2021); such designs should be extended to examine skill requirements.
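To make the indirect, text-mining family of measures concrete, the following is a minimal sketch of how skill mentions might be counted in job-advert text and mapped onto the skill typology introduced earlier (core, transversal, dispositions). The adverts and the skill lexicon are illustrative inventions for the purpose of the sketch, not drawn from any published taxonomy or dataset; real studies of this kind use far larger lexicons and corpora.

```python
# Toy sketch of an indirect, text-mining measure of skill demand:
# count occurrences of lexicon terms in (hypothetical) job-advert texts.
import re
from collections import Counter

# Illustrative lexicon mapping skill terms to the typology used in the text.
SKILL_LEXICON = {
    "machine learning": "core",
    "data visualisation": "core",
    "prompt engineering": "core",
    "communication": "transversal",
    "teamwork": "transversal",
    "creativity": "disposition",
}

# Hypothetical job adverts, invented for illustration only.
adverts = [
    "Seeking a data scientist with machine learning and data "
    "visualisation experience; strong communication essential.",
    "Content editor wanted: creativity, teamwork and emerging "
    "prompt engineering skills valued.",
]

def extract_skills(text: str) -> Counter:
    """Count lexicon terms appearing in one advert (case-insensitive)."""
    counts = Counter()
    for term in SKILL_LEXICON:
        counts[term] += len(re.findall(re.escape(term), text.lower()))
    return counts

# Aggregate mentions across the corpus.
totals = Counter()
for ad in adverts:
    totals.update(extract_skills(ad))

for term, n in totals.most_common():
    if n:
        print(f"{term} ({SKILL_LEXICON[term]}): {n}")
```

Such keyword counts illustrate both the appeal of indirect measures (scale, low cost) and their limits: they capture stated demand in advert text, not the skills actually exercised in practice, which is precisely why the commentary argues for complementing them with direct, in situ measures.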
Dimension 4: Integrating perspectives of different stakeholders within a participatory co-design framework
Future research on AI and skills should seek to integratively analyse the perspectives of different stakeholders through participatory co-design approaches. Publications in this journal have pointed out the importance of multistakeholder, participatory research to better understand the implications of AI diffusion in the workplace (Jarrahi et al., 2021; McCosker et al., 2022). Participatory co-design helps bring together researchers from various disciplines, technology developers, users and policymakers to address research questions by combining and further developing the various stakeholders’ understandings of the problem space (Sanders and Stappers, 2008). Participatory co-design is a systematic, multistage process of problem exploration and definition, followed by development, testing and evaluation of solutions in practice (Steen, 2013). The involvement of non-academic stakeholders, particularly the actors in the AI production chain, is critical because they possess important first-hand knowledge of the work and its skill requirements.
Beyond helping develop the scientific understanding of skill requirements in AI-mediated workplaces, participatory research could devise design principles, policy recommendations and toolkits to foster the development of those skills on the ground. Importantly, a participatory co-design approach, with an early involvement of stakeholders and potential users at each step, would foster the eventual use and application of the research outputs in practice leading to effective and lasting skill enhancement and skill development in AI-mediated work.
Conclusions
This commentary highlighted and problematised a societally important but under-researched topic: advancing the empirical understanding of skill requirements in emergent AI-mediated work practices. The commentary proposed an integrative research agenda to guide future studies in this area, specifying four dimensions of integration. The potential outputs from such integrative research could include empirical typologies and micro-level taxonomies of skills for AI-mediated workplaces differentiated by type of key frontline actor within the AI production chain. The outputs could include rich descriptions of opportunities and challenges of AI for work and skill development that draw on a nuanced, contextualised understanding of frontline actors’ perspectives and experiences. Finally, future integrative research would produce higher-level policy and practice recommendations and guidelines on the design of skill-enhancing AI technologies and skill development interventions, differentially addressing organisational leaders, government bodies and regulators, as well as educational and vocational training institutions.
Acknowledgements
The ideas of differential analysis of skill requirements across the AI production chain and participatory co-design research presented in this paper originated in the ‘Skills, Artificial Intelligence and Labour’ (SKAIL) project funded by the Volkswagen Foundation in 2020–2021. I gratefully acknowledge the input of my project collaborators in the development of these ideas: Martin Krzywdzinski (Wissenschaftszentrum Berlin and Weizenbaum Institute, Germany), David Guile and Miguel Rodrigues (University College London) and Christian Meske (Ruhr University Bochum, Germany). I thank Dr Lena Hercberga for her help in editing and proofreading the manuscript. I am grateful to the journal editors Dr Matthew Zook and Dr Rocco Bellanova and the two anonymous referees for the peer review of this piece. I dedicate this paper to the Armenian child born this afternoon (19 September 2023) in a bomb shelter in Stepanakert, the capital of the Nagorno-Karabakh (Artsakh) Republic, following Azerbaijan's unprovoked war against the 120,000 indigenous Armenians of Nagorno-Karabakh.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Parts of the research presented here were generated with funding from the Volkswagen Foundation (grant A130825).
