Abstract
This case study uses a comparative framework to examine the ethical implications of artificial intelligence (AI) in music reconstruction, expanding on prior research comparing audience sentiments across professional and amateur musical contexts. It investigates two distinct applications of AI in the domain of music. Study A utilises qualitative sentiment analysis of YouTube comments to assess spontaneous public responses to an AI-enhanced version of Queen's The Night Comes Down (2024 Mix – Single Version), analysing the top 220 most-engaged comments collected in March 2025. Study B, based on previously collected data, employs a focus group methodology to examine structured discussions among experts regarding the AI-assisted reconstruction of semi-professional Austrian bands (Schattenparker aus Wien and The Banana Trees). A comparative analysis reveals a sharp divergence: while Study A showed predominantly negative reactions – commenters criticised pitch correction and AI-generated visuals as threats to authenticity, artistic integrity and Freddie Mercury's legacy – Study B participants largely welcomed AI as a democratising tool for enhancing and preserving amateur recordings. This contrast highlights the complex ethical and cultural tensions surrounding AI's role in creative industries, particularly the balance between innovation, artistic authenticity and audience trust. While AI-assisted reconstructions hold potential for democratising music production and archival preservation, these divergent responses underscore the need for clearer ethical guidelines. To address these concerns, we propose a voluntary labelling scheme as a metaphorical traffic-light to disclose the degree of AI intervention. We further recommend future empirical research to test such transparency mechanisms across genres, cultures and production settings.
Introduction
The accelerated development of artificial intelligence (AI) in music production and reconstruction has significantly reshaped creative processes and altered the way audiences engage with music (Agwan et al., 2023; Bonini and Magaudda, 2024). While AI offers significant advantages – such as democratising high-quality music production and enabling the preservation of historical recordings – it also raises important ethical concerns about authenticity, artistic integrity, authorship and audience trust (Berkowitz, 2024; Pfeiffer and Krishna, 2024). Building on previous empirical research, this study aims to investigate these ethical challenges in greater depth, focusing on how audiences perceive AI-driven interventions in professional versus amateur music contexts.
The fundamental distinction between professional and amateur contexts extends beyond artist recognition to encompass different ethical frameworks, audience expectations and authenticity concepts. Professional contexts involve established artists with a significant cultural legacy, where audiences prioritise historical fidelity and original artistic intention. Amateur contexts feature emerging or semi-professional musicians where audiences emphasise creative democratisation and accessibility over strict historical preservation.
Artificial intelligence–driven techniques are increasingly prevalent in music production, ranging from basic audio enhancements to complex interventions such as virtual instrumentation, voice cloning and generative composition (Katyal et al., 2024). However, AI differs fundamentally from historical enhancement methods. Traditional studio techniques such as analog summing, reverb effects and multi-track recording required continuous human operation and creative decision-making, focusing on improving existing performances rather than generating new content. Contemporary AI systems can autonomously generate musical elements and alter vocal characteristics, challenging traditional notions of authorship and creative control.
However, audience reception of these technologies varies significantly based on factors such as an artist's stature, historical legacy and the perceived authenticity of artistic expression. Authenticity in music encompasses the genuine expression of an artist's intention, creative ability and emotional truth. Yet in the digital age, this concept has grown increasingly complex, as technological mediation – particularly through AI – challenges traditional notions of artistic creation and disrupts established frameworks of authorship and legitimacy. Established industry acts like Queen have employed AI-driven remastering for legacy recordings, igniting intense debates over the ethics of altering iconic performances (NME.com, 2025). In contrast, semi-professional and amateur musicians – often operating with limited resources – have leveraged AI to enhance sound quality, restore ageing recordings and incorporate innovative musical elements that would otherwise be unattainable due to technical or financial constraints (Pfeiffer and Krishna, 2024; Vanka et al., 2023).
Through an in-depth sentiment analysis of audience comments on Queen's controversial AI-enhanced release, Study A captures authentic audience reactions to perceived artistic disruptions caused by AI technologies. These results are contrasted with Study B, comprising previous focus group findings from case studies of the Austrian amateur bands Schattenparker aus Wien and The Banana Trees, where AI's role was generally viewed positively, valued primarily for its enhancement capabilities and the preservation of historical recordings. In the case of the semi-professional band whose archival recordings were reworked using AI-assisted techniques, a fictional AI persona, Aria Turing, was introduced as a credited co-producer to systematically delineate AI's contributions within the creative process and enhance transparency. This methodological approach not only acknowledges AI's influence in the reconstruction of musical works but also serves as a critical framework for evaluating the ethical complexities of authorship, authenticity and artistic integrity in AI-assisted music production.
This comparative analysis offers significant insights into the shifting dynamics of audience perception across professional and amateur musical contexts. Beyond contrasting audience responses, this paper aims to advance a deeper understanding of the ethical repercussions associated with AI-driven interventions in the arts. In doing so, it contributes substantively to ongoing debates surrounding AI's cultural and creative impact, while also advocating for pragmatic measures – such as transparent disclosure frameworks – that uphold ethical integrity and promote accountability in practice.
Literature review
Artificial intelligence has significantly altered creative and technical processes across various domains (Yang, 2025) with music production being a prominent example. Contemporary AI technologies have evolved beyond traditional digital audio workstations, taking on more complex roles such as generative composition, mastering, voice cloning and virtual instrumentation (Katyal et al., 2024). Platforms such as Suno AI enable rapid generation of complete songs, while technologies like Moises facilitate the separation of audio into distinct instrumental stems, expanding possibilities for music reconstruction and preservation.
As AI technologies continue to advance, discussions around their impact on music, creativity and artistic authenticity have intensified. While AI presents new opportunities for democratising high-quality music production and preserving historical recordings, it also raises ethical concerns about authorship, artistic integrity and audience trust (Berkowitz, 2024; Bonini and Magaudda, 2024). Some artists and industry figures worry about AI's potential to overshadow human creativity, diluting the authenticity of artistic works (Aniftos, 2023). Others, however, highlight AI's role in enhancing accessibility and diversifying music creation, as long as these tools are used with proper permissions and ethical considerations (Huppe, 2024).
The debate over whether AI democratises or diminishes musical creativity has become central to contemporary discourse. Proponents argue that AI tools remove traditional barriers to music creation, potentially addressing educational inequalities and enabling individuals with limited technical skills to realise their creative visions (Ardeliya et al., 2024; Sawant and Bargavi, 2024; Vengathattil, 2025). Critics, however, question whether such democratisation maintains genuine musical understanding and skill development, suggesting that AI may enable creation without fostering authentic musical knowledge (Berkowitz, 2024; Hong et al., 2022; Oksanen et al., 2023; Sadovenko, 2024).
One of the most notable controversies surrounding AI in music involved an AI-generated track mimicking Drake and The Weeknd, which was removed following legal and ethical backlash (NCC News, 2023). Similarly, when Queen released an AI-enhanced remix of historical recordings, audiences expressed strong disapproval, particularly regarding the use of autotuning and its perceived impact on Freddie Mercury's legacy (NME.com, 2025; YouTube, 2025). The music industry has responded by reinforcing copyright protections and ethical safeguards. Sony Music Group, for example, issued warnings to over 700 companies against unauthorised use of its content for AI training, emphasising the need to protect songwriters and recording artists’ rights (NBC News, 2025). Meanwhile, major artists such as Stevie Wonder and Billie Eilish have voiced concerns over AI's potential to undermine human artistic contributions, calling for clear ethical and regulatory frameworks (Billboard, 2025; Euronews, 2024).
Despite these controversies, emerging research underscores AI's potential benefits, particularly in empowering amateur musicians. Studies suggest that AI-assisted tools can enhance sound quality, restore old recordings and introduce new creative possibilities, making professional-level music production more accessible (Vanka et al., 2023; Zhou, 2023). However, audience reception to AI's role in music varies significantly depending on the artist's stature and the context of its application. Research indicates that listeners of iconic professional artists tend to resist AI interventions due to concerns about historical authenticity and legacy preservation (Berkowitz, 2024; Bonini and Magaudda, 2024). In contrast, audiences of amateur and semi-professional musicians often view AI as a valuable tool for improving music production quality and reviving otherwise inaccessible recordings (Agwan et al., 2023; Pfeiffer and Krishna, 2024).
Building on this discourse, Pfeiffer and Krishna (2024) conducted case studies of semi-professional Austrian bands Schattenparker aus Wien and The Banana Trees, exploring audience reception to AI-assisted reconstructions (Study B). Their research found that AI interventions in amateur music were generally well received, with audiences appreciating improvements in audio quality and creative expression despite significant algorithmic involvement. A critical component of this study was the introduction of a fictional AI persona, Aria Turing, credited as a co-producer in projects where AI played a substantive generative role. This approach served both as a transparency measure and as an acknowledgment of AI's influence in the creative process. The practice provided an experimental framework for delineating AI's contributions while addressing ethical concerns surrounding authorship and artistic integrity. The contrast between professional and amateur contexts highlights the complexity of AI's ethical and cultural implications in music.
To deepen this analysis, this case study involved a qualitative sentiment analysis of 220 user-generated comments from the official YouTube release of Queen's The Night Comes Down (2024 Mix – Single Version). These comments, ranked by user engagement, provide direct insights into audience attitudes towards AI-enhanced music in a professional setting. By systematically evaluating these reactions, this study aims to further contextualise how AI-driven interventions shape public perception. Comparing these findings to prior research on Schattenparker aus Wien and The Banana Trees offers a more comprehensive understanding of the ethical tensions at play, particularly in relation to artistic integrity and authenticity. Moreover, the insights gained from this sentiment analysis can inform future transparency frameworks, such as a proposed voluntary traffic-light disclosure system, to ensure AI's ethical and responsible integration into music production.
Methodology
This study adopts a dual-method qualitative design that integrates computational sentiment analysis with structured focus group research. The framework acknowledges that the two case studies examine fundamentally different applications of AI in music, with distinct parameters for audience engagement and response. Despite their divergence in method, participant composition and type of AI use, both studies address a shared thematic concern: how audiences receive and interpret AI-mediated music. Study A applies large-scale sentiment analysis to YouTube comments responding to a professionally produced, AI-enhanced remix of an iconic Queen recording. Study B draws on previously collected focus group data in which domain experts reflect on AI-assisted reconstructions of semi-professional bands. This methodological divergence is intentional and analytically valuable.
Framed as an exercise in methodological pluralism, the study leverages the complementary strengths of computational and qualitative paradigms. Study A captures bottom-up, unmoderated public discourse – spontaneous, affective reactions within a mass-access digital environment. In contrast, Study B elicits top-down, context-rich reflections from informed participants, generating more deliberative insights into creative agency, authorship and ethical complexity. Rather than pursuing a like-for-like comparison, this context-sensitive, comparative framework highlights how different critical environments – public forums versus structured expert discussions – condition perceptions of AI's role in music. This juxtaposition enables a multidimensional, critically triangulated understanding of audience attitudes across both professional and amateur contexts. It reveals how factors such as artistic legacy, consent and transparency shape acceptance or resistance to AI interventions, contributing to ongoing debates about authenticity, innovation and responsibility in AI-assisted creative practices.
The framework also accounts for the fact that these studies investigate different forms of AI integration in music, each governed by distinct parameters for audience interaction and feedback. Their epistemological differences – one capturing spontaneous public sentiment, the other eliciting expert reflection – are precisely what make comparative analysis valuable: juxtaposing these perspectives allows for a more nuanced understanding of how context, consent and cultural significance shape audience responses to AI in music, offering a multidimensional view of the ethical and creative tensions involved.
Sentiment analysis of audience comments (Queen video)
A qualitative sentiment analysis was conducted on user-generated audience comments from the official YouTube release of Queen's AI-enhanced remix, specifically from the video ‘Queen – The Night Comes Down (2024 Mix – Single Version)’. The top 220 user-generated comments, ranked by engagement, were manually collected and analysed to assess audience sentiment regarding AI interventions in professional music production. This approach captures spontaneous, organic reactions from a broad and diverse audience, offering valuable insights into real-world perceptions of AI's role in altering legacy recordings.
Comments were systematically collected based on user engagement metrics – primarily likes and replies – with data extraction conducted in March 2025, approximately six months after the video's release. This timeframe ensured the dataset reflected sustained audience engagement rather than immediate, reactionary responses. All comments were transcribed verbatim, anonymised beyond publicly visible usernames and subjected to qualitative content analysis using open and axial coding methods (Krueger and Casey, 2015).
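The engagement-based selection described above can be illustrated with a short sketch. The comment records and engagement values below are hypothetical, not data from the study, and the combined likes-plus-replies ranking is our illustrative assumption about how such metrics could be aggregated:

```python
# Hypothetical comment records; "likes" and "replies" mirror the
# engagement metrics used to rank comments in the study.
comments = [
    {"text": "The imperfections are what make it real", "likes": 1200, "replies": 45},
    {"text": "Great remaster quality", "likes": 30, "replies": 2},
    {"text": "AI visuals were unsettling", "likes": 800, "replies": 12},
]

# Rank by combined engagement (likes + replies), descending,
# and keep the top N for qualitative analysis.
TOP_N = 220
top_comments = sorted(
    comments, key=lambda c: c["likes"] + c["replies"], reverse=True
)[:TOP_N]
```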
The initial open coding phase identified emergent themes related to audience sentiment (negative, neutral, positive), thematic content (AI, autotuning, authenticity, artistic integrity, legacy considerations) and emotional tone (disappointment, anger, enthusiasm). Comments explicitly referencing specific AI interventions – such as pitch correction and AI-generated visuals – were categorised separately for focused analysis. Additionally, quantitative frequency counts were conducted to contextualise qualitative results, highlighting prevalent themes and recurring terminologies in audience discourse. Direct quotations were incorporated to transparently illustrate representative sentiments, ensuring the analysis remained grounded in audience perspectives.
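The frequency-count step that contextualised the qualitative coding can be sketched as follows. In the study itself, coding was performed manually; the sentiment labels and thematic codes below are illustrative placeholders, not actual coded data:

```python
from collections import Counter

# Hypothetical sample of manually coded comments: each entry pairs a
# sentiment label with the thematic codes assigned during open coding.
coded_comments = [
    {"sentiment": "negative", "themes": ["autotuning", "authenticity"]},
    {"sentiment": "negative", "themes": ["AI visuals", "legacy"]},
    {"sentiment": "neutral",  "themes": ["remaster quality"]},
    {"sentiment": "positive", "themes": ["remaster quality"]},
    {"sentiment": "negative", "themes": ["authenticity", "legacy"]},
]

# Frequency counts show how often each sentiment category and
# thematic code appears across the corpus.
sentiment_counts = Counter(c["sentiment"] for c in coded_comments)
theme_counts = Counter(t for c in coded_comments for t in c["themes"])

print(sentiment_counts.most_common())  # e.g. [('negative', 3), ...]
print(theme_counts.most_common())
```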
Focus group analysis (amateur bands)
Complementing the sentiment analysis, this study utilised previously conducted focus group data from research involving Austrian semi-professional bands, Schattenparker aus Wien and The Banana Trees (Pfeiffer and Krishna, 2024). A structured, moderated focus group session was conducted, lasting approximately 40 minutes, with five purposefully selected participants: a music teacher and amateur musician (P1), a small record label owner (P2), an AI expert and music enthusiast (P3), an amateur musician and IT specialist (P4) and an amateur musician and lawyer (P5). Participants were recruited from professional networks, ensuring diverse perspectives on the implications of AI technologies in music production.
The focus group session followed a structured format, guided by eleven open-ended questions that explored the ethical, creative and legal dimensions of AI-assisted music reconstruction. To support informed engagement, participants were presented with AI-generated reconstructions produced using tools such as Suno V3.5 and Cubase. This allowed for direct experiential assessment of AI's role in music production and enabled critical reflections on authenticity, artistic integrity, creative agency and authorship.
The discussions were transcribed and analysed using thematic content analysis, which surfaced recurring patterns in participants’ attitudes, concerns and expectations regarding AI's place in non-commercial musical contexts. The focus group setting offered a controlled yet open environment for deep exploration of AI's perceived value, particularly its capacity to enhance or compromise creative processes. Ethical guidelines were rigorously followed throughout, with participant anonymity maintained at all stages of data collection, analysis and reporting. The study also maintained full transparency regarding its scope, aims and methodological boundaries.
Results
The results are presented in two parts. First, we analyse audience sentiment towards Queen's AI-enhanced release, revealing public reactions in a commercial legacy context. Second, we summarise focus group insights from semi-professional bands, capturing attitudes towards AI in amateur music reconstruction. Together, these findings highlight key contrasts in how AI may be perceived across different creative settings.
Sentiment analysis of Queen’s AI-enhanced release
Analysis of the top 220 user-generated comments from Queen's official YouTube video, The Night Comes Down (2024 Mix – Single Version), revealed predominantly negative audience sentiment regarding the use of AI technologies. Audiences frequently expressed strong dissatisfaction, specifically targeting AI interventions such as pitch correction on Freddie Mercury's original vocals and AI-generated visual elements.
Recurring thematic concerns included authenticity, artistic integrity and legacy considerations. Commenters often employed emotionally charged language, describing AI enhancements as ‘unforgivable’, ‘soulless’ and ‘offensive’. Representative comments such as ‘The imperfections are what make it real’ (@theonlyredspecial) emphasise commenters' resistance towards digital manipulation of Mercury's original performances.
Many commenters explicitly referenced Mercury's professional background and argued against the AI interventions as disrespectful. One audience member captured widespread dissatisfaction: ‘Freddie was a graphic designer and there's no respect for his profession. Shame on Queen Prod for using AI, looks awful’ (@AlanLambertChavez8929). Commenters further expressed disappointment and confusion regarding perceived inconsistency from surviving band members Brian May and Roger Taylor on AI usage, amplifying feelings of disillusionment.
Criticism extended to AI-generated visuals, frequently described negatively as ‘creepy’, ‘lifeless’ or unsettling – exemplified by comments such as ‘Nah bro, the AI images at the end ain’t it, wtf was that?’ (@CaviloraProductions). A smaller number of positive or neutral comments were identified, primarily appreciating audio remaster quality, though these were overshadowed significantly by negative reactions.
Broader ethical concerns were consistently expressed, highlighting audience awareness of the impact of AI on professional artists. Commenter @inde-cipher summarised a prevalent ethical concern: ‘When you use AI “art”, it communicates cheapness, laziness and apathy to the viewer’.
These reactions highlight an ethical tension between technological innovation and respect for artistic legacy, particularly when AI is used to modify the vocal and visual elements of a deceased artist's work without that artist's consent. The strong resistance to perceived tampering with Freddie Mercury's original performance reveals deep discomfort with the cultural implications of reanimating iconic figures through AI, raising questions about authorship, posthumous representation and the boundaries of creative intervention.
Focus group analysis: Amateur band case studies
In contrast to the negative reactions associated with the Queen release, previously conducted focus group analyses of amateur bands Schattenparker aus Wien and The Banana Trees revealed generally positive sentiments towards AI-assisted reconstruction efforts (Pfeiffer and Krishna, 2024). Participants from diverse backgrounds, including musicians, AI specialists and legal experts, broadly recognised the creative and practical benefits of AI technologies.
Participants explicitly praised AI's role in enhancing audio quality, revitalising old recordings and introducing innovative musical elements previously unattainable due to financial or technical constraints. Examples from songs such as Lamahirte and Lied gegen den Wind demonstrated how AI interventions, including virtual instrumentation and mastering improvements through Suno V3.5 and Cubase, enhanced rather than compromised the perceived authenticity and emotional resonance of the music.
Despite positive feedback, participants expressed measured caution regarding transparency and ethical considerations. Concerns about authorship attribution and intellectual property were particularly prominent. The idea of transparently crediting significant AI contributions through a fictional AI co-producer, Aria Turing, was generally welcomed as a pragmatic response to these concerns. Participants strongly endorsed a voluntary traffic-light disclosure system – green for minimal AI involvement, yellow for moderate use and red for extensive AI integration – as a clear framework for enhancing transparency, preserving artistic integrity and managing audience expectations. However, discussions also acknowledged the difficulty of standardising and enforcing such a scheme, reflecting the complexity of ethically navigating AI-assisted music production.
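As a rough illustration of how the traffic-light scheme might be operationalised, the sketch below maps a declared set of AI interventions to one of the three labels. The intervention categories and thresholds are our illustrative assumptions, not definitions proposed in the study; any real scheme would require agreed industry-wide definitions:

```python
from enum import Enum


class AILabel(Enum):
    """Traffic-light disclosure labels for AI involvement in a recording."""
    GREEN = "minimal AI involvement"
    YELLOW = "moderate AI use"
    RED = "extensive AI integration"


def traffic_light_label(interventions: set[str]) -> AILabel:
    """Map a declared set of AI interventions to a disclosure label.

    The category sets below are hypothetical examples of how
    generative vs. enhancing interventions could be distinguished.
    """
    generative = {"generated vocals", "generated composition", "voice cloning"}
    enhancing = {"pitch correction", "stem separation", "mastering"}

    if interventions & generative:
        return AILabel.RED      # AI created new musical content
    if interventions & enhancing:
        return AILabel.YELLOW   # AI altered an existing performance
    return AILabel.GREEN        # at most incidental AI processing
```

Under these assumptions, a remaster using only AI mastering would carry a yellow label, while any voice cloning would push the release to red regardless of other processing.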
Findings
The juxtaposition of the sentiment analysis of Queen's AI-enhanced remix with the focus group findings on amateur band reconstructions reveals critical divergences in how audiences respond to AI in music production – particularly around issues of authenticity, artistic integrity and legacy preservation.
Audience responses to Queen's officially released remix, particularly in the YouTube comments section, were markedly negative, rooted primarily in the perceived erosion of historical authenticity and artistic integrity (NME.com, 2025; YouTube, 2025). In this professional context, AI was frequently viewed as an intrusive force modifying canonical works. Commenters expressed strong emotional attachment to the original recording, emphasising that AI-driven pitch correction and synthetic visuals diminished the humanity and uniqueness of Freddie Mercury's performance. These interventions were perceived as disrespectful, violating the aesthetic and ethical boundaries that many fans associate with Queen's legacy. Such reactions reflect broader cultural anxieties about AI's role in reconfiguring historically significant art and resonate with scholarly critiques surrounding the ethics of posthumous AI collaboration and creative consent (Berkowitz, 2024; Bonini and Magaudda, 2024). In this commercial context, AI was not embraced as a creative tool but condemned as a mechanism of distortion – intensifying feelings of emotional betrayal and highlighting unresolved tensions between innovation, ownership and cultural memory.
In contrast to the critical reception of AI in Queen's commercial remix, the amateur music context – exemplified by the bands Schattenparker aus Wien and The Banana Trees – elicited markedly more positive attitudes towards AI applications (Pfeiffer and Krishna, 2024). Focus group discussions revealed that, in non-commercial settings, AI was largely viewed as a constructive, enabling force, particularly in democratising access to music production and restoring culturally significant recordings that would otherwise remain lost.
Participants acknowledged AI's capacity to overcome technical limitations, enhance audio quality and support artistic expression among under-resourced musicians. Rather than expressing fear of AI replacing human creativity, participants framed it as a tool for expanding creative possibilities – especially in cases where traditional studio resources were inaccessible. This perception aligns with scholarly views on AI as a democratising force in the arts (Agwan et al., 2023; Vanka et al., 2023). While participants raised legitimate concerns about authorship and transparency, these were approached pragmatically. Proposals such as clearly labelling AI contributions and assigning fictional AI credits reflect a nuanced willingness to ethically integrate AI into creative workflows. In this grassroots context, AI was not perceived as a threat to artistic integrity but as a collaborator in the co-creation of meaningful, accessible music.
The Queen case provokes backlash tied to legacy disruption and the perceived dehumanisation of a cultural icon, whereas the amateur-band case is marked by optimism around creative empowerment and restoration. Although the contexts differ, both converge on a key imperative: transparent communication of AI's role. This finding underscores that the legitimacy of AI-assisted music may hinge not merely on technological execution but on its ethical framing and cultural positioning in the creative ecosystem.
Authorship and consent emerge as central axes of tension. Commenters on the Queen remix viewed AI alterations as a violation of Freddie Mercury's artistic agency, questioning whether surviving band members or producers had the ethical right to reinterpret his legacy without explicit consent – raising fundamental ethical questions about posthumous artistic rights. By contrast, focus group participants welcomed AI as a means of extending creative agency, emphasising their active involvement in shaping the final product.
Across both cases, transparent disclosure of AI involvement emerged as essential for audience trust. In the Queen remix, the absence of clear labelling – particularly around vocal processing and AI-generated visuals – amplified feelings of deception. Focus-group participants therefore advocated explicit labelling protocols, endorsing the traffic-light model (minimal, moderate, extensive AI involvement) and, in some instances, fictional AI credits to clarify authorship. These findings confirm that resistance might be mitigated when audiences understand AI's precise role.
Backlash to the Queen remix underscores the emotional and cultural stakes of AI in legacy music: Mercury's voice symbolises authenticity, loss and fixed cultural memory, making AI enhancements seem like violations of a sacred archive. Grassroots projects lack this nostalgic baggage; AI is instead welcomed as a tool of innovation to overcome technical limitations and enable creative experimentation. Acceptance of AI thus depends on emotional proximity and symbolic weight – an instance of cultural gatekeeping that defines what is authentic or sacred.
Audience reception of AI-assisted music is deeply context-dependent, shaped by cultural significance, expectations of authenticity and the degree of creative consent. In commercial, legacy-laden settings – exemplified by Queen's remix – commenters condemned AI interventions as breaches of historical fidelity and artistic integrity. Conversely, participants engaging with AI-enhanced amateur recordings viewed the technology as a creative enabler that supports accessibility, restoration and collaborative innovation. These divergent reactions show that acceptance may hinge less on the technology itself than on who authorises its use, how transparently it is disclosed, and what cultural stakes are involved.
Triangulating YouTube sentiment with expert focus group insights offers a robust framework for understanding the ethical and emotional dimensions of AI in music. Across contexts, one key principle emerges: transparent disclosure can promote trust. Clear labelling – such as the proposed traffic-light system – can support ethical innovation by signalling AI involvement, respecting legacy and ensuring consent-driven creativity.
Implications and recommendations
This study demonstrates that the ethical legitimacy of AI in music may depend less on the technology itself than on its cultural, emotional and historical context. Contextual sensitivity is essential: AI interventions in legacy recordings – especially those involving deceased artists – require heightened ethical scrutiny due to their symbolic weight, while in grassroots settings AI is more often perceived as a tool for creative empowerment. Transparency is non-negotiable; audience trust depends on clear disclosure. Producers and platforms should implement consistent labelling – such as the proposed traffic-light system – to indicate the degree of AI manipulation. Consent and creative agency remain central: particularly in commercial and posthumous contexts, consent must reflect not only legal clearance but also the moral and artistic intent of the original creator. Cultural institutions and digital platforms should work to demystify AI's role through accessible resources and public dialogue around authorship, labour and value. Finally, industry-wide ethical standards are urgently needed – covering transparency, consent, posthumous use and attribution – developed collaboratively by artists, technologists, legal experts and cultural theorists to ensure they are both principled and practically enforceable.
Future research
Building on the findings of this comparative analysis, several directions for future research emerge.
First, further empirical studies are needed to examine audience perceptions across diverse musical genres, age groups, fanbases and cultural contexts. Such research could refine our understanding of how different variables shape responses to AI interventions and identify more nuanced factors influencing acceptance or resistance.
Second, practical frameworks for transparent disclosure – such as the proposed voluntary traffic-light system – should be empirically tested across a broader spectrum of musical production, from mainstream industry settings to independent and grassroots communities. Assessing the effectiveness of such measures in shaping audience trust could inform clearer ethical standards for AI-assisted music.
Additionally, future studies should examine the legal and ethical implications of AI-generated content on artists’ rights, intellectual property law and industry economics. While existing legislation addresses aspects of AI authorship, further research is needed to support the standardisation of consent, attribution and licensing practices – ensuring that legal frameworks evolve in step with technological innovation to safeguard creative rights and promote responsible use.
Finally, interdisciplinary research – involving sociologists, cultural theorists, legal scholars, technologists and music professionals – is essential to deepen our understanding of how AI affects musical creativity, audience engagement and artistic identity. Such research could support the development of ethically grounded policies and guide responsible creative practices as AI is increasingly integrated into musical production.
Footnotes
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
