Abstract
The emergence of large language models (LLMs) like ChatGPT has transformed access to information, offering immediate responses to diverse queries. While these technologies present significant opportunities, they also pose unique challenges for the autistic and wider neurodivergent community. This commentary critically examines the implications of LLMs within this community, focusing on potential misinformation, reinforcement of stigmatising medical models, misrepresentation of non-speaking neurodivergent individuals, and issues of equity and privacy. Through personal experimentation with LLMs and discussions with community members, concerns are raised about the perpetuation of stereotypes, the risk of over-reliance on potentially flawed AI outputs, and the inadequate support for diverse communication needs. Conversely, LLMs offer transformative benefits such as accessible information, personalised learning, and supportive tools for communication and professional development. To maximise these benefits while mitigating risks, a multifaceted approach is proposed, emphasising awareness, collaborative development with neurodivergent individuals, ongoing feedback mechanisms, policy advocacy, and positive representation.
Lay abstract
Large language models (LLMs) like ChatGPT are advanced computer programmes that can understand and generate human-like text. They have become popular tools for accessing information quickly and easily. For autistic and neurodivergent individuals, these technologies offer both exciting possibilities and important challenges.
What we already know is that LLMs can provide immediate answers to a wide range of questions, which can be especially helpful for those who find traditional information sources overwhelming or difficult to navigate. They can assist with learning, communication, and daily tasks, making information more accessible and personalised.
This paper explores the positive and negative impacts of LLMs on the neurodivergent community. On the positive side, LLMs can simplify complex information, support skill development, and offer a safe, non-demanding space for practising social interactions without the pressure of human judgment. However, there are also concerns. LLMs can sometimes provide inaccurate or biased information, which may reinforce harmful stereotypes and stigma. They may also misinterpret the communication styles of non-speaking individuals, leading to misunderstandings and inadequate support. Additionally, issues of privacy and unequal access to these technologies can further disadvantage autistic individuals.
The paper highlights the need for a balanced approach to using LLMs, emphasising the importance of collaboration between the developers of such technology and the neurodivergent community. It calls for increased awareness, models that respect diverse experiences, and policies that ensure fair and safe use. By addressing these concerns and leveraging the benefits, LLMs can become valuable tools that support and empower autistic and neurodivergent individuals.
Introduction
The advent of large language models (LLMs) like ChatGPT has revolutionised access to information, offering users immediate responses to a wide array of queries. Within the autistic and wider neurodivergent community, LLMs present both opportunities and challenges. As someone who subscribes to the neurodiversity paradigm, which recognises autism as a natural variation of human neurology rather than a disorder to be cured or fixed (Botha et al. 2024), I was intrigued to explore how LLMs like ChatGPT handle questions about autism.
I asked ChatGPT 4o and Claude 3.5 Sonnet, two leading LLMs, the same basic question: “What is autism?”. Here are their responses:
“Autism, or Autism Spectrum Disorder (ASD), is a developmental condition that affects how people communicate, interact with others, and experience the world. It is considered a “spectrum” because the symptoms and severity can vary widely from one individual to another. Some common characteristics include:
Autism is generally diagnosed in early childhood, although it can sometimes go unrecognised until later in life. There is no “cure” for autism, but early interventions, therapies, and support can help individuals develop skills and cope with challenges.”
“Autism, or Autism Spectrum Disorder (ASD), is a neurodevelopmental condition characterized by differences in social communication, interaction, and behavior. Key aspects include:
- Social communication challenges: Difficulty interpreting social cues, maintaining conversations, or understanding nonverbal communication.
- Repetitive behaviors or restricted interests: Engaging in repetitive actions or having intense focus on specific topics.
- Sensory sensitivities: Over- or under-sensitivity to sensory stimuli like sounds, lights, or textures.
- Varied cognitive abilities: Autism can occur across a range of intellectual abilities, from significant challenges to exceptional skills in certain areas.
- Early onset: Symptoms typically appear in early childhood, though diagnosis may occur later.
- Spectrum nature: Autism presents differently in each individual, with varying levels of support needs.
- Lifelong condition: While interventions can help manage symptoms, autism is a lifelong difference in neurodevelopment.
- Genetic and environmental factors: The exact causes are not fully understood, but both genetics and environmental factors are believed to play a role.”
While some elements were acceptable, overall the responses align too closely with the medical model of autism, which contributes to stigma and fails to reflect the neurodiversity approach. Both refer to autism as a “disorder,” a term rooted in the medical model, and use person-first language, which is often not preferred within the neurodivergent community. They frame repetitive behaviours and routines as problems needing correction, echoing the problematic principles of Applied Behaviour Analysis (ABA). They state there is no cure for autism but fail to acknowledge that no cure should be sought, implying instead that interventions or therapies, possibly behavioural, are needed to help autistics. Terms like “symptoms” and “severity” are steeped in the medical model. Claude's response refers to potential causes without addressing that seeking causation is unnecessary and potentially harmful, possibly leading to eugenicist thinking.
This led me to delve deeper into the implications of LLMs within the autistic community. As the lead of the London Autism Group Charity, a father of two autistic children, an academic researcher, and a neurodivergent individual myself, I approached this exploration with both personal and professional insights. After exploring various LLMs, engaging with autistic individuals, and considering these experiences, I present a critical assessment of the potential benefits and risks associated with this technology. I hope this analysis contributes to a more informed and ethical approach to utilising LLMs that respects and empowers the autistic community.
Concerns
One of my biggest concerns is misinformation. While it is known that LLMs can sometimes provide inaccurate information, the potential consequences here are particularly serious. If someone beginning their ‘autism journey’ cannot spot the inaccuracies, they might form misconceptions or reinforce existing ones, contributing to stigma (Link and Phelan 2001; Lodder et al. 2020). Internalised stigma is especially harmful to the mental health of parents, carers, autistic individuals, and other vulnerable neurodivergent groups (Han et al. 2022; Han et al. 2023; Papadopoulos et al. 2019; Tzouvara et al. 2018). So, when ChatGPT refers to autism as a “disorder” with “symptoms and severity that can vary widely” and mentions that “there is no known cure,” these statements can lead to damaging outcomes.
With such a medicalised view of autism, I expected that ChatGPT would not push back on ABA. This so-called ‘evidence-based therapy’ for autism is, according to both autistic voices and research, highly stigmatising (ExistentialAut1 2022; Mariechalpin 2022; Milton 2018). It is also known to cause post-traumatic stress disorder in some individuals (ExistentialAut1 2022; Mariechalpin 2022). I asked ChatGPT, “Is ABA a good treatment for autism?” Its response reinforced my concerns. It provided a summary of the ‘evidence’ linking ABA to improved outcomes, concluding that “Overall, ABA can be a good treatment for autism for many individuals.” When I challenged this, it apologised for not reflecting my opinion but maintained that “the decision to use ABA or any other intervention should be based on a comprehensive evaluation of the individual's strengths, needs, and preferences.” This experience reinforced my belief that developers of LLMs need to minimise the biases of the neurotypical and medical model frameworks that dominate their training data.
I also worry about how misinformation can affect the support an autistic person or their family might receive. Imagine someone who suspects they might be autistic and talks to their general practitioner. If their doctor is not well-versed in autism and turns to an LLM for help, they might inadvertently use stigmatising language such as ‘symptoms’ and suggest ‘treatments’. This could lead to misdiagnosis, diagnostic overshadowing, internalised ableism, and feelings of shame or inadequacy. Indeed, recent research has shown that GPs in the UK are beginning to use LLMs for tasks such as documentation and differential diagnosis, with 20% reporting usage in clinical practice (Blease et al. 2024). This supports the notion that GPs may rely on such resources, which could influence their approach to autism-related support and communication. In my work with the London Autism Group Charity, I have seen how poor knowledge and ‘professional stigma’ (Lodder et al. 2020; Turnock et al. 2022) can cause real harm. The thought that professionals might rely on ChatGPT's responses without critical evaluation is worrying.
The consistency and credibility of the information LLMs provide are further concerns. These models can sometimes produce ‘hallucinations’ - generating information that sounds plausible but is fabricated or based on unreliable sources. This undermines user trust and can spread misinformation. It is crucial that LLMs offer accurate, evidence-based responses and direct users toward reputable sources, especially for sensitive topics where misconceptions have not yet been fully debunked. The absence of warnings or caveats in LLM responses is also problematic, as it fails to signal that certain topics remain debated within academic and professional communities. Ideally, LLMs should clearly mark such responses as starting points rather than definitive answers. Mistakes can confuse people and erode their trust in these tools; if that happens, users might stop using the technology altogether, missing out on its potential benefits.
I am also concerned about users over-relying on LLMs as authoritative sources without critically evaluating or cross-referencing information, which can be challenging, especially when experiencing anxiety or distress. This over-reliance is particularly problematic for users with limited digital literacy, who may find it difficult to discern the accuracy and reliability of responses. Consequently, there is a heightened risk of accepting misinformation as fact, exacerbating existing misconceptions and spreading stigma. Promoting digital and AI literacy within the autistic and broader neurodivergent community is therefore essential to empower users to critically assess information provided by LLMs.
Another problem is the potential for LLMs to misunderstand or misrepresent the experiences of non-speaking neurodivergent individuals who use alternative communication methods. These individuals often rely on tools such as augmentative and alternative communication devices, sign language, or other specialised methods. However, LLMs are primarily trained on written and spoken language data that are unlikely to capture the nuances of these communication styles, and they may fail to acknowledge or appropriately address these individuals' unique perspectives and needs. This issue may reflect a broader trend in societal discourse, where the voices of neurodivergent individuals with particularly complex support needs - such as those with significant learning difficulties, and especially those from communities where autism is heavily stigmatised or concealed - are often underrepresented. Consequently, LLMs may inadvertently reinforce stereotypes or overlook essential aspects of their lived experiences, leading to inadequate support and further marginalisation. More inclusive training datasets that integrate diverse communication styles, support needs, and perspectives from underrepresented backgrounds are therefore needed to ensure that LLMs can effectively and respectfully engage with all members of the neurodivergent community.
Equity of access is another issue. Autistic individuals are often among the most disadvantaged due to factors like stigma, discrimination, income inequality, and poverty (Cleary et al. 2023). Pay-to-access models for leading LLMs pose a barrier, limiting their ability to benefit fully from these technologies. As a result, some might turn to open-source, free-to-access models like Llama or Mistral, which, while capable, can be more susceptible to issues like stigmatising language, misinformation, and hallucinations. Limitations in free versions of leading LLMs, such as restricted message counts, can disrupt the flow of meaningful interaction, possibly triggering a digital form of rejection sensitive dysphoria. Clear, upfront communication about usage limits could mitigate these risks, though this level of transparency is not currently implemented. Ideally, an LLM should recognise when it is engaged in an emotional conversation with a vulnerable individual. In such cases, it could warn the user about the possibility of being cut off or, better still, extend message limits to allow the user to de-escalate gradually. This would help prevent an abrupt disconnection that might exacerbate distress.
My final concern relates to whether LLMs will treat potentially sensitive information with confidentiality and privacy. When I asked ChatGPT, “Can I trust you with my personal information?”, it responded that I could trust it because it only processes data and does not store it. While this was reassuring, it feels somewhat disingenuous given that (a) it keeps a history of all past conversations, (b) the paid version saves certain information as a ‘memory’, and (c) it retains details and context until you log out or start a new chat. Such concerns around privacy are not unique to the world of autism; however, given the propensity for some autistics to be ‘too trusting’ (Spectrum Life n.d.), they are particularly relevant here.
Opportunities
Thankfully, my concerns are offset by the transformative opportunities that LLMs can provide if harnessed effectively. One of the main opportunities is for autistics and other neurodivergent people to access relevant information more quickly and easily than ever before. Instead of sifting through endless Google links, LLMs can provide immediate answers without extensive digging. Such efficiency could be a game-changer for people who experience digital anxiety, have limited digital literacy, or have learning disabilities that make it challenging to navigate the overload of links and information traditional search engines often return (Spectrum Life n.d.; Patrick et al. 2020).
LLMs can also help autistic people and their families make sense of complex or overwhelming information. For example, many families struggle to navigate cumbersome Education, Health, and Care Plans or decipher professional reports. LLMs can simplify this process, breaking down complicated documents into more understandable language. They can assist with various applications, from Disability Living Allowance forms to job applications. Moreover, LLMs serve as tools for personalised learning and skill development.
LLMs also offer neurodivergent individuals a safe alternative to human interaction. When I told ChatGPT that my favourite movie was ‘Whiplash’, we engaged in what I perceived to be an effective discussion about why it is such a powerful film, the actors, and even what a sequel might look like. This highlights how such technology can serve as a digital sounding board for practising social interactions in a non-threatening environment. Interacting with an LLM eliminates the social and cultural expectations often inherent in typical conversations, such as maintaining eye contact or managing perceived judgment. The absence of these demands makes such interactions feel low-pressure, which is particularly beneficial for autistic individuals who experience demand avoidance. This virtual environment also allows users to unmask, providing space to safely express their authentic selves without the pressure to conform to societal expectations. An LLM can therefore help people build confidence, be their authentic selves, ease potential anxiety ahead of human interactions, and even assist with training, such as preparing for a job interview. Many models also provide a voice mode, useful for users who find speaking more accessible than typing.
There are also significant advantages to neurodivergent researchers and advocates. LLMs can streamline the analysis of large datasets, facilitate the identification of trends and patterns, and generate novel insights that could influence policy and practice. For instance, ChatGPT's ‘code interpreter’ feature simplifies data analysis by allowing users to interact with datasets using natural language queries, eliminating the need to manually navigate complex options in traditional software. Users can easily request clarifications, making the process more accessible. These advancements have the potential to democratise the research process, enabling a more diverse group of researchers to contribute insights to their fields. These options also enable researchers to explore data analysis techniques outside their primary areas of expertise, potentially leading to more interdisciplinary work and innovative contributions.
For practitioners, LLMs can assist with generating reports and resources quickly and efficiently. For clients, this means less time waiting for assessments and more time for direct support - something the autistic community has been advocating for (Estacio et al. 2019). LLMs could also enhance the quality of practitioners’ work. For instance, a mental health therapist might use an LLM to explore support strategies for their clients’ mental health, or an occupational therapist could look into therapy options and stay updated with the latest research more easily. Studies have already shown that LLMs can support clinical practice by providing evidence-based recommendations and facilitating continuous professional development (Crane et al. 2016).
So, what next?
To maximise the potential benefits of LLMs for the autistic and wider neurodivergent community while mitigating risks, I believe a multi-faceted approach is essential. This approach should focus on four key areas: awareness and education, collaborative research and development, policy and regulation, and positive representation.
Raising awareness is crucial, and I urge professionals working with autistics and the wider autism community to educate themselves and others about these technologies. Organisations can play a vital role by providing education and training opportunities, helping to destigmatise the use of LLMs, and managing associated fears or anxieties. It is important to reframe the use of such technology as a form of assistance rather than cheating, much as a human advocate might facilitate someone's work. Promoting digital and AI literacy within the neurodivergent community through workshops, webinars, and accessible resources can empower individuals to use LLMs effectively and critically. Simultaneously, educating professionals about the appropriate use of LLMs can prevent the unintended spread of stigma and misinformation.
I strongly advocate for developers to actively engage with the autistic community through collaborative research and development. This collaboration is essential to understand neurodivergent individuals’ experiences, preferences, and challenges, enabling the creation of more inclusive and effective AI tools. Establishing partnerships with speaking and non-speaking neurodivergent people, advocates, researchers, and organisations can help address biases in training data and refine algorithms to align with the neurodiversity paradigm. This process should involve curating high-quality, neurodiversity-affirming datasets, implementing robust fact-checking mechanisms, and developing specialised LLMs focused on autism and neurodiversity. Establishing ongoing feedback mechanisms where neurodivergent users can report issues or suggest improvements can ensure that models remain responsive to the community's needs and address emerging concerns or biases. This collaborative approach enhances the functionality and inclusivity of LLMs and empowers neurodivergent individuals to actively participate in shaping the technologies that impact their lives.
Advocating for policy and regulation is crucial to ensure the ethical development and deployment of LLMs. This includes enforcing standards for privacy and data protection, ensuring equitable access to LLMs, and developing guidelines for responsible use in professional settings. By establishing a strong regulatory framework, we can help safeguard the interests of the autistic community and promote responsible innovation.
Highlighting success stories can significantly help shift perceptions and reduce stigma associated with LLM use. By showcasing how these technologies have positively impacted communication, support, and learning for neurodivergent individuals, we can foster a more accepting and enthusiastic embrace of LLMs. These examples can serve as powerful testimonials to the potential benefits of AI in supporting and empowering the autistic community.
In my opinion, this multifaceted approach, combining awareness, collaboration, education, policy advocacy, and positive representation, can help ensure that the neurodivergent community fully benefits from the potential of LLMs while minimising associated risks. As we move forward, it is crucial that we continually evaluate and adapt our strategies to meet the evolving needs of the autistic and wider neurodivergent community in this new age of AI. By working together - researchers, developers, policymakers, and most importantly, autistic and neurodivergent individuals themselves - we can harness the power of LLMs to create more inclusive, supportive, and empowering environments.
Footnotes
Data availability statement
This article is a commentary and does not contain any datasets.
Declaration of conflicting interests
The authors declare there is no conflict of interest.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
