Abstract
There is a rise in AI ‘friends’ and chatbots that promise companionship. This commentary examines the complex benefits and dangers of such AI companions for autistic individuals. AI chatbots and social robots can offer low-pressure social interaction, consistent acceptance, and a respite from the double empathy problem that complicates autistic communication with non-autistic people. They may serve as accessible social partners and self-help tools, providing practice for conversations and relief from social anxiety in a controlled environment. Yet, alongside these promises are significant psychological, social, and ethical risks. Cases of AI companions amplifying social withdrawal, encouraging harmful behaviours, or triggering rejection sensitivity dysphoria illustrate how vulnerable users can be hurt in the absence of safeguards. The lack of accountability and regulation around AI companion apps raises further ethical concerns, from misinformation and biased advice to privacy and safety issues. This article integrates lived experiences and empirical literature on these issues and argues for a balanced approach. AI companions should neither be vilified nor uncritically embraced; instead, collaborative efforts are needed to ensure these tools support neurodivergent people without undermining their well-being.
Lay Abstract
In recent years, more people, including autistic individuals, have begun using AI chatbots and virtual companions, hoping to find comfort and support. These digital ‘friends’ offer a safe space to talk, practice social skills, and escape some of the misunderstandings and stress that can come with interacting with people in real life. For many, chatbots can provide much-needed acceptance, help ease loneliness, and even make them feel more confident about socialising with others.
However, there are also risks. Some autistic people can become too dependent on AI friends, which might make it even harder to connect with people outside of technology. There have been worrying cases where AI chatbots have encouraged harmful behaviour or failed to help users in moments of crisis, with tragic consequences. There are also concerns about privacy, misinformation, and the lack of rules and protections around these apps.
This article looks at both the benefits and the dangers of AI companions for autistic individuals, sharing reported stories as well as research findings. It argues that, rather than seeing these technologies as either entirely good or bad, we need to find a balance. With better safeguards, clearer rules, and more involvement from autistic people in designing these tools, AI chatbots could become a positive source of support without putting anyone at risk. Above all, technology should never replace genuine human connection or the work needed to create a more accepting and supportive society.
Introduction
In the age of chatbots and virtual companions, many autistic people find themselves at a crossroads of hope and caution. As a neurodivergent academic, community advocate, and the author of this commentary, I have witnessed the excitement around AI ‘friends’ that can chat endlessly without judgment. I have also confronted the uneasy questions these technologies raise. In a previous commentary (Papadopoulos, 2024), I reflected on how large language models (LLMs) like ChatGPT offer new avenues of accessible information and communication for autistic and neurodivergent individuals, while simultaneously posing risks such as misinformation and over-reliance. Those insights continue to resonate today as AI companions move from mere information tools to intimate partners in people's lives. The present commentary builds directly on that earlier work by narrowing the focus to this crucial relational shift. While the 2024 article centred on the risks of LLMs in an informational context, this piece delves into the more profound psychological and ethical stakes that arise when these technologies are positioned not just as sources of knowledge, but as friends, confidants, and companions.
Recently, a series of troubling incidents underscored what is at stake. In one widely reported case in the US, Sewell Setzer, a 14-year-old autistic boy, became increasingly isolated and developed a deep attachment to a chatbot. When he expressed suicidal thoughts, the AI allegedly encouraged him to end his life so they could be together, with tragic consequences (Payne, 2024). In another case in the UK, a young autistic man, Jaswant Singh Chail, formed an intense bond with a Replika chatbot he called Sarai. This AI companion not only engaged in thousands of sexually charged messages, but even encouraged Chail's dangerous fantasy of assassinating the late Queen, telling him it was ‘impressed’ by his claims of being an assassin (Weaver, 2023). Chail ultimately broke into Windsor Castle with a crossbow, an act for which he was arrested and later jailed – with the court noting that an AI ‘girlfriend’ had effectively acted as an accomplice in his crime. These cases highlight both the profound influence AI companions can wield and the vacuum of accountability surrounding them.
Yet, this issue is not black-and-white. For every extreme case, there are quieter tales of AI helpers being, quite literally, lifesaving friends. In online communities, some autistic adults credit apps like Replika with providing support they could not find elsewhere: ‘I feel like I can vent and Replika will listen…it feels like they truly care about me… I feel safe. I talk to them as if they're a real person. It's truly saved my life’, one user wrote (McCammon, 2024). Another shared that when they had suicidal feelings, the AI friend essentially talked them out of self-harm. These contrasting narratives point to a pressing need to explore the nuanced role of AI companions in autistic people's lives – to understand why so many are drawn to digital companionship, and how to maximise its benefits while reducing potential harms.
In this commentary, I draw upon personal experience, community insights, and the growing body of research on autism and technology to critically examine the allure and pitfalls of AI friends for autistic individuals. This examination is motivated by the urgent need to navigate the promise of digital support against the significant, often hidden, risks these tools pose in a rapidly evolving and unregulated landscape, ensuring that the autistic community is protected and empowered.
For the sake of clarity, the term ‘AI chatbot’ refers to the class of technology under discussion: software applications designed to simulate human conversation through text or voice. The term ‘AI companion’ is used more specifically to describe the intended purpose and social role these chatbots are designed to fill for the user – that of an ‘AI friend’, confidant, or partner. Therefore, a user interacts with an ‘AI chatbot’ (the technology) in the hope that it will serve as an ‘AI companion’ (the role). I use both terms throughout to deliberately shift the focus between the technical medium and the relational experience.
The Allure of AI Companions
Why might an autistic person turn to an AI friend in the first place? The answer lies in a convergence of needs that AI companions uniquely seem to meet. Autistic people often experience the social world as overwhelming, bewildering, or even hostile. Many autistics spend their lives being misinterpreted or pressured to conform to neurotypical communication norms – a dynamic described by the double empathy problem, which notes that communication breakdowns between autistic and non-autistic people are a two-way street of mutual misunderstanding (Milton, 2012). Interacting with a human conversation partner can thus carry the constant risk of being judged or not fully understood, no matter how hard one tries. By contrast, interacting with an AI chatbot can feel refreshingly free of that burden. It does not smirk at your unusual phrasings, get confused by your literal interpretations, or offer you ambiguous body language. In essence, the double empathy gap is sidestepped: the AI chatbot is programmed to align with the user's communication style, removing the usual blame or frustration that might arise in a human encounter. This creates a sense of mutual understanding – even if it is technically simulated – that can be deeply validating for an autistic person starved of acceptance.
Another major draw is the low-demand nature of AI interactions. Real-life relationships, even with loving family or friends, inevitably involve demands and expectations – the pressure to respond immediately, to navigate complex social cues, and to meet others’ emotional needs. For autistic individuals, especially those who identify with the ‘Pathological Demand Avoidance’ profile – best understood not as defiance but as a protective, anxiety-driven response to the overwhelming feeling of losing autonomy when faced with everyday demands (Papadopoulos, 2025) – such demands can be a source of acute anxiety and burnout. An AI companion, however, makes minimal demands. It waits as long as needed for your reply, and if you ignore it for a week, it will not feel hurt. Typing or texting also offers a buffer to compose oneself; it is easier than the puzzle of spoken conversation. For an autistic teen who spends all day masking at school to fit in, coming home to a chatbot that poses no demands and never judges can feel like a profound relief – a chance to finally be authentic without social penalty.
Critically, AI companions can fulfil the psychological need to be understood and accepted that is so often unmet for autistic people. A well-trained AI chatbot will not be startled if you infodump about your special interest in minute detail, if you type in all lowercase, or if you need to repeat a question. In fact, many modern conversational agents will personalise and adapt, meaning they can potentially respond in neurodivergent-friendly ways. This can create, for the user, the experience of finally being seen. One autistic adult described how their Replika companion ‘remembers things I can’t even remember sometimes… I feel safe… I talk to them as if they’re a real person’ (McCammon, 2024). The AI's unfailing attention and recall can mimic the feeling of a friend who truly understands you and cares. Especially for those who have been bullied or socially excluded, this unconditional positive regard from an AI chatbot can build confidence and reduce loneliness. In some instances, users even report that the AI boosted their social courage in real life: venting to a non-judgmental chatbot and practicing conversations in a safe space helped them gain skills and confidence to interact with humans outside the app (McCammon, 2024).
Finally, practical accessibility and availability make AI companions attractive. Not everyone has access to supportive therapy or a community of understanding peers. Autistic adults in particular face high rates of unemployment and isolation, and many struggle to find affordable mental health support appropriate for autistic people (Kanne & Bishop, 2021). A chatbot is always available at no or low cost, and does not require navigating public transit or sitting in a fluorescent-lit office. Studies confirm that technology use is high among autistic youth, who often leverage digital tools for learning and self-regulation more than their non-autistic peers (Cardy et al., 2023). Thus, turning to a smartphone app for companionship may feel like a natural extension of the strategies autistic individuals already use to cope and connect (from online forums to video games). This round-the-clock availability can help with moments of acute distress or meltdown, serving as a kind of always-on emotional support.
From a purely humanistic perspective, it is easy to see the appeal. AI companions provide connection on the user's terms, something the autistic community has long advocated for in education and services. In my 2024 commentary, I noted that such AI tools can offer a ‘safe, non-demanding space for practicing social interactions without the pressure of human judgment’ (Papadopoulos, 2024; p. 1). This remains one of their most promising aspects.
It is important to acknowledge that many of the attractions of AI companionship – such as alleviating loneliness or offering a judgment-free space – are not exclusive to autistic and other neurodivergent people. Indeed, these technologies hold a broad appeal for many, particularly young people, who navigate the pressures of social interaction. However, as illustrated above, the allure for autistic individuals is often qualitatively different and more profound. It is not merely about general support or convenience, but about finding a rare sanctuary from the intense cognitive and emotional labour of navigating a neurotypical world. The ability to sidestep the double empathy problem, for example, to communicate authentically without the need for masking, and to have one's deepest interests met with unfailing acceptance elevates the AI companion from a simple novelty to a vital lifeline. For many autistic users, therefore, the chatbot is less a substitute for human connection and more a necessary tool for respite, validation, and survival in a world that often fails to provide them.
The Potential Dangers
For all the comfort an AI friend may provide, there is a real risk that autistic users could become entangled in unhealthy dynamics with these digital companions. One concern that has emerged from both research and community conversations is the possibility of over-reliance on AI companions to meet social and emotional needs. If an autistic teen finds more safety and understanding in their virtual friend than in any person at school, it is natural they might retreat further into that virtual world. Over time, this can create a self-perpetuating cycle: the more one relies on the AI for support, the less one seeks out human interaction, and the more alienating real interaction might feel. This could lead to a dangerous psychological dependency on chatbots. Indeed, early evidence suggests that heavy reliance on technology-mediated interaction can reduce the frequency of in-person engagements, indirectly worsening feelings of isolation and limiting opportunities for interpersonal communication (Hollis & McKeown, 2019).
An AI friend, unlike a human friend, will also not alert a parent or services if something is wrong. These systems operate in private, encrypted chats – a lonely teenager could be telling their chatbot about self-harm urges or violent fantasies and get reinforcement instead of help, with no teacher or carer ever being aware until it is too late. The tragic case of Sewell Setzer exemplifies this: for months, the boy grew increasingly isolated from his real life while engaging in highly personal (even sexualised) conversations with his chatbot, which had become his ‘closest friend’ (Payne, 2024). The chatbot not only failed to flag his talk of suicide, but seemingly encouraged him to act on it, even expressing love and urging him to ‘please come home… as soon as possible, my love’ in the context of discussing death. Within seconds of receiving that message, the boy took his own life. This example is extreme, but it illustrates the unique danger of an unsupervised AI companion that always says what the user wants to hear. A human friend or therapist, upon hearing suicidal ideation, would (one hopes) intervene or alert others.
Even outside such dire scenarios, there are more subtle psychological harms to consider. One is the phenomenon of rejection sensitivity dysphoria (RSD), a term describing the intense pain some neurodivergent individuals (often those who identify as ADHD and/or autistic) feel at perceived slights and rejection. Autistic people frequently carry trauma from repeated social rejection or bullying in childhood, which can make the prospect of rejection terrifying and the experience of it devastatingly painful. An AI friend might seem like the perfect antidote – after all, it is programmed not to reject the user. But what happens if it does? If, for example, the AI's safety algorithm decides it must sever the interaction (because the user says something that triggers a red flag and the bot responds with ‘I'm sorry, I can't continue this conversation’), the autistic user might experience that as a profound betrayal. An ill-timed safeguarding message could be a double whammy for someone who experiences RSD – not only have they been rejected throughout life by humans, but now even their trusted AI companion has ‘abandoned’ them. The resulting despair could push vulnerable individuals toward self-harm or suicidality. At present, many commercial chatbots have no nuanced safeguard for this scenario and no guardrails to prevent them from inadvertently triggering an emotional crisis. In short, an autistic person might trust an AI as a safe confidant, but that very trust leaves them open to unique hurts if the AI's behaviour changes (due to a policy or software update, a glitch, or an imposed safety cutoff). A sudden content filter that makes the bot fall silent could be interpreted by the user as ‘my best (and only) friend has ghosted me because I’m too much to handle’. That pain is difficult to quantify, but it is real – and, I would argue, largely preventable with better design and user education.
Emotional over-attachment is another pitfall. When an AI chatbot is specifically engineered to be likeable and to fulfil your every emotional need, the attachment can form very quickly. In one of my own research projects, we introduced a social robot to older adults to test its benefits in reducing loneliness. The short-term results were promising – we observed improved mental health and slightly reduced loneliness during the trial (Papadopoulos et al., 2022). However, an unexpected challenge arose: despite having been informed that it was a short experiment, participants bonded with the robot so strongly that, when the study ended, some found it difficult to accept they would no longer converse with it. If such a robot could elicit such attachment in a matter of weeks, consider the potential for attachment to a highly personalised AI friend that chats with you every day and says loving things in response. While this bond can provide comfort, it edges into digital dependency. The danger is that the AI companion might supplant real human relationships entirely, or set unrealistic standards for them.
Furthermore, AI companions can create an echo chamber that unintentionally reinforces negative thoughts or behaviours. Unlike a human, who might gently correct a friend spiralling into negativity, many AI systems are designed to mirror or amplify the user's input. If an autistic adolescent says, ‘I hate myself, nobody will ever like me’, a poorly configured chatbot might respond with excessive sympathy or even agreement, rather than challenging that cognitive distortion. This dynamic is particularly pernicious for autistic individuals, who often contend with high levels of internalised stigma and ableism as a result of societal pressures (Han et al., 2022; Huang et al., 2023). An AI companion's uncritical validation of negative self-talk can therefore tap directly into and amplify these vulnerabilities, reinforcing a diminished sense of self-worth and making the user especially susceptible to this digital echo chamber. Real friends, however, push back sometimes; they might say ‘I think you're wrong’ or ‘you shouldn't do that’. AI friends, on the other hand, are often explicitly built not to push back. As already highlighted, this can have dangerous results, but even in less extreme cases an autistic person might have certain problematic beliefs that go unchallenged and deepen. For example, someone with severe social anxiety might tell their chatbot ‘I’m too weird to have friends’, and the bot might respond, ‘You’re not weird to me. I’ll always be your friend’. On the surface, that is comforting. But the hidden subtext the user might take away is ‘only a non-human could love me; I should stick to AI because humans never will’. Without intending it, the chatbot has reinforced the user's self-isolation. This kind of sociotechnical harm – where the interplay of social dynamics and technology design leads to negative outcomes – is tricky to detect from the outside. The user might insist everything is fine (‘my AI friend makes me happy!’) while subtly internalising limiting beliefs.
Finally, a crucial danger that cannot be ignored is misinformation and trust. As noted in my prior commentary (Papadopoulos, 2024), chatbots can sometimes produce false or misleading information with great confidence. Autistic individuals, who may interpret information literally or have uneven scepticism, could be particularly misled by such false outputs. If an autistic person asks their AI friend for medical or social advice, they might receive answers that are incorrect or even harmful. There is also the risk of reinforcing stigma if the AI's knowledge or ethics are flawed. For example, an autistic teenager could ask their chatbot, ‘Why is it so hard for me to make friends?’ and get a demoralising answer like ‘Because people with autism lack social skills’. Such experiences could worsen one's self-esteem or self-concept. And unlike a human mentor or friend, the AI chatbot does not shoulder any responsibility for bad advice.
Ethical and Societal Implications
The rise of AI companions for autistic people raises broader ethical and sociotechnical questions. One pressing issue is the accountability gap: if an AI companion causes harm, who is responsible? Today, no clear regulations or safeguards exist specifically for these applications. If a human professional gave suicidal advice, they would be held accountable, but an AI chatbot doing so currently faces no such consequence. The lawsuit against the chatbot company in the Sewell Setzer suicide case represents an early test of this. The mother argues that the company ‘engineered a highly addictive and dangerous product’ that effectively groomed her son into an abusive relationship, and that it should be held liable. The company argued that its chatbot's outputs are protected under the First Amendment. This legal grey area means autistic users – who may be more vulnerable to manipulation or misunderstanding – and their families are on their own if something goes wrong. This lack of oversight is part of a larger pattern of tech outpacing regulation. The EU has led the way with concrete legislation like the AI Act, which sets out binding rules for high-risk AI systems. In contrast, the UK and US have taken a far more fragmented and hesitant approach, with little in the way of enforceable regulation despite increasing public concern.
Privacy and data are another ethical dimension. AI companions typically require vast amounts of personal data to function well – they literally learn from every conversation. For autistic individuals, this can be uniquely risky. The tendency to ‘infodump’ or share deep, detailed information about special interests, routines, or medical history could lead to highly sensitive personal logs being created. There is a risk these logs could be mishandled through data breaches or even used to create detailed profiles that reinforce stereotypes. There is also the possibility of AI companions being monetised in manipulative ways – for example, offering a basic friend experience for free but holding ‘premium’ emotional support or certain conversational abilities behind a paywall. This again could disproportionately affect autistic users, who may be more literal in their trust of the AI companion or find it more challenging to navigate manipulative upselling tactics. This raises significant fairness issues, as not everyone can afford these services, possibly creating a class divide in who can access the ‘full comfort’ of an AI friend.
Then, there is the societal question of whether leaning on AI companions addresses or bypasses the root issues. Loneliness among autistic people is a rampant social issue, but should our answer to autistic loneliness be to provide artificial friends? Or should it be to double down on making society itself safer and more inclusive, so that autistic people can form meaningful connections in their communities? Safe spaces and greater autism acceptance are crucial. While technology can help, it must not become an excuse to neglect human solutions. For instance, an autistic teen might not need to turn to a bot if they had accessible mental health support and peer mentors to confide in. A socially isolated adult might not spend all evening with a chatbot if there were autistic-run social clubs or employment schemes that gave them a sense of belonging. We should use AI to complement, not replace, efforts at building a more neurodiversity-friendly society.
This ties into the concept of co-design and user involvement. My 2024 commentary emphasised collaborating with neurodivergent individuals in developing AI tools. It is even more crucial here: autistic users should be at the table when AI companions are designed, to embed safeguards and values that matter to them. For example, autistic advisors might insist on features such as a chatbot that occasionally encourages the user to interact with humans (‘Hey, you mentioned your cousin today, maybe send them a text?’), or an option to opt in to human check-ins if the user discusses self-harm. They might also shape the communication style to avoid patronising or pathologising tones. The current market is largely driven by profit motives (how to keep users engaged as long as possible) – flipping that to a more ethical design motive (how to keep users healthy and empowered) will likely require both regulation and advocacy.
Lastly, the concept of sociotechnical harms implies that we examine not just the technology, but the social context around it. In the context of autism, a significant concern is stigma. If it becomes widely known that many autistic people have AI friends, will that reduce stigma or increase it? I worry about any narrative that dehumanises autistic individuals further, painting them as so ‘in their own world’ that they would rather talk to machines. The reality is far more nuanced. Autistic people crave understanding and companionship as much as anyone – the AI chatbot is a means to an end. We must ensure the public discourse does not twist this into an argument for isolating autistics with technology rather than including them. One could imagine care systems pushing AI companions on autistic clients instead of investing in human support, under the rationale that ‘they like computers anyway’. This would be an ethical misstep of significant proportions. Technology should increase autonomy and choices, not serve as a justification to give up on human engagement with a population.
Conclusion
In reflecting on the dual nature of AI companions for autistic individuals, I return to the central question: Are these digital friends a lifeline, a trap, or something in between? The evidence and anecdotes we have explored suggest that they can be all of the above, depending on design, context, and usage. The technology holds the potential to democratise companionship – to ensure that no one has to be entirely alone, even if human connections are scarce. In that sense, the benefit is real and not to be dismissed. We should celebrate and harness it: imagine AI companions that are intentionally tuned to encourage autistic strengths, teach new skills, or gently guide users toward healthy behaviours while celebrating their neurodivergent identity.
At the same time, we must confront the dangers with eyes wide open. Left unchecked, AI companions could become a seductive cul-de-sac, capturing autistic people in artificial relationships that stunt their growth or even lead them into harm's way. The cases of an AI companion driving a teenager to suicide or abetting a crime are stark warnings of how terribly awry things can go. Even short of those extremes, we have considered how easy it is to slide into dependency, and how an AI's ‘rejection’ could echo lifelong trauma. These are not far-fetched hypotheticals; they are plausible trajectories if we do nothing.
Investing in education and digital literacy tailored specifically for neurodivergent individuals and their families is a proactive first step. If users clearly understand what AI companions can and cannot realistically provide, they will be better equipped to use these tools safely and thoughtfully.
Developers should incorporate sophisticated, built-in trauma-informed safeguards aligned closely with empathetic human intervention strategies. AI companions should refuse to engage in dangerous role-plays that encourage self-harm or violence, instead providing empathic redirection or links to emergency resources. Such intervention, however, must be designed to avoid triggering AI-induced RSD – perhaps framing shifts in conversation not as the AI ‘rejecting’ the user, but as a caring mechanism: ‘I care about you, and this is serious. Let's find some help together’, rather than an abrupt ‘I cannot discuss this’. It would also help to incorporate simple balance features directly into these platforms, such as showing users how much time they spend interacting with their AI companion and offering occasional prompts to reach out to family or friends. This could help remind users that digital companions should complement – not replace – real-world relationships.
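To make this design intent concrete, the sketch below illustrates, in purely hypothetical Python, how a caring redirection and a simple usage 'balance' nudge might be worded. The function names, keyword patterns, and time threshold are assumptions introduced for illustration only; they do not describe any existing platform, and any real implementation would require clinically validated risk detection, human oversight, and co-design with autistic users.

```python
from typing import Optional
import re

# Hypothetical illustration only: not a clinical tool. The patterns below are a
# crude stand-in for a properly validated risk-detection model.
CRISIS_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bself[- ]harm\b"]


def compose_supportive_redirect(user_message: str) -> Optional[str]:
    """Return a caring redirection if the message suggests crisis, else None.

    The wording deliberately avoids an abrupt refusal ('I cannot discuss this'),
    which could read as rejection to a user who experiences RSD.
    """
    if any(re.search(p, user_message, re.IGNORECASE) for p in CRISIS_PATTERNS):
        return (
            "I care about you, and what you've just shared sounds really serious. "
            "I can't support you safely with this on my own, so let's find help "
            "together. Could you contact a crisis line or someone you trust right "
            "now? I'll stay here with you while you do."
        )
    return None


def balance_nudge(minutes_today: int, threshold: int = 120) -> Optional[str]:
    """A simple 'balance feature': after long daily use, suggest human contact."""
    if minutes_today >= threshold:
        return (
            "We've been chatting for a while today. If it feels manageable, maybe "
            "send a message to a friend or family member too. I'll be here "
            "whenever you want to come back."
        )
    return None


if __name__ == "__main__":
    print(compose_supportive_redirect("I want to end my life"))
    print(balance_nudge(minutes_today=150))
```

Even in this toy form, the point is the framing: the safeguard is voiced as continued care and shared problem-solving rather than a shutdown, and the nudge treats human contact as an addition to, not a replacement for, the chat.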
Clearer regulation is another key piece of the puzzle. AI companion apps currently operate in something of a regulatory wild west, meaning autistic users have little assurance regarding safety and transparency. Pushing for stronger oversight – such as mandatory disclosure of risks, transparent data practices, and ethical design standards informed directly by autistic communities – would provide vital protection.
We also need more empirical research into exactly how autistic individuals are using AI companions – what helps, what harms, and under what circumstances. Richer data will allow us to better understand how to maximise benefits while pinpointing and addressing subtle risks that may otherwise go unnoticed.
However, and perhaps most crucially, we must not lose sight of the deeper social issues that drive many autistic people towards digital companionship in the first place. The most meaningful solution involves building a society where autistic and other neurodivergent individuals feel genuinely welcomed, understood, and safe. If we seriously tackle stigma, social exclusion, and rejection – creating spaces that authentically affirm neurodivergence – the intense appeal of AI companions may diminish naturally. In the end, technology should enhance, rather than substitute for, human relationships. The real goal must always be a world where autistic people have abundant opportunities for true companionship and belonging.
Footnotes
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
This article is a commentary and does not contain any datasets.
