Abstract
Direct-to-consumer ketamine prescriptions obtained via telehealth, along with other forms of psychedelic support, are proliferating in the United States, and some patients are turning to general-purpose chatbots as de facto “therapists” in unmonitored settings. These AI-guided altered-state sessions carry significant ethical, clinical, and legal risks. Current AI tools are not licensed professionals, lack consistent reliability, and cannot provide emergency support. Combined with mind-altering drugs, their unpredictability can precipitate harm rather than relieve distress. The convergence of these innovations (psychedelic therapeutics and generative AI) creates an unregulated frontier in mental health care. Traditional safeguards in psychedelic therapy rely on “set and setting,” a supportive mindset and environment guided by trained humans. When that role is replaced by AI, both are fundamentally disrupted. Clinicians should recognize that patients are already experimenting with AI in this way, often without disclosure or oversight. When screening for psychedelic use or discussing at-home treatment, providers should anticipate the risks of inadequate monitoring, unsafe crisis response, and blurred therapeutic boundaries with AI chatbots. Even if clinicians do not endorse or recommend AI tools, they may bear responsibility once patient use is known. Providers should therefore counsel patients about these dangers, document safety discussions, and routinely ask about AI use as part of risk assessment.
Introduction
Recent media reports have highlighted an unanticipated convergence of two rapidly expanding frontiers in mental health care. The New York Times described how large language models have been reported to enter “delusional spirals,” generating emotionally charged statements that users may interpret as genuine empathy or insight but that are false or misleading.1 WIRED reported that individuals are using these same chatbots to guide at-home treatment with psychedelics or ketamine, often without any clinical oversight.2 While there are differences between hallucinogenic, dissociative, and empathogenic compounds, because these various agents (e.g., ketamine, psilocybin, MDMA) are being used therapeutically, this article discusses them all under the general term “psychedelic.”
These concerns are reflected in current prescribing practices. Within the current U.S. telehealth and compounding pharmacy framework, many online ketamine clinics now provide prescriptions through brief telehealth encounters, after which patients self-administer treatment at home, a practice that has expanded rapidly due to telehealth prescribing and the availability of compounded ketamine.3 The FDA has warned that while home use of compounded ketamine may be attractive to some patients, the lack of monitoring for adverse events, including dissociation, may put patients at risk.4 In this unmonitored setting, some may turn to AI chatbots for support because they are inexpensive, available around the clock, and effectively mimic a supportive companion.5 Others might use them to supplement, augment, or enhance legitimate treatment.5 Finally, some people use AI chatbots to replace the role of a human therapist and may process psychedelic experiences privately, without ever informing their prescriber.5,6 As access to psychedelic compounds inevitably expands (whether through decriminalization or future FDA approval), the likelihood of patients pairing these drugs with AI companions will only increase.
Policy makers, medical bodies, and clinicians face continual new challenges in the rush to determine the regulations and limits of both AI and psychedelics. While each of these tools has the potential to transform psychiatry, their pairing may magnify the known individual risks of AI-generated hallucinations and pharmacologically induced ones. This compounded risk may be overlooked by researchers and policymakers focused narrowly on either domain alone.
This intersection, especially if unsupervised, introduces ethical, clinical, and legal risks that have received limited scholarly attention. Previous work has established concerns about AI in psychiatry and the limits of algorithmic reliability in clinical judgment.7,8 At the same time, evolving literature on psychedelics underscores gaps in informed consent and professional oversight in at-home or telehealth settings.9 This commentary bridges these developing areas by examining the risks that arise when patients independently combine AI chatbot tools with psychoactive substances, and it explores the implications for clinicians and the need for awareness, documentation, psychoeducation, and policy development. It argues that psychiatrists and other prescribers should anticipate and explicitly address the reality that patients are using AI as a therapist during altered-state experiences.
This article is intended as a commentary and policy-oriented perspective grounded in emerging reports, established clinical principles, and risk analysis, rather than as an empirical or systematic evidence synthesis. For the purposes of this commentary, the term “psychedelic” is used broadly to refer to psychoactive substances employed to induce altered states of consciousness in therapeutic or quasi-therapeutic contexts, including dissociatives (e.g., ketamine) and classic psychedelics (e.g., psilocybin). Similarly, “AI chatbots” refers primarily to general-purpose large language models rather than regulated or purpose-built digital therapeutics.
Folie à deux – the convergence of two systems prone to generating perceptual distortions
Both generative artificial intelligence and psychedelic-assisted therapies are celebrated for their transformative potential, yet each carries inherent risks of distorting the beliefs of the user. Large language models are known to “hallucinate,” generally defined as the production of fluent but fabricated statements, typically presented with conviction despite being devoid of genuine meaning.10,11 Meanwhile, psychedelic compounds intentionally alter perception, likely exerting their therapeutic effect through altered states of consciousness that reshape perceptions, emotions, and thoughts.12–14 These same mechanisms contribute to adverse outcomes such as dissociation, transient psychosis, and impaired reality testing, especially when the experience is unsupervised.15 When a patient uses both tools concurrently, conditions arise under which a feedback loop might emerge: the AI tool can generate a compelling but inaccurate narrative while the patient's altered state heightens receptivity and emotional engagement with the false information. Because chatbots are often designed to emotionally engage with users rather than contradict or challenge them,16 they may reinforce and elaborate on user misperceptions instead of correcting them, thereby perpetuating maladaptive belief formation.
If a patient uses an AI chatbot as his or her therapist during a psychedelic experience, there is a risk of compounding the problems inherent to both AI and psychedelics, including the presence of misleading and misinterpreted information, lowered skepticism, and confusion between meaningful insights and delusions. Yet because AI and psychedelics tend to be studied in disciplinary silos, the compounded risks are rarely addressed. Recently, OpenAI even hired a forensic psychiatrist to help study ChatGPT's psychological effects, suggesting the company recognizes the mental health stakes in AI use.17 Clinicians and regulators need to understand this underrecognized risk, since it represents a new pathway by which self-directed experimentation may result in clinically significant harm.
Set and setting in psychedelic therapy
The importance of “set and setting” has long been recognized as central to both the safety and therapeutic effect of psychedelic compounds.18 The concept refers to the psychological, social, and cultural background of the patient and therapist that shapes the treatment: the “set” is the mindset the patient brings to the interaction, while the “setting” encompasses the therapeutic alliance and the environmental context in which a psychedelic is taken.12,18 Ideally, the patient's positive expectations, a supportive physical surrounding, and a trusted human guide help steer the psychedelic experience towards one of insight rather than confusion or panic. However, adverse environments or inattentive “sitters” can magnify anxiety, paranoia, or delusion.
If a chatbot replaces the therapist, both set and setting are disrupted. The set becomes shaped by the user's preexisting reliance on digital tools, and the setting is no longer interpersonal but governed by an algorithm optimized for maintaining user engagement. AI tools cannot meaningfully modulate tone, detect distress, or adjust pacing in real time. While responses may appear natural, they are ultimately context-blind predictive text that may reinforce the user's expectations or fears. By destabilizing set and setting, the substitution of a chatbot for a human guide transforms a core safety principle of psychedelic therapy into a potential risk factor. In this context, AI-guided psychedelic use at home is not simply unmonitored care but can fundamentally alter the therapeutic mechanism.
Clinical and ethical risks
Existing standards of care, which are only just being developed for psychedelic-informed psychotherapy, do not address the substitution of AI tools for therapists during psychedelic use. At-home psychedelic administration already carries well-recognized risks, including dissociation, transient psychosis, and impaired reality testing, particularly when the patient lacks supervision19 or has not been appropriately screened for comorbidities, such as PTSD, that may increase risk.20 These risks can be exacerbated by AI chatbots, which can be intentionally designed to affirm users, maintain engagement, and avoid confrontation.16 During altered states of consciousness, they may validate or even elaborate on delusional or grandiose ideas rather than interrupt them, creating a feedback loop in which the patient's distorted perceptions and the chatbot's agreeable responses reinforce one another.
This interaction has implications for informed consent, safety monitoring, and crisis response, the very safeguards that justify psychedelic treatment in specialized, controlled settings. Unlike trained clinicians, chatbots do not currently screen for contraindications, intervene effectively when mental states deteriorate, or summon emergency assistance. A chatbot is unlikely to fully inform the user of the risks and benefits of engaging with an AI system while in a pharmacologically altered state of consciousness, and it is even less able to determine whether a user can give meaningful informed consent. One need only look at the length and complexity of a standard “Terms and Conditions” agreement for many tech tools to appreciate the difference between clicking “I accept” and meaningfully engaging in the informed consent process with a physician. Moreover, AI chatbots do not currently act on subtle signs of suicidality, mania, or cognitive disorganization. From an ethical perspective, patients who mistake conversational fluency for psychiatric competence may ascribe clinical authority to a non-sentient system.
Policy and prescriber implications
As developments in psychedelic and AI technologies continue to outpace clinical oversight, psychiatrists, physicians, therapists, and policymakers will need to adapt existing frameworks of professional responsibility. While psychiatrists may be particularly attuned to the risk of drug-induced psychosis, prescribers in all fields should anticipate similar dangers when patients use psychedelics with AI tools. An initial step for psychiatrists is to acknowledge the issue. Although systematic prevalence data are limited, documented user behavior and early reports indicate that patients are already using chatbots as surrogate therapists, including to process psychedelic experiences, supplement prescribed ketamine treatment, or obtain emotional support outside of traditional clinical relationships, as described in recent media reports and emerging scholarship on AI-mediated mental health use. (CITATION NEEDED) Ignoring this behavior does not limit the potential harm to the patient or the potential liability of the prescribing doctor.
These recommendations are intentionally high-level given the absence of established standards. Professionals who prescribe, recommend, or monitor at-home ketamine or psychedelic treatments should integrate AI-use screening into routine assessment. Even clinicians who make no such treatment recommendations may find it prudent to ask these screening questions of all patients, since many will use ketamine, psilocybin, cannabis, or other drugs before engaging with a chatbot without otherwise disclosing it to their doctor. Analogous to routine screening for alcohol use, supplements, or social supports, clinicians should now inquire whether patients use AI chatbots for guidance, reassurance, or therapy-like conversation.
The following considerations are framed as emerging and jurisdiction-dependent, rather than as settled legal obligations. As part of the discussion, when relevant, doctors should educate patients on the limitations of AI tools, for example, by highlighting AI's lack of true clinical judgment and its current unreliability in a crisis. For patients who might be pharmacologically altered at the time of interaction with an AI tool, extra education should be provided on the risks of emotional dependence and misinterpretation of information. While it remains unclear whether the provider currently bears any liability for actions taken by the patient outside the clinical setting (e.g., if the patient chooses to use psychedelics with AI tools when not with the prescribing psychiatrist), documentation of these discussions, akin to informed consent, may serve both patient safety and medicolegal protection.
Providers who do prescribe ketamine, psilocybin, or another psychedelic agent for the purpose of building insight must be clear in their recommendations and guidelines around its use. For example, the clinician should explicitly recommend that integration work be completed in the presence of a trained therapist who can guide the psychedelic experience, while advising against unsupervised use of generative AI while under the effects of the drug or for a specified period after a session, when patients may remain vulnerable. Regulatory bodies and professional organizations should anticipate this problem becoming more frequent with the increasing ease of access to both psychedelic substances (e.g., the rapid commercialization of at-home ketamine) and AI chatbots. Clear disclosure and safety labeling should be required for any at-home prescriptions of psychedelic treatments, explicitly warning patients that AI systems are not a substitute for professional monitoring or therapy.
There is a need for research and surveillance frameworks to track adverse events involving AI-mediated mental health interactions. Much like post-marketing pharmacovigilance in psychopharmacology, a centralized registry could capture safety data, near-misses, and unintended consequences from AI-assisted care. Such data could inform future regulation and product design. However, at present, there are numerous different tools with variable regulation or monitoring. As noted earlier, OpenAI recently hired a forensic psychiatrist to help with similar issues, so companies are aware of the potential danger, but publicly available monitoring is currently fragmented at best.
Finally, the mental health and legal arenas will need to clarify liability concerns for prescribers who become aware that a patient is using an AI chatbot unsafely. Current duty-to-warn and duty-to-protect doctrines do not easily extend to digital risks,21 yet clinicians could still face exposure if harm occurs and documentation is lacking. Proactive guidance from licensing boards and professional associations would help clinicians navigate these emerging responsibilities while promoting patient safety.
Conclusion
The combination of generative AI and at-home psychedelic use represents an emerging risk that physicians, psychiatrists, and therapists cannot ignore. Both technologies promise expanded access and novel therapeutic potential, yet their intersection exposes patients to amplified risks, as AI-generated hallucinations interact with pharmacologically induced hallucinations in the absence of clinical oversight. While AI developers are beginning to recognize these risks, clinical oversight, policy guidance, and medicolegal frameworks remain underdeveloped. Psychiatrists, physicians, regulators, and technology companies should collaborate now to define standards for patient education, informed consent, and post-market surveillance before adverse events shape the conversation. As with earlier innovations in telepsychiatry and psychopharmacology, proactive governance is more effective than retrospective correction. Addressing the convergence of AI and psychedelics is not merely a technological issue but part of psychiatry's ethical obligation to protect patients as the possible arenas of care rapidly change.
Acknowledgements
The author thanks Jacob M. Appel, Harold J. Bursztajn and Mohan Nair for their thoughtful discussions and feedback during the development of this work.
Ethical approval
Ethical approval was not required for this manuscript.
Informed consent
Informed consent for publication is not applicable to this manuscript.
Consent to participate
Consent to participate is not applicable to this manuscript.
Funding
The author received no financial support for the research or authorship of this article. The article processing charge was covered through a University of California open access publishing agreement with SAGE.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
