Abstract
For people with mental illnesses that impair reality testing, such as psychosis, severe depression and bipolar disorder, artificial intelligence (AI) large language models (LLMs) may pose threats to mental health. LLMs are unable to detect delusional beliefs, may encourage and validate delusions and cognitive distortions, may miss opportunities to reinforce reality-based thinking, and may exacerbate risks of self-harm and harm to others. Psychiatrists need to understand these risks of LLMs for people with severe mental illnesses, and to educate patients and carers on avoiding these potential harms. Risk assessments need to be informed by an awareness of the inputs that patients receive from LLMs.