Abstract
Artificial intelligence is emerging as a powerful tool for improving mental health research and care, offering opportunities for early intervention, personalised treatment, ongoing monitoring and enhanced capacity for the mental health workforce. In Australia, however, the integration of artificial intelligence into mental health systems is hindered by substantial knowledge, policy and regulatory gaps. This paper outlines two urgent priorities for the safe and effective use of artificial intelligence in mental health: (1) closing knowledge gaps and (2) developing robust policy and regulatory frameworks. We outline emerging opportunities, including artificial intelligence–driven technologies that could improve affordability, accessibility and treatment outcomes, as well as tools to strengthen the mental health workforce, alongside key risks such as inadequate regulation, insufficient monitoring or reporting of adverse events, perpetuation of bias in research, data privacy and security concerns, and lack of human oversight. We provide 10 recommendations to guide the safer adoption of artificial intelligence for mental health in Australia. These include the creation of a National AI in Mental Health Expert Advisory Group, evidence-based national guidelines, expanded data collection on artificial intelligence use in mental health, Australian-led research that considers priority populations, development of Australian databases for training artificial intelligence models, targeted investment in artificial intelligence technologies that can support an under-resourced mental health workforce, creation of artificial intelligence mental health literacy resources for the Australian public, and regulations that hold developers and providers accountable for the safety of their technologies.
