Abstract
This article examines how the concept of explainability must be fundamentally reconceptualised when applied to digital phenotyping in adolescent mental health contexts. After a brief overview of digital phenotyping, the article sketches its principal ethical challenges, which stem largely from the use of AI and include the well-known problems of bias, distortion, black-box models, and lack of transparency. These concerns demand even greater care when the subjects are adolescents. The article then proposes that explainability should be the central ethical requirement to be satisfied before such technologies are deployed, and it traces the connection between explainability and related normative concepts such as trust and informed consent. While existing debates about AI transparency in healthcare often assume universal standards of explainability, this analysis demonstrates that the developmental, relational, and epistemic particularities of adolescence demand a qualitatively different understanding of what it means for algorithmic systems to be ‘explainable’. The paper argues that explainability in this context cannot be reduced to technical transparency or procedural information disclosure, but must be reimagined as a multidimensional, developmentally sensitive concept that encompasses relational dynamics, identity formation processes, and the cultivation of epistemic autonomy. This reconceptualisation reveals that traditional approaches to explainable AI – developed primarily for adult populations and clinical professionals – fail to address the specific ways in which adolescents engage with, understand, and are shaped by algorithmic categorisations of their mental states.