Abstract
Background
Mental illness remains a major global health challenge largely due to the absence of definitive biomarkers applicable to diagnostics and care processes. Although remote sensing technologies, embedded in devices such as smartphones and wearables, offer a promising avenue for improved mental health assessments, their clinical integration has been slow.
Objective
This scoping review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, explores validation studies of remote sensing in clinical mental health populations, aiming to identify critical factors for clinical translation.
Methods
Comprehensive searches were conducted in six databases. The analysis, using narrative synthesis, examined clinical and socio-demographic characteristics of the populations studied, sensing purposes, temporal considerations and reference mental health assessments used for validation.
Results
The narrative synthesis of 50 included studies indicates that ten different sensor types have been studied for tracking and diagnosing mental illnesses, primarily focusing on physical activity and sleep patterns. There were many variations in the sensor methodologies used that may affect data quality and participant burden. Observation durations, and thus data resolution, varied by patient diagnosis. Currently, reference assessments predominantly rely on deficit-focused self-reports, and socio-demographic information is underreported; representativeness of the general population is therefore uncertain.
Conclusion
To fully harness the potential of remote sensing in mental health, issues such as reliance on self-reported assessments and the lack of socio-demographic context pertaining to generalizability need to be addressed. Striking a balance between resolution, data quality, and participant burden, whilst clearly reporting limitations, will ensure effective technology use. The scant reporting on participants’ socio-demographic data suggests a knowledge gap in understanding the effectiveness of passive sensing techniques in disadvantaged populations.
Introduction
Mental illness remains a significant global health concern, ranking among the top ten leading burdens of disease since 1990. 1 For patients who seek treatment for mental illness, accurate diagnosis is necessary for the provision of specific interventions that will improve outcomes. Unfortunately, unlike some other chronic conditions, e.g. diabetes, there are no objective biomarkers available for mental illness diagnosis, and the current paradigm relies on intermittent clinical interviews or subjective self-reporting, which may provide an incomplete and inaccurate picture of patients’ conditions. 2
To improve the quality of observations that inform reliable diagnoses and progress monitoring, researchers have turned to remote sensing digital technologies such as mobile phones, wearables, and actigraphs 3 to enable the collection of behavioral and physiological data without requiring direct contact or self-reporting. This process, known as remote sensing, 4 allows for continuous observation of patients in their living environments, providing valuable observational data for mental health assessments and diagnosis. 5 For example, automatically captured sensor data has been linked to individuals’ daily behaviors 6 or to changes in mood symptoms, functioning levels, and early warning signs of suicidal ideation or of onset and phase changes in bipolar disorder. 7
The unprecedented, accelerated growth in smartphone and wearable device ownership worldwide suggests that remote sensing has the potential to transform clinical and research practices in mental health.8,9 By combining clinical interview-like questionnaires with remote sensing, we can obtain frequent and near real-time measurements of core symptoms and behaviors of mental health disorders from a patient's living environment, providing unprecedented proximity to their occurrence. This approach can mitigate the reliance on potentially unreliable retrospective accounts. However, to implement remote sensing in practice, we need evidence demonstrating concordance between passive sensor data and validated assessments. Existing experimental studies validating remote sensing in populations with mental ill-health largely indicate the validity of remote sensor measurements when compared to validated psychiatric assessments.3,10,11
While validity is important, it is not enough to integrate remote sensing into clinical and research practice. The accuracy and relevance of remote sensing data are also influenced by temporal resolution, which can refer both to the duration of sampling and to the time between samples. The temporal resolution needed for remote sensing varies depending on the symptoms or behaviors being monitored and the clinical context. For instance, mental health disorders with rapidly changing symptoms may require frequent and near real-time monitoring, while others with more stable or slowly changing symptoms can be monitored over longer intervals. The complexity of technical details, combined with differing reporting norms across various fields, has led to duplicated and unreproducible research efforts. 10 As a result, the translation of remote sensing into mental health clinical practice has been slow. In response to this gap, this study conducted a systematic review of remote sensing validation studies undertaken in populations with depression, schizophrenia, bipolar and addiction disorders to identify critical factors relevant to informing translation into clinical practice.
Specifically, we aimed to address the following questions in the studies where remote sensing was validated:
1. What were the distinctive characteristics of the population and the environment in which validation took place?
2. What was the context for the validation of remote sensing, including the sensing purpose, and the nature and method of reference assessments used for validation?
3. Which specific parameters were sensed, and what were the temporal factors influencing the sensing process?
Together this information aims to bridge the gap in knowledge related to translating remote sensing into practice in that it may assist in providing data that better informs the clinical diagnosis and treatment of patients with mental illness.
Methods
Search strategy
A systematic search of Medline OVID, Medline PubMed, IEEE, ACM, Scopus, and PsycINFO databases was performed on December 3, 2019 using search terms developed in collaboration with a research librarian at Flinders University, as shown in Table 1 (see Appendix 1 for detailed search strategy). Searches were repeated on July 15, 2022 to collect any further works that had been published since the initial search. The search results are presented in Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) format in Figure 1.

PRISMA publication review chart. Adapted from: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. doi: 10.1136/bmj.n71.
Search terms organized by PICO framework.
Eligibility criteria
Studies were included only if they (i) included populations with clinically diagnosed depression, anxiety, stress, schizophrenia, bipolar or substance use disorders, as these conditions collectively represent the common constellation of patients encountered in clinical practice; (ii) gathered sensor data from participants using a wearable device or smartphone application, reflecting technology readily accessible to consumers with potential for practical integration in real-world clinical settings; (iii) reported collection of a validated assessment measurement of mental health outcomes from participants as a reference assessment; (iv) described a statistical relationship between the sensor data and the reference mental health outcome assessment; (v) described the type and nature of collected sensor data; and (vi) involved collecting sensor data for at least 7 days. The stipulation of a minimum 7-day data collection period was established to align with real-world psychiatry assessments, capturing meaningful patterns and variations in mental health-related data, thus enhancing the clinical relatability and representativeness of the study findings. By specifically selecting studies that employed validated assessment measurements of mental health outcomes and established a statistical relationship between the sensor data and the reference mental health outcome assessment, we ensured that only evidence-based real-world psychiatry assessments were considered in our analysis of translation and implementation issues. Non-English language publications, papers not published through peer review, studies providing no statistical outcomes, papers published before January 1, 2009, conference reports, protocols, in-vitro or lab-based studies, and editorials or letters of opinion were excluded from the analysis. Search terms are included in Appendix 3.
Study selection processes
Screening of included papers was completed by three reviewers (AN, CL, YK), while data extraction was completed by another three reviewers (AN, MT, SI). Two authors were involved at each of the screening and data extraction stages, and any disagreements at these stages were resolved to consensus between the authors through discussion. Unresolved discrepancies were resolved by a fourth author, NB. The process was carried out using Covidence software.
Data extraction and synthesis
Covidence 2.0 was used for screening and extracting information from included studies. The following variables were extracted from all eligible papers: (a) reported demographics, (b) clinical diagnosis of the study population, (c) type and combination of reference assessments used for validation, (d) mode of administration (i.e. clinical interview, self-report, cognitive test) of the reference assessments used, (e) types of modalities sensed (e.g. activity, sleep, speech, digital interactions, physiological signals), (f) intended purpose, defined as the types of decisions made using sensing data (e.g. detecting the potential for disease, tracking the progression of existing disease, forecasting future prognosis and intervention), (g) time resolution relevant to sensors (the interval between consecutive samples), (h) duration of the study over which sensor data was gathered, and (i) experimental design type (the chronological order of sensor data collection and reference assessment administration). Rationales used to include or exclude participants were derived qualitatively through thematic content analysis of the recruitment criteria descriptions included in each paper. 12 Results are presented using descriptive statistics and narrative synthesis, and visualizations of the data were created using the Python software library seaborn (version 0.12.0).
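As a hypothetical sketch of this synthesis step, the extracted variables could be tabulated and visualized with pandas and seaborn. The column names and example values below are invented for illustration and are not the review's actual extraction data; the plotting call mirrors the kind of interval-versus-duration view shown later in Figure 3.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display, e.g. on a server
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# One row per included study; columns loosely mirror extraction items
# (e), (g) and (h). Values are illustrative placeholders only.
studies = pd.DataFrame({
    "study_id":      [12, 13, 14, 15],
    "diagnosis":     ["depression", "bipolar", "depression", "bipolar"],
    "sensor_type":   ["activity", "activity", "sleep", "phone interaction"],
    "interval_s":    [60, 30, 120, 300],   # (g) seconds between samples
    "duration_days": [7, 91, 70, 84],      # (h) observation period
})

# Descriptive statistics of the kind reported in the narrative synthesis.
summary = studies.groupby("diagnosis")["duration_days"].agg(["mean", "std"])
print(summary)

# Measurement interval vs duration, with duration on a logarithmic scale.
ax = sns.scatterplot(data=studies, x="duration_days", y="interval_s",
                     hue="sensor_type")
ax.set_xscale("log")
plt.savefig("interval_vs_duration.png")
```

Grouping by diagnosis in this way reproduces per-condition duration summaries (mean and standard deviation) of the kind reported in the Results.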
Results
Overview of studies
The PRISMA chart shown in Figure 1 illustrates the entire study selection process. The search query returned 15,751 results, which reduced to 6520 studies after duplicates were removed. After further review of titles and abstracts, another 6245 studies were excluded. Full text was retrieved for the remaining 275 studies, of which 225 were excluded for the reasons shown in the PRISMA chart. As a result, a total of 50 passive sensing experimental studies in mental illness were included12–61 (Tables 1 and 2, Figure 2). Of these studies, 18 were in depression,13–30 11 in schizophrenia,19,31–40 17 in bipolar,41–57 3 in anxiety,18,58,59 2 in substance use disorder,60,61 and 1 study was transdiagnostic 62 (Table 1). The median number of participants enrolled per study was 109 (range = 6–1784) (Table 1). Of the studies that reported the gender of enrolled participants (n = 48), the average proportion of female participants was 53.23% per study (range = 0–92%) (Table 1). By geographical location, the United States (n = 20) was the most common country, and only three studies (two from Brazil, one from China) were from outside high-income countries (Table 1).

Mental health topics of selected studies.
Summary of reviewed studies.
Characteristics of population and study environment
The sociodemographic information of enrolled participants was not uniformly reported. Of the included studies, 94% reported gender, 92% reported age, 26% reported ethnicity, 20% reported marital status, 36% reported education, and 28% reported employment status (Table 3).
Demographic overview of included studies.
Of the studies that reported details on recruited participants, 44 enrolled healthcare patients, 2 enrolled students, and 4 enrolled community members (Table 3). Of the studies that reported their recruitment source, 38 enrolled through clinics,14,16,19–26,29,31–50,52,54–57,60–62 3 through the internet,27,30,53 and 2 through universities.15,17
Of the 50 studies, 39 reported a funding source12–17,19,21–23,25–27,29–34,36,37,40–48,50–53,55–58,60–62 and 14 declared that one or more authors had a conflict of interest.21,36,37,40,41,43–48,50,53,56 Twelve of the 50 studies reported using incentives for participation. The incentives ranged from financial payments22,25,29,32,36,37,41,51,61 to gifting of the technology used in the study, 16 to gifting a t-shirt with an opportunity to win a smartphone 63 (Table 3).
A detailed overview of all variables used in deciding whether to include or exclude participants is shown in Appendix 3. These variables were (a) diagnosis, (b) self-reported health behavior, (c) social, economic and family factors, and (d) severity of symptoms. The 253 identified variables were distributed across diagnosis (52.9%), health behaviors (19.8%), social and economic factors (14.2%) and other (3.9%).
Remote sensing validation context
Intended purpose of remote sensing
Remote sensing was used for three purposes: (i) to detect the potential presence of a disease not yet known15,17,25,39,49,50,52,56; (ii) to track the progression of an established illness14,16,18–24,26–28,32–38,40–46,48,51,53–55,57,58,61,62; and (iii) to predict the progression of an established illness.13,30,31,59
Reference assessments type and mode of administration used for validation
The reference or comparator mental health assessments that informed diagnosis administered in the remote sensing studies varied substantially (see Table 4).
Remote sensor parameters and temporal resolution.
Clinician interviews,18,35–37,40,43–48,52–54,56,57,59–61 cognitive assessment tasks,21,26,35,39 and self-report scales13–16,18–23,25–30,32–34,36–40,42–48,50–53,56–58,60–62 comprised the main modes of reference assessment.
For the studies using self-report for reference mental health assessments (n = 41/50), the assessment scales used measured diagnostic domains spanning multiple deficit (loss of function) domains,1,14,32,33,38,39,62 depression/mood,12–15,17–22,24–27,29,35,36,39,41–43,45,49–52,55,56 anxiety and stress,15,18,25,58 psychosis/mania,19,32,34,36–38,40,43–48,50–53,57 and other areas13,15,29,32,33,58,60–62 (Table 3).
Studies that reported using interviews for reference or comparator mental health assessments (n = 18), included 10 for bipolar disorders,43–48,52–54,56,57 4 for schizophrenia,35–37,40 2 for substance use,60,61 1 for anxiety, 59 and 1 for depressive disorders. 18
Of the studies that used self-report scales for reference or comparator mental health assessments (n = 41), 16 were for depressive disorders,13–16,18–25–30 13 for bipolar disorders,42–48,50–53,56,57 9 for schizophrenia,19,32–34,36–40 2 for anxiety,18,58 and 2 for substance use disorders.60,61
Experimental designs used
Experimental designs that gathered remote sensor data and reference or comparator mental health diagnostic assessments varied in how the data were collected.
Twenty-eight studies described the data collection timeline methodology with sufficient detail to discern that there were four overall designs for these studies:
Design 1: Twelve studies involved a reference assessment, followed by sensor data collection, and then one or more further reference assessments.17,20,22–25,43–47,50,59
Design 2: In 9 studies, a reference assessment was followed by sensor data collection, and then a single reference assessment was performed.14,16,19,28,30,51–53,61
Design 3: In 5 studies, reference assessment preceded sensor data collection.23,26,34,38,39
Design 4: In 2 studies, sensor data collection preceded the reference assessment.33,56
Sensor characteristics
Type of sensing parameters
Ten different sensor types were used across studies (Table 3). In order of the number of studies that used a particular sensor measurement, these were: (1) activity,13–23,25,26,29,32,34–36,38,39,41–44,48,49,51–55,59,61 (2) sleep,13,17–20,32,36,38–40,42,48,49,51,53–55,58,59 (3) interaction with phone,24,25,27,41–47 (4) location, and (5) other sensors: Bluetooth, 22 conversations,31,47,50 skin temperature,22,34 skin conductance,34,61 heart rate,34,61 and ambient light. 21
Sleep and activity sensors used relatively short measurement intervals (in the range of 30 seconds to 2 minutes), whereas location sensors were used in a small number of studies with relatively longer measurement intervals (on the order of 5–15 minutes).
Time resolution for sensor sampling
The average pre-specified time period for gathering sensor data per study was 44 days (SD = 76.2).
The time interval between consecutively sensed parameters varied by the type of sensor used, as shown in Table 4. The two most frequently sensed parameters, activity and sleep, had averages of 81 and 86 seconds between consecutive samples, respectively. The two least frequently sensed parameters, ambient light and Bluetooth, had reported intervals between consecutive samples of 900 and 600 seconds, respectively.
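Per-sensor interval averages of this kind can be computed mechanically once the intervals are extracted. The sketch below uses made-up per-study intervals (chosen so that the activity and sleep means come out at 81 and 86 seconds) rather than the review's actual extracted values:

```python
# Hypothetical intervals (seconds) between consecutive samples, per study,
# grouped by sensed parameter. Values are illustrative only.
intervals = {
    "activity":      [30, 60, 114, 120],
    "sleep":         [60, 60, 104, 120],
    "ambient light": [900],
    "bluetooth":     [600],
}

def mean_interval(samples: list[float]) -> float:
    """Average interval between consecutive samples, in seconds."""
    return sum(samples) / len(samples)

# Report sensors from finest to coarsest average time resolution.
for sensor in sorted(intervals, key=lambda s: mean_interval(intervals[s])):
    print(f"{sensor}: {mean_interval(intervals[sensor]):.0f} s between samples")
```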
Time period of sensor data
There was substantial variation in the time period over which sensor data was gathered. Studies that enrolled patients with bipolar disorder had an average observation period of 61 days (SD = 90.8), compared with 16.4 days (SD = 24.2) for patients with schizophrenia and 44 days (SD = 86.6) for patients with depressive disorders.
Figure 3 (Appendix 2) shows the relationship between the two aspects of sensor time resolution (measurement interval and duration of observation), as well as their interaction with sensor type, study purpose, study design and the types of condition assessment used. The data are further grouped by diagnosed condition. Only 29 of the 50 studies (58%) reported both measurement interval and duration; the remaining studies are not included in the figure.

Graphs show the relationship between sensor measurement interval and duration of remote sensing (with duration on a logarithmic scale). Measurement interval is shown as one of seven discrete values. Numerical labels indicate the number of the publication in the References. Where multiple studies reported the same interval and duration, markers for the studies are displayed side by side with arbitrary ordering. Where multiple sensor types were used in the same study (top row) or multiple forms of assessment were used (bottom row), markers belonging to the same study are grouped together inside a rectangular box, with arbitrary ordering inside the box. Note that three studies14,16,24 made use of multiple sensors at multiple measurement intervals, and hence are plotted in two separate locations each. Where a study shown in the top row has no corresponding marker in the same location in a lower row (e.g. study [14] has no marker for Study Design), this indicates that the information in question could not be obtained from the publication.
Discussion
Reliance on self-reported assessments in remote sensing
The three objectives of remote sensing in mental health (detecting unknown diseases, tracking the progression of known diseases, and forecasting future outcomes) are congruent with findings from other studies. 64 However, our study revealed shortcomings in the reference assessments employed. Self-reports were the most common form of reference assessment. Self-report scales can be prone to bias, and this finding suggests that the biases and limitations inherent in self-reports may be perpetuated in the sensing approach. Furthermore, the majority of selected assessments focused on detecting deficits, such as symptoms and impairments, in contrast to measures of function and strengths, which are just as critical to gaining a comprehensive understanding of an individual's mental health. 65
Lack of socio-demographic details
Our study reveals a gap in understanding the applicability of remote sensing approaches to disadvantaged populations, partly due to insufficient reporting of sociodemographic information. We observed that most research in this area is conducted in developed countries, leaving a dearth of studies on geographically diverse populations with limited mental health support options. Additionally, challenges in recruiting males have resulted in a gender imbalance in study participation, while the exclusion of low-functioning individuals raises concerns about the representativeness of findings and potential discrimination. Coordinated efforts toward shaping guidelines for ethical, inclusive and representative remote sensing mental health research are imperative. This effort should involve diverse perspectives from mental health researchers, ethicists, and representatives of different populations. Regulatory bodies can then monitor adherence to these guidelines and ensure that the benefits of remote mental health sensing research are accessible to all, regardless of gender, socioeconomic status, or other sociodemographic factors.
Adoption details
Our study also revealed a gap in understanding the human factors that influence acceptability and adoption of remote sensing in existing research. Although some studies offered financial incentives, reporting on participant incentives remained inadequate. It is important for the scientific community to understand the financial incentives offered, because this lends insight into the extent to which engagement in a given study is likely to reflect engagement in routine care. It will also be important to explore applications of remote sensing in routine care settings, with a focus on documenting engagement levels and identifying patient perspectives on the value of such tools.
While various sensor types were explored to infer mental health, activity and sleep emerged as the most commonly and frequently sensed parameters, and the average time period over which sensor data was gathered was 44 days. This duration of observation may not be long enough to harness the full potential of these methods, which typically rely on intraindividual comparisons over time. This limitation has been noted in viewpoints, 66 but our finding puts an objective number behind this concern within the field. Additionally, consideration of the clinical implications of varying sampling rates across sensor types is crucial. For instance, high sampling rates may be required for activity and sleep sensors to accurately capture changes in behavior, whereas ambient light and Bluetooth sensors may require lower sampling rates because of their slower changes. It is also essential to carefully consider the practicality and feasibility of using certain types of sensors in real-world settings. For instance, location and Bluetooth sensors, which demand significant power, may inadvertently disrupt other phone functionalities, thereby escalating the participant burden. While the use of multiple sensors and higher resolution has the potential to enhance data quality, 67 they may also be perceived as invasive, raising concerns about privacy and affecting participant willingness to engage in real-world studies, in turn reducing the adoption of remote sensing in real-world settings. The diversity in sensor types, sampling rates, and data collection durations observed in our study underscores the necessity of standardizing approaches to sensor data collection in validation studies to advance remote sensing as a viable alternative paradigm in mental health research. 68
There remain a number of other considerations prompted by relevant recent literature. Based on this scoping review, it is unclear that the extant sensor methods have been sufficiently investigated and validated as proposed by other authors to constitute a “digital phenotype.”64,69 While the concept of a phenotype and corresponding genotype is relevant to mental illness, it is perhaps more achievable to derive endophenotypes 70 that describe some salient measurements of physiological characteristics of those with mental illness, such as activity levels in those who are depressed. In this context, our paper highlights that there remains much more to be investigated regarding what are suitable sensor measurements and whether these are valid characterizations of physiologic phenomena that are indicative of a particular mental illness. Similarly, other related research 71 points to another crucial area of investigation, that of user-friendliness and acceptability of remote sensor measurements for patients and clinicians, and which will likely require customization for clinical use.
Future directions
Future research should address the identified gaps in understanding the applicability of remote sensing to disadvantaged populations and in comprehending the human factors influencing acceptability and adoption. Potential research directions for maximizing the benefits of remote sensing technology and addressing ethical concerns could focus on: (i) designing best-practice reporting standards to capture minimum sociodemographic data in remote mental health sensing research protocols; (ii) increasing research with longer monitoring windows to enhance opportunities for intraindividual analyses; (iii) understanding the feasibility of implementing these methods in routine care; and (iv) expanding the application of these methods to mental health diagnoses beyond mood disorders.
Limitations
The review has several limitations that need to be considered. First, owing to varied methodologies, objectives and data collection practices, we could not evaluate the accuracy of predictions or diagnoses based on remote sensing. Second, we focused on peer-reviewed, academic studies published and not studies or examples of applications from industry or other venues. Third, our review focused on mental health and does not reflect the state of the remote sensing literature in other clinical populations. Nevertheless, this scoping review provides a comprehensive overview of psychiatric passive sensing academic literature in a narrative form.
Conclusion
The study highlights the nascent potential of remote sensing in mental health research and clinical practice. However, this analysis also reveals gaps in the field, including limited reporting on sociodemographic information, a focus on resource-rich countries, gender imbalance in study populations, and exclusions based on social determinants. While remote sensing shows promise in mental health, there is a need for standardization in study designs, greater inclusivity and attention to sociodemographic factors, attention to underrepresented diagnoses, longer monitoring windows, and improved transparency in reporting all protocol components that may be relevant for future implementation. Finally, the promise of remote sensing must also be weighed against the limitations of current methodologies, the evidence base64,69 and user (patient/clinician) acceptability, 71 well before the potential for characterization of mental illness using sensed parameters can be realized.
Supplemental Material
sj-docx-1-dhj-10.1177_20552076241260414 and sj-docx-2-dhj-10.1177_20552076241260414: Supplemental material for “Remote sensing mental health: A systematic review of factors essential to clinical translation from validation research” by Niranjan Bidargaddi, Richard Leibbrandt, Tamara L Paget, Johan Verjans, Jeffrey CL Looi and Jessica Lipschitz in DIGITAL HEALTH.
Acknowledgments
The authors wish to acknowledge the contributions of Amy Nielson, Lidia Thorpe, Sarah Immanuel, Christine Huien Lee, Yixin Kwan, and Meseret Teferra in the data collection for this article and Dan Thorpe for analysis assistance.
Contributorship
NB conceived the study and wrote the first draft of the manuscript. RB and TP were involved in data analysis. All authors reviewed and edited the manuscript and approved the final version of the manuscript.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval
Ethics approval was not required for this systematic review.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Informed consent
Informed consent was not required for this systematic review.
Supplemental material
Supplemental material for this article is available online.
| ID | First Author | Year | Interview | Cognitive Assessments | Self-report (Global & Functional / Depression & Mood / Anxiety & Stress / Psychosis & Mania / Other) | Duration (days) | Experiment Design Type |
|---|---|---|---|---|---|---|---|
| 12 | Alcantara | 2018 | CES-D | WHIIRS, ESS | 7 | 5 | |||||
| 13 | Averill | 2018 | CGIS, FAST | QIDS | 3 | 2 | |||||
| 14 | Ben-Zeev | 2015 | PHQ-9 | PSS | UCLA Loneliness Scale | 70 | 5 | ||||
| 15 | Bewernick | 2017 | MADRS | 7 | 2 | ||||||
| 16 | Chikersal | 2021 | 16 | 1 | |||||||
| 17 | Difranceso | 2019 | CIDI | IDS | Beck A | 14 | 5 | ||||
| 18 | Fasmer | 2016 | MADRS | BPRS | 11 | 2 | |||||
| 19 | Finazzi | 2009 | K-SADS-PL, CDRS | 63 | 1 | ||||||
| 20 | Jacobson | 2019 | MINI | HAM-D | 5 | ||||||
| 21 | Kumagai | 2019 | PHQ-9 | 365 | 1 | ||||||
| 22 | Merikanto | 2017 | K-SADS-PL, HAM-D | 18 | 3 | ||||||
| 23 | Mesquita | 2016 | 91 | 1 | |||||||
| 24 | Moukaddam | 2019 | PHQ-9, HAM-D | HAM-A | 8 | 1 | |||||
| 25 | O'Brien | 2017 | MINI, NART, EHI | MADRS, GDS-15 | 7 | 3 | |||||
| 26 | Saeb | 2015 | PHQ-9 | 14 | 5 | ||||||
| 27 | Saeb | 2016 | PHQ-9 | 10 | 2 | ||||||
| 28 | Stavrakakis | 2015 | MCTQ | 30 | 5 | ||||||
| 29 | Wahle | 2016 | PHQ-9 | 14 | 2 | ||||||
| 30 | Barnett | 2018 | 84 | 5 | |||||||
| 31 | Bromundt | 2011 | CGIS | BPRS, PANNS | SAS | 21 | 5 | ||||
| 32 | Bueno-Antequera | 2018 | SF-36 | BSI-18 | 7 | 4 | |||||
| 33 | Cella | 2019 | PANNS | 10 | 3 | ||||||
| 34 | Chen | 2016 | DSM-IV | VTS, GPT | 7 | 5 | |||||
| 35 | Chung | 2018 | DSM-V | CDSS | PANNS | 7 | 5 | ||||
| 36 | Depp | 2019 | CAINS 33 | CDSS | BPRS, SANS | 7 | 5 | ||||
| 37 | Janney | 2013 | GAF, CGIS | PANNS | 7 | 3 | |||||
| 38 | Kume | 2015 | BACS | GAF | 7 | 3 | |||||
| 39 | Mulligan | 2016 | DSM-IV | CDSS | PSYRATS, PANNS | 7 | 5 | ||||
| 40 | Abdullah | 2016 | 28 | 5 | |||||||
| 41 | Banihashemi | 2016 | HAM-D | 22 | 5 | ||||||
| 42 | Beiwinkel | 2016 | DSM-IV-R | HAM-D | YMRS | 365 | 1 | ||||
| 43 | Faurholt-Jepsen | 2014 | HAM-D | YMRS | 91 | 1 | |||||
| 44 | Faurholt-Jepsen | 2015 | SCAN | YMRS | 182 | 1 | |||||
| 45 | Faurholt-Jepsen | 2019 | SCAN | YMRS | 84 | 1 | |||||
| 46 | Faurholt-Jepsen | 2016 | SCAN | HAM-D | YMRS | 49 | 1 | ||||
| 47 | Gonzalez | 2014 | ISD-C-30 | YMRS | 7 | 5 | |||||
| 48 | Grierson | 2016 | 14 | 5 | |||||||
| 49 | Grunerbl | 2015 | HAM-D | YMRS | 84 | 1 | |||||
| 50 | Janney | 2014 | HAM-D | YMRS | 7 | 2 | |||||
| 51 | Krane-Gartiser | 2019 | DIGs, FIGs | MADRS | YMRS | 21 | 2 | ||||
| 52 | McGlinchey | 2014 | DSM-IV | IDS | YMRS | 60 | 2 | ||||
| 53 | Merikangas | 2019 | DSM-V | 14 | 5 | ||||||
| 54 | Ortiz | 2016 | 14 | 5 | |||||||
| 55 | Palmius | 2017 | DSM-IV | QIDS | 7 | 4 | |||||
| 56 | McGowan | 2019 | IPDE | QIDS | ASRM | 4 | 5 | ||||
| 57 | Cohodes | 2019 | MASC | PSQI | 18 | 5 | |||||
| 58 | Jacobson | 2021 | CIDI | 7 | 1 | ||||||
| 59 | Epstein | 2014 | DIS-IV | ASI | 112 | 5 | |||||
| 60 | Kennedy | 2015 | DIS-IV, SCID | ASI | 49 | 2 | |||||
| 61 | Gloster | 2021 | MHC-SF | BSCL, Psyflex | 7 | 5 | |||||
Experiment design types: 1 = gold standard → sensor → gold standard (repeated); 2 = gold standard → sensor → gold standard (once off); 3 = gold standard → sensor; 4 = sensor → gold standard; 5 = unknown.
