Abstract
A wide array of digital supports (such as apps) have been developed for the autism community, many of which have little or no evidence to support their effectiveness. A Delphi study methodology was used to develop a consensus on what constitutes good evidence for digital supports among the broader autism community, including autistic people and their families, as well as autism-related professionals and researchers. A four-phase Delphi study consultation with 27 panel members resulted in agreement on three categories for which evidence is required: reliability, engagement and effectiveness of the technology. Consensus was also reached on four key sources of evidence for these three categories: hands-on experience, academic sources, expert views and online reviews. These sources were weighted differently across the three categories.
Lay abstract
Digital supports are any type of technologies that have been intentionally developed to improve daily living in some way. A wide array of digital supports (such as apps) have been developed for the autism community specifically, but there is little or no evidence of whether they work or not. This study sought to identify what types of evidence the autistic community valued and wanted to see provided to enable an informed choice to be made regarding digital supports. A consensus was developed between autistic people and their families, practitioners (such as therapists and teachers) as well as researchers, to identify the core aspects of evidence that everyone agreed were useful. In all, 27 people reached agreement on three categories for which evidence is required: reliability, engagement and the effectiveness of the technology. Consensus was also reached on four key sources of evidence for these three categories: hands-on experience, academic sources, expert views and online reviews. The resulting framework allows for any technology to be evaluated for the level of evidence identifying how effective it is. The framework can be used by autistic people, their families, practitioners and researchers to ensure that decisions concerning the provision of support for autistic people are informed by evidence, that is, ‘evidence-based practice’.
Autism spectrum disorder (hereafter autism) is a life-long condition characterised by persistent differences in social communication and interaction alongside restricted and repetitive patterns of behaviour, interests or activities (American Psychiatric Association, 2013). Studies in Asia, Europe, and North America have identified individuals with autism with an average prevalence of between 1% and 2%, diagnosed in three to four times as many males as females (Baio et al., 2018). The impact on the economy from intervention costs, lost earnings, and care and support for children and adults with autism is estimated at £32 billion per year in the United Kingdom and $175 billion per year in the United States (Buescher et al., 2014). Supports developed for autism have the potential to help members of the autistic community achieve better life quality, greater autonomy and inclusion (Brosnan et al., 2019). This is best achieved through participatory research approaches which critically reflect on the current status of support with the autistic and broader autism communities, in order to identify how the field can develop more inclusively (Fletcher-Watson et al., 2019; Parsons et al., 2019).
Reviews of the academic literature have shown that there is a growing number of digital supports for autistic individuals (Chia et al., 2018; Grynszpan et al., 2014; Odom et al., 2015; Pennington, 2010; Ploog et al., 2013; Ramdoss et al., 2011; Virnes et al., 2015; Wainer & Ingersoll, 2011; Wong et al., 2015; Zervogianni et al., in press). Digital supports are defined as ‘any electronic item/equipment/application/or virtual network that is used intentionally to increase/maintain, and/or improve daily living, work/productivity, and recreation/leisure capabilities’ (Odom et al., 2015, p. 3806). In the past decade touchscreen, tangible and immersive digital technologies have become increasingly popular and accessible, with many autistic people and their parents/carers reporting high levels of digital technology use for supporting both leisure and academic pursuits (Knight et al., 2013; Laurie et al., 2018; MacMullin et al., 2016; Pennington, 2010; Shane & Albert, 2008). Digital technologies have been developed to support autism in areas such as social skills and social interaction (for reviews see Camargo et al., 2014; Grossard et al., 2017; Ramdoss et al., 2011; Schlosser & Wendt, 2008) and emotion recognition (for review see Berggren et al., 2018). Digital technology, both in school and home settings, is being used in a variety of supportive ways such as increasing autonomy, reducing anxiety and increasing social opportunities for autistic people (Hedges et al., 2018), encapsulated by the term ‘digital support’. To illustrate the scale of development in this area, one curated database of autism apps (http://www.appyautism.com/en/) lists over 400 apps for the iOS format alone.
Digital supports aim to facilitate a wide range of outcomes for autistic people, across a wide variety of ages (Wong et al., 2015). Despite the extensive use of digital supports for autism, most digital supports available to the autistic community have little or no evidence to support their effectiveness (Constantin et al., 2017; Kim et al., 2018). Studies reporting on the effects of digital supports are often in-depth case-study reports (e.g. De Leo et al., 2011; Hagiwara & Smith Myles, 1999; Herrera et al., 2008; Mechling et al., 2009; Mechling & Savidge, 2011; Parsons et al., 2006). Beginning with 29,105 potential articles, Wong et al. (2015) identified 27 focused intervention practices for autism that met their inclusion criteria. An intervention practice met the level of research evidence necessary to be included if it was supported by (1) two high-quality experimental or quasi-experimental design studies conducted by two different research groups, or (2) five high-quality single case design studies conducted by three different research groups and involving a total of 20 participants across studies or (3) a combination of research designs that must include at least one high-quality experimental/quasi-experimental design, three high-quality single case designs and was conducted by more than one researcher or research group (Wong et al., 2015). Using similar inclusion criteria for research evidence, Knight et al. (2013) identified 29 studies that met these inclusion criteria; however, of these studies only three single-subject studies and no group studies met the criteria for quality of research evidence. Grynszpan et al. (2014) conducted a meta-analysis of digital technology supports for autism and identified only 22 out of 379 (6%) using pre-post group design studies. Studies were included based on criteria for quality of evidence that took into account participants’ diagnoses, outcome measures and interaction with digital technology. 
Of the 22 pre-post group studies, only 10 followed a randomised controlled design (2.6% of the initial sample). The analysis of efficacy on these 10 studies provided evidence for a beneficial effect of technology-based training for autistic children overall, irrespective of age and intelligence quotient (IQ). The effect size was in the small-to-medium range, with a significant heterogeneity among studies. Together, these reviews indicate that digital supports can be effective for autism, but only a very small proportion of the research evidence from group or single case designs is of sufficient quality to permit an informed decision on whether to use the digital support.
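The three alternative evidence thresholds of Wong et al. (2015) described above amount to a simple decision rule, which can be sketched as follows. This is an illustrative reconstruction only: the function name and parameters are ours, and the third route simplifies ‘more than one researcher or research group’ to a count of distinct research teams.

```python
def meets_wong_criteria(n_group_studies, n_group_teams,
                        n_sc_studies, n_sc_teams, n_sc_participants):
    """Sketch of the Wong et al. (2015) inclusion thresholds for an
    'evidence-based' intervention practice (illustrative only).

    n_group_studies    -- high-quality experimental/quasi-experimental studies
    n_group_teams      -- distinct research groups behind those studies
    n_sc_studies       -- high-quality single case design studies
    n_sc_teams         -- distinct research groups behind those studies
    n_sc_participants  -- total participants across the single case studies
    """
    # Route 1: two high-quality group-design studies by two different groups
    route1 = n_group_studies >= 2 and n_group_teams >= 2
    # Route 2: five high-quality single case studies by three groups,
    # with at least 20 participants in total across studies
    route2 = (n_sc_studies >= 5 and n_sc_teams >= 3
              and n_sc_participants >= 20)
    # Route 3: a combination -- at least one group design plus three single
    # case designs, conducted by more than one researcher or research group
    route3 = (n_group_studies >= 1 and n_sc_studies >= 3
              and (n_group_teams + n_sc_teams) >= 2)
    return route1 or route2 or route3
```

For example, a practice backed by two group-design studies from two independent teams passes via the first route, whereas a single group-design study plus two single case studies falls short of all three.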
Under these circumstances, using evidence to make an informed choice about supports for members of the autistic community and broader autism community (including professionals and researchers) is challenging. The digital supports that do have published evidence of effectiveness are frequently developed in research projects, and are rarely made available to the autistic consumer (Constantin et al., 2017). Dijkers et al. (2012) argue that research findings and related expert opinions represent only one source of potential information influencing support-related decisions in health professions and sciences in general. Other sources of potential evidence include (1) personal (e.g. own experience and expertise as well as recommendations from peers), (2) professional practice guidelines (e.g. clinical recommendations) and (3) personal values and preferences (alongside societal values and norms). In addition, online information, from a product website to digital store reviews, may be a potential source of evidence for digital supports, especially when these are commercially available. Such sources of evidence have varying degrees of perceived independence, and relevance to the consumer’s priorities. The abundance of digital supports for autism and the lack of research evidence makes it difficult for the autistic and broader autism communities to select the most appropriate digital support for their needs. A framework to evaluate the effectiveness of digital supports from multiple sources of potential evidence is urgently required to support evidence-based practice (EBP) and is the aim of this study.
EBP describes the integration of the best available research, clinical expertise, patient values and circumstances and healthcare system policies (Dijkers, 2011; Sackett et al., 1996). EBP has its origins within medicine, and clinical expertise refers to both the clinician’s individual knowledge acquired through clinical experience and practice, as well as external clinical evidence based on relevant scientific research using the best available methodologies, with meta-analyses and randomised controlled trials (RCTs) being considered ‘gold standard’ methods (Sackett et al., 1996). This is integrated with the values and circumstances within the context of broader healthcare policy to ensure that practice is evidence-based. If a practice is not evidence-based, it risks being inapplicable to a specific patient, out of date or even potentially harmful to the patient.
Efforts have been made to extend EBP to primary care and specialised clinics for autism (Anagnostou et al., 2014), but defining EBP is not straightforward in this context (Mesibov & Shea, 2011). Dijkers et al. (2012) outline issues faced by professionals when trying to apply evidence-based practice. For instance, their routine clinical practice is often remote from the controlled circumstances in which an RCT is conducted. Specifically, patients often have specific comorbidities (e.g. attention deficit hyperactivity disorder (ADHD), intellectual disabilities) that do not match the inclusion criteria for an RCT. In autism, other factors include unexpected life events occurring during the evaluation period that affect the outcome of the support and, specifically for autism, the heterogeneity of the condition (Mesibov & Shea, 2011). Criteria for EBP in autism support have been proposed by Reichow et al. (2008) which take into account both group designs and single case designs. Strong, adequate or weak judgements can be made concerning the rigour of the underlying research based on common primary quality indicators, including clear and reproducible accounts of participant characteristics, dependent measures and independent variables (in addition to secondary quality indicators including inter-observer agreement, blind raters, procedural fidelity, generalisation and maintenance and social validity). Depending on the quality and quantity of the research (e.g. see the inclusion criteria described by Wong et al., 2015, above), EBP for a support for autism can then be classified as established or promising (see Reichow & Volkmar, 2010; Reichow et al., 2008). Such criteria are particularly valuable in the context of digital supports as they are more able to take account of the range of possible uses of technology, the multitude of ways technology can be personalised and the variety of potential outcome measures for digital supports. 
Evaluating rapidly developing technology-based supports in RCTs is difficult, given the mismatch between the timelines of commercial and academic progress (Fletcher-Watson, 2015).
EBP is the integration of best available research and clinical expertise with patient values. Integrating the values and opinions of the autism community within the consideration of the research evidence is therefore essential to EBP (see also Fletcher-Watson et al., 2019; Parsons et al., 2019). This study aimed to co-develop a framework for evaluating the evidence base for digital supports for autism, through better understanding of what constitutes evidence for the autistic and broader autism communities and what sources are being used to obtain that knowledge when considering digital supports for autism. We used an online, four-round Delphi study methodology, ideal for integrating the perspectives of multiple stakeholders (Hasson et al., 2000), with feedback managed by a moderator at all stages (Trevelyan & Robinson, 2015). The Delphi study methodology was selected as it has been proposed to be more effective for group-based judgement and decision-making than traditional group meetings by both increasing a group’s access to multiple interpretations and views and decreasing the negative features of group discussions such as domineering individuals and opinions (Belton et al., 2019; Hasson et al., 2000; Rowe & Wright, 2001; see also Humphrey-Murto & de Wit, 2019). The Delphi methodology was therefore chosen as an ideal format for systematically capturing and integrating opinion from a diverse group of experts, who were not co-located and remained anonymous from each other (Goodman, 1987; Hsu & Sandford, 2007). Since the method allows each individual to contribute anonymously and in their own time, the study allowed us to accommodate different communication preferences that do not include face-to-face communication and to avoid direct confrontation between people of differing opinions. Allowing participants to contribute at their own pace without having to manage live group discussions therefore made it easier to include autistic individuals.
Methods
Panel members
Four key groups of stakeholders were identified: (1) autistic people, (2) families of autistic people, (3) professionals who support autistic people and (4) researchers – all with experience of using or developing digital supports for autism and advising others on the topic (see Table 1). The literature recommends between 15 and 30 panel members (Hasson et al., 2000; Paliwoda, 1983), and we aimed for 10 participants from each of our stakeholder subgroups. We contacted members of our networks directly and invited them to take part or to recommend another expert if they were unable to participate. We only contacted people that met our inclusion criteria as an ‘expert’ and we asked those who were referred by other people to confirm they met these criteria. We defined ‘experts’ as people with the necessary experience with technology for autism to advise others. All potential participants completed a brief questionnaire detailing their experience with digital interventions for autism. Experts were therefore those who reported that they had experience using and advising others on technologies for autism and could therefore be researchers, practitioners and/or members of the autism community. As the needs of the autistic community and their immediate providers of support were of primary importance to this study, researchers were not included in the first two Delphi study rounds (see Table 1), but joined at the mid-point to help refine statements on evidence. Panel members were recruited on the basis of recommendations from autism networks and associations internationally, especially those relevant to digital technology for supporting autism (e.g. www.asdtech.ed.ac.uk). 
Panel members were recruited through personal invitations to experts from autism-related networks in different countries: Asociación Española de Profesionales del Autismo AETAPI (Spain), Autism Speaks (United States), Research Autism (United Kingdom) and Centres Ressources Autisme (France). The inclusion criteria were that panel members were adults, fluent in English and that autistic panel members had formal evidence of diagnosis. As a screener, we asked all potential panel members to discuss their knowledge and experience with digital technology including (but not limited to) touchscreen tablets, smartphones, gaming devices, computers, robots and augmentative and alternative communication (AAC) devices.
Number of panel members per round.
The age range of the community group was between 22 and 72 years (mean = 38.76, SD = 12.35), including 8 males and 17 females. In the researchers’ group there were 9 males and 3 females; no information on age was given. The sample was recruited from the United Kingdom (21), France (6), United States (3), Spain (3), Israel (3) and one each from Germany, Austria and Ireland. Some panel members fulfilled the criteria for multiple groups (e.g. autistic practitioners) but are only listed in one group here – as selected by themselves.
Procedure
The study was conducted using an online survey software (www.qualtrics.com) over four rounds (Table 2). A literature review was conducted on EBP for digital supports for autism (Zervogianni et al., in press) providing information about the goals of existing digital supports. These informed the design of the first round of the Delphi study, providing context for panel members to consider how they may seek sources of potential evidence. Panel members’ comments and ratings in each round were collected and analysed by the moderator (first author), and used to create content for the following round.
Goals and panel members in each round.
Round 1 – Brainstorming
In Round 1 the panel answered open-ended questions (see Supplemental material, Appendix I) about their goals and sources of evidence when selecting a digital support (as defined above). They were asked to think about recent experiences when choosing or recommending a digital support intended for an autistic person (potentially including themselves). A thematic analysis was performed on panel responses, and illustrative quotes were selected (Braun & Clarke, 2006). This identified recurrent themes pertaining to the purpose of digital supports and the outcomes which are sought when using digital supports. We also identified potential sources of information that panel members detailed in regard to these purposes and outcomes.
Round 2 – categorisation
The panel was asked to rate potential sources of information identified during round 1 using 5-point Likert-type scales for the following dimensions:
Relevance: whether information from this source is likely to relate to their situation;
Importance: whether information from this source is likely to be of high quality;
Usefulness: whether information from this source is likely to make a difference to their decisions/actions;
Accessibility: whether information from this source is likely to be easy to find and understand.
We also aimed to refine the list of features and outcomes of digital support the panel may want evidence for and to match sources of evidence to these features/outcomes. The list of features/outcomes was derived from comments and illustrative quotes made during round 1. The panel were presented with features/outcomes beginning with the phrase ‘You want to know whether . . .’ (see Table 3) and asked to list the sources of information they would use to find out specifically about those features/outcomes.
Desired features and outcomes of a piece of technology.
Third, in an open commentary the panel members were asked to discuss whether their personal experience was similar to specific quotes from the panel’s responses in the previous round. Those were selected to match sources of information proposed in the first round (Table 4).
Quotes from panel members.
Mean ratings for relevance, importance, usefulness and accessibility for each source of information were computed. Using thematic analysis, codes representing desired features and outcomes of digital supports were clustered into sub-themes and then top-level themes by two raters independently (first two authors). The themes were reviewed, validated and, if necessary, revised by two other independent raters (last two authors). In the first round, to gather input from the community, we had made an open-ended enquiry regarding the kind of evidence they seek when considering whether to use, or recommend that someone else use, a new technology. Specific examples of evidence were requested as illustrations of this. Analysis of these responses culminated in three high-level categories of evidence: ‘engagement’ (how the user experiences the product itself, its ease of use and attractiveness), ‘effectiveness’ (outcomes reached/directly observed changes) and ‘reliability’ (the technology is functional). The resulting output was composed of statements on potential sources of evidence grouped into these three high-level categories: that is, evidence for reliability, evidence for engagement and evidence for effectiveness. This constituted the basis of what was used in round 3 to create the first draft of the framework.
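The rating aggregation used in this round can be sketched in a few lines: each source of information receives a mean Likert score per dimension, and sources are then ranked by their overall mean across the four dimensions (as reported in Table 6). The data below are invented for illustration only.

```python
from statistics import mean

# Hypothetical 5-point Likert ratings from three panel members for two
# sources of information, on the four dimensions rated in Round 2.
ratings = {
    "hands-on experience": {"relevance": [5, 4, 5], "importance": [4, 4, 5],
                            "usefulness": [5, 5, 4], "accessibility": [4, 3, 4]},
    "online reviews":      {"relevance": [3, 4, 3], "importance": [2, 3, 3],
                            "usefulness": [3, 3, 4], "accessibility": [5, 5, 4]},
}

def overall_mean(source):
    """Mean score over all four dimensions, used to rank the sources."""
    dims = ratings[source]
    return mean(mean(scores) for scores in dims.values())

# Rank sources from highest to lowest overall mean score
ranked = sorted(ratings, key=overall_mean, reverse=True)
```

With these invented numbers, ‘hands-on experience’ ranks first overall despite being rated less accessible than ‘online reviews’, showing how the four dimensions trade off within a single ranking.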
Round 3 – refinement
Round 3 integrated the perspectives of the autistic community, families and professionals with those of researchers. The expanded panel were asked to rank and edit the statements that had emerged from Round 2 regarding what constitutes evidence. They were given the opportunity to remove statements that they thought were inappropriate or irrelevant. They were told that not all statements would make it to the final framework and that they should prioritise statements that they would want to see appear in the final framework. The moderator merged revisions that were similar, yielding a list of ranked statements for each category of evidence. For the framework to be easy to use, the number of statements per category was restricted: only the five most highly ranked statements in each category were retained.
Round 4 – finalisation
The panel was required to review and, if necessary, revise each statement in a draft framework. They were given three possibilities for each statement: (1) accept it as is, (2) make adjustments and (3) remove it from the framework. They were required to justify their edits when they chose to make adjustments or remove a statement. The panel was also asked to signal any ‘words of caution’ concerning the finalised framework. They ranked the five top statements from 1 to 5 with 1 being the most important source of evidence for them.
The moderator merged the edits suggested by the panel when they were similar and then classified them and responded as listed below. The classification was reviewed by two independent coders (last two authors). In case of disagreement between them, consensus was achieved through discussion.
1. Amendments: These are clarifications or expansions of the scope of a statement without fundamentally changing it. For each amendment, two independent coders gave a score from 1 to 3: (1) should be integrated into the statement, (2) neutral stance regarding integration in the statement or (3) need not be integrated. To be integrated, an amendment had to have a mean score of less than 2.
2. Words of caution: These are important risks or constraints associated with the statement to an extent that they should be acknowledged in conjunction with the statement. These were adjoined to the statements or category of evidence they were associated with (see Supplemental material, Appendix II).
3. Rejections: This is when a panel member opposed a statement, or criticised major aspects of it. A threshold of 90% agreement (i.e. fewer than 10% of the panel rejected a specific statement) was set for inclusion of statements in the final list, following the emerging convention in Delphi studies (Ager et al., 2010).
4. Misunderstandings: Comments that appeared to be unrelated to the statement. The statement was double-checked and reworded for clarity if needed.
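The two quantitative decision rules from this round (integrate an amendment when the mean coder score is below 2; retain a statement only when fewer than 10% of the panel rejected it) can be sketched as follows. The function names are ours, for illustration.

```python
from statistics import mean

def integrate_amendment(coder_scores):
    """Amendments were scored 1-3 by two independent coders
    (1 = should be integrated, 3 = need not be integrated);
    an amendment is integrated when the mean score is below 2."""
    return mean(coder_scores) < 2

def retain_statement(n_panel, n_rejections):
    """A statement is retained only if fewer than 10% of the panel
    rejected it, i.e. at least 90% agreement (after Ager et al., 2010)."""
    return n_rejections / n_panel < 0.10

# e.g. 3 rejections out of 23 panel members (about 13%) exceeds the
# threshold, so such a statement would be excluded from the framework.
```

This mirrors the outcome reported in the Results: statements rejected by 3 of the 23 panel members were excluded.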
Results
Round 1
The desired outcomes of digital support for the autistic and autism communities that emerged from the panel’s responses primarily related to autonomy, time awareness and management, enhanced quality of life for family/carers, better communication, social participation, fun/leisure, learning support, creativity and enhanced cognitive skills.
Desirable features and outcomes for digital technology derived from thematic analysis.
Sources of information with regard to those features and outcomes were reviews and recommendations specifically from the autism community, personal hands-on experience and direct observation, expertise of the design team, involvement of autistic users in the design, scientific evidence and non-specific online reviews.
Round 2
Six potential sources of information relevant to choosing and using technologies were rated for relevance, importance, usefulness and accessibility (see Table 6).
Ratings for source of information derived from round 1 according to four parameters, ranked from the highest to the lowest mean score over all dimensions.
Thematic analysis of panel responses produced three high-level categories for which evidence might be required: ‘reliability’ (the technology is functional), ‘engagement’ (how the user experiences the product itself, its ease of use and attractiveness) and ‘effectiveness’ (outcomes reached/directly observed changes).
Round 3
The sources of evidence used for these three categories are summarised in Table 7, followed by descriptions that were summaries of comments that appeared across panel members and groups.
Statements ranked in the top five positions per category of evidence.
Round 4
In this final round the panel had to edit and rank statements in each category of sources of evidence. The statements that had the highest mean ratings as a source of evidence were similar for the three categories (Table 8).
Ranking of statements for all categories.
Of the 23 panel members, 3 (more than 10%) rejected the statements shown in Table 9, so they were excluded. The final framework statements reaching inclusion for consensus by the autistic and autism communities as well as researchers are listed in Table 10, with the agreed explanatory text.
Statements that were removed from the framework.
An evidence-based framework for digital supports for autism.
Discussion
There is a plethora of highly accessible digital supports purporting to support the autistic community (Chia et al., 2018; Grynszpan et al., 2014; Odom et al., 2015; Pennington, 2010; Ploog et al., 2013; Ramdoss et al., 2011; Virnes et al., 2015; Wainer & Ingersoll, 2011; Wong et al., 2015; Zervogianni et al., in press) but no mechanism by which consumers, practitioners or researchers can gauge the level of evidence supporting their use. This is the first study to generate a consensus from an international group made up from the autistic and broader autism communities as well as researchers as to what constitutes good evidence for digital supports for autism. Through a Delphi study methodology, consensus was achieved on a detailed framework providing the parameters for which evidence is sought and the sources of evidence perceived to be important. This novel framework allows users of digital supports to incorporate evidence into their decision-making regarding the selection and use of digital support, for themselves, or their autistic family members, pupils, clients, participants and so on. The framework can also inform those developing digital supports for the autistic community, highlighting what types of evidence are considered important. For the first time, the autistic and autism communities can incorporate EBP into the development, application and use of digital supports. Importantly, this framework has been co-developed through a participatory research approach which connects researchers with relevant autistic and broader autism communities to achieve shared goals. These methods can deliver results that are relevant to people’s lives and thus likely to have a positive impact (Fletcher-Watson et al., 2019; Parsons et al., 2019).
The study revealed that academic evidence obtained through carefully conducted empirical research was just one of the aspects that may inform the autistic and broader autism communities when selecting appropriate digital support. This was clearly expressed by the panel from the very beginning of the study, when the importance of reliability, engagement and effectiveness emerged. Clinical research methodologies, such as RCTs, need to be augmented with other sources of empirical evidence, as well as hands-on experience or other users’ feedback, to identify the extent to which a digital support is reliable, engaging or effective. Reliability and engagement may be particularly pertinent as features of digital support which are not present in the same way for non-technology-based supports (Mesibov & Shea, 2011). EBP for digital supports therefore departs from non-technology-based EBP for autism, highlighting the need for a specific EBP framework.
There are important similarities and differences between the proposed EBP framework for digital supports for autism and other general models of evidence provision in the field of systems and software engineering. For example, the ISO/IEC 25010:2011 standard (International Organisation for Standardisation/International Electrotechnical Commission) defines a ‘quality in use’ model composed of five characteristics: effectiveness, efficiency, satisfaction, freedom from risk and context coverage (Bevan et al., 2016). ‘Effectiveness’ is a common theme between the framework co-developed with the autistic and broader autism communities and this standard, and ‘engagement’ maps closely to ‘satisfaction’. ‘Reliability’, however, represents a different perspective, potentially related to (but distinct from) ‘efficiency’, which also takes into account task time, time efficiency, cost-effectiveness, productive time ratio, unnecessary actions and fatigue. ‘Reliability’ focuses more upon questions such as ‘will the app crash?’, reflecting the end-user’s experience of technology, which is not captured separately by the ISO/IEC model but is embedded within ‘satisfaction’, which incorporates ‘proportion of users complaining, proportion of user complaints about a particular feature and user trust’. Thus, there are parallels between the requirements of the autistic and broader autism communities and international standards, but the participatory research approach ensures the relevance of the framework to those for whom the digital supports are being developed. In addition, words of caution associated with the framework (see Supplemental material, Appendix II) emphasised potential downsides of technology and thus introduced the notion of freedom from risk.
Indeed, risks to health and social status were acknowledged in the words of caution related to ‘engagement’, which warned about possible over-engagement with technology that would monopolise the child’s time; the framework therefore needs to be interpreted with reference to the words of caution.
There were also similarities and differences in which sources of evidence were perceived to be most salient for reliability, engagement and effectiveness. While trying out the product was identified as the best source of evidence for informing reliability and engagement, academic research was viewed as the best source of evidence for effectiveness. This highlights that EBP, as informed by the broader autism community, requires multiple sources of information. Online reviews and expert opinions were also identified as key sources of evidence in all three domains. Recent accounts of fictitious online reviews (Morris, 2017) and the independence of the expert are important considerations when evaluating these sources of evidence, as highlighted in the ‘words of caution’ (see Supplemental material, Appendix II). Thus, while similar sources of evidence are identified for reliability, engagement and effectiveness, they are weighted differently for each category.
As noted above, EBP is informed by integrating the best available evidence with practitioner expertise and the values of recipients of the practice. Co-developing the proposed EBP framework with researchers, technology developers, practitioners and the autism community helps ensure that it will be useful for these communities (see Fletcher-Watson et al., 2019; Parsons et al., 2019). However, different participant groups may have different levels of access to different kinds of evidence, which may lead to inconsistencies in the types of sources actually used by different potential users. For example, researchers may have greater access to, and expertise in interpreting, academic papers, while educators and caregivers may have more experience supporting day-to-day use and evaluating the long-term utility of digital technologies. In addition, while the framework identified commonalities in what constitutes evidence, it is important to note that there may be additional sources of evidence that are significant to only one of the participant groups. Finally, we found the Delphi study methodology to be an effective method for integrating potentially divergent perspectives into an agreed-upon framework. However, although our number of participants was consistent with that proposed by previous research (Hasson et al., 2000; Paliwoda, 1983), our sample was relatively small given the heterogeneity of autism and of stakeholder perspectives in the broader autism community. This needs to be borne in mind when considering whether the framework is suitable for the entire autism community.
Future work will apply this framework to digital supports for the autistic community, to identify the level of evidence available (complete, adequate, limited, none) from each source for reliability, engagement and effectiveness, and thereby highlight whether the available evidence is strong, adequate or weak (after Reichow et al., 2008). An online version of the framework that enables researchers, developers and the autism community to evaluate the evidence base for any digital supports they are interested in is freely available at beta-project.org. Importantly, this framework identifies the strength (i.e. availability, quality) of the evidence, not the outcome of the evidence. It is possible, for example, that there could be strong evidence that an app is not engaging. For instance, de Vries et al. (2015) conducted an RCT of a computerised support for training executive functions that yielded non-significant changes and was associated with a high attrition rate among autistic participants, thus discouraging continued use. The framework developed here supports the sourcing and consideration of evidence to inform best practice, not necessarily what that best practice should be.
Supplemental Material
Appendix I and Appendix II – Supplemental material for A framework of evidence-based practice for digital support, co-developed with and for the autism community, by Vanessa Zervogianni, Sue Fletcher-Watson, Gerardo Herrera, Matthew Goodwin, Patricia Pérez-Fuster, Mark Brosnan and Ouriel Grynszpan, in Autism.
Acknowledgements
The authors thank all the panel members who participated in the Delphi study. Without their input, the project would not have been possible.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
Supplemental material
Supplemental material for this article is available online.
