Abstract

An expanding literature on computer-assisted cognitive behaviour therapy (CBT) has established that it can be an effective treatment for depression and anxiety. Not all programs appear equally effective (Foroushani et al., 2011), underscoring the importance of establishing the efficacy of any program that is being offered. It is known that fewer participants drop out of treatment when there is some contact with a therapist or proxy, but more work remains to be done in ascertaining other factors that may increase the acceptability of computerised CBT to consumers and enhance retention rates in treatment.
Criticism has sometimes been levelled at trials of computer-assisted CBT programs on the basis that they have treated only the mildly affected and have provided little follow-up. The former concern is somewhat allayed by recent studies, in which baseline symptom measures confirm that participants score in at least the moderate range. In many studies the greatest proportion of participants are self-referred, raising concerns that this could be a group with less severe symptomatology. This concern may be unjustified, however, as it is known that the majority of patients with anxiety disorders (with the exception of those with panic disorder) do not seek treatment, despite significant distress and disability, and only in severe anxiety does the proportion seeking help reach a majority (Andrews et al., 2001; Slade et al., 2009). In any case, Bell et al. (2012) attempted to address this criticism by taking only patients who had been referred by general practitioners and mental health clinicians to a specialist anxiety disorders clinic for assessment and treatment. Baseline measures indicate that participants had anxiety disorders of at least moderate severity, and a long waiting time for treatment probably excluded those who were going to remit spontaneously. This study produced positive findings in ‘real-world’ patients with comorbidities, previous experience of mental health treatment, and somewhat lower levels of education than in many trials of mainly self-referred participants.
A number of recent studies have now reported data following 6 months or more of follow-up, addressing the second main criticism. Bell et al. (2012), as a consequence of a strikingly long average waiting time for an appointment at their clinic (9 months), were able to observe and follow up their participants over 6 months without any disadvantage to those randomised to the wait-list. Bell and colleagues found a good level of maintenance of gains. An 18-month follow-up of internet-based treatment for post-traumatic stress disorder (PTSD), with 83% of the original sample (Knaevelsrud and Maercker, 2010), a 3-year follow-up of panic disorder, with 81% of the original sample (Ruwaard et al., 2010), and a 5-year follow-up of social phobia, with 80% of the original sample (Hedman et al., 2011b), have also shown maintenance of gains.
Wait-list is not the most stringent control, so it is pleasing to see studies emerging that compare face-to-face therapy with computer-assisted treatment. These studies so far indicate equivalent outcomes, although many lack a psychological placebo comparator. A Swedish study compared internet-based CBT to group CBT for social anxiety disorder, finding a slight advantage for the internet-based treatment at post-treatment that had disappeared by 6 months, and maintenance of gains in both conditions during the 6-month follow-up period (Hedman et al., 2011a). It will be important to see more studies including a psychological control condition in the future, although the ‘credible psychological placebo’ has long been a challenge in psychological therapy research.
Few patients appear to complete all modules in computer-assisted CBT programs. Attrition rates appear highly variable, with rates ranging from highs of 70% to lows of 10% reported. This compares to rates of 6–55% reported for face-to-face CBT, with an average of around 25% often quoted. In one computer-assisted CBT study with an attrition rate of only 10.9%, therapist contact time was much higher than the approximately 30 minutes per participant reported in many studies (Kiropoulos et al., 2008). The completion rate of 35% for participants in Bell et al. (2012) is quite typical. Do participants drop out when they achieve a personally satisfactory level of improvement, thus accounting for the overall moderate-to-high effect sizes of computer-assisted CBT? Or would higher rates of remission result if more participants could be retained in treatment to complete all modules? Remission rates are infrequently reported, but Hedman et al. (2011a), using an intention-to-treat analysis, reported a rate of 34%, which is better than recent studies of depression, but leaves room for improvement. There is also great variation in the length of treatment programs, which may comprise as few as six modules or as many as 15.
The cost-effective nature of computerised CBT presents opportunities to study a number of questions about this type of therapy. How many modules and how much therapist time is optimal, and is there a dose–response relationship between the number of modules completed? Are there essential modules to complete that result in a better outcome? How important is time spent on homework practice outside of sessions? Are there differences between the disorders? There is also the opportunity to address many unanswered questions about CBT programs in general: for example, to determine the necessary and sufficient components of a program, and the minimum hours of treatment required. Internet therapies offer the prospect of greater access to patients, but also perhaps expanded research opportunities.
See research by Bell et al., 2012, 46(7): 630–640
