Abstract
One of the primary goals of continuing medical education (CME) is to enhance the learners' performance, and a major goal of evidence-based medicine (EBM) is to improve knowledge of current best care. This paper describes the use of a Learning Needs and Knowledge Assessment tool to highlight the potential learning needs and knowledge of neurologists, and to focus the issues, interests and interactions of neurologists in a workshop on EBM migraine therapy. Virtually all neurologists felt they used evidence-based medicine in their daily practice. Surprisingly, 50% of neurologists agreed that they were uncertain which triptan to use. The great majority of neurologists felt that the triptans were not all equally efficacious. Our survey identified significant knowledge gaps among neurologists regarding how to appraise the validity of evidence from a randomized clinical trial, and regarding the most clinically useful measures of benefit in clinical trials.
Introduction
In the early part of 1999 the Canadian Headache Society (CHS) decided to sponsor a workshop on Evidence-based Migraine Therapy. As part of the process of developing the agenda and content for the workshop, a formal survey questionnaire was sent to potential participants to determine their knowledge and to determine their potential learning needs.
Evidence-based medicine (EBM) has become increasingly important in the assessment of data regarding various medical therapies (1), and with the advent of many new symptomatic migraine treatments over the past decade, such a workshop was felt to be timely.
Needs assessments (2) are part of CME programmes and have played a role in allowing planning committees not only to specify what should be taught to the learners, but also to learn the perceived needs of the learners. A recent formal needs assessment of Canadian neurologists revealed that they were primarily interested in new therapeutic treatments for migraine, and not in diagnostic, investigative or other management issues related to headache and migraine (3).
The learning needs and knowledge assessment of neurologists who participated in the workshop and also of the nonparticipants is described in this paper. This knowledge assessment served as a needs assessment, and allowed for an assessment of their knowledge and potential learning needs that focused the faculty and participants on relevant learning issues regarding EBM in migraine therapy.
Methods
An assessment questionnaire was developed by the faculty and was mailed out to all potential participants of the workshop. The questionnaire consisted of seven questions. Four of these were designed to determine the level of knowledge of EBM, and another was designed to determine whether the neurologists thought they used EBM in their practices. The four questions on knowledge of EBM were multiple-choice questions with more than one correct answer. The questionnaire also included two generic questions regarding the use of triptan medications in practice; these two closed questions utilized a four-point scale to determine the preferred answers.
Ninety neurologists received the questionnaire. A total of 47 completed questionnaires were returned: 35 of 56 (63%) from neurologists who later attended the workshop (participants), and 12 of 34 (35%) from neurologists who did not (nonparticipants).
The data from the questionnaires were collated, analysed and tabulated. The author presented the results of the survey and the correct answers, where applicable, to the participants and faculty at the beginning of the workshop. A general discussion ensued and set the stage for the didactic sessions and the interactive case-based workshop that followed. This was found to be an effective strategy to facilitate learning.
Results
The seven questions and responses of the participants and nonparticipants are outlined in Tables 1 and 2. Some participants and nonparticipants did not respond to all questions. The correct answers for the EBM questions (Questions 4–7), as predetermined by the faculty, are given in Table 2.
Questions related to neurologists' perceptions of evidence-based medicine use, and to triptan use
Questions related to knowledge of evidence-based medicine
Discussion
It is interesting to analyse, in general terms, the results of the responses to each question, in light of the answers. (It should be noted that this assessment tool was not pretested before the workshop, nor was its reliability tested; thus the findings may not generalize to other groups.)
Question 1: Do you use evidence-based medicine in your daily practice?
Virtually all participants and nonparticipants indicated that they used EBM in their daily practices. However, as can be seen from the responses to subsequent questions, this did not necessarily mean that the neurologists all meant the same thing by the term EBM. It is nevertheless satisfying that virtually all participants and nonparticipants recognized the concept of EBM and felt they were using it in their practices.
Question 2: In clinical practice, I am uncertain which triptan to use
Of the participants, about half agreed that they were uncertain which triptan to use; 16 disagreed and two strongly disagreed. The nonparticipants generally disagreed with the statement. These answers were somewhat surprising to the faculty, as it was generally felt that most neurologists would have no problem knowing which triptan to use in practice. They suggested a need to look more closely at the evidence supporting the usage of the triptans during the workshop.
Question 3: In my opinion, all triptans are equally efficacious
The participants and nonparticipants generally disagreed with the statement, the participants more so than the nonparticipants. This suggested they did see a difference in efficacy, which was not totally surprising when one separates out the different formulations of the available triptans and their pharmacological properties. Nevertheless, the answers clearly pointed to the need to determine what exactly was the evidence for these opinions, especially among the triptans that were very similar in efficacy as determined from randomized clinical trials, comparative studies and meta-analyses.
Question 4: Four of the key elements of evidence-based medicine are:
The answers to this question included four key elements of EBM as discussed in another paper in this supplement. The great majority of participants and nonparticipants listed the correct answers, numbers 1, 3, 4 and 5 (asking well-built answerable clinical questions, finding the best evidence, critically appraising and interpreting the evidence for its validity and importance, and applying the best evidence in your practice).
Consulting with other specialists on current opinion and practice is a common activity in daily practice, however, its shortcomings are pointed out concisely in a recent review of EBM: ‘Finally, we may practise in the replicating mode, whereby we follow the practice of experts. This is the quickest model of practice and is most suitable for conditions that we see rarely and for which we need ongoing advice (such as the care of complicated transplant patients). Practising in this mode, however, does not allow us to determine whether the advice we receive is authoritative (evidence-based) or authoritarian (opinion-based)’ (4).
It was apparent that most neurologists had good knowledge of, or made reasonable guesses at, the key elements of EBM prior to the workshop, suggesting this was not a learning need, although this result did not guarantee that they could actually apply the EBM methodology.
Question 5: The best general source of current best evidence is:
The correct answers were 1, 3 and 4. The New England Journal of Medicine and MedLine have published or made available randomized clinical trial data, but neither analyses the data in an evidence-based fashion, which generally requires addressing the four or five questions needed to adequately determine the validity and applicability of a study. The Cochrane Library of Systematic Reviews is considered an excellent general source of EBM but to date contains little comparative data on migraine therapy. The Evidence-Based Medicine Journal Club and ACP Journal Club are probably not used by most neurologists or headache specialists, which probably explains the low rate of correct answers among both participants and nonparticipants. This question highlighted the need for the workshop.
Question 6: When appraising the validity of evidence from a randomized clinical trial, the two most important questions to ask are:
The majority of participants responded that numbers 2 and 4 were the correct answers, but in fact numbers 1 and 3 are the best answers. Nonparticipants also did poorly on this question. This question, probably more than the others, clearly demonstrated a major learning need for the participants of the workshop. Not all trials have to be blinded, and validity is less dependent on the groups being similar at the beginning of the trial: although randomization does not guarantee that the groups will be the same at the start of treatment, post hoc adjustments can be made, if necessary, to reflect any imbalance between the groups. These issues are discussed in a later paper in this supplement.
Question 7: When appraising the importance of evidence from a randomized clinical trial, the most useful clinically relevant measures of benefit (or harm) are:
Fewer than 25% of the participants and about 50% of the nonparticipants picked the best correct responses to this question, numbers 1 and 3. These measures are used on a regular basis at scientific meetings and in publications, so it is of concern that there was such a varied response rate, suggesting, once again, a need for further education in EBM. The P-value can be easily modified by many statistical manoeuvres, and the relative risk reduction (RRR) can be most misleading. As an extreme example, if a treatment lowers the absolute risk of an undesirable outcome from 2% to 1%, the relative risk reduction is an impressive 50%, but the absolute risk reduction is a much less impressive, and likely clinically insignificant, 1%. Absolute risk reduction (ARR) and number needed to treat (NNT) measures tend to be more useful clinically. In clinical trials, the NNT indicates how many patients must be treated with a new drug to achieve a therapeutic response in one additional patient over and above the responses achieved by another drug or placebo. If the NNT is high (e.g. 20) there is less incentive to switch to a new therapy, compared with a low NNT (e.g. 3).
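The relationships among these measures can be sketched with a short calculation (Python used purely for illustration; the 2% and 1% figures are the hypothetical extreme example from the text, not trial data):

```python
def risk_measures(control_risk: float, treated_risk: float):
    """Return (RRR, ARR, NNT) for an undesirable outcome,
    given event rates in the control and treated groups."""
    arr = control_risk - treated_risk  # absolute risk reduction
    rrr = arr / control_risk           # relative risk reduction
    nnt = 1 / arr                      # number needed to treat
    return rrr, arr, nnt

# Hypothetical example: risk falls from 2% to 1%
rrr, arr, nnt = risk_measures(0.02, 0.01)
print(f"RRR = {rrr:.0%}, ARR = {arr:.0%}, NNT = {nnt:.0f}")
# RRR = 50%, ARR = 1%, NNT = 100
```

The calculation makes the contrast concrete: the same trial result yields an impressive-sounding RRR of 50% but an NNT of 100, i.e. 100 patients must be treated for one additional patient to benefit.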
Conclusion
Using collected learning needs and knowledge assessment data at a CME workshop appeared to have some merit, especially as the data were presented and discussed with the participants and faculty before the workshop began. The exercise highlighted and gave focus to the potential learning needs of the participants, and acted as an interactive learning tool as well as a knowledge assessment tool. Pretest and post-test assessment and evaluation tools are commonly used in CME, and this tool collected precourse knowledge data containing many elements of a formal needs assessment.
Furthermore, the information gathered and discussed in this paper was of a semiquantitative and qualitative nature, as intended by the planners of the workshop. Nevertheless, it proved a valuable exercise for all concerned, as it appeared to enhance the learners' interest, interaction, enthusiasm and knowledge in a positive fashion prior to the actual workshop, which was designed to educate the participants in the areas highlighted by the assessment tool.
