

Research paper
Video-based training improves the accuracy of seizure diagnosis
  1. Udaya Seneviratne1,2,
  2. Catherine Ding1,
  3. Simon Bower1,
  4. Simon Craig3,
  5. Michelle Leech2,
  6. Thanh G Phan1,2
  1. Department of Neurosciences, Monash Medical Centre, Melbourne, Australia
  2. Department of Medicine, Southern Clinical School, Monash University, Melbourne, Australia
  3. Department of Emergency Medicine, Monash Medical Centre, Melbourne, Australia

  Correspondence to Dr Udaya Seneviratne, Department of Neuroscience, Monash Medical Centre, 246 Clayton Road, Clayton, Melbourne, VIC 3168, Australia; udaya.seneviratne@monash.edu

Abstract

Background and aim The difficulties in differentiating epileptic seizures (ES) from psychogenic non-epileptic seizures (PNES) are well known. However, interventions to enhance diagnostic accuracy have not been well studied. We sought to evaluate the accuracy of discrimination between ES and PNES before and after targeted training among medical students.

Methods A teaching module incorporating videos of typical ES and PNES was used for training. Typical ES and PNES, 10 each, were shown in a random mix. The participants were asked to make a diagnosis as the baseline test, followed by a detailed discussion of the videos. One month later, a 1 h lecture was delivered on the diagnosis and classification of seizures, followed by two further tests 3 and 6 months later, using a similar format but different videos. A group of emergency medicine trainees also took the preteaching test for comparison. We used summary receiver operating characteristic curves and area under the curve (AUC) to quantify discriminating ability, and z scores to assess the differences in AUC between different stages of training.

Results In medical students, the AUC improved significantly from 0.52 (95% CI 0.49 to 0.55) at baseline to 0.64 (95% CI 0.59 to 0.69, p<0.001) at 3 months and 0.63 (95% CI 0.57 to 0.69, p<0.001) at 6 months. At the 3 and 6 month tests, they achieved results similar to those of the emergency medicine trainees (p=0.5).

Conclusions Targeted video-based training increases the accuracy of visual discrimination of seizures in the short and medium term.


Introduction

Seizures and seizure mimickers present a common diagnostic problem in emergency departments, hospital wards and outpatient clinics.1–4 A detailed history, including an eyewitness account, and physical examination are considered the cornerstones of seizure diagnosis. Differentiating epileptic seizures (ES) from psychogenic non-epileptic seizures (PNES) is a major diagnostic challenge. The reported rate of misdiagnosis of PNES as epilepsy ranges from 20% to 26%,5 and the average delay in diagnosing PNES has been reported to be as long as 7–9 years.3,4 The misdiagnosis of PNES as epilepsy is associated with potentially deleterious consequences, such as unnecessary interventions (intubation, ventilation, central venous lines, intensive care),1,6 inappropriate prescribing of antiepileptic medications, expensive investigations, frequent hospital visits, financial loss to the individual, the healthcare system and society at large,4,7 and even rare cases of death due to unnecessary treatment.8 In view of this, there is a pressing need to enhance the diagnostic accuracy of healthcare professionals in this area.

One potential approach to improving seizure recognition among health professionals is to incorporate it into training at the undergraduate level. In a previous study, we demonstrated that junior doctors had a very low accuracy of 0.54 for the diagnosis of ES and PNES.9 We speculate that this low value could be due to a lack of exposure to seizures and of formal training on the subject during the undergraduate period. It is possible that neurology education is hampered by ‘neurophobia’, the fear of neurology.10 Investigators have sought to reduce this fear through improved techniques of medical education.11 There is a dearth of studies looking at interventions to improve diagnostic skills, particularly with regard to seizure recognition among medical students.12 This issue has particular practical relevance in the transition from medical student to intern doctor.13 Against this backdrop, we sought to investigate whether targeted training using videos of typical ES and PNES would result in persistently improved diagnostic accuracy. This was conducted as a ‘proof of concept’ study of targeted training involving medical students.

Methods

This study was approved by the local Human Research Ethics Committee. We used an intervention-based model to improve seizure recognition skills among medical students. Third year medical students of Monash University, Melbourne, Australia, were chosen as participants. With minimal prior knowledge, they were considered an ideal group for a proof of concept study of this nature. The accuracy of seizure diagnosis by medical students was compared with that of a cohort of emergency medicine (EM) trainees from Monash Medical Centre who underwent the same test. EM trainees were selected as comparators because they often encounter patients presenting with seizures in routine clinical practice, and we wanted to test whether medical students could reach their diagnostic accuracy following targeted training.

Videos

Video clips of typical ES and PNES were selected from the video-EEG recordings performed at Monash Medical Centre, Melbourne, Australia. The diagnoses had been established by the consensus opinion of epilepsy specialists based on typical clinical, semiological and video-EEG characteristics. ES and PNES were classified according to the 1981 International League Against Epilepsy (ILAE) classification14 and a proposed semiological classification,15 respectively. In table 1, we have also provided the corresponding ILAE 2010 epileptic seizure classification.16 Ten ES and 10 PNES were selected to represent the spectrum commonly encountered in clinical practice. Complex partial, myoclonic, tonic and generalised tonic-clonic seizures with typical semiology were included in the ES video collection. The PNES videos consisted of typical events with motor manifestations. The key semiological features are described in table 1.

Table 1

Semiological features and classification of the seizure videos used in the study, as per the International League Against Epilepsy (ILAE) classifications14,16 and a proposed semiological classification15

Medical students: training module and stages

In the current curriculum, the teaching on epilepsy consists mainly of a 2 h Problem-Based Learning (PBL) session. This is primarily a student-led discussion, and no seizure videos are used. Students also gain some exposure to patients with seizures during the neurology ward rotation, typically 1 month per year. Outside this curriculum, a training module on the diagnosis of ES and PNES was developed by the research team, comprising clinicians and academics. The module was based on a series of video clips demonstrating typical semiological features of ES and PNES. The students were told at the beginning of these sessions that the results of the three tests would not count towards their student assessments. The tests were presented to medical students as an optional tool for their learning.

The study was conducted in four stages. In the first stage, medical students were shown a mix of 20 video clips (10 typical ES and 10 typical PNES) unaccompanied by clinical details and EEG (Test 1). Each video was shown only once, and the participants were requested to mark the diagnosis as ES or PNES based on the observation of semiological features. They were not provided with clinical details or any other clues. This test was used to assess their baseline diagnostic accuracy. After the test, an epilepsy specialist discussed the semiological features of each video clip in detail, as shown in table 1, with a particular emphasis on key features differentiating ES from PNES.

The second stage consisted of a 1 h interactive lecture on the diagnosis and classification of ES and PNES, delivered 1 month after Test 1. Numerous video clips were used in the lecture to highlight semiology of ES and PNES.

In the third stage, conducted 3 months after the second stage, participants were retested on their diagnostic accuracy (Test 2). The format was identical to Test 1. However, different video clips (10 ES and 10 PNES) with similar semiology to those of Test 1 were used. There was no teaching involved at this stage. The results of Test 2 were considered to be a reflection of short-term retention of knowledge.

The fourth stage was conducted 3 months after the third stage. Another test (Test 3) was carried out using a similar format to Tests 1 and 2, but with a different set of video clips (10 ES and 10 PNES) with matching semiology. This test was used to assess the medium-term retention of knowledge.

Studies indicate a gradual decay in the retention of knowledge following an educational intervention.17 There is no universal agreement on a classification defining the duration of knowledge retention. A recent review considered intervals of months as short-term and years as long-term retention.17 For the purpose of this study, we arbitrarily defined a retention interval of 3 months as short-term and 6 months as medium-term. This is a pragmatic definition considering the value of knowledge retention in routine clinical practice.

Emergency medicine trainees

This group consisted of advanced trainees in EM who had been working as doctors for at least 4–5 years following their internship and basic training. They took a test similar to Test 1 of the medical students, using the same videos. This was to assess the accuracy of seizure diagnosis by EM trainees.

Statistical analysis

We analysed data using two different approaches. In the first, we analysed only participants who had taken part in all four stages. In the second, we analysed the participants who attended each stage as separate groups. For each participant (treated as an individual ‘study’ in the meta-analytic framework), the dichotomous outcome data were arranged in a 2×2 table. In the case of data heterogeneity, we used a random effects model based on weighting to account for intragroup variances.18 The pooled sensitivity and specificity with 95% CIs were calculated for each test. Using a methodology similar to that described previously,9 we used summary receiver operating characteristic (SROC) curves to summarise the central tendency of the receiver operating characteristic (ROC) curves.19 Each ROC curve is considered to represent the accuracy of seizure diagnosis by an individual medical student. The area under the curve (AUC) of the SROC was interpreted as follows: 0.5, discrimination of ES from PNES no better than chance; 0.6–0.69, poor discrimination; 0.7–0.79, fair discrimination; 0.8–0.89, good discrimination; 0.9–1, outstanding discrimination.20 META-DiSc software was used for statistical analysis.19 The differences in AUC between medical students at different testing stages were calculated using z scores, taking into account that the data were derived from the same cases.21 Values of p<0.05 were considered statistically significant. Using z scores, we also measured the difference between the results of the medical students and the baseline values (Test 1) of the EM trainees.
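The analysis itself was performed in META-DiSc, but the main steps can be illustrated with a short sketch. The Python code below is a minimal, illustrative implementation under the assumption that each participant contributes one 2×2 table (TP, FP, FN, TN) from the 10 ES and 10 PNES videos: simple pooling of a proportion with a normal-approximation 95% CI, a Moses-Littenberg SROC fit with a numerically integrated AUC, and a z test for the difference between two AUC estimates. The function names, the 0.5 continuity correction and the choice of the Moses-Littenberg model are assumptions of this sketch, not a description of the exact META-DiSc procedure.

```python
# Illustrative sketch of the statistical steps described above (not the
# META-DiSc implementation used in the study). Assumes one 2x2 table per
# participant: (TP, FP, FN, TN) from 10 ES and 10 PNES videos.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm


def logit(p):
    return np.log(p / (1.0 - p))


def pooled_rate(successes, totals):
    """Simple pooled proportion (e.g. pooled sensitivity) with a 95% CI."""
    p = sum(successes) / sum(totals)
    se = np.sqrt(p * (1.0 - p) / sum(totals))
    return p, (p - 1.96 * se, p + 1.96 * se)


def moses_littenberg_sroc(tables):
    """Fit a Moses-Littenberg SROC model and return (intercept, slope, AUC).

    A 0.5 continuity correction is added to every cell so that logits remain
    defined when a participant classifies all videos of one type correctly.
    """
    D, S = [], []
    for tp, fp, fn, tn in tables:
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5
        tpr = tp / (tp + fn)                 # sensitivity
        fpr = fp / (fp + tn)                 # 1 - specificity
        D.append(logit(tpr) - logit(fpr))    # log diagnostic odds ratio
        S.append(logit(tpr) + logit(fpr))    # threshold proxy
    b, a = np.polyfit(S, D, 1)               # linear fit: D = a + b*S

    def sens_at(fpr):
        # SROC curve: sensitivity as a function of the false positive rate
        return 1.0 / (1.0 + np.exp(-(a / (1 - b)) - ((1 + b) / (1 - b)) * logit(fpr)))

    auc, _ = quad(sens_at, 1e-6, 1 - 1e-6)   # numerical area under the SROC
    return a, b, auc


def compare_auc_z(auc1, se1, auc2, se2, r=0.0):
    """Two-sided z test for the difference between two AUC estimates.

    r is the correlation between the estimates; it is positive when both
    come from the same cases, as with the repeated tests in this study.
    """
    z = (auc1 - auc2) / np.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)
    return z, 2 * (1 - norm.cdf(abs(z)))
```

In this framing, each student plays the role of a ‘study’ in a diagnostic meta-analysis, which is why pooled sensitivity, pooled specificity and an SROC summary are the natural quantities to report.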

Results

Characteristics of medical students

At Test 1, there were 103 medical students (main cohort 1). Due to dropout, the numbers of medical students were 98 and 60 at Tests 2 and 3, respectively (main cohorts 2 and 3). However, only 39 went through all four stages of the training module (subcohort). The original main cohort at Test 1 consisted of 53% males (mean age 20.9±1.8 years). The demographic profile of the subcohort of 39 who completed all four stages was similar (males 56%, mean age 20.6±1.2 years).

Pooled specificity and sensitivity

The baseline pooled specificity and sensitivity of the original cohort of 103 students (main cohort 1) were 0.5 (95% CI 0.46 to 0.53) and 0.53 (95% CI 0.50 to 0.56), respectively. The baseline values were similar in the subcohort of 39; specificity 0.47 (95% CI 0.42 to 0.52) and sensitivity 0.53 (95% CI 0.48 to 0.58). Specificity and sensitivity increased to 0.58 (95% CI 0.53 to 0.63) and 0.64 (95% CI 0.59 to 0.68), respectively, at 3 months, and 0.57 (95% CI 0.52 to 0.62) and 0.62 (95% CI 0.57 to 0.67) at 6 months in the subcohort of 39.

Summary receiver operating characteristic curves

At baseline (stage 1), the AUCs of main cohort 1 and the subcohort were 0.52 (95% CI 0.49 to 0.55) and 0.50 (95% CI 0.44 to 0.57), respectively, indicating that diagnostic accuracy was no better than chance. In the subcohort of 39, at Test 2, the AUC increased to 0.64 (95% CI 0.59 to 0.69), a highly significant improvement compared with baseline (z score=5.2, p<0.001). After 6 months, at Test 3, the AUC was maintained at 0.63 (95% CI 0.57 to 0.69), which was not statistically different from Test 2 (z score=0.35, p=0.73), though highly significant compared with baseline (z score=6.6, p<0.001) (figure 1). Main cohorts 1, 2 and 3 showed a similar trend, demonstrating a highly significant increase in AUC to 0.63 (95% CI 0.59 to 0.67) at Test 2 and 0.63 (95% CI 0.59 to 0.67) at Test 3 (z score=7.3, p<0.001). It should be noted that main cohort 2 (at Test 2) and main cohort 3 (at Test 3) consisted of a mixture of participants, including the subcohort and new participants. The increase in AUC in the main cohorts reflects the influence of the students who received targeted training.
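As a purely illustrative cross-check of this kind of comparison, the reported confidence intervals can be converted to approximate standard errors and fed into a z test. The CI-to-SE conversion (CI width divided by 2×1.96) and the choice of r=0 are assumptions of this sketch; setting r=0 ignores the positive correlation from retesting the same students and therefore yields a conservative z relative to the values reported above.

```python
# Purely illustrative: convert the reported 95% CIs for the subcohort AUCs
# into approximate standard errors and compare Test 1 with Test 2. With the
# between-test correlation r set to 0 the z value is conservative; the larger
# z scores reported in the study account for the positive correlation
# arising from retesting the same students.
import math
from scipy.stats import norm

def se_from_ci(lower, upper):
    return (upper - lower) / (2 * 1.96)      # normal-approximation SE

def compare_auc_z(auc1, se1, auc2, se2, r=0.0):
    z = (auc1 - auc2) / math.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)
    return z, 2 * (1 - norm.cdf(abs(z)))     # two-sided p value

# Subcohort values from the text: Test 1 AUC 0.50 (0.44-0.57), Test 2 AUC 0.64 (0.59-0.69)
z, p = compare_auc_z(0.64, se_from_ci(0.59, 0.69), 0.50, se_from_ci(0.44, 0.57))
print(f"z = {z:.2f}, p = {p:.4f}")
```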

Figure 1

Changes in area under the curve with sequential testing of medical students and emergency medicine trainees. The graph demonstrates the increase in area under the curve (AUC), with 95% CIs, from Test 1 to Test 2 in medical students (MS). Note that there is no significant change from Test 2 to Test 3. At Test 1, the AUC of the emergency medicine trainees is significantly higher than that of the MS, but it does not differ significantly from the Test 2 and Test 3 AUCs of the MS.

Emergency medicine trainees

A total of 27 trainees (men 48%) took part in Test 1. The mean age was 33±4.9 years. They achieved an AUC of 0.66 (95% CI 0.60 to 0.72). The pooled specificity and sensitivity were 0.64 (95% CI 0.58 to 0.70) and 0.61 (95% CI 0.55 to 0.67), respectively.

Medical students versus emergency medicine trainees

We used z scores to compare the differences between the AUCs of the different tests involving the EM trainees and the subcohort of medical students. At Test 1, the EM trainees achieved significantly higher diagnostic accuracy than the medical students (z score=4.8, p<0.001). However, following the video-based training, there was no significant difference between the medical students (Tests 2 and 3) and the EM trainees (Test 2: z score=0.61, p=0.5; Test 3: z score=0.75, p=0.5).

Discussion

Using an intervention-based model, we have demonstrated that focussed video-based training significantly improves the diagnostic accuracy of differentiation between ES and PNES. The degree of improvement brings these medical students to the level seen among EM trainees. Furthermore, our study demonstrates that medium-term retention can be achieved with appropriate education, and that videos can be used as a powerful tool in seizure education.

Key findings

The baseline finding among medical students with regard to seizure diagnosis reaffirms previous studies showing that medical students have difficulty reliably differentiating PNES from ES.9,22 A single intervention was able to increase the accuracy of visual seizure recognition above baseline, and this improvement was sustained 6 months later. A similar improvement in diagnostic sensitivity and specificity has been reported in a recent study involving doctors and medical students that evaluated the diagnosis of ES and PNES before and after a 15 min teaching session with seizure videos.22 Although that result indicates improved immediate diagnostic accuracy following video-based teaching, in contrast with our study it did not test short-term to medium-term retention.

We had assured the students that the results would not be used towards their end-of-year assessment. Thus, the improvement in accuracy was encouraging, given that no incentive was offered for participation in this study other than the improvement of their own knowledge. It should be noted that there was a gradual drop in attendance from stage 1 through stage 4. This was most probably due to the fact that the teaching module was not compulsory and the results did not count towards the end-of-year assessment. One might speculate that better attendance would have yielded better results.

Significance

The significance of our findings goes beyond epilepsy education and provides lessons for improving medical education models. A single focussed intervention was able to raise visual seizure recognition to the level seen among EM trainees with 4–5 years of experience in clinical practice. This finding raises the question of what might be achieved if the frequency of educational sessions were increased beyond a single intervention. Unlike EM trainees, medical students in their first clinical year have limited context in which to apply and practise the knowledge from a single intervention. Repeated doses of this intervention may increase the size of the observed effect.

Challenges in neurology education

Neurology is considered a relatively difficult specialty by medical students and other health professionals.10 ‘Poor teaching’ of neurology is often cited as a major reason for the avoidance of neurology subjects, including epilepsy.10 This has led researchers to coin the term ‘neurophobia’ (the fear of neurology).23 The effect of ‘neurophobia’, combined with persistent myths about epilepsy, may contribute to diagnostic difficulty.24 Researchers have termed this phenomenon ‘we see what we expect to see’.24 Our results indicate that this may be ‘treatable’. We speculate that the teaching of other neurological conditions, such as movement disorders, may benefit from this type of focussed approach. The commonality among these disorders is that the written textbook description may not convey the semiological depiction of the multiple phenotypes, whereas studying video provides a better opportunity to comprehend the nature of these episodic disorders. Furthermore, this study illustrates that contrasting two seemingly similar conditions is an effective method of teaching.

Different models of seizure and neurology education

Over the last 50 years, there has been a gradual paradigm shift in medical education from conventional didactic teaching to PBL.25 PBL is a key component of an integrated medical curriculum. In epilepsy education, despite their different philosophical approaches, both methods have relatively similar teaching objectives, with the focus being the treatment of epilepsy. Pressure on clinical placements from increased numbers of medical students in the healthcare setting may lead to a dilution of experiential learning. The understanding and recognition of neurological problems, including seizures, movement disorders and a range of neurological syndromes, are not easily assimilated from textbook descriptions. This study identifies the need for, and utility of, interactive video teaching to enrich undergraduate learning and skill acquisition in seizure recognition.

A web-based interactive training programme incorporating seizure videos has been used to improve knowledge of seizure disorders among medical students.12 However, there was no description of the use of PNES videos, and it is not certain whether this approach would also improve the ability to differentiate between ES and PNES.12 The lack of PNES videos may reflect ethical concerns about posting such videos on the web. A related method is the computerised tutorial, but these tools have not been designed to aid the visual diagnosis of ES and PNES.26,27

Limitations

We acknowledge some limitations of the study. First, the videos were shown without clinical details. It could be argued that this is contrary to what happens in real life, and that the results could therefore underestimate true diagnostic accuracy. However, it is not always possible to extract a detailed history when faced with a patient in the emergency department having a seizure.8 This situation may also arise when the person does not want to divulge the history.8 The doctor may then need to depend on visual diagnosis of the seizure for decisions on pharmacological therapy. Up to 78% of PNES may be prolonged, mimicking the appearance of status epilepticus and prompting urgent intervention.28 The failure to recognise PNES has been reported to cause dangerous escalation of therapy and death.8

Second, it is possible that some students may have self-studied the subject using other resources during the study period, so the improvement may not be entirely attributable to the training module. Third, the longest retention period we tested was 6 months. This does not necessarily reflect what happens in real life, with varying intensities of training among different groups of healthcare professionals. Finally, the medical students had only two teaching sessions, which elevated them to the level of EM trainees in terms of seizure diagnostic accuracy. It would be useful to study whether they would achieve more with further training sessions (a ‘dose escalation effect’).

Conclusion

Our study demonstrates that targeted video-based training is an effective intervention to enhance the accuracy of seizure diagnosis. This is an encouraging finding considering the burden of misdiagnosis with adverse outcomes. Our study provides a basis for medical educators to develop targeted training packages incorporating seizure videos for the education of medical students and healthcare professionals.

References

Footnotes

  • Contributors US: study design, data analysis, manuscript writing. CD: study design, data collection, data analysis. SB: study design. SC: study design, critical review of the manuscript. ML: study design, critical review of the manuscript. TGP: study concept and design, supervision of statistical analyses, critical review of the manuscript.

  • Funding This study was supported by an unrestricted education grant from UCB Pharma.

  • Competing interests US has received an educational grant and speaker honoraria from UCB Pharma. CD, SB, SC and ML report no disclosures. TGP has received honoraria from Genzyme, Bayer Pharmaceuticals and Boehringer Ingelheim. He serves on the scientific advisory board on Fabry disease for Genzyme.

  • Ethics approval Human Research Ethics Committee, Monash Health, Victoria, Australia.

  • Provenance and peer review Not commissioned; externally peer reviewed.