Papers That Might Change Your Practice: Review of the Introduction of a New Screening Tool for the Identification of Cognitively Impaired Medically At-Risk Drivers

David B. Hogan, MD, FRCPC1, Michel Bédard, PhD2
1 Professor and Brenda Strafford Foundation Chair in Geriatric Medicine, University of Calgary, Calgary, AB, 2 Canada Research Chair in Aging and Health, Director, Centre for Research on Safe Driving, Lakehead University and Northern Ontario School of Medicine, Thunder Bay, ON.

Paper Title: The introduction of a new screening tool for the identification of cognitively impaired medically at-risk drivers: the SIMARD A Modification of the DemTect

Authors: Bonnie M. Dobbs, PhD (Department of Family Medicine, University of Alberta), and Donald Schopflocher, PhD (Faculty of Nursing, University of Alberta)

Reference: Journal of Primary Care & Community Health 2010;1:119–27

WHY THIS PAPER

Family physicians, geriatricians, and others need an effective, efficient, office-based approach to assessing driving risk in older patients. The Screen for the Identification of cognitively impaired Medically At-Risk Drivers (SIMARD) has been suggested on the web (http://www.mard.ualberta.ca/Home/SIMARD/), in a Pfizer-sponsored toolkit for physicians, and in other venues as a way to do this. The 2010 BC Guide in Determining Fitness to Drive recommends using the SIMARD to determine whether patients with persistent cognitive impairment and a long list of additional conditions (i.e., chronic renal disease, end-stage renal disease, or renal transplant; chronic obstructive pulmonary disease or other respiratory disease; vestibular disorders; congestive heart failure or postcardiac arrest; mood disorders, attention-deficit hyperactivity disorder, or schizophrenia; stroke or cerebral aneurysm; multiple sclerosis, Parkinson’s disease, or cerebral palsy; traumatic brain injury; intracranial tumors; obstructive sleep apnea–hypopnea syndrome or narcolepsy; psychotropic drug use; general debility or lack of stamina) should be referred to DriveABLE for a driving assessment.(1) What is the available evidence on the SIMARD? Does it justify such widespread use? To answer these questions, we reviewed the only published paper available on this instrument.

INTRODUCTION

2010 was the first year of publication for the peer-reviewed journal where the paper appeared. There is no Journal Citation Reports impact factor for the Journal of Primary Care & Community Health, and it is not indexed on MEDLINE. Both authors are on faculty at the University of Alberta and have previously published on this topic. Dr. Dobbs is well known for her work on driving.

The paper starts by noting the importance of motor vehicle crashes (MVCs) and identifying those 65 and over as high-risk drivers (when MVC rates are expressed per kilometer driven). The authors feel this is due to the high prevalence of medical conditions among older persons. Cognitive impairment is highlighted as being particularly important. Most would agree that a diagnosis of mild cognitive impairment or dementia should not in itself be sufficient grounds to lose driving privileges,(2) but to date no brief cognitive test has shown sufficient sensitivity and specificity to justify its use as the sole determinant of driving safety. The authors conclude that there is a need for an instrument to screen for “cognitive impairment relevant to driving.”

There are limitations to the literature review provided. It focuses solely on the use of cognitive instruments for the detection of concerns about driving. Information on the global severity (or functional impact) of cognitive impairment, the personality and behavior of the driver, his or her driving history (e.g., history of MVCs or citations), and whether concerns have been raised by the family should also be considered. Recent recommendations on the assessment of driving risk are not referenced—for example, the authors quote from the Canadian Medical Association 2000 guide on determining medical fitness to operate a motor vehicle but not the 2006 one.(2) While there is room for improvement, clinicians already have approaches to the assessment of driving risk among those with impaired cognition, and over time persons with a dementia do give up driving.(3)

Research Question

The stated goals of the study are “…to develop and validate a brief, scientifically based, easy-to-administer, easy-to-score, paper and pencil screening tool to be used in the primary care setting for the identification of individuals at risk for declines in driving competence due to cognitive impairment with or without dementia…[and] introduce dual cut-points to identify those who would very likely pass or fail a driving assessment, leaving those who fall between the cut-points to be referred for a driving assessment to determine competency.”

Study Methods

Two study groups were recruited: an instrument development cohort (n = 181) followed by a tool validation cohort (n = 244). Both consisted mainly of people referred for a driving assessment by community-based family physicians (146/181 and 192/244 [total 338/425, 79.5%]). The remainder were community-dwelling “healthy” (not defined) seniors actively recruited by posters or through community-based agencies (35/181 and 52/244 [87/425, 20.5%]). Inclusion criteria for those referred consisted of the presence of cognitive impairment (with or without a dementia), fluency in English, a valid driver’s license, currently driving, and consenting to the study. We infer that the healthy controls had to be fluent in English, in possession of a valid driver’s license, currently driving, and consenting.

The participants studied were not a representative sample of patients seen in primary care. Nearly 80% were referred for a driving assessment, with the rest being healthy volunteers. Those referred had already been deemed by their family physicians to be at higher risk for continued driving, possibly on the basis of cognitive impairment, as all had deficits. The healthy volunteers were self-selected and probably very confident in their driving abilities. Both sources of participants required fluency in English. Referral and volunteer biases are therefore likely. An unstated number of individuals approached for the instrument development component of the study declined to participate. We are told they were similar to those studied with regard to age, sex, and pass rate, but no further information is given. This raises concerns about possible non-respondent bias.

The investigators selected the DemTect as the cognitive test they were going to work with. The paper does not clearly state why they chose it rather than some other brief cognitive measure, nor why they decided to deconstruct it. The DemTect takes 10–20 minutes to complete and consists of five tasks (word list immediate recall, word list delayed recall, number transcoding, semantic word fluency, and digit span reverse).(4) Presumably the duration required for the full test was felt to be too long, or the full test was not as predictive as a combination of the subtests. The authors looked at the subtests plus the time required as possible predictors of driving performance, selected which ones to include in their new cognitive test, developed a scoring scheme, and determined dual cut-points to categorize participants as unsafe, indeterminate, or safe on an on-road test. An indeterminate classification meant the cognitive test could not accurately predict the result of the on-road test; these participants would have to be referred for further testing.

On-road driving tests are considered a valid measure of driving safety.(5) One of these, the DriveABLE on-road driving test, was used as the measure of driving performance. Participants underwent a standardized road test in a dual-brake car with a trained evaluator who was blinded to the driver’s diagnosis and cognitive test results. Results for the on-road test were reported as either pass or fail. The performance and interpretation of this particular on-road test remain somewhat of a “black box,” but what we know of it indicates that it is a reasonable approach to the assessment of driving risk.(6–9) One issue with it might be the use of a standardized vehicle rather than the car owned by the person being tested, though there are arguments supporting the use of a standardized vehicle.(8) Usually DriveABLE results are reported as “recommend pass,” “borderline pass,” or “recommend cessation.” We suspect the fail or unsafe grade in this study was the “recommend cessation” result, but this was not stated.

Data were collected by trained psychometricians, but it is not stated when, where, or in what sequence the cognitive tests were done. We are not told whether the psychometricians were blinded to the source (i.e., referred or healthy volunteer), cognitive status, diagnoses, and driving test results of the participants. This raises concerns about the possibility of expectation bias.

For predicting the results of the road tests, the authors did not consider participant characteristics other than cognitive test results. Specifics were not given on how the regression was done, or what decision rules were used to select the components of the DemTect included in the final model. The authors claimed that the proportion predicted to fail who actually did fail the on-road test was “analogous to sensitivity,” while the proportion predicted to pass who actually did pass was “analogous to specificity.” We disagree; this is not how sensitivity and specificity are calculated. We also fear the qualification “analogous” will be dropped quickly.
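For readers who want the distinction spelled out, the standard 2 × 2 definitions are reproduced below (our addition; this notation does not appear in the paper). Taking the on-road result as the reference standard, let TP = predicted to fail and failed, FP = predicted to fail but passed, FN = predicted to pass but failed, and TN = predicted to pass and passed. Then

\[ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}, \qquad \text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}. \]

The proportions the authors report condition on the prediction (the denominators TP + FP and TN + FN), which is why they correspond to the positive and negative predictive values rather than to sensitivity and specificity.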

Ethical approval was obtained, but the specific board from which it was received is not mentioned. Participants provided informed consent, but it was not stated whether this was written or oral. There was no comment on how the five healthy controls who “failed” the on-road test were dealt with. The authors report that the CEO and President of DriveABLE is the spouse of the lead author, but that the lead author neither owns shares in nor has a financial relationship with DriveABLE, and that her spouse was not involved in the research. The second author has no reported current connection with DriveABLE but consulted with the firm between 1993 and 2001. The study was supported by a grant from the Alberta Centre for Injury Control & Research.

RESULTS

Baseline information on the participants is provided in the paper. Their average age was in the mid-70s and 70% were men. The mean Mini-Mental State Examination (MMSE) score was about 26. Approximately half of all participants failed the on-road test. The failure rate was low among the volunteers (5/87, 5.7%; based on information provided in Table 1 and the Discussion section of the paper) and substantially higher among those referred (208/338, 61.5%).

Three subtests of the DemTect (semantic word fluency, delayed recall, and Arabic into word number transcoding) were selected for the new screening tool (called the SIMARD). These items modestly predicted on-road performance (R2 = 0.265; adjusted R2 = 0.253). Based on the available literature, the predictive tests chosen have relatively low face validity.(10,11) The scoring scheme used was as follows: score = (number transcoding × 10) + (delayed recall × 8) + semantic word fluency. No information is provided on interrater and intrarater reliability testing, nor on how the test operates when administered and scored by those with minimal or no training.
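To illustrate how the published scoring scheme operates in practice, consider the following worked example (ours, not the authors’; the raw subtest scores are purely hypothetical and imply nothing about typical score ranges):

\[ \text{SIMARD score} = 10 \times \text{(number transcoding)} + 8 \times \text{(delayed recall)} + \text{(semantic word fluency)}. \]

Hypothetical raw scores of 3 on number transcoding, 5 on delayed recall, and 14 on semantic word fluency would yield 10(3) + 8(5) + 14 = 84, a total that falls above the upper cut-point described in the next paragraph.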

The cut-points selected were 30 and 70. Those scoring 30 or less were judged to have a high probability of failing the on-road driving assessment while those scoring over 70 were categorized as having a high probability of passing. In the instrument development group, 49 (27.1%) scored 30 or less and were predicted to fail (42 [85.7%] failed the on-road test), 89 (49.2%) had an indeterminate score (40 [44.9%] of these participants failed the on-road test), and 43 (23.8%) scored over 70 and were predicted to pass (36 [83.7%] passed).

The authors reported an “analogous to sensitivity” rate of 86% and an “analogous to specificity” rate of 84%. These results are, respectively, the predictive value of a positive test and the predictive value of a negative test. The authors used three categories (predicted fail, indeterminate, and predicted pass) for the results of the SIMARD. While sensitivity and specificity are calculated with dichotomous variables, we can use multilevel likelihood ratios to determine whether the pretest probability of a condition (here failing the on-road driving assessment) changes substantially after a test is done (here the SIMARD).(12) Using the data presented, the likelihood ratio of failing an on-road assessment with a score of 30 or less is 3.95. It is 1.05 for a score of 31 to 70 and 0.14 for a score greater than 70. Guyatt et al.(13) suggest that likelihood ratios of less than 5 result in small changes in probabilities. Using the likelihood ratio of 3.95, if there is a 50% pre-SIMARD probability of failing the on-road driving assessment, a value of 30 or less on the SIMARD would increase the probability of failing the on-road driving assessment to 80%, which we feel would be insufficient in itself to recommend license revocation.
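To make the arithmetic behind that last statement explicit (a standard odds-likelihood conversion applied to the figures above, not a calculation presented in the paper), a 50% pretest probability corresponds to pretest odds of 1, so

\[ \text{posttest odds} = 1 \times 3.95 = 3.95, \qquad \text{posttest probability} = \frac{3.95}{1 + 3.95} \approx 0.80. \]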

DISCUSSION

The authors propose that physicians be aware of driving “red flags” and administer the SIMARD if cognitive impairment is suspected. These results would then be used to counsel patients. Those scoring between 31 and 70 would be referred for an on-road driving test. How effectively this would work in a primary care setting, and whether it offers any advantages over other approaches,(2,3,14–17) cannot be answered by this study.

On-road testing is relatively expensive and not available in all jurisdictions.(18) The authors state that the SIMARD might eliminate up to 60% of requests for in-depth driving assessments. We suspect the volume could easily increase if the SIMARD became widely used, as an unknown number of individuals would undergo an on-road evaluation primarily on the basis of an indeterminate test score. We feel the false-negative and false-positive rates are too high for the SIMARD to be used as the sole determinant of whether someone should be offered an on-road evaluation, especially for such an important issue. Losing the right to drive can have devastating effects on the person, yet continued driving by an unfit driver endangers both themselves and others. Based on the figures in the paper, approximately 1 in 6 (15%) of those scoring over 70 on the SIMARD will fail an on-road assessment, and the same proportion of those scoring 30 or less would pass it. These misclassification rates are relatively high. A receiver operating characteristic curve (along with the area under the curve) would have shown the problems with the accuracy of the SIMARD.(19) The posttest probability seen with the SIMARD does not justify making licensing recommendations solely based on the results of this test. Moreover, how the test would work in diverse primary care settings (where test administration is predictably not as standardized as in research settings) is unknown. We feel the risk of misclassification might be even higher there. This would be particularly problematic for individuals with the other medical conditions listed in the 2010 BC Guide, for whom no data are available.
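As a rough check (our calculation, using the instrument development figures reported above rather than any additional data from the paper), the proportions misclassified at each cut-point are

\[ \frac{43 - 36}{43} = \frac{7}{43} \approx 16\% \quad \text{(scored over 70 yet failed)}, \qquad \frac{49 - 42}{49} = \frac{7}{49} \approx 14\% \quad \text{(scored 30 or less yet passed)}. \]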

The authors compared the SIMARD to the MMSE. For the SIMARD they used two cut-points and a three-level classification scheme that excluded half of all participants, whereas for the MMSE they used a single cut-point (a score of 24 or more predicted a pass, while a score of less than 24 predicted failure on the on-road test) that included all participants in the calculation of sensitivity and specificity. They found that 81% of those predicted to fail by their MMSE score did in fact fail the on-road test, while 58% of those predicted to pass did pass. For the reasons outlined, these figures cannot be directly compared to the “analogous” sensitivity and specificity reported for the SIMARD. We are not suggesting that the MMSE is a good stand-alone test for assessing driving risk.(5,10,14,16,17,20) If used at all, the MMSE would be part of a global assessment of driving risk, and there would be a more nuanced interpretation of the MMSE results obtained. For example, while an MMSE score of 24 or less might identify an increased risk of unsafe driving, no claim would be made that a higher score indicates no concern.(5)

CONCLUSIONS

Assessing the risk of driving in older adults is complicated with no easy answers.(18) While Drs. Dobbs and Schopflocher should be complimented on their contribution to the literature, we can’t support the routine use of the SIMARD at this point in time. To adopt a new cognitive instrument for a limited indication, practitioners have to be convinced that it offers striking benefits. We don’t believe the SIMARD has been shown to work more effectively in a primary care setting than other approaches to assessing driving safety, such as the one outlined in the 7th edition of the CMA Driver’s Guide(2) and the practice parameter update of the American Academy of Neurology.(5) The SIMARD is not in our opinion sufficiently accurate to be the sole determinant of who should go for an on-road driving test. Further research, including confirmatory studies, is required to determine the role of the SIMARD in the assessment of driving risk among older patients.

ACKNOWLEDGEMENTS

We would like to thank colleagues for their thoughts and comments; the opinions expressed in this paper are our own.

CONFLICT OF INTEREST DISCLOSURES

We have no conflict of interest to declare. This paper is based on a presentation by D.B.H. at a Division of Geriatric Medicine (University of Calgary) Journal Club (October 27, 2010).

REFERENCES

1 British Columbia Ministry of Public Safety and Solicitor General Office of the Superintendent of Motor Vehicles. 2010 BC guide in determining fitness to drive [Internet]. Victoria: Office of the Superintendent of Motor Vehicles; 2010 Jul 12 [cited 2011 Jun 3]. Available from: http://www.pssg.gov.bc.ca/osmv/publications/docs/2010-guide-in-determining-fitness-to-drive.pdf

2 Canadian Medical Association. Determining medical fitness to operate motor vehicles: CMA driver’s guide, 7th ed. Ottawa: Canadian Medical Association; 2006.

3 Herrmann N, Rapoport MJ, Sambrook R, et al; Canadian Outcomes Study in Dementia (COSID) Investigators. Predictors of driving cessation in mild-to-moderate dementia. CMAJ 2006;175:591–5.

4 Kalbe E, Kessler J, Calabrese P, et al. DemTect: a new, sensitive cognitive screening test to support the diagnosis of mild cognitive impairment and early dementia. Int J Geriatr Psychiatry 2004;19:136–43.

5 Iverson DJ, Gronseth GS, Reger MA, et al. Practice parameter update: evaluation and management of driving risk in dementia: report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 2010;74:1316–24.

6 Dobbs AR. Evaluating the driving competence of dementia patients. Alzheimer Dis Assoc Disord 1997;11(Suppl 1):8–12.

7 Dobbs AR, Heller RB, Schopflocher D. A comparative approach to identify unsafe older drivers. Accid Anal Prev 1998;30:363–70.

8 Kowalski K, Tuokko H. On-road driving assessment of older adults: a review of the literature. Victoria: University of Victoria Centre on Aging and Justice Institute of British Columbia; 2007.

9 Korner-Bitensky N, Sofer S. The DriveABLE Competence Screen as a predictor of on-road driving in a clinical sample. Aust Occup Ther J 2009;56:200–5.

10 Molnar FJ, Patel A, Marshall SC, et al. Clinical utility of office-based cognitive predictors of fitness to drive in persons with dementia: a systematic review. J Am Geriatr Soc 2006;54:1809–24.

11 Mathias JL, Lucas LK. Cognitive predictors of unsafe driving in older drivers: a meta-analysis. Int Psychogeriatr 2009;21:637–53.

12 Haynes RB, Sackett DL, Guyatt GH, et al. Clinical epidemiology: how to do clinical practice research, 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2006.

13 Guyatt G, Rennie D, Meade MO, et al. Users’ guides to the medical literature, 2nd ed. New York: McGraw-Hill; 2008.

14 Hogan DB. Which older patients are competent to drive? Approaches to office-based assessment. Can Fam Physician 2005;51:362–8.

15 Byszewski A; Ottawa Driving Toolkit working group. The driving and dementia toolkit, 3rd ed for health professionals. Ottawa: The Champlain Dementia Network and the Regional Geriatric Program in Eastern Ontario; 2009. http://www.champlaindementianetwork.org/uploads/Resources/kitjune09.pdf

16 Carr DB, Ott BR. The older adult driver with cognitive impairment: “It’s a very frustrating life”. JAMA 2010;303:1632–41.

17 Molnar FJ, Simpson CS. Approach to assessing fitness to drive in patients with cardiac and cognitive conditions. Can Fam Physician 2010;56:1123–9.

18 Rapoport MJ, Herrmann N, Molnar FJ, et al. Sharing the responsibility for assessing the risk of the driver with dementia. CMAJ 2007;177:599–601.

19 Bédard M, Weaver B, Man-Son-Hing M, et al. The SIMARD screening tool to identify unfit drivers: are we there now? J Prim Care Community Health 2011;2:127–32.

20 Brown LB, Ott BR. Driving and dementia: a review of the literature. J Geriatr Psychiatry Neurol 2004;17:232–40.



Correspondence to: Dr. David B. Hogan, HSC-3330 Hospital Dr. NW, Calgary, AB T2N 4N1. E-mail: dhogan@ucalgary.ca




Canadian Geriatrics Journal, Volume 14, Issue 2, June 2011