BOSTON — Although many behavioral neurologists are adhering to recommendations about the use of single-domain cognitive assessment tests, new survey data from the American Academy of Neurology (AAN) suggest that more needs to be done.
The survey of 200 members of AAN's Behavioral Neurology Section showed that 75% "often used" a clock-drawing test when assessing spatial cognition, 70% used the digit span forwards/backwards test when assessing attention, and 56% used semantic fluency as a way to assess language.
No specific memory or executive function tests were "often used" by the majority of those who responded, but recalling three to five unrelated words and the Luria Hand Sequence Test were most commonly cited for the two domains, respectively.
Although both of these tests are considered brief and widely accessible, "neither has robust normative data," which highlights "that there's room for improvement in measuring memory and executive dysfunction," said James R. Bateman, MD, fourth-year resident at the University of North Carolina at Chapel Hill.
"Ultimately, our goal is to use these data as a starting point to improve the quality and utility of clinical cognitive testing," he added. Dr Bateman discussed the findings during his presentation, titled "Do We Practice What We Preach?", at the American Academy of Neurology (AAN) 2017 Annual Meeting.
"Little Data" on Routine Use
For the survey, "we wanted to identify how and what behavioral neurologists do regarding cognitive testing in clinical practice," said Dr Bateman. "And we wanted to evaluate whether current practices conform to recommendations published in 2015," which were based on an evidence-based report.
In addition, the investigators sought to explore broader aspects of these assessments in order to improve the Neurobehavioral Status Exam (NBSE). Interestingly, "there is little data about what [cognitive] tests are routinely used," noted Dr Bateman.
The current survey, which was developed by an AAN Behavioral Neurology Section workgroup, was sent to 713 Section members.
Of the 200 (29%) who responded, 50% said they spent most of their time in clinical practice and 49% spent most of their time in academic practice; 58% had been practicing for 20 years or longer; and 64% saw at least 20 patients a week.
The most common types of patients seen by the respondents were those with adult dementia (n = 165) and stroke/traumatic brain injury (n = 96). Half of the providers reported spending more than 30 minutes on cognitive testing of new patients, 24.5% spent 11 to 20 minutes, 13.5% spent 21 to 30 minutes, and 11% spent 1 to 10 minutes.
In addition, 49% reported that they perform cognitive testing themselves. Although few of the providers were using computerized testing, 63% said they would consider using it in the future.
All were asked about their use of cognitive tests within each of five cognitive domains: spatial cognition, attention/concentration, language, memory, and executive function. The three test-use ratings were "often use," "occasional use," and "never use."
"We compared reported practice characteristics in these domains to consensus testing recommendations, which include brevity, accessibility (public domain), and the availability of normative data in relevant patient populations," write the investigators.
Limitations of Memory Assessments
While the single-domain tests used most often by a majority of the respondents for attention, language, and spatial cognition conformed to these recommendations, those used most often for memory and executive function did not.
These two domains "did not seem to have a natural consensus on tests with robust normative data," said Dr Bateman.
The 3-5 Unrelated Words test for memory "isn't rigorous enough," with characteristics that vary significantly; up to 19% of patients who recall only one third of its words are considered normal on more detailed testing, he said.
The second most commonly used memory assessment was the Rey-Osterrieth Complex Figure Test. While Dr Bateman said it has "good normative data" and is studied across multiple disorders, he added that the test limits specificity and has a long administration time and complex scoring.
The CERAD Word List, which came in third on the "often used" list of memory tests, is relatively simple, takes just 5 to 10 minutes to complete, and provides excellent normative data for older adults with dementia. However, robust normative data is lacking for those outside of that population, said Dr Bateman.
In addition, the Rey Auditory Verbal Learning Test provides good normative data and can be used for a variety of disorders. But it's more complex, which may limit its use for the NBSE, he added.
FAB for Executive Function?
For assessing executive function, the Go–No Go Task came in second in use to the Luria Hand Sequence Test. But "both were used in isolation and really lack standardization and normative data," said Dr Bateman.
On the other hand, the Frontal Assessment Battery (FAB) has robust normative data and "is probably a good candidate for an executive screening test for the NBSE," said Dr Bateman.
Overall, "we urge greater reliance on brief, accessible standardized tests with normative data, particularly for assessing memory and executive functions," write the researchers.
During the postpresentation question-and-answer session, an audience member noted that many patients present with a range of symptoms, so the use of a specific battery may not be appropriate.
"I think that's a really excellent point," answered Dr Bateman. "Coming up with a single memory test or single test for executive function is probably never going to work for all behavioral neurologists across the board. But what is important is coming up with a range of tests within each individual cognitive domain that have normative data that we can then use."
No Easy Solution
Session co-moderator H. Branch Coslett, MD, professor at the Center for Cognitive Neuroscience at the University of Pennsylvania School of Medicine in Philadelphia, told Medscape Medical News that the issue of cognitive testing "is a huge problem."
"That's because you have conflicting constraints, and the big issue is time," said Dr Coslett. "To do an adequate cognitive exam is time consuming, there's no way around it. And given the pressure on reimbursement to see more patients, it's difficult to see how we're going to do better."
He added that the idea of only using brief batteries of tests that have empirical validation is a good one, "but knowing exactly what that battery might be is where the rub is."
Dr Coslett noted that his group has developed their own version of the Montreal Cognitive Assessment, called the Philadelphia Brief Assessment of Cognition. "We like it, but I'm not sure if anyone else would," he said.
"People are very wedded to their particular tests. Nobody is going to look at our battery and say, 'Oh, this is perfect.' Some people have very strong allegiances to different tests. So I don't see an easy solution to this," he concluded.
The survey was created and distributed by the AAN. Dr Bateman and Dr Coslett disclosed no relevant financial relationships.
American Academy of Neurology 2017 Annual Meeting (AAN). Abstract S18.003. Presented April 24, 2017.
Medscape Medical News © 2017
Cite this: AAN Survey: 'Room for Improvement' in Clinical Neurocognitive Testing - Medscape - May 03, 2017.