If you worry that artificial intelligence (AI) will one day replace your own clinical acumen as a dermatologist, Vishal A. Patel, MD, advises you to think differently.
"AI is meant to be an enhancement strategy, a support tool to improve our diagnostic abilities," Patel, a Mohs surgeon who is director of cutaneous oncology at the George Washington University Cancer Center, Washington, said during the ODAC Dermatology, Aesthetic & Surgical Conference. "Dermatologists should embrace AI and drive how it is utilized – be the captain of the plane (technology) and the passenger (patient). If we're not in the forefront of the plane, we're not going to be able to dictate which way we are going with this."
In 2019, a group of German researchers found that AI can improve the accuracy and efficiency of specialists in classifying skin cancer based on dermoscopic images. "I really do believe this is going to be the future," said Patel, who was not involved with the study. "Current research involves using supervised learning on known outcomes to determine inputs to predict them. In dermatology, think of identifying melanoma from clinical or dermoscopic images or predicting metastasis risk from digitized pathology slides."
However, there are currently no universal guidelines on how large an AI dataset needs to be to yield accurate results. In the dermatology literature, most AI datasets range between 600 and 14,000 examples, Patel said, with large study-to-study variation in performance. "Misleading results can result from unanticipated training errors," he said.
"The AI network may learn its intended task or an unrelated situational cue. For example, you can use great images to predict melanoma, but you may have an unintended poor outcome related to images that have, say, a ruler inside of them clustered within the melanoma diagnoses." And unbeknownst to the system's developer, "the algorithm picks up that the ruler is predictive of an image being a melanoma and not the pigmented lesion itself." In other words, the algorithm is only as good as the dataset being used, he said. "This is the key element, to ask what the dataset is that's training the tool that you may one day use."
Convolutional Neural Network
In 2017, a seminal study published in Nature showed that for classification of melanoma and epidermal lesions, a type of AI used in image processing known as a convolutional neural network (CNN) was on par with dermatologists and outperformed the average. For epidermal lesions, the network performed more than one standard deviation above the average for dermatologists, while for melanocytic lesions, the network was just below one standard deviation above the dermatologists' average. A CNN "clearly can perform well because it works on a different level than how our brains work," Patel said.
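For readers unfamiliar with the underlying mechanics: a CNN's basic operation is convolution, in which a small numeric kernel slides over an image and produces a "feature map" highlighting patterns such as edges; a trained network stacks many such layers and learns its kernels from labeled images. The sketch below is purely illustrative (it is not the network from the Nature study) and hand-codes a single edge-detecting kernel in plain Python to show the operation:

```python
# Illustrative only: one 2D convolution, the basic operation a CNN stacks
# and learns. This is NOT the network from the Nature study; a real CNN
# learns thousands of kernels from labeled training images.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the image patch at (i, j)
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes from left to right.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# A tiny 4x5 "image" with a vertical edge between columns 2 and 3.
image = [[0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1]]

feature_map = conv2d(image, edge_kernel)
print(feature_map)  # [[0, 3, 3], [0, 3, 3]] -- nonzero where the edge sits
```

The feature map is zero over the flat region and positive where the kernel straddles the edge, which is the kind of low-level cue deeper layers combine into lesion-level features.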
In a separate study, a CNN trained to recognize melanoma in dermoscopic images was compared to 58 international dermatologists with varying levels of dermoscopy experience; 29% were "beginners," with less than 2 years of experience; 19% were "skilled," with 2-5 years of experience; and 52% were "experts," with at least 5 years of experience. The analysis consisted of two experiments: In level I, dermatologists classified lesions based on dermoscopy only. In level II, dermatologists were provided dermoscopy, clinical images, and additional clinical information, while the CNN was trained on images only. The researchers found that most dermatologists were outperformed by the CNN. "Physicians of all different levels of training and experience may benefit from assistance by a CNN's image classification," they concluded.
Gene Expression Profiling
Another aspect of AI is gene expression profiling (GEP), which Patel defined as the evaluation of frequency and intensity of genetic activity at once to create a global picture of cellular function. "It's AI that uses machine learning to evaluate genetic expression to assess lesion behavior," he explained.
One GEP test on the market is the Pigmented Lesion Assay (PLA) from DermTech, a noninvasive test that evaluates the expression of two genes to predict whether a lesion is malignant. "Based on their validation set, they have shown some impressive numbers," with sensitivities above 90%, and published registry data that have shown higher sensitivities "and even specificities above 90%," he said.
"On the surface, it looks like this would be a useful test," Patel said. A study published in 2021 examined whether the test's reported performance held up against real-world evidence. Based on the authors' analysis, he noted, "you would need a sensitivity and specificity of 95% to yield a positivity rate of 9.5% for the PLA test, which is what has been reported in real-world use. So, there's a disconnect somewhere and we are not quite there yet." That may be because the validation and training datasets were not uniform with each other, he continued. Also, the expression of certain genes is different "if you don't have a clean input variable" of what the test is being used for, he added.
"If you're not mirroring the dataset, you're not going to get clean data," he said. "So, if you're using this on younger patients or for sun-damaged lesional skin or nonmelanocytic lesions around sun-damaged areas, there are variable expressions that may not be accurately captured by that algorithm. This might help explain the real-world variation that we're seeing."
Another GEP test in use is the 31-Gene Expression Profile Test for Melanoma, which evaluates gene expressions in melanoma tumors and what the behavior of that tumor may be. The test has been available for more than a decade "and there is a lot of speculation about its use," Patel said. "A recent paper attempted to come up with an algorithm of how to use this, but there's a lot of concern about the endpoints of what changes in management might result from this test. That is what we need to be thinking about. There's a lot of back and forth about this."
In 2020, authors of a consensus statement on prognostic GEP in cutaneous melanoma concluded that before GEP testing is routinely used, the clinical benefit in the management of patients with melanoma should be established through further clinical investigation. Patel recommended the accompanying editorial on GEP in melanoma, written by Hensin Tsao, MD, PhD, and Warren H. Chan, MS, in JAMA Dermatology.
In Patel's opinion, T1a melanomas (< 0.8 mm, nonulcerated) do not need routine GEP, but the GEP test may be useful in cases that are in the "gray zone," such as those with T1b or some borderline T2a melanomas (> 0.8 mm, < 1.2 mm, nonulcerated, but with high mitotic rate, etc.); patients with unique coexisting conditions such as pregnancy; and patients who may not tolerate sentinel lymph node biopsy (SLNB) or adjuvant therapy.
Echoing sentiments expressed in the JAMA Dermatology editorial, he advised dermatologists to "remember your training and know the data. GEP predicting survival is not the same as SLNB positive rate. GEP should not replace standard guidelines in T2a and higher melanomas. Nodal sampling remains part of all major guidelines and determines adjuvant therapy."
He cited the editorial's characterization of GEP as "a powerful technology" that heralds the age of personalized medicine but is not yet ready for ubiquitous use: "Prospective studies and time will lead to highly accurate tools."
Patel disclosed that he is chief medical officer for Lazarus AI.
This article originally appeared on MDedge.com, part of the Medscape Professional Network.
Medscape Medical News © 2022 WebMD, LLC
Cite this: Why Dermatologists Should Support Artificial Intelligence - Medscape - Mar 07, 2022.