John M. Mandrola, MD: Hi, everyone. This is John Mandrola from theheart.org | Medscape Cardiology, and I am here at the American Heart Association (AHA) meeting with my friend, Mintu Turakhia, who is a cardiac electrophysiologist at Stanford. He is an outcomes researcher and is on the leading edge of digital health. That is what we're going to talk about today. Mintu, welcome.
Mintu Turakhia, MD, MAS: Thanks, John. It is really great to be here.
Dr Mandrola: Let's start with artificial intelligence (AI) and begin with my bias, that if AI is anything like computer ECG reading, we are in trouble. What do you say?
Dr Turakhia: AI is a revolution, and a little bit of a revelation, in healthcare. Let me explain that. It has really come into healthcare from the outside world; we hear about self-driving cars, robots, and things like that, but AI in healthcare is not about that. We are not going to have robots taking care of us in the foreseeable future.
Where AI is being reframed is in what I call assistive intelligence, things that help clinicians, hospitals, and the staff that we lean on so heavily. Machine-read ECGs are a great example of early technology that was used to relieve people like us who have to read 400 ECGs on a weekend. That is currently in the form of what we call an expert system.
The big ECG companies got together and created a set of rules and parameters that were used to classify arrhythmias, and they gave that to us and we would edit them. We know in practice that they do not really work all that well and we have to reclassify. What AI does that is different is that it attempts to learn off of a set of ECGs that are then coded or annotated to a gold standard (like a clinician read). The AI then uses complex math to really learn and apply that learning forward.
There is a difference in the computational ability, and the studies thus far show that it is at least as good as a cardiologist and generally better.
AI Learns On the Go
Dr Mandrola: AI learns differently from humans—is that true?
Dr Turakhia: It depends on the use case. Where AI is like humans is in tasks that require a reasonable amount of computation. Where AI excels over humans is when something is very computationally heavy: chess, Go, games with strategy. Where AI doesn't do as well is where the computation is not straightforward linear math or looking many, many steps ahead—things like reading facial expressions or understanding the context of a child's cry. Those are harder to train.
If you bring this back to cardiology and ECGs, the reason it can work well with an ECG is that it is basically an image, and the technology and algorithms for artificial intelligence have really dramatically improved over the past 5-10 years. There is a framework of deep learning called a convolutional neural network, and these networks have been used across medicine and compared with physicians on classification tasks.
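[Editor's note: The core operation of a convolutional network can be illustrated in a few lines. The sketch below, a toy example in plain NumPy and not any clinical model, slides a small kernel along a one-dimensional "ECG" trace and applies a nonlinearity, which is what one layer of a 1-D convolutional network does. The signal, kernel, and spike position are all invented for illustration.]

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel along the signal
    and take a dot product at each position (the core CNN operation)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Nonlinearity applied after a convolutional layer."""
    return np.maximum(x, 0.0)

# A toy "ECG" trace: flat baseline with one sharp deflection (an R wave).
ecg = np.zeros(20)
ecg[10] = 1.0

# A kernel that responds strongly to abrupt spikes.
kernel = np.array([-1.0, 2.0, -1.0])

feature_map = relu(conv1d(ecg, kernel))
print(int(np.argmax(feature_map)))  # index where the spike response peaks
```

A real ECG classifier stacks many such layers, learning the kernels from annotated examples rather than hand-coding them as an expert system would.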
This first started for assessment of skin lesions, whether they are melanoma or nonmelanoma.
Dr Mandrola: Okay.
Dr Turakhia: Then we moved into retinopathy, looking at funduscopic exams and assessing for diabetic retinopathy. There were two landmark papers[3,4] that showed, again, that deep learning algorithms performed at least as well and often better. That has recently been extended to Holter classification as well.
Getting back to your question: Is it different learning? The reason I think this is so important for our field is that the errors a deep learning algorithm makes—what is called, in classification problems, a confusion matrix—are actually very similar to the errors we make when we misread things.
Specific examples are atrial fibrillation (AF), which often gets classified incorrectly as atrial tachycardia or sinus rhythm. Wenckebach may often get classified as Mobitz II heart block or a complete heart block. Those are the same errors that the deep learning algorithms are making. They are recapitulating the human tree of error. If you can do that, I would argue that it is much safer because you are not creating these random nonsensical errors.
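[Editor's note: The confusion matrix mentioned above is just a table counting how often each true rhythm was called as each predicted rhythm. A minimal sketch, using hypothetical reads with the AF-versus-atrial-tachycardia confusion described in the interview:]

```python
from collections import Counter

def confusion_matrix(true_labels, predicted_labels, classes):
    """Count how often each true class was predicted as each class.
    Rows are the true rhythm, columns the model's call; off-diagonal
    cells are the 'confusions'."""
    counts = Counter(zip(true_labels, predicted_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

classes = ["AF", "AT", "sinus"]

# Hypothetical reads: one AF tracing mistaken for atrial tachycardia,
# mirroring the human error pattern described above.
truth = ["AF", "AF", "AF", "AT", "sinus", "sinus"]
preds = ["AF", "AT", "AF", "AT", "sinus", "sinus"]

matrix = confusion_matrix(truth, preds, classes)
for cls, row in zip(classes, matrix):
    print(cls, row)
```

The point in the interview is that a deep learning model's off-diagonal cells tend to land in the same places a clinician's would (AF confused with atrial tachycardia, Wenckebach with Mobitz II), rather than in nonsensical cells.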
Dr Mandrola: These algorithms—give us a couple of examples of what we might use them for in arrhythmias.
Dr Turakhia: The general application of AI-based computation can be used many ways. We have already seen that it works well for Holter; it's at least as good as cardiologists, perhaps better. Think about it as assisting physicians, easing the burden, having better preliminary reads.
What if we get to the point where the technology for handheld or portable ECGs is available all over the world? What is going to lag is the expertise to read that. Would we at some point be able to [use AI to] give provisional reads, at least for high-risk conditions, without having it be confirmed by a cardiologist or a doctor, which could take much longer? That is a potential use case.
Another use case: We all know that the style and perhaps quality of an ECG interpretation depends on who is reading that week.
Dr Mandrola: Right.
Dr Turakhia: That is a known thing. If an algorithm calibrates well to a population of highly trained cardiologists who are good, then maybe this is a way of ensuring quality among a group. It might not be that the AI is doing the reading, but rather it's making sure that no single physician is falling behind the pack, so to speak, or is below the average in terms of misclassification. That is a way of maintaining quality without having to do double reads. Now, in terms of new use cases, what could be very exciting is the stuff that we love to spend our time arguing about, which is, where is the PVC [premature ventricular contraction] coming from?
Dr Mandrola: Meaning, right or left ventricle?
Dr Turakhia: Is it from the outflow tract? Is it from the cusp? I would argue that we are currently at a stage where there are expert systems. There are papers that say, you want to look at the intrinsicoid deflection and the positivity and various things in different leads. It would be nice to be able to consolidate that data and train algorithms across known classifiers—meaning, PVCs that were localized—and see whether it can reasonably guide us as doctors to identify where to potentially start mapping.
Dr Mandrola: Assisting doctors?
Dr Turakhia: Yes.
The Connected Self: No Doctor Required?
Dr Mandrola: Let me go back to the other thing you said, about algorithms looking at AF detection and arrhythmia detection. What do you think about the future—say, 5 years from now, 10 years from now—in terms of wearables and the connected selves, so to speak? Will patients be able to take doctors out of the loop, and could these devices ever get so good that they could do AF detection on their own?
Dr Turakhia: I do not think that doctors will ever be out of the loop for treatment. I do not see algorithms getting DEA numbers and medical licenses in my lifetime. But what we are going to have is a situation where the diagnostics have moved out of the domain of our offices and into the domain of direct-to-consumer, direct-to-patients, and things like that. We already see that on the consumer side with wearable exercise monitors, activity monitors, and heart rate sensors.
The next step is, can you do more than just detect heart rate? Is there something in the signal that could clue you in on potential unusual changes in heart rate or rhythm? That is the exercise we are in now to understand whether that can be done.
Dr Mandrola: Better prediction, perhaps?
Dr Turakhia: Prediction is hard. When I think of AI, I think of two problems: classification and optimization. Classification is when you are looking at a picture and you want the AI to tell what it is (whether it is a skin lesion or ECG). Optimization is about reducing pain points, getting elevators to work more efficiently. I could even extend that: A really important use case that is not being tapped is scheduling procedures and procedure rooms in the OR.
When you get into prediction, now you have longitudinal access, another dimension of time. Algorithms are not going to be able to tell you exactly when someone is going to have a stroke or go into cardiac arrest in the foreseeable future because these are stochastic events; they are based a little bit on chance. And just because you measure more does not mean that you are going to be able to predict those things. But they may help clue us in on earlier diagnoses of certain things.
Dr Mandrola: That is an important point. More collection of data is not necessarily better. One of the things I am concerned about with the connected self is that until we redefine what is normal in terms of ambient arrhythmia or low heart rates/high heart rates, it seems like there is going to be trouble with overdiagnosis and overtreatment. What do you think about that?
Dr Turakhia: I think that is a real concern at two levels; one is the actual classification of disease. The nice thing about ECG for rhythms is that it is a gold standard, and we believe that there is very high specificity and sensitivity for things like atrial fibrillation. Now, if you extend that all the way down to simple parameters, just heart rate alone, it is not going to be sensitive or specific for AF; it is going to have lower sensitivity. It may have specificity if you say, well, every heart rate above X bpm is probably AF, but you are going to miss a lot then.
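[Editor's note: The sensitivity/specificity trade-off described here is easy to make concrete. The sketch below uses entirely invented patient data and an arbitrary rate threshold; it simply shows why a rate-only rule misses slow AF (hurting sensitivity) and flags fast non-AF rhythms (hurting specificity).]

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical patients: (heart rate in bpm, truly in AF?)
patients = [(72, False), (110, True), (95, False), (140, True),
            (65, True),  (130, False), (80, False), (88, True)]

threshold = 100  # arbitrary rule: call it "AF" if rate exceeds this
y_true = [af for _, af in patients]
y_pred = [hr > threshold for hr, _ in patients]

sens, spec = sens_spec(y_true, y_pred)
print(sens, spec)
```

In this toy cohort the rate-controlled AF patients (65 and 88 bpm) are missed and the fast sinus rhythm (130 bpm) is falsely flagged, which is exactly the failure mode of "every heart rate above X bpm is probably AF."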
What about pulse irregularity? When doctors do irregular pulse checks, it turns out that that is actually not very specific or sensitive.
If you extend those concepts to looking at wave forms, looking at irregularities, you have to make sure that these algorithms would do their job, and we do not know. The first problem is classifying correctly. The second problem is, what do you do about ambient signal that is low-level? Let's say that we all had the ability to monitor an ECG forever and all the time. What are you going to do with that run of atrial tachycardia that lasts 6 minutes? What are you going to do if that 1-hour run kind of looks like AF but you are not sure? That is where things get really messy.
Dr Mandrola: It seems like we are going to need more time to figure that out.
Dr Turakhia: I would argue that we still do not have great consensus on what to do with gold-standard monitoring. For example, when you and I put in pacemakers or defibrillators with atrial leads, and someone comes in for their check or is on remote monitoring, we get an alert that they had a new episode of AF for 1 hour. Is that enough to give them anticoagulation? Is it not? There is actually a lot of variation across practices and physicians, and the guidelines have not been able to pin that down due to lack of evidence.
Dr Mandrola: I think we are going to learn more as we go on. I believe that these extra digital data are going to help define what the new normal is.
Dr Turakhia: I hope so. I think the other solution—or the challenge, I should say—is that just because you measure things, or you download apps, or you wear these things, does not mean that outcomes are going to improve. We have accomplished a lot with miniaturization of technology and with connectivity through the smartphone, and perhaps with wearables, but where we have really failed is in behavior change. Getting patients to do the right thing is hard; how do we fix that?
What are the incentives that are needed for things like that? That is the key. Are we going to give each individual some assessment of risk in the same way that they see the number of calories on the menu at a fast-food restaurant, or is it going to be framed through financial incentives such as lower premiums or a penalty, depending on your behaviors and your adherence? I do not know the answer, but I think this is a really hot area right now.
Dr Mandrola: Yes. Provocative. Mintu, thank you for being with us.
Dr Turakhia: Always a pleasure. Thanks, John.
© 2018 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Artificial Intelligence in Cardiology: Friend or Foe? - Medscape - Feb 08, 2018.