This transcript has been edited for clarity.
Suraj Kapa, MD: Hello, everybody, and welcome back to the Mayo Medscape video series. Today we are going to be talking with our colleagues about the role of artificial intelligence (AI) in healthcare, specifically as it relates to cardiology and the identification of novel opportunities in this area. I am joined today by Dr Francisco Lopez-Jimenez, professor of medicine at Mayo Clinic Rochester, with an expertise in preventive cardiology; and his colleague, Zachi Attia, assistant professor of medicine, also at Mayo Clinic in Rochester, and a bioengineer by trade. Together they comprise the leadership that is heading up the AI efforts within innovation in the cardiovascular department at Mayo Clinic. My name is Suraj Kapa, and I am also an assistant professor of medicine.
Three Future Applications of AI
Kapa: Francisco, I have been hearing so many excellent things about opportunities for applying AI to healthcare data. When you look at the landscape of AI, what do you see as the future or the present of applying AI to cardiology, and to healthcare in general?
Francisco Lopez-Jimenez, MD, MBA: Thank you, Suraj. This is a very important question. I see the future developing in three main areas. The first is the optimization and automation of the interpretation of echocardiograms, coronary angiograms, cardiac CTs, and other diagnostic modalities. The second is identifying people at risk. I think AI has the potential to identify those at risk for sudden cardiac death, those who might have a reinfarction, and those who might die from heart failure soon after they leave the hospital. The third is the management of big data. As we know, big data is everywhere, so we need AI to better handle the massive amounts of information we get from wearable devices like Holter monitors, which help us better understand the underpinnings of cardiovascular disease.
Kapa: Zachi, if you had to explain AI to a layperson, what is it exactly?
Itzhak Zachi Attia, MS, MD: Thank you, Suraj. AI is the ability of machines to mimic human intelligence. It enables [machines] to automate the process of taking an ECG, for example, and then give us the rhythm analysis. We also discovered that it can see patterns that humans cannot see. That may be the more interesting part of AI. It can look at thousands or hundreds of thousands of ECGs, look at the prognosis, and decide if the patient is going to have a disease. It does that just by looking at a lot of examples and the labels. We do not say what to look for, we do not define any specific features; we start completely unbiased and without a specific hypothesis, except for the label we are looking for. Just by showing the neural network a lot of examples, it learns to find the patterns that are unique for each label.
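The learning setup Attia describes, labeled examples in and discovered patterns out with no hand-specified features, can be illustrated with a toy sketch. This is not the Mayo model: the data below are synthetic, and a simple logistic regression stands in for a deep neural network. The principle is the same, though: the model is shown only raw waveforms and labels, and it finds the discriminating pattern on its own.

```python
import numpy as np

# Toy illustration (synthetic data, not real ECGs): a model learns to
# separate two classes of waveforms purely from labeled examples.
# No feature is specified by hand; every sample of the signal is a raw input.
rng = np.random.default_rng(0)

def make_signal(label, n=200):
    t = np.linspace(0, 1, n)
    base = np.sin(2 * np.pi * 5 * t)           # rhythm shared by both classes
    subtle = 0.3 * np.sin(2 * np.pi * 17 * t)  # faint pattern only in class 1
    return base + label * subtle + 0.1 * rng.standard_normal(n)

X = np.array([make_signal(lbl) for lbl in [0] * 200 + [1] * 200])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Logistic regression trained by gradient descent: the weights (the learned
# "pattern") come entirely from the labels, never from a human-defined rule.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The learned weight vector ends up aligned with the subtle component the classes differ by, even though we never told the model that component existed, which is the essence of the "patterns humans cannot see" point above.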
Applications of AI to Cardiac Data
Kapa: I have read a lot of articles in this space. With all the work you have done so far and that has been accomplished so far, what do you both think is the most promising or exciting discovery you have made in applying AI to cardiac data?
Lopez-Jimenez: To me, the one that has been very exciting and innovative was the identification of people who have paroxysmal atrial fibrillation but the ECG shows sinus rhythm. It really opens up a new era in cardiology, particularly in the analysis of electrocardiography, because it can find a condition that generally requires an ECG showing atrial fibrillation to make the diagnosis. Now we can identify those with paroxysmal atrial fibrillation with an ECG that might seem otherwise normal.
Kapa: Zachi, your thoughts?
Attia: As Dr Lopez mentioned, that is interesting because we're looking at an ECG and saying, "It looks totally normal," but the AI algorithm says something else. And the AI algorithm is the right one.
Another example is low ejection fraction (EF) detection. When we built the low EF model, we divided patients by their current echocardiogram. We later looked at patients who had a false positive, meaning the AI said they had heart failure or low EF but the echocardiogram was normal (EF was actually about 50%, not ≤ 35%). When we looked at their future echocardiograms, we noticed that they had six times the risk of developing low EF in the future. Now, we do not think that the neural network was able to predict the future, but we think it was able to locate subclinical features of the disease, so it can see something below the surface that humans cannot see. It basically helps us define the term previvors—patients who are going to have a disease—and maybe we can do something to prevent it instead of waiting for the disease to happen.
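The sixfold figure Attia cites is a relative risk: the rate of later low EF among AI-flagged (but echo-normal) patients divided by the rate among unflagged patients. A minimal sketch of that calculation, with invented counts (these numbers are illustrative only, not the study's data):

```python
# Hypothetical illustration of the "false positive as early warning" idea:
# relative risk of developing low EF later, comparing patients the AI flagged
# (but whose echocardiogram was normal) with patients it did not flag.
# All counts below are invented for the example.
flagged_developed, flagged_total = 18, 100       # AI-positive, echo-normal
unflagged_developed, unflagged_total = 30, 1000  # AI-negative, echo-normal

risk_flagged = flagged_developed / flagged_total        # 0.18
risk_unflagged = unflagged_developed / unflagged_total  # 0.03
relative_risk = risk_flagged / risk_unflagged
print(f"relative risk: {relative_risk:.1f}")  # 6.0 -> "six times the risk"
```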
Challenges for Implementing AI
Kapa: Wonderful. Francisco, wearing your hat as a clinical cardiologist, what do you see as the potential hurdles of implementing these algorithms into general care?
Lopez-Jimenez: We have to be mindful that AI may present challenges in terms of the validity of the data and of the algorithms. We need to be sure that what we call AI comes from good data—data that are reproducible and applicable to a variety of populations, not just a very specific group of people. Second, the application of AI is challenging in the sense that we know the application of knowledge does not happen automatically. We are used to old paradigms of medicine, so this might become a slow process if we do not think about the best way to implement AI. We also need to be careful not to assume that implementation is always going to be successful. There are always unintended consequences of applying new technology, so we have to measure and evaluate [the technology] every time we implement new algorithms to see whether there are any unintended consequences. An algorithm might be very good at diagnosing a condition but have an extremely high rate of false-positive results, and that could make healthcare exceedingly expensive if we are not careful to implement it step by step.
Kapa: You bring up a number of interesting points, namely that the implementation science underlying the deployment of algorithms built from big data is unique compared with where we have come from. We are coming from a place where we might create scoring systems or guidelines, but it was still contingent on the clinician to make a decision, whether by entering data into a scoring system or by thinking through the guidelines as they apply to a patient. Now you are getting an automated read from a device saying, "You should think about this." How do you think clinicians' behaviors are going to change? What are your thoughts?
Lopez-Jimenez: The question is still up in the air. At the end of the day, it has to be clinicians who will be ultimately responsible for what the machine is saying. How can we reconcile this completely automatic system that is going to give you a diagnosis and at the same time have a clinician who is responsible for that diagnosis? This is something we have to define. For example, who is responsible if a completely automatic car has an accident? You might say the company is responsible. But in medicine it is more complicated because errors may happen occasionally, so who is going to be responsible for that? We have to really work on this. We have to work on how the clinician is going to still have the final word, which I think is going to be the case for many years to come. No matter how automatic a system becomes, the clinician will have the final word to determine whether he or she thinks this diagnosis is present or not.
Importance of 'Explainability'
Kapa: Since AI is a broad field and includes many different machine learning and other techniques for data analysis, one of the major criticisms I hear sometimes of this kind of AI work is the "explainability." In other words, somebody says that the ECG can tell us if you have a low EF or not, but why? Zachi, do you think it is always going to be a black box, or do you think we will get to a point where we will very readily be able to explain what these obscure neural networks are actually seeing?
Attia: I think we will be able to open the black box, or at least peek into it and see the main features and main drivers of the network, but not really know exactly what it does. We will get hints and say, "I see that when this happens, the network decides this way or another," but I do not think it will ever be a fully explainable model. That is actually a strength of these models, because insisting on explainability holds us back: it only lets us do what we can already explain. If the network is seeing a very nonlinear, very gestalt-like feature, even an expert who could make the same call would not really know what he or she is seeing. The expert would just say, "I see there is something wrong with it." I think we want to make sure we are not limiting our systems to only the very simple things we can explain.
Also, when looking at big data, bias arising from differences between data distributions needs to be addressed. When we are testing a model, we want to make sure that we are testing it on a variety of patients. We know that neural networks and other AI models are very good at pattern recognition. If there is a pattern we do not want the model to recognize, there is still a chance it will, so we want to make sure we can generalize the network and confirm it works for everyone. When you develop an AI model, it is key to validate it before using it, to make sure it actually works for the population you plan to use it on.
Kapa: You brought up a great point about explainability. When an expert clinician, an expert cardiologist, a leader in the field walks into a room and says, "You're doing this wrong; it should be done this way," we accept it as gospel, even if they cannot explain it in terms that would let someone else reliably reproduce the same judgment. The expert is giving their clinical gestalt, often based on decades of experience. Our expectations of the computer are very different from our expectations of the human, because our expectation of the human is that they do not necessarily need to be able to explain it as long as they get it right. It seems like we need to move a little more toward saying, "The data relationship is so complex that what matters is that it is getting it right all the time." Your thoughts, Francisco?
Lopez-Jimenez: I completely agree. Clinical decision-making is a great example for that, as is the use of medications that we do not understand. It took decades to understand the mechanism of action of some medications, but as long as they were proven to be effective, I do not think people were complaining. For example, we do not really understand how and why aspirin has a preventive effect for colon cancer, but several studies show that it does. So how can we stop using a preventive strategy just because we do not understand the way it works? We do not do that, and I do not think AI should be any different.
Kapa: As we forge ahead into these activities, we need to move past the idea of replacing humans and toward the idea that we are actually augmenting the ability of humans to provide the care that we need. Otherwise, as you point out, why invent new drugs? Why invent new techniques for doing the same type of work in a less invasive way? We do it because we think it will provide better care, not because we want to replace a certain person or a certain group of people. We do it because care will improve and expand as we push forward.
Innovative Relationship Between Data Scientists and Clinicians
Kapa: I feel like here [at Mayo] we have a very unique relationship within our clinical innovation space which is led by data scientists and a clinical cardiologist. How do you envision that role? Do you envision healthcare evolving in the way of having data scientists embedded into the programs from the beginning, or do you envision it as calling upon individuals with data science expertise to do projects almost like we do with our statisticians today? What is the unique element to bringing the data scientist into play in our healthcare schema?
Lopez-Jimenez: At Mayo Clinic, the engineers are embedded into the clinical day-to-day activities. That is the best way for them to understand the needs and for us to explain to them what we do, so together, we can come up with better solutions than just limiting ourselves to what the clinician believes is important. Engineers can come up with amazing ideas. At Mayo, Zachi and others come up with recommendations. Sometimes they ask why things are done a certain way, and the response many times is, "Because this is the way we have been doing this for years." Just challenging those paradigms is very important. Zachi, any words on that?
Attia: While we can outsource some of that work, I do not think it will allow us to create more innovative solutions as we go. A lot of our projects start in one point [and change] as we go along. For example, when we did the EF model, we were trying to find ways to improve it. When we did not, we said, "How is it that age and sex do not improve it?" Francisco and I were talking to you and we wondered, "Maybe the ECG can predict age and sex." It worked. If we were disconnected, where I would sit in a different office doing the technical part and you were doing the clinical part, I do not think we would ever get to these questions. A lot of these questions started new projects in different fields, and the concept of working together side by side is much more thought-provoking and creates a lot of new beginnings.
Kapa: It has been well known for a long time that to disrupt industries or disrupt the ways we practice (which, frankly, I think all of us would agree is necessary in healthcare in this day and age), you need a mindset that is anti-disciplinarian and anti-establishmentarian, so that you can bring in novel insights that exist outside the established paradigms of how you normally practice. Once you are entrenched in a situation, it can be extraordinarily difficult to take on a new insight or a new idea that goes against the very grain of how you have been trained to practice and how you have been practicing, unless you are pushed to do so by someone like Zachi, who comes in and says that something could be done a different way or thought of differently.
Lopez-Jimenez: As an electrophysiologist, you have seen over the past 20 years or so how developing new technology to solve medical problems (eg, pacemakers, ICDs) truly requires people from outside the medical field to get into the clinic, see what the needs are, and work together with the clinician.
Attia: It is beneficial for the engineer as well. My questions or my ideas now are very different from what I would ask 5 years ago when I had just arrived. By working with physicians, it is much easier to understand what we are trying to solve, how to make a change in the patient's prognosis, and to look for projects that are worth doing. All of us have a limited bandwidth and we want to focus on things that can get to patients and actually help them.
Expanding Use of AI
Kapa: One of the biggest potentials I see for AI is not really delivering healthcare but delivering health. We are all talking about patients, but the fact is that most of the human population is not patients. Most people have never been a patient and might never be one until they get truly sick and are close to death. When we start talking about how to cost-effectively scale our expertise and our abilities to everybody to deliver health, as opposed to delivering healthcare, AI techniques might allow us to achieve what, frankly, no number of physicians could, even if you spread them out worldwide and spent trillions and trillions of dollars to train them. Zachi, what do you see as the next steps in building AI and incrementally adding it into the healthcare system?
Attia: Today we can get ECGs from phones, watches, and digital stethoscopes, so we have much more control over how we monitor our health. AI can help us do that and send us to the physician only when, or just before, we actually need it. I think it can reduce anxiety. It can help scale healthcare; it can especially help in war zones, where it is harder to see a physician, or in countries where there is one physician for more than 10,000 people. I think it will be much more significant there. When everything is sent to the cloud, like an ECG or echocardiogram, the first analysis is done by a machine, and only if needed does it go to an actual human being. I think that can help reduce the number of patients who need to be seen in person.
Kapa: Francisco, any final thoughts?
Lopez-Jimenez: One common concern is whether AI is going to take our jobs, and I do not think that should really be a concern. Many years ago, I remember that when CT scanners and MRIs became standard practice, there was the idea that neurologists would be out of work because most of the work they used to do was just trying to make the clinical diagnosis. We know now that neurologists can do amazing things, thanks to the technology that MRIs and CT scans have brought. So, I think AI is going to help us to be better doctors by helping us practice better medicine that is more individualized and more precise. We may have more time to spend face-to-face with the patient, particularly as AI might help us to navigate the electronic medical system in a smart and more efficient way. That will definitely impact the quality of life of providers and the health of our patients.
Kapa: So, in other words, AI might let us become human again?
Kapa: Thank you, Francisco and Zachi, for joining us, and thanks to everyone else for joining us on theheart.org | Medscape Cardiology.
© 2019 Mayo Clinic
Cite this: AI May Help Doctors Be Doctors (and Better Ones, Too) - Medscape - Dec 17, 2019.