This transcript has been edited for clarity.
Eric J. Topol, MD: Hello. I'm Eric Topol with my partner Abraham Verghese, here to bring you the next podcast of Medicine and the Machine. Today we're really delighted to have as our guest Pearse Keane, a leading ophthalmologist at Moorfields Eye Hospital in the United Kingdom. Welcome, Abraham and Pearse.
Pearse and I got to know each other through the National Health Service (NHS) review that took place over the past couple of years. I had the chance to have an eye exam with Pearse at what I believe is the most prestigious eye institute in the world. Pearse brings a lot of perspective on the use of deep learning in artificial intelligence (AI), with a unique program that he and his collaborators started some years back and continue to build on. Maybe you can summarize, Pearse, what you've been up to, because it's nothing short of remarkable.
Pearse Keane, MD: Thank you for that. For those of the listeners who don't know, I'm a consultant ophthalmologist (a consultant in the UK system is like an attending in the US system) and I work in Moorfields Eye Hospital in London, the oldest eye hospital in the world. I'm also a clinical academic at the Institute of Ophthalmology at University College London.
For many years I've been interested in the potential of new technologies and innovation in healthcare. In that regard, I had read many of your books even before I was privileged to meet you, Eric. I have always felt that ophthalmology is one of the most technically driven and technically advanced of all the medical specialties. Going back to the 1950s, we had the introduction of lasers for the treatment of diabetic retinopathy, and in the 1970s and 1980s came the use of ultrasound to remove cataracts through micro-incisions. More recently, we have been doing a lot of gene therapy and stem cell therapy in the eye.
It's really exciting to me that ophthalmology also seems to be at the forefront of the development and application of AI. That's also exciting because there's a huge need in ophthalmology for new technologies and innovation. Some people, perhaps even some doctors in other medical specialties, may think of ophthalmology as somehow being a small or niche medical specialty. But actually it's one of the busiest of the medical specialties. For example, in 2017 in the United Kingdom, ophthalmology overtook orthopedics as the busiest of all the medical specialties in terms of clinic appointments. Nearly 10% of all clinic appointments in the UK's NHS are for eye care. And that number has grown by more than a third in the past 5 years, as we have an aging population and an increasing prevalence of diseases such as diabetes. The unfortunate reality is that some people lose their sight, go blind, because of delays in being seen and treated by ophthalmologists and specialists like myself. It felt to me, particularly given how image-driven ophthalmology is, that AI, and deep learning in particular, would be especially potent in the specialty.
Genesis of DeepMind Collaboration
Topol: And what's different from the United States is that you do optical coherence tomography (OCT) on every patient each year. So you have a wealth of data that is, as far as I know, unmatched for applying deep learning. You had the foresight to develop a "deep"—if you will—collaboration with DeepMind. You have a paper in Nature Medicine on triage across more than 50 different acute eye conditions. Could you summarize that briefly?
Keane: Ophthalmology, not just in the United Kingdom but all around the world, is now dominated by OCT imaging. OCT is like ultrasound, but it uses light waves instead of sound waves and it gives very-high-resolution images. At Moorfields, we do huge numbers of OCT scans. Even though an OCT scan has higher resolution than an MRI scan or a CT scan—it's a three-dimensional imaging modality—we do about 1000 OCT scans per day. So we have huge amounts of data.
And what was happening, as I've already explained, was that we have this huge challenge to see and treat all the patients in a timely fashion. This is not just in Moorfields and not just in the NHS, but across the world in general. It is standing room only in ophthalmology clinics.
I had a patient with age-related macular degeneration (AMD) named Elaine. (She's talked about her condition publicly.) She had lost the sight completely in her left eye because of the severe form of AMD called wet AMD. Then in about 2012 or 2013, she started to lose the sight in her good eye. She went to her local community optometrist, who told her, "I think you're developing wet AMD in your good eye. There's a treatment for it now. We can inject anti–vascular endothelial growth factor drugs into your eye and it can stop you from losing the sight, but you need to urgently see a retinal specialist in hospital." Elaine then went to her family doctor and got this urgent referral, but she got an appointment 6 weeks later.
Now, you can imagine that if that was a family member of yours, if it was my mother, I would want her to be seen and treated in a few days and not in 6 weeks. As it happened, Elaine was able to be seen and treated in a timely fashion. But it was that single exemplar that made me think, Why don't we apply deep learning to OCT scans to prioritize those patients with the most sight-threatening disease so that we can get them in front of someone like me or my colleagues as soon as possible?
With that being the case, in July 2015, I approached the company DeepMind and said, "I'm a consultant at Moorfields, where we're doing 1000 OCT scans per day. There are people who are losing sight because they can't be seen and treated quickly enough. We should work together to apply deep learning to these scans. And by the way, we're two stops away from you on the London Underground, so this could really facilitate a collaboration."
Topol: And with the algorithm, you did not miss even one acute triage across thousands of OCT scans spanning 50 different conditions. I know you're going forward with that prospectively; it's commendable that you're doing such rigorous work.
Docs Can Do AI
Topol: I thought it would be especially relevant to discuss on Medicine and the Machine the paper you recently published about doctors with no prior AI experience working with these tools. It was stunning how adept these uninitiated doctors became, and how quickly they got there.
Keane: This is the project I am most excited about in the longer term. I have been working with DeepMind and Google for nearly 5 years now. In the course of that collaboration, I've always been a little in awe of some of the computer scientists I was able to work with at DeepMind. I always felt a bit sad that in the course of my long, interminable medical training, I never had an opportunity to do some really dedicated machine learning and data science.
With all of that being the case, there are a number of blockers to the actual development and implementation of deep learning. You've got three main requirements: very large datasets, a lot of specialized technical expertise, and a lot of computing resources to train these deep learning networks. In 2017 I read an article in the New York Times called "Building A.I. That Can Build A.I." It described automated deep learning platforms that allow people without any coding experience to develop deep learning systems for tasks such as image classification. When I saw this article, it was a eureka moment, because it was finally an opportunity to begin to explore this area without the intermediary of the scientists at DeepMind or other computer scientists or engineers in an academic setting.
We immediately got the idea of taking publicly available medical image datasets from all around the world and using them on these automated deep learning platforms. We did that because we were appropriately cautious about not wanting to use images from our own institution without going through a rigorous approval process. So we tracked down datasets of skin cancers, chest x-rays, OCT scans, and retinal photographs. The amazing thing was that we were able to get results comparable to the state of the art in just a few days. This is exciting because it is a big step toward the democratization and industrialization of deep learning. As smart as the people at DeepMind and in other academic settings are, healthcare professionals bring something special to the mix. It's only by empowering clinical researchers that we will actually realize the true potential of AI. Because if you leave it to computer scientists and engineers—without wanting to be derogatory—they may keep focusing on and optimizing the same tasks, whereas healthcare professionals will quickly identify many novel indications for developing and applying AI systems. That's the phase we could be entering now.
Staying in Your Lane
Abraham Verghese, MD: This is fascinating. Linking this to our previous discussion on behavior, have you had pushback and resistance from rank-and-file ophthalmologists or physicians in practice as you try to implement things like this?
Keane: I've been giving talks about AI-enabled healthcare for a number of years now, and it's very common that, no matter how reassuring you are, the question comes up at the end: "Will AI replace doctors?" or "Will AI replace healthcare professionals?" I give various different answers to that. What's interesting with these automated deep learning platforms is that when we published the results in Lancet Digital Health in September 2019, in the course of the revisions the editor of the journal said something to us like, "We hope you will make it clearer that these platforms will not necessarily replace AI experts in the future." When I read that, I could not help but smile a little bit, because this is the same question that doctors have had to face for a number of years. Of course, I really don't think that it's a question of people being replaced. Tasks will be made easier using these technologies.
To expand on that a little bit, there is pushback from machine learning experts about the use of these platforms, and it's worth highlighting some caveats. The first is that no one is suggesting you could use these systems for direct clinical care anytime soon. A huge amount of validation and careful testing is required even for state-of-the-art, bespoke deep learning and other AI systems. That will be even more so in the context of the automated deep learning platforms that healthcare professionals are beginning to use. Nonetheless, I can imagine lots of clinical research applications we can start to explore with these tools. For example, in ophthalmic research, it is common to be given a dataset of 10,000 retinal photographs without labels indicating whether each image is of a right eye or a left eye. Traditionally, you would have a medical student look through all 10,000 images and label each as a right or left eye. Now we don't have to do that. We can just train a model that tells the difference between the two—it takes a short period of time—and then we have something that is useful for the research.
In the same way that statistical software became available in previous decades and empowered clinical researchers, I could imagine these tools doing something similar. For example, if I have an idea, I might do an initial feasibility study with one of these tools, and if I see some signal there, then perhaps I will start to work with people who have more specialized expertise and take the next steps.
Topol: You're highlighting a number of things. Just to review, Pearse, by having these automated modules, healthcare workers can play with and learn from them; they are remarkable educational tools. And that democratization is something that could be part of the curriculum for medical students in training and throughout the healthcare workforce. But the other thing you're touching on is interesting as well, because if you talk to statisticians, their worst nightmare is researchers who use a stats package and come up with flawed P values.
In a sense, what we've got now is a similar, more advanced state. Essentially, all AI is a math story. And now the fear—and I hadn't heard the backstory about the editor before, which is fascinating—is that the AI people could be replaced. But that gets us to what disturbs me the most, and I think Abraham and I have touched on this in the past: Why is it always pitting the doctors and clinicians against the machines when what we really are after is the fusion of efforts? There seems to be this relentless Deep Blue chess match that is just unstoppable, and it keeps recurring in all the publications. Why is this such a fixation?
Keane: I'm not sure; I think it's human nature. Maybe an analogy might be in terms of political reporting: Maybe it's difficult to keep people focused on policies and it sometimes becomes about personalities. One thing that struck me as you were saying this is that we also can lose track of why we're doing this, which is for patients' benefit. Ultimately, if something produces better outcomes and is better for our patients, then it should be irrelevant whether it's a human or a machine or a fusion of the two that achieves that. I think that has to be our "north star" in this regard.
Telemedicine
Verghese: As you look to the future in your field, I know you said that this is not going to replace the interaction, but it certainly might modify it. As we spend more and more time on Zoom, it seems to me that some specialties, and perhaps ophthalmology is one of them, might allow for screening in a way that we would have had to do in person before. Do you see new horizons as we evolve our technology and our ways of treating patients at home, which is what we're doing right now? So many of our visits now are by video or by phone call.
Keane: If it's not inappropriate to say, I think there may be opportunities to explore and accelerate some new ways of working during the challenging times of the pandemic, in the coming months and beyond.
There are probably two aspects to the way I see this. First, many aspects of healthcare really should move to telemedicine or to some other remote monitoring–type process. In my field of ophthalmology, if you are diagnosed with glaucoma and it's picked up early and you're being treated with drops, you're typically going to be fine. You have to be monitored for the rest of your life, but is it really necessary for you to come in every few months to a crowded hospital service and wait a long time for a very short consultation where you're told, "You're fine. We'll see you in 6 months' time"? Or are there better ways we can do it?
For me, the promise of AI is around how we can bring world-leading expertise into the community and into people's homes and overall improve their experience. But, of course, the flipside to that is that there are some aspects of care that I don't think should be replaced or that the human interaction should be downgraded in some way. For example, I could never imagine a situation where your smartphone or your smartwatch tells you that you're going to go blind or that you've got cancer or something like that. You would always want this kind of human interaction with healthcare professionals who are experienced, who have been through the process, and who can help guide you along.
Topol: I was just going to ask you that exact scenario, about being able to communicate to a patient about the inability to prevent the blindness and the serious matters that require the human connection, human touch, and empathy. You already anticipated that question. This has been an important discussion.
I look to you, Pearse, as one of the most careful researchers in the AI medical field because you avoid any hype. You don't just do the retrospective work first; you then do the right types of trials. And you've taken us to new frontiers with respect to how to educate the healthcare workforce with the AI-to-build-AI theme you've reviewed. It's terrific. We're so glad to have had the chance to have this conversation with you.
The other thing I would add, Pearse, is that this is the best of both worlds. The fact that you sought out a very able collaboration with the DeepMind folks in the United Kingdom, which helped enable the progress you have made and in many ways has shaped the arc of your ongoing efforts, is something else we have to keep in mind. It's not so easy to accomplish this in academic circles alone; having the ability to tap into resources from the tech world is extremely helpful as well.
Keane: I've learned from the past few years that to try to make a lot of progress and to drive innovation, you have to have collaborations that are at the boundaries of different disciplines and at the boundaries between industry and academia. I believe that that is the way to make real progress.
Topol: It shows, because the three of us are not computer scientists and here we're talking about it and learning about it all the time, every day. Thanks so much, Pearse, for joining us. Abraham, I look forward to the next time we can convene for a Medicine and the Machine podcast.
Eric J. Topol, MD, is one of the top 10 most cited researchers in medicine and frequently writes about technology in healthcare, including in his latest book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.
Abraham Verghese, MD, is a critically acclaimed best-selling author and a physician with an international reputation for his focus on healing in an era when technology often overwhelms the human side of medicine.
Pearse A. Keane, MD, is a National Institute for Health Research clinician scientist, based at the Institute of Ophthalmology, University College London, and an honorary consultant ophthalmologist at Moorfields Eye Hospital. He specializes in retinal diseases, such as age-related macular degeneration and diabetic retinopathy, and frequently tweets about ophthalmology and AI at @pearsekeane.
Medscape © 2020 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: AI for Eyes Is 'Nothing Short of Remarkable' - Medscape - May 06, 2020.