The Use of Artificial Intelligence in the Clinic
AI is growing into the public health sector and is making a major impact on every aspect of primary care. AI-enabled computer applications help primary care physicians to better identify patients who require extra attention and provide personalized protocols for each individual.
AI enables primary care physicians to take their notes, analyze their discussions with patients, and enter required information directly into EHR systems. These applications collect and analyze patient data and present it to primary care physicians alongside insight into each patient’s medical needs.
A recent retrospective study found that smokers had a higher intraocular pressure (IOP) than nonsmokers—regardless of a glaucoma diagnosis. While this finding has significant public health ramifications, the real story is the research method. Unlike traditional retrospective studies that involve hundreds or even thousands of patient records, this study included 12.5 million patients.
Opening the door to new insights. Previously, conflicting studies on smoking’s impact on IOP prevented ophthalmologists from making recommendations to their patients, said Aaron Y. Lee, MD, MSc, whose research has explored the use of artificial intelligence (AI) and big data in ophthalmology.
“Yet with these findings, analyzed from the Academy’s IRIS Registry data, we can guide our patients on the risks of being an active smoker, versus a past smoker, versus a never smoker, based on data from millions of patients,” said Dr. Lee, at the University of Washington in Seattle. “Our ability to now analyze this high volume of patient records vividly demonstrates the power of big data.”
New technologies are accelerating the use of big data. In the past few years, researchers have developed new technologies to collect and process information on an unprecedented scale, with database sizes measured in petabytes or exabytes, said Suzann Pershing, MD, MS, at Stanford University School of Medicine in Palo Alto, California.
“With the advent of cloud-based storage and advanced modeling, we can now perform complex analyses in seconds—queries that would have taken us hours, days, or weeks not that long ago and were not even fathomable in past decades,” she said.
“AI and deep learning are the next advances stemming from big data,” said Dr. Pershing. “Big data has the potential to enhance research and medical education, streamline documentation, facilitate quality, and improve clinical care.”
Data Mining in the IRIS Registry
What is big data? In the 2015 Edward Jackson Memorial Lecture, ophthalmologist Anne L. Coleman, MD, PhD, stated that big data is frequently characterized by the “3 Vs”—large volume, variety, and rapid velocity of growth.
She noted that other important traits include variability, veracity, and complexity. Dr. Coleman, the current Academy president, is at the University of California, Los Angeles (UCLA).
Opening a new frontier for eye research. Ophthalmology has been a pioneer in big data, said Dr. Pershing, and thousands of ophthalmology practices have supported this through their participation in the Academy IRIS Registry, which formally launched in 2014 under the stewardship of William L. Rich III, MD, David W. Parke II, MD, and many like-minded colleagues.
The IRIS Registry rapidly grew to become the single largest national clinical specialty data registry in the United States.
Mining a rich seam of data: Endophthalmitis after cataract surgery. In Dr. Coleman’s research for her Jackson Lecture, she reviewed both Medicare claims data (a 5% sample; 2010-2013) and IRIS Registry data (2013-2014) for cases of endophthalmitis after cataract surgery and found rates of 0.14% and 0.08%, respectively. “Because endophthalmitis is a rare event following cataract surgery, big data can offer deeper insights,” said Dr. Coleman.
In an Academy analysis published this year, Dr. Pershing and colleagues used IRIS Registry data to evaluate 8.5 million cataract surgeries and determine a real-world endophthalmitis incidence of 0.04% among cataract surgeries overall. They discovered that risk factors may include younger age, concomitant ophthalmic surgery, and anterior vitrectomy.
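The case for registry-scale data here comes down to simple arithmetic: at the incidence reported above, a conventionally sized study would expect to see almost no events at all. A quick sketch using the 0.04% rate from the IRIS Registry analysis:

```python
# Endophthalmitis incidence from the IRIS Registry analysis (0.04%).
incidence = 0.0004

# Expected case counts at different study sizes show why rare
# complications demand big data: small cohorts see roughly zero events.
for n in (1_000, 10_000, 8_500_000):
    print(f"{n:>9,} surgeries -> {n * incidence:.1f} expected cases")
```

At 1,000 surgeries the expected count is 0.4 cases, so most small studies would observe none; only at registry scale do thousands of events accumulate for analysis.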
Big data research into rare complications can help to inform conversations with patients about risk and—because visual acuity data is included in the IRIS Registry—about visual results. Furthermore, the findings can lay the groundwork for future research.
Deep Learning Sparks an AI Revolution
Dr. Lee said AI is a field that has existed since the dawn of computing. “People were using algorithms to identify things in photos,” he said. “In a field referred to as computer vision, researchers were trying to get computers to see as humans see.
Yet it wasn’t until about a decade ago, when a new family of models referred to as deep learning came together, that we were able to accomplish this.”
What is deep learning? In essence, deep learning involves machine-learning techniques that enable computers to learn directly from examples.
While traditional image analysis required the manual development of kernels for edge detection and simple shape discrimination, Dr. Lee said that deep learning uses a many-layered artificial neural network that is trained to develop these convolutional filters from training data.
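The contrast Dr. Lee draws can be made concrete. A hand-designed edge-detection kernel, such as the classic Sobel filter, is exactly the kind of convolutional filter a deep network now learns from data instead. A minimal NumPy sketch on a toy image (not ophthalmic data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the basic operation behind both
    hand-designed filters and the learned filters of a deep network."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic hand-designed kernel (Sobel, vertical edges): the kind of
# filter computer-vision researchers once crafted manually.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy "image": dark on the left, bright on the right.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

response = conv2d(image, sobel_x)
# The response is zero in flat regions and large at the brightness edge.
# A deep network learns many such kernels, layer upon layer, from data.
```

The difference deep learning made is that these filter weights are no longer written by hand but optimized automatically against labeled examples.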
Big data enables deep learning. “To train these models, you need a lot of data—and the only way to accomplish this is with big data,” Dr. Lee said.
“The availability of this big data, combined with the newfound ability to merge two different kinds of datasets—EHR information and ophthalmic images—led to significant advances in our ability to classify images and detect objects in a picture using deep learning as a tool.”
In 2016, for example, Dr. Lee and his colleagues demonstrated that deep learning was effective for classifying normal versus age-related macular degeneration (AMD) images drawn from optical coherence tomography (OCT) scanning.
“This was one of the first studies using deep learning to classify OCT scans in ophthalmology, and we were able to show high accuracy levels,” Dr. Lee said. “This report was part of an AI revolution, with an explosion of new papers centered around AI and ophthalmic imaging.”
FDA approves the IDx-DR. Another key milestone occurred in April 2018 when the FDA permitted marketing of the first medical device to use AI to detect greater than a mild level of diabetic retinopathy (DR) in adults with diabetes, Dr. Lee said.
The device, called IDx-DR (Digital Diagnostics), was the first FDA-approved device to provide a screening decision without the need for a clinician to also interpret the image or results.
Big Data in Action: A Surgical Surprise and a Clinician’s Hunch
A research project inspired by a patient story illustrates big data’s potential, said Dr. Lee.
Posterior capsule rupture in an AMD patient. “A surgeon experienced an unexpected posterior capsule rupture during cataract surgery,” Dr. Lee recalled. “He couldn’t figure out why that happened, so he reviewed the patient’s chart and noticed the patient had undergone multiple injections for macular degeneration.”
Using big data to follow up on a surgeon’s hunch. The surgeon emailed Dr. Lee’s team, he continued, wanting to know whether there was any chance that intravitreal (IVT) injections were a risk factor for cataract surgery complications.
The question inspired a big data study to find the answer. Dr. Lee and his colleagues used EHR data from 20 hospitals, covering 65,836 patients who underwent cataract surgery between 2004 and 2014, of whom 1,935 had undergone previous intravitreal therapy.
Using univariate and multivariate regression modeling, the researchers found that patients who underwent 10 or more previous injections had a 2.59 times higher likelihood of posterior capsule rupture during cataract surgery.
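An association like this is commonly expressed as an odds ratio. The counts below are invented purely to illustrate the arithmetic of a 2×2 exposure-outcome table; they are not the study’s data, and the published figure came from regression models that also adjust for confounders, which a raw table does not.

```python
def odds_ratio(exposed_events, exposed_n, unexposed_events, unexposed_n):
    """Odds ratio (a*d)/(b*c) from a 2x2 exposure-outcome table."""
    a = exposed_events                  # exposed, event occurred
    b = exposed_n - exposed_events      # exposed, no event
    c = unexposed_events                # unexposed, event occurred
    d = unexposed_n - unexposed_events  # unexposed, no event
    return (a * d) / (b * c)

# Hypothetical counts chosen only to show the calculation;
# NOT the actual numbers from Dr. Lee's study.
print(odds_ratio(10, 110, 100, 2690))  # -> 2.59
```

With these invented counts, the exposed group’s odds of rupture (10/100) are 2.59 times the unexposed group’s odds (100/2590).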
Big data opens the door to previously unobtainable findings. “What is powerful about this story is that without big data, you would have missed this association,” Dr. Lee said. “Cataract surgeons live in a different world than retinal surgeons.
Their paper charts are different, and before the advent of big data, it would have been impossible to make this link. But because we had data that could be extracted from both the cataract and retina specialists, this analysis was possible.
“And it all began with an important clinical question,” Dr. Lee said. He added that the surgeon was able to report back to the patient and explain how the rupture occurred, and then subsequently communicate that risk to other patients.
Health Care Systems and a Rising Tide of Chronic Conditions—a Role for AI?
Pearse A. Keane, MD, at Moorfields Eye Hospital and the University College London (UCL) Institute of Ophthalmology, also noted that many of his AI projects were first inspired by a single patient.
An ongoing surge in chronic conditions. “As background, in 2017, ophthalmology overtook orthopedics as the No. 1 busiest specialty in the U.K. National Health Service [NHS], with nearly 10 million clinic appointments annually,” Dr. Keane said. “Couple that with the high prevalence of chronic eye disease, such as AMD, and it is an enormous challenge treating these patients.”
Wet AMD? A flood of better-safe-than-sorry referrals. Dr. Keane recalls a patient, Elaine Manna, who had lost sight in her left eye due to AMD—and then developed symptoms of wet AMD in her right eye.
While a timely appointment is vital, it took several weeks for her to see a retina specialist. This was, in part, because of the large number of false-positive referrals the hospital received. For example, in 2016, Moorfields Eye Hospital received 7,000 urgent referrals for possible wet AMD, yet only 800 of these patients had the condition, Dr. Keane noted.
Using AI to evaluate OCT scans. With Ms. Manna in mind, “We realized that if we can use AI to identify those 800 with wet AMD or any other time-sensitive macular disease, we could prioritize those patients and save sight,” he said.
In 2018, Dr. Keane and his colleagues published a proof-of-concept paper on the development of an AI system using OCT scans that could assess more than 50 different retinal diseases.
But what if deep learning requires too much data? Using deep learning to assess medical images was not without difficulties. The two main challenges, according to Dr. Keane, were technical variations in the imaging processes and the patient-to-patient variability in the pathological manifestations of the disease.
Previous approaches attempted to deal with these challenges via a single end-to-end black-box network, which typically required millions of scans to accomplish.
Using a divide-and-conquer approach. “By contrast, our DL [deep learning] architecture decoupled the technical variations and pathology variants, and it solved them independently,” Dr. Keane said. A deep segmentation network created a detailed device-independent tissue-segmentation map, which was analyzed by a deep classification network that provided diagnoses and referral suggestions.
The researchers demonstrated that the AI system’s performance in making referral recommendations reached or exceeded that of experts after training on only 14,884 scans.
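The decoupled design can be sketched as two independent stages. Everything below is a hypothetical toy, with simple thresholds standing in for the two deep networks, meant only to show how normalizing device variation in stage one lets stage two reason about pathology alone:

```python
import numpy as np

def segment(scan):
    """Stage 1 (toy stand-in for the deep segmentation network):
    map raw intensities to tissue labels (0 = background, 1 = retina,
    2 = fluid). Per-scan normalization removes device-specific scales."""
    lo, hi = scan.min(), scan.max()
    norm = (scan - lo) / (hi - lo + 1e-9)
    labels = np.zeros(scan.shape, dtype=int)
    labels[norm > 0.3] = 1
    labels[norm > 0.8] = 2
    return labels

def classify(tissue_map):
    """Stage 2 (toy stand-in for the deep classification network):
    decide from the device-independent tissue map alone."""
    fluid_fraction = (tissue_map == 2).mean()
    return "urgent referral" if fluid_fraction > 0.05 else "routine"

# Toy scan: a "retina" block with a bright "fluid" pocket inside it.
scan = np.zeros((10, 10))
scan[2:8, 2:8] = 0.5
scan[3:7, 3:7] = 1.0
print(classify(segment(scan)))  # -> urgent referral
```

Because the classifier never sees device-specific pixel statistics, each stage can be trained separately, which is one way a system might reach expert-level performance with far fewer scans than an end-to-end model.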
“This system could ultimately help clinicians identify and prioritize patients with the most sight-threatening diseases,” said Dr. Keane. “Now that we have shown a proof of concept with the ability to use AI to diagnose retinal disease, we are working to bring this from ‘code to clinic.’”
Predicting the imminent risk of wet AMD in the second eye. Dr. Keane’s AMD patients were the inspiration for another AI study. “Individuals with wet AMD in one eye are understandably terrified that they will be diagnosed with wet AMD in their good eye,” he said.
“From this observation, we got the idea that perhaps we could use AI as an early warning system—a bridge—to protect the good eye. If the patient was at risk for AMD, we could start preventive treatment on time.”
By combining models based on OCT images and corresponding automatic tissue maps, Dr. Keane and his colleagues developed an AI algorithm to predict conversion to wet AMD within a clinically actionable six-month time window.
“We could not achieve predictive sensitivity in all patients, but in a subset of patients, AI could make the prediction with 90% specificity,” he noted.
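Specificity of 90% means nine in ten eyes that will not convert are correctly flagged as low risk at the chosen operating threshold. A minimal sketch with invented risk scores (not the study’s model or data):

```python
def specificity(scores, labels, threshold):
    """Fraction of true negatives among all actual negatives:
    specificity = TN / (TN + FP)."""
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tn / (tn + fp)

# Hypothetical model risk scores and true 6-month outcomes
# (1 = converted to wet AMD, 0 = did not); invented for illustration.
scores = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 0.85, 0.97]
labels = [0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1,    1]

print(specificity(scores, labels, threshold=0.95))  # -> 0.9
```

Raising the threshold trades sensitivity for specificity, which is why a prediction like this may only hold for the subset of patients whose scores clear the chosen cutoff.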
Especially useful for clinical trials. Dr. Keane observed that one of the weaknesses of deep learning outside of clinical trials is that it is “a bit brittle”—outside factors can result in unpredictable answers.
“However, we think that using deep learning for risk stratification in clinical trials would be a powerful application for AI since in clinical trials you can control these factors,” he said.
Will the Future Eye Exam Monitor Systemic Health?
Ophthalmology’s use of AI can advance eye care, but could it also be used to monitor a patient’s general health?
The AlzEye study to anticipate dementia. “We believe the eye can be used as a window to the rest of the body, something that my colleague Alastair Denniston has termed ‘oculomics,’” said Dr. Keane.
In 2018, for example, Google team members used deep learning to predict age and gender from retinal photos. Dr. Keane and his colleagues wondered if oculomics could serve as a predictive tool to detect Alzheimer’s disease (AD).
Previously, researchers found the most consistent retinal feature of AD is OCT-measured changes in the retinal nerve fiber layer (RNFL). Researchers have also shown that a thinner RNFL at baseline was associated with an increased risk of developing dementia.
With this foundation, Dr. Keane and his fellow investigators launched AlzEye to explore whether retinal structures could predict dementia and its subtypes. This U.K. project links data from millions of fundus photographs and retinal OCT scans to systemic health diagnostic codes drawn from NHS data.
“This is unprecedented research that is being led by an outstanding ophthalmology resident, Siegfried K. Wagner,” Dr. Keane said.
“The dream scenario is that patients visit their family doctor for an eye check and can receive information about their systemic health and possible risks from high blood pressure and diabetes to a stroke or AD. DL has an opportunity to provide huge patient benefits.”
Developing resources for deep learning. Two NHS Foundation Trusts—Moorfields and University Hospitals Birmingham—are spearheading the Insight Health Data Research Hub for Eye Health (www.hdruk.ac.uk/insight), which launched in October 2019 as a collaboration between the NHS, academic bodies, industry, and charities.
The goal: to advance research and, ultimately, to improve patient care by making vast, anonymized datasets available to researchers. “Insight already contains more than 10 million images from Moorfields, but we will be working in the coming years to go from 10 million to 100 million,” said Dr. Keane.
As data hubs like the IRIS Registry and Insight continue to expand, ophthalmology is likely to be central to the AI revolution in health care.
A Promising Future
Big data and AI are positioned to play an increasingly significant role in health care delivery moving forward, Dr. Pershing said. Ophthalmology generates an enormous amount of data for analysis, and the addition of patient-generated data will further expand big data’s potential.
Patient-generated data. “Patients are beginning to check their vision at home, and apps are being developed to self-monitor visual acuity,” Dr. Pershing said.
“And wearables, such as contact lenses that can provide continuous IOP monitoring or detect blood glucose levels, will generate even more data. The challenge is that these patient data will be generated from disparate resources, and so we must focus on enhancing linkages.”
Pandemic prompts a rethink on data sharing. Dr. Pershing noted that the COVID-19 crisis forced out-of-the-box thinking, as demonstrated by the rapid adoption of telemedicine and the expansion of infrastructure for data collection and storage.
“And in the COVID sphere, researchers are nimble in disseminating their data across the NIH and around the world, which is helping to change the way we are thinking about data collection and sharing,” she said.