Exploring DICOM & Devices - In Light of Standardization in Ophthalmic Imaging

January 11, 2022

Failing to standardize ophthalmic imaging devices risks leaving the eye care field behind as the health care industry increasingly relies on artificial intelligence to analyze diagnostic imaging. Falling behind is bad for research, and bad for quality of care.

Vision care providers use a variety of imaging devices to diagnose, monitor, and treat eye disease. Examples include basic photography and optical coherence tomography, a noninvasive imaging technique used to evaluate the retina and other structures inside the eye.

Often, such imaging is used to detect subtle anatomic changes that help evaluate disease progression and guide treatment.

The National Eye Institute, part of the National Institutes of Health, is joining the American Academy of Ophthalmology (AAO) and others in calling for imaging device makers to standardize their data formatting.

Such standardization is expected to enable communication across health care providers, improve quality of care, and enhance the creation of datasets for research.

The AAO has for many years supported adoption of the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM includes a system of globally agreed-upon ophthalmological definitions.

It promotes the seamless sharing of medical images by detailing how to format and exchange images and the information with which they are associated, such as the text describing the image and patient demographic information.
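To make the format concrete, here is a minimal sketch of how a single DICOM data element is laid out on the wire under the explicit-VR little-endian transfer syntax: a 4-byte tag, a 2-character value representation (VR), a length field, and the value itself. The encoder below handles only short-form text VRs and is illustrative, not a full DICOM codec.

```python
import struct

def encode_element(group, elem, vr, value):
    """Encode one DICOM data element (explicit VR, little endian).

    Covers only short-form VRs such as PN/LO/DA, whose length field
    is a 2-byte unsigned short -- a sketch, not a full codec.
    """
    data = value.encode("ascii")
    if len(data) % 2:                 # DICOM values must be even-length,
        data += b" "                  # padded with a space for text VRs
    return (struct.pack("<HH", group, elem)
            + vr.encode("ascii")
            + struct.pack("<H", len(data))
            + data)

# Patient's Name, tag (0010,0010), VR "PN", value "family^given"
element = encode_element(0x0010, 0x0010, "PN", "DOE^JANE")
print(element.hex())  # 10001000504e0800444f455e4a414e45
```

Every conformant reader can parse such a stream without vendor-specific knowledge, which is precisely what makes the standard vendor-neutral.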

Currently, meeting these standards is optional and DICOM compliance is low for ophthalmic imaging technologies.

Even so-called DICOM-compliant devices fail to fully meet DICOM standards; as a result, there is no easy way to exchange digital imaging data from one manufacturer’s equipment to another’s without building a custom interface.

Big data and machine learning are increasingly present at the forefront of modern ophthalmic research, particularly retina research where imaging forms the backbone of several outcome measures.

These novel avenues and technological advancements have allowed researchers and providers to tap into the wealth of historical data contained in millions of ophthalmic images accumulated over decades in ophthalmology.

With this push has come the mounting realization of a significant and possibly insurmountable hurdle: the lack of image standardization, which has put the brakes on rapid progress.

Imaging modalities, such as optical coherence tomography (OCT), OCT angiography (OCTA), and widefield fundus imaging, have become some of the mainstay devices in evaluation of the retina, yet there are countless ways in which we interact with these devices, their images, and their representations in the electronic medical record, in both clinical and research environments.

In fact, not only do the imaging devices and analytic software vary among manufacturers, but variables introduced by the operator and the patient also add complexity to interpretation across devices.

The subtle and not-so-subtle variations in current proprietary imaging technology, analytic software, and interfaces complicate things further, contributing to variable interoperability, and rendering the rich image data being collected on an international scale unable to reach its full potential for analysis.

In 2021, in an attempt to address this elephant in the room, the AAO issued an official recommendation for image standardization, backed by the National Eye Institute (NEI).

HISTORICAL EFFORTS

Lessons From Radiology

As a field that is highly dependent on imaging, radiology was the first to face the challenge of standardization on a large scale.

In 1983, the American College of Radiology and the National Electrical Manufacturers Association formed a joint committee to address the problem; the first version of their standard was released in 1985 and later became Digital Imaging and Communications in Medicine (DICOM).

The DICOM standard, now in its third iteration, underlies the vendor-neutral digital encoding, transmission, and storage of nearly all images in radiology.

These unified conventions revolutionized radiology in terms of efficiency and interoperability.

The ubiquitous Hounsfield unit provided an objective way of describing signal in radiographic images. Standards of process are also apparent in the countless highly standardized imaging protocols that populate the order lists of electronic medical records at medical facilities worldwide.
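For reference, the Hounsfield unit is a simple linear rescaling of the measured linear attenuation coefficient, calibrated so water reads 0 HU and air reads -1000 HU. The sketch below shows the definition; the mu_water value used is an illustrative assumption (roughly 0.19/cm at typical CT energies).

```python
def hounsfield(mu, mu_water=0.19, mu_air=0.0):
    """Convert a linear attenuation coefficient (1/cm) to Hounsfield units.

    HU = 1000 * (mu - mu_water) / (mu_water - mu_air); by definition
    water maps to 0 HU and air to -1000 HU. The default mu_water is an
    illustrative assumption, not a universal constant.
    """
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

print(hounsfield(0.19))  # water -> 0.0
print(hounsfield(0.0))   # air  -> -1000.0
```

Ophthalmic imaging currently has no analogous device-independent signal scale, which is part of what the calls for standardization are asking for.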

Unfortunately, there remain several unresolved issues before ophthalmic imaging can reach this point.

Where We Stand

In ophthalmology, opinion pieces with such titles as, “DICOM? What’s That? Why You Should Care” were being published as recently as 2013.

At present, DICOM compliance among ophthalmic devices remains relatively inconsistent, and a more complete adoption of this standard was specifically called for in the 2021 AAO recommendation and echoed by the NEI.

The convening of a special interest group for imaging standards at the 2021 meeting of the Association for Research in Vision and Ophthalmology (ARVO) highlighted historical stagnation in the adoption of ophthalmology-wide standards.

This history is marked by the evolving needs of consumers (in clinical practice and research), and the challenges faced by industry in adapting to what has, at times, been a moving target.

Past efforts to standardize, including AAO-sponsored talks with industry leaders as recently as 2018, ended with the AAO discontinuing funding without any major solutions.

NEED FOR STANDARDIZATION

The incorporation of artificial intelligence (AI) and machine learning into ophthalmic research likely underlies recent calls to standardize imaging platforms, segmentation algorithms, and analytic software housed within the devices.

Specifically, ophthalmology is seeing an increasing number of investigators who are working to develop AI models that might someday be used to diagnose, stage, and recommend treatment of prevalent vision-threatening diseases such as macular degeneration, glaucoma, and diabetic retinopathy.

The training, validation, and testing of machine learning models, however, depend heavily on the availability of hundreds of thousands of images obtained from the large-scale and efficient collection of standardized imaging data.

At present, that process is hampered and disjointed, with each research group developing its own highly siloed pipeline of image acquisition and analysis.

Image acquisition and storage processes are often improvised out of necessity, given the unique barriers faced by each research team and the different imaging devices in use.

Such home-brewed approaches can at times be inefficient, impractical, and not scalable to national or international databases, and they can make research difficult to replicate across institutions.

Apart from the potential clinical benefits, imaging device standardization may ultimately enhance interoperability of data at the institutional and national level — not only the images themselves but also the accompanying data in the electronic health record.

An exploratory workflow study conducted in 2013 demonstrated marked reduction in the need to edit clinical data or manage misfiled images after the adoption of a DICOM-compatible workflow.

As even routine follow-up visits in many ophthalmic subspecialties are becoming increasingly reliant on imaging, this benefit would be of immense value even for smaller practices.

In the United States, the Department of Health and Human Services recently issued the United States Core Data for Interoperability to classify current capabilities regarding many aspects of the health care system and provide a standard mandate.

At present, “ophthalmic data,” consisting of intraocular pressure, visual acuity, and refraction, is regulated, and it is possible that ophthalmic imaging may follow in the future.

CURRENT CHALLENGES

DICOM

Presently, adherence to the DICOM standard is not required for market authorization of ophthalmic imaging devices by the FDA, and even those currently marketed as “DICOM compliant” may not truly meet the standards.

The 2021 AAO report lists 12 DICOM standards that are relevant to ophthalmic devices and specifies the theoretical benefits of adherence.

However, even in the informal discussion at ARVO 2021 between field leaders on standardization, including the authors of the AAO report, and industry representatives, it was clear that DICOM adherence will not be the magic bullet to solving this problem.

In the absence of a central consensus on specific needs, practitioners and researchers have turned their focus to the DICOM standard.

Where standards like DICOM currently exist, industry, practitioners, and researchers should push for more complete compliance, but significant effort is needed to define and redefine guidelines for standards that are insufficient for implementation at this time.

Whether the standard is defined by DICOM, by specific ophthalmology consensus, or by some combination of existing and to-be-devised standards, an overarching, well-discussed, and specific framework of standards is required.

Communicating the final products of consensus to device vendors will be a key challenge. Two specific and actionable requests guided by DICOM principles and outlined by the 2021 AAO report are for device vendors to do the following:

1. Provide machine-readable, discrete data for user-selected reports of ophthalmic imaging or functional testing. Unifying imaging and testing data with an individual’s clinical information in one place will be crucial to scaling up current research efforts in addition to streamlining clinical practice.

Anecdotally, the challenge of locating and verifying data from multiple sources is one faced by both researchers and clinicians alike.
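As a rough illustration of what “machine-readable, discrete data” could look like, the snippet below serializes hypothetical report values to JSON rather than locking them in a PDF; the field names are assumptions for illustration, not any vendor’s actual export schema.

```python
import json

# Hypothetical discrete values that a device could export alongside
# its human-readable PDF report; these field names are illustrative
# assumptions, not a real vendor schema.
report = {
    "patient_id": "P-0001",
    "study_date": "2022-01-11",
    "modality": "OPT",  # DICOM modality code for ophthalmic tomography
    "rnfl_thickness_um": {
        "superior": 118, "inferior": 121, "nasal": 74, "temporal": 69,
    },
}
print(json.dumps(report, indent=2))
```

Discrete values like these can be queried, aggregated, and joined to the chart directly, whereas a PDF report requires manual transcription or error-prone scraping.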

2. Use lossless compression for pixel or voxel data to encode the same raw data as used by manufacturers.

Lossy file-compression algorithms contribute to quality loss and additional variability between the same types of images obtained with different imaging devices.

Subtle quality loss can impede big data and AI research efforts that are highly dependent on maximal extraction of data.
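The difference matters because lossless codecs reconstruct the original data bit for bit. A minimal demonstration with the general-purpose DEFLATE algorithm (via Python’s zlib), standing in here for the lossless image codecs DICOM supports:

```python
import zlib

# Fake 8-bit pixel buffer standing in for raw scan data.
pixels = bytes(range(256)) * 64

compressed = zlib.compress(pixels, level=9)
restored = zlib.decompress(compressed)

# Lossless: every byte of the original survives the round trip,
# which is what the AAO request asks of vendors.
assert restored == pixels
print(f"{len(pixels)} -> {len(compressed)} bytes")
```

A lossy codec, by contrast, would return a perceptually similar but numerically different buffer, and those small residuals are exactly the kind of noise that can confound data-hungry AI models.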

Devices

The ability to rapidly and noninvasively capture detailed information on several parts of the eye at a near histologic level using our imaging and testing armamentarium is unique to ophthalmology, showcasing both the breadth and depth of obtainable clinical data in the eye.

The variety of instruments emphasizes the ingenuity of collaboration between vendors, researchers, and clinicians. However, it has had the unintended side effect of overdiversifying the data pipelines that currently exist.

Large academic institutions and health systems may often employ several different brands of the same device across their system, making it difficult to compile even intra-institutional data at times.

Home-grown efforts by individual investigative groups can result in cumbersome, inefficient methods of extracting complex data and converting it into formats with which researchers can work.

Faced with challenges of extracting the detailed data contained within unique image formats, some investigators may opt to use preview en face images saved in a conventional file format, losing out on fine resolution and other subtleties.

Implementing the DICOM standard may guide these changes, but ultimately significant software changes may be required to unify the many different file formats behind which a significant amount of data is currently trapped.

A sentiment, however ambitious, echoed by numerous attendees at the ARVO 2021 special interest group was a desire to be able to open any image, from any device, in a standard image viewer and to extract standardized raw data types.

The staples of retinal imaging, OCT and now OCTA, are particularly challenged by technological overdiversity.

Each device manufacturer uses unique proprietary software for acquisition, storage, and analysis; no major studies have compared, at scale, the differences these may introduce, even among different software versions from a single manufacturer.

For example, “vessel density” calculations are performed differently by each major device vendor, and authors are often unable to draw direct comparisons with similar studies for the sole reason that a companion research group used a different OCTA device or analysis method.

Although there are published conversion factors to potentially enable comparison of retinal thickness measurements across OCT devices and images, such assessment is lacking for OCTA.
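A published linear conversion of this kind can be sketched as follows; the slope and offset below are placeholders, not values from any actual study, and real coefficients would have to come from a validated comparison of the specific device pair.

```python
def convert_thickness(t_um, slope=1.0, offset=0.0):
    """Map a retinal-thickness measurement (micrometers) from device A's
    scale to device B's via a linear conversion: t_B = slope * t_A + offset.

    The slope/offset values are placeholders; real coefficients must
    come from a published comparison of the specific device pair.
    """
    return slope * t_um + offset

# Hypothetical coefficients, purely for illustration:
print(convert_thickness(250.0, slope=1.02, offset=-4.0))
```

Even this simple linear model assumes the two devices segment the same anatomic boundaries, which is part of why analogous conversions do not yet exist for OCTA metrics.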

Clinical researchers, who may lack the high degree of technical expertise that industry manifests, are naturally hesitant to draw interdevice conclusions, and currently, there is little guidance on how to do so and whether such comparisons are even possible.

Returning to the importance of standards in informing technology, standard parameter-calculation methods would save device companies the valuable time and effort now spent painstakingly defining certain components of otherwise proprietary measurements.

Furthermore, it is unknown how mechanical parameters, such as scan speed and imaging depth, and acquisition parameters, such as registration, tracking, scan tilt, and scan area, affect image standards.

Backwards compatibility has also posed a problem for researchers who are unable to tap the large potential legacy data reserve of patients imaged prior to more modern image storage systems.

This not only greatly limits the scale at which available data can be utilized but also impedes longitudinal research into disease progression, a crucial and growing component of machine-learning research.

BEYOND HARDWARE TO PATIENTS, PRACTITIONERS, AND RESEARCHERS

The incredible diversity of measurement indices and novel interpretative software highlights the field’s research enthusiasm and evolving thinking.

Yet, as noted previously, consensus on the goals and methods of imaging and measurement will be necessary to optimize development within this field.

Using the example of the growing number of machine learning studies that depend on OCT, should layer analysis be addressed using the true geometry of each segment?

Should it be assessed volumetrically? Which layers or combinations thereof should be targeted? As novel methods are introduced in the literature, standardization of image analysis platforms will be necessary, in addition to that of the images themselves.

Although assistance from technology will be important, basic imaging protocols accessible to ophthalmic technicians are also necessary. Again, with machine-learning work, there are new issues that could be addressed at the acquisition level.

Even standardizing procedures, such as centering the fovea in both the x- and y-axes of OCT images and minimizing image tilt and rotation, could reduce the systematic variation in the images themselves, which would otherwise be internalized by the AI.
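As a toy illustration of such acquisition- or preprocessing-level normalization, the sketch below recenters a 2-D pixel grid on a chosen point standing in for the fovea; the function and grid are invented for illustration, and a real pipeline would correct tilt and rotation and would crop or pad rather than wrap pixels around the edges.

```python
def center_on(image, row, col):
    """Shift a 2-D pixel grid so (row, col) lands at the center.

    A toy stand-in for fovea centering; the wrap-around at the edges
    is unphysical, and a real pipeline would crop or pad instead and
    also correct tilt and rotation.
    """
    h, w = len(image), len(image[0])
    dr, dc = h // 2 - row, w // 2 - col
    return [[image[(r - dr) % h][(c - dc) % w] for c in range(w)]
            for r in range(h)]

grid = [[0] * 5 for _ in range(5)]
grid[1][1] = 9                        # "fovea" detected off-center
centered = center_on(grid, 1, 1)
assert centered[2][2] == 9            # now at the image center
```

Normalizing geometry this way at acquisition or preprocessing time removes one systematic source of variation before a model ever sees the data.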

Standardizing the representation of imaging data and correlated clinical data in the electronic medical record will also be necessary.

Current medical record software providers often have Picture Archive and Communication Systems (PACS) image storage directly integrated into patient charts, but the interoperability with direct chart integration of quantitative image data, such as retinal nerve fiber layer thickness for glaucoma monitoring, can be lacking.

Proprietary databases and file formats for image storage may even limit data to the instruments and cameras themselves, further adding to the difficulties in making electronic medical records more streamlined.

Again, adherence to the DICOM standard will be an important milestone in this aspect of standardizing images and their ultimate utilization clinically.

From the clinician’s perspective, agreement on important diagnostic and outcome milestones is needed to inform the standardization of images and associated data with the relevant metrics already in mind.

For example, the precise terminology outlined by such pivotal entities as the Early Treatment Diabetic Retinopathy Study and the DRCR Retina Network has aided in unifying the research and clinical treatment of diabetic retinopathy.

Other disease entities currently lack such robust descriptions. For ophthalmic technicians who may have differing techniques for patient positioning and image acquisition, translating important clinical characteristics in the form of cohesive and widely accepted imaging protocols might aid in reducing some of the variation in imaging data caused well before the data handling stage.

As AI develops further, and new biomarkers emerge, clinicians and researchers should clearly communicate the clinical intent and associated areas of interest for each type of scan to technicians to ensure that the utility of each image is maximized.

NEXT STEPS

A common sentiment among several experts on this issue at the ARVO 2021 special interest group was, “It feels like we have been here before.” The AAO-sponsored discussions regarding this issue were paused several years ago, and progress has been slow.

To bridge the gap between the consumers of imaging technology, research groups, and device manufacturers, existing guidelines such as DICOM should be fully embraced, and consensus on unresolved standards should be reached between clinicians, academia, and industry.

A robust and global partnership among all players in the field is necessary to reach consensus and build a strong, sustainable foundation, which will play a major role in accelerating scientific progress.
