Portable ophthalmoscope, machine learning pair to better assess preterm birth

Nov. 21, 2018
The approach involves using an ophthalmoscope attached to the lens of a smartphone camera to acquire video of blood vessels.

A team of researchers at Duke University (Durham, NC) has developed an algorithm that, combined with a handheld, smartphone-based ophthalmoscope (an instrument used to inspect the eye), could help health care workers in remote locations estimate the degree of prematurity of affected infants. Such information can be critical for administering life-saving treatments.

The method builds on previous clinical studies showing that gestational age can be estimated from the density of blood vessels in a specific region of the eye. In the paper describing the work, the researchers report that their automated method for analyzing video of the eye outperformed a manual method in most cases when determining the gestational age of 124 newborns.
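To illustrate the underlying idea, the sketch below fits a simple model relating blood vessel density to gestational age. The linear form and the example data points are hypothetical illustrations only, not values from the clinical studies cited by the researchers.

```python
# Minimal sketch: fit a simple model relating blood vessel density in the
# eye to gestational age. The linear relationship and the example numbers
# below are hypothetical, not the model used in the published work.
import numpy as np

vessel_density = np.array([0.42, 0.35, 0.28, 0.21, 0.15, 0.10])  # hypothetical measurements
gestational_age = np.array([33, 34, 35, 36, 37, 38])             # weeks

# Least-squares linear fit: age = a * density + b
a, b = np.polyfit(vessel_density, gestational_age, deg=1)
print(f"Estimated age at density 0.25: {a * 0.25 + b:.1f} weeks")
```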

"We invented a fully automatic, machine learning algorithm that uses images acquired with an inexpensive, portable smartphone-based device to classify the gestational age of a newborn," says Arjun D. Desai from the Departments of Biomedical Engineering and Computer Science at Duke University, who is the first author of the paper. "We expect the algorithm to be useful for remote and point-of-care gestational age estimation of premature newborns in low-income countries without the need for medical experts."

The researchers have made software using the new algorithm open-source and freely available online. In collaboration with Jennifer B. Griffin of RTI International, the software will be further tested and fine-tuned during an upcoming large-scale clinical trial in sub-Saharan Africa and South Asia, where more than 60% of the world's preterm births occur. The trial is being funded by the Bill and Melinda Gates Foundation.

"Our work demonstrates that machine learning approaches combined with inexpensive, noninvasive optical imaging systems can address resource-intensive, complex global health problems," says Sina Farsiu of the Departments of Biomedical Engineering and Ophthalmology at Duke University, who is the paper's senior author.

The new approach involves using an ophthalmoscope attached to the lens of a smartphone camera to acquire video of blood vessels in a part of the eye known as the anterior lens capsule. To remove the need for an expert to capture images, the researchers developed an algorithm that automatically parses through video to identify the highest-quality frame and the region of interest for analysis.
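The article does not detail how the algorithm scores frame quality or locates the region of interest, so the following is only a minimal sketch of the idea, assuming a focus metric (variance of the Laplacian) as the quality score and a fixed central crop as a stand-in for the detected region of interest.

```python
# Sketch of automatic frame selection from ophthalmoscope video.
# The sharpness metric and the fixed central crop are illustrative
# assumptions, not the published method.
import cv2
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Score focus quality as the variance of the Laplacian."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frame(video_path: str) -> np.ndarray:
    """Return the sharpest frame in the video."""
    cap = cv2.VideoCapture(video_path)
    best, best_score = None, -1.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        score = sharpness(frame)
        if score > best_score:
            best, best_score = frame, score
    cap.release()
    if best is None:
        raise ValueError(f"No frames read from {video_path}")
    return best

def central_roi(frame: np.ndarray, fraction: float = 0.5) -> np.ndarray:
    """Crop a central square as a placeholder region of interest."""
    h, w = frame.shape[:2]
    size = int(min(h, w) * fraction)
    y0, x0 = (h - size) // 2, (w - size) // 2
    return frame[y0:y0 + size, x0:x0 + size]
```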

Once the video is captured, the system applies computational techniques, including convolutional neural networks and other machine learning algorithms, to assess image features in the region of interest and estimate the gestational age. These artificial intelligence approaches allow the computer system to learn from data and improve with experience.
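The network architecture is not described in this article; the sketch below assumes a small PyTorch convolutional classifier that maps a region-of-interest crop to one of the six gestational-age groups reported below, purely to show the shape of such a pipeline.

```python
# Sketch of a CNN that maps a region-of-interest image to one of six
# gestational-age classes (<=33 through 38 weeks). The architecture, input
# size, and class labels are illustrative assumptions, not the network
# described in the paper.
import torch
import torch.nn as nn

GA_CLASSES = ["<=33", "34", "35", "36", "37", "38"]  # weeks

class GestationalAgeNet(nn.Module):
    def __init__(self, num_classes: int = len(GA_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # (N, 32, 1, 1)
        return self.classifier(x.flatten(1))

# Usage: classify a single placeholder 224x224 ROI crop.
model = GestationalAgeNet()
roi = torch.rand(1, 3, 224, 224)
pred = model(roi).argmax(dim=1).item()
print(f"Predicted gestational-age class: {GA_CLASSES[pred]}")
```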

The researchers tested their new approach on a group of 124 newborns in the U.S. They compared the automated method to the best-performing manual method, which involves manually selecting the highest-quality frame in the video, identifying the area showing the anterior lens capsule, and then applying a model of the relationship between blood vessel density and gestational age. They applied both methods to newborns in six gestational age groups: 33 weeks or less, 34, 35, 36, 37, and 38 weeks. The automated method performed as well as or better than the manual method in all groups except the 33-weeks-or-less group.
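A rough sketch of the kind of per-group comparison described above is shown here; the helper names, group binning, and example arrays are hypothetical and stand in for the study's actual evaluation data.

```python
# Sketch of a per-group comparison: bin predictions into the six
# gestational-age groups and compare accuracy of the automated and manual
# pipelines. All names and example values here are hypothetical.
import numpy as np

GROUPS = ["<=33", "34", "35", "36", "37", "38"]  # weeks

def to_group(weeks) -> np.ndarray:
    """Map gestational age in weeks to one of the six group labels."""
    clipped = np.clip(np.asarray(weeks), 33, 38).astype(int)
    return np.where(clipped <= 33, "<=33", clipped.astype(str))

def per_group_accuracy(true_weeks, pred_weeks) -> dict:
    """Fraction of correct group assignments within each true group."""
    true_g, pred_g = to_group(true_weeks), to_group(pred_weeks)
    return {g: float(np.mean(pred_g[true_g == g] == g))
            for g in GROUPS if np.any(true_g == g)}

# Hypothetical example: compare automated vs. manual predictions.
true_ga   = [33, 34, 35, 36, 37, 38, 35, 36]
auto_ga   = [34, 34, 35, 36, 37, 38, 35, 36]
manual_ga = [33, 35, 35, 36, 36, 38, 34, 36]
print("automated:", per_group_accuracy(true_ga, auto_ga))
print("manual:   ", per_group_accuracy(true_ga, manual_ga))
```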

"Our work is a first step to developing a fully automatic pipeline for determining gestational age that is accurate and robust to differences across newborns," Desai says. "If needed, we will fine-tune our algorithm using data from populations with different geographical, racial, and socioeconomic backgrounds."

During the upcoming clinical trial, the researchers plan to collect videos from newborns in low-income countries to see how well the new method works for these children. They expect that combining the automated image-analysis method with other noninvasive imaging biomarkers will yield the best results.

Full details of the work appear in the journal Biomedical Optics Express.
