Researchers have developed a way of mathematically removing the effects of the eye's aberrations from retinal images using wavefront-sensing techniques. At the University of Murcia (Murcia, Spain), physicists have not only demonstrated the concept theoretically and through computer simulation, but have also carried out experiments both in an artificial eye and in vivo. Many improvements are necessary before the technique can be used in practice, but the researchers say the initial results are promising.
Imaging through the eye is difficult for two reasons: first, because the eye is a complicated optical system in its own right and, second, because it is continuously changing. The latter problem is due to eye movements, microfluctuations in the shape of the eye's lens as it tries to focus at different distances (accommodation), and the constantly varying tear film (the layer of liquid across the front of the eye). These aberrations must be taken into account in order to capture a high-resolution image of the retina.
The University of Murcia ophthalmoscope works by combining information captured simultaneously in two different ways, with an electronic shutter ensuring that the two acquisitions coincide (see Fig. 1).1 The optical system sends light from the imaged retina to two charge-coupled-device (CCD) detectors. One of these detects the focused image, while the other detects many small copies of the image focused by a lenslet array.
The latter provides a mechanism for calculating the wavefront shape through image processing, which involves several steps. First, the central lenslet image has to be found (see Fig. 2). Next, this image is correlated with the other lenslet images (using a fast Fourier transform to implement the correlation), followed by a second cross-correlation step that refines the results of the first. These steps produce a set of coordinates: the positions of the surrounding lenslet images with respect to the central one. Finally, these coordinates are used to produce a map of the aberrated wavefront, which is in turn used to calculate the point-spread function (PSF) of the eye.

The retinal image data has to be processed separately before it is used. Several CCD frames are detected over the acquisition time, and these have to be averaged in order to reduce noise. Before this can be done, however, they have to be recentered to compensate for tip and tilt aberrations, which may have shifted the frames with respect to each other. This recentering is carried out by maximizing the correlation of the first image with each of the successive images.
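The article does not include any of the Murcia group's code; the following is a minimal numpy sketch, assuming that a single FFT-based cross-correlation routine can serve both purposes described above: measuring the displacement of each lenslet sub-image with respect to the central one, and recentering successive retinal frames before averaging. The function name and demo values are illustrative, not taken from the paper.

```python
import numpy as np

def correlation_shift(reference, image):
    """Estimate the (row, col) displacement of `image` relative to `reference`
    from the peak of their cross-correlation, computed via the FFT."""
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(image)
    corr = np.fft.ifft2(np.conj(f_ref) * f_img).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the frame correspond to negative (wrapped-around) shifts.
    return tuple(c - s if c > s // 2 else c for c, s in zip(peak, corr.shape))

# Illustrative use: recenter a frame that tip/tilt has shifted before averaging.
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (3, -5), axis=(0, 1))         # simulated shifted frame
dy, dx = correlation_shift(frame0, frame1)             # -> (3, -5)
recentered = np.roll(frame1, (-dy, -dx), axis=(0, 1))  # undo the shift
```

The same peak-finding step, applied between the central lenslet image and each surrounding one, yields the spot displacements from which the wavefront map is reconstructed.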
After these recentered images have been averaged, the result can be improved further by deconvolution with the PSF determined for that particular data set. The deconvolution removes the blurring introduced by the calculated wavefront aberration, leaving a higher-resolution image.
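The article does not say which deconvolution algorithm was used; as a hedged sketch of what such a step can look like, the numpy code below forms the PSF as the squared magnitude of the Fourier transform of the generalized pupil function built from the wavefront map, then applies a Wiener-style regularized inverse filter. The function names, the regularization constant k, and the assumption that the PSF is sampled on the same grid as the image are illustrative choices, not the authors' implementation.

```python
import numpy as np

def psf_from_wavefront(wavefront, pupil_mask, wavelength):
    """PSF of an aberrated eye: |FFT{ pupil_mask * exp(i*2*pi*W/wavelength) }|^2,
    where W is the wavefront-aberration map in the same length units as wavelength."""
    pupil = pupil_mask * np.exp(1j * 2.0 * np.pi * wavefront / wavelength)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                        # normalize to unit energy

def wiener_deconvolve(image, psf, k=1e-3):
    """Regularized inverse filter: divide by the optical transfer function (OTF) in
    the Fourier domain; the constant k limits noise amplification where the OTF is
    small. Assumes psf is centered and has the same shape as image."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    restored = np.fft.ifft2(np.fft.fft2(image) * np.conj(otf) / (np.abs(otf) ** 2 + k))
    return restored.real
```

A plain inverse filter (dividing by the OTF directly) would amplify noise at spatial frequencies the aberrated eye barely transmits; the regularization term trades a little sharpness for stability.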
After performing computer simulations to demonstrate the principle, the Murcia team used as subjects both an artificial eye with a test chart as its "retina" and a real human eye. In the former, the aberrations were changed manually by adjusting the optics during the exposure, and 20 CCD frames of the retinal image were then processed. The noise dropped significantly through the averaging, and the resolution improved markedly after deconvolution with the calculated PSF (see Fig. 2). The same was true in the human experiment, though only nine CCD frames were averaged. Work continues to improve the system further.
REFERENCES
- I. Iglesias and P. Artal, Opt. Lett. 25 (24), 1804 (Dec. 15, 2000).
Sunny Bains | Contributing Editor
Sunny Bains is a contributing editor for Laser Focus World and a technical journalist based in London, England.