Software creates 3D movie of scene using two still frames from stationary camera
Cambridge, MA--Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed a way for photographers and microscopists to create a 3D image through a single lens, without moving the camera.1 Called light-field moment imaging (LMI), the technique computationally extracts useful depth information from the different angular ranges of light arriving at each pixel; no additional hardware of any kind is required.
The software developed by associate professor Ken Crozier and graduate student Antony Orth takes two images from the same camera position but focused at different depths and, from the slight differences between these two images, provides enough information for a computer to mathematically create a brand-new image as if the camera had been moved to one side. By stitching these two images together into an animation, Crozier and Orth provide a way for amateur photographers and microscopists alike to create the impression of a stereo image without the need for expensive hardware.
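The reconstruction described above can be sketched in code. The sketch below is a minimal illustration only, not the authors' software: it follows the general light-field moment imaging idea reported in their paper (estimate the axial intensity derivative from the two images, solve a Poisson equation for the first angular moments of the light field, then synthesize off-axis views by weighting the image with an assumed Gaussian angular profile). Every function name and parameter here (`lmi_views`, `dz`, `sigma`, `angles`) is hypothetical.

```python
import numpy as np

def lmi_views(i1, i2, dz=1.0, sigma=0.3, angles=(-0.5, 0.5)):
    """Hedged sketch of light-field moment imaging (LMI).

    i1, i2  -- two images from the same viewpoint, focused at
               depths separated by dz (illustrative units).
    sigma   -- assumed angular spread of the light field.
    angles  -- horizontal view angles to synthesize.
    Returns one synthesized perspective image per angle.
    """
    ibar = 0.5 * (i1 + i2)          # intensity at the mid-plane
    didz = (i2 - i1) / dz           # finite-difference axial derivative

    # Continuity equation: dI/dz + div(I * p) = 0, with I*p = grad(U),
    # gives the Poisson equation laplacian(U) = -dI/dz.
    # Solve it spectrally: -k^2 * U_hat = -F{dI/dz}.
    ny, nx = ibar.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                  # avoid divide-by-zero at DC (zero-mean U)
    U = np.fft.ifft2(np.fft.fft2(didz) / k2).real

    # Normalized first angular moments (mean ray angle per pixel).
    eps = 1e-6 * ibar.max()
    px = np.gradient(U, axis=1) / (ibar + eps)
    py = np.gradient(U, axis=0) / (ibar + eps)

    # Synthesize each view by assuming a Gaussian angular distribution
    # centered on (px, py) and sampling it at the requested view angle.
    views = []
    for u0 in angles:
        weight = np.exp(-((u0 - px)**2 + py**2) / (2.0 * sigma**2))
        views.append(ibar * weight)
    return views
```

Stitching the returned views into a short looping animation produces the rocking "3D movie" effect the article describes.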
"Cameras have been developed with all kinds of new hardware -- microlens arrays and absorbing masks -- that can record the direction of the light, and that allows you to do some very interesting things, such as take a picture and focus it later, or change the perspective view," says Crozier. "That's great, but the question we asked was, can we get some of that functionality with a regular camera, without adding any extra hardware?"
(Video: Harvard School of Engineering and Applied Sciences)
The technique offers a new and accessible way to create 3D images of translucent materials, such as biological tissues. Biologists can use a variety of tools to create 3D optical images, including light-field microscopes, which are limited in terms of spatial resolution and are not yet commercially available; confocal microscopes, which are expensive; and a computational method called "shape from focus," which uses a stack of images focused at different depths to identify at which layer each object is most in focus. Shape from focus is less sophisticated than Crozier and Orth's new technique because it makes no allowance for overlapping materials, such as a nucleus that might be visible through a cell membrane, or a sheet of tissue that's folded over on itself. Stereo microscopes may be the most flexible and affordable option right now, but they are still not as common in laboratories as traditional, monocular microscopes.
For the 3D effect to be noticeable, the camera aperture must be wide enough to admit light from a broad range of angles, so that the two images focused at different depths differ measurably. A cellphone camera's aperture proves too small (Orth tried it on his iPhone), but a standard 50-mm lens on a single-lens reflex camera is more than adequate.
REFERENCE:
1. Antony Orth and Kenneth B. Crozier, "Light field moment imaging," Optics Letters, Vol. 38, Issue 15, p. 2666 (2013); http://dx.doi.org/10.1364/OL.38.002666