LUMINESCENT FILMS: Transparent image sensor is based on luminescent concentrator
A new way of capturing images based on a flat, flexible, transparent, and potentially disposable polymer sheet has been developed at Johannes Kepler University Linz (Linz, Austria).1 “To our knowledge, we are the first to present an image sensor that is fully transparent—no integrated microstructures, such as circuits—and is flexible and scalable at the same time,” says Oliver Bimber, one of the two researchers.
The new imager uses fluorescent molecules to capture incoming light and channel a portion of it to an array of sensors framing the sheet. With no electronics or internal components, the imager’s design makes it ideal for a new breed of imaging technologies, including user-interface devices that can respond not to a touch, but merely to a simple gesture.
The sensor is based on a polymer film known as a luminescent concentrator (LC), which is suffused with fluorescent dye particles that absorb a specific wavelength (blue light, for example) and then reemit it at a longer wavelength (such as green light). Some of the reemitted fluorescent light is scattered out of the imager but a portion of it travels via total internal reflection within the film to the outer edges, where arrays of optical sensors (in essence, 1D pinhole cameras) capture the light. A computer then combines the signals to create a grayscale image. “With fluorescence, a portion of the light that is reemitted actually stays inside the film,” says Bimber. “This is the basic principle of our sensor.”
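The fraction of reemitted light that stays trapped follows directly from the film's refractive index: isotropically emitted fluorescence escapes only inside the two "escape cones" around the surface normals, set by the critical angle for total internal reflection. As a rough sketch (assuming a flat film in air with index n, and neglecting scattering and reabsorption losses mentioned above):

```python
import math

def trapped_fraction(n: float) -> float:
    """Fraction of isotropically reemitted fluorescence guided toward the
    film edges by total internal reflection (flat film in air).

    The critical angle satisfies sin(theta_c) = 1/n; each escape cone
    covers a solid-angle fraction (1 - cos(theta_c)) / 2, and with two
    film surfaces the guided fraction works out to cos(theta_c).
    """
    return math.sqrt(1.0 - 1.0 / n**2)

# For a typical polymer film (n ~ 1.5), roughly 75% of the reemitted
# light is trapped and guided to the edge sensors.
print(f"{trapped_fraction(1.5):.1%}")  # → 74.5%
```

This is only the geometric trapping limit; the actual usable signal is lower once reabsorption and surface scattering are included.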
Slits with a triangular cross-section were cut into the LC edges and filled with opaque plasticine; the remaining narrow transparent areas served as 1D pinholes and were paired with line-scan cameras containing 1728 sensor elements over a 210 mm length. Multiple exposures (up to 11) of the light field were taken to increase the dynamic range and signal-to-noise ratio (SNR).
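The benefit of stacking exposures can be illustrated with a simple merge: each reading is normalized by its exposure time, clipped (saturated) samples are discarded to extend dynamic range, and the rest are averaged to suppress noise. This is a generic sketch, not the researchers' actual pipeline; `merge_exposures` and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def merge_exposures(exposures, times, full_scale=1.0):
    """Merge multiple exposures of the same scene (hypothetical helper).

    Readings are scaled to a common radiance estimate by exposure time;
    saturated samples are excluded (extending dynamic range), and
    averaging the remainder reduces noise roughly by sqrt(N_valid).
    """
    exposures = np.asarray(exposures, dtype=float)
    times = np.asarray(times, dtype=float)[:, None]
    valid = exposures < 0.99 * full_scale       # drop clipped readings
    radiance = np.where(valid, exposures / times, 0.0)
    counts = valid.sum(axis=0)
    return radiance.sum(axis=0) / np.maximum(counts, 1)

# Simulate 11 noisy exposures of two signals: one dim, one that
# saturates at the longer exposure times.
true = np.array([0.05, 0.8])
times = np.linspace(0.2, 2.2, 11)
readings = np.clip(true[None, :] * times[:, None]
                   + rng.normal(0, 0.01, (11, 2)), 0.0, 1.0)
est = merge_exposures(readings, times)
```

A single short exposure would either bury the dim signal in noise or, if lengthened, clip the bright one; the merged estimate recovers both.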
Image reconstruction
For the LC to work as an imager, Bimber and his colleagues had to determine precisely where light was falling across the entire surface of the film. This was the major technical challenge because the polymer sheet cannot be divided into individual pixels like conventional image sensors. Instead, fluorescent light from all points across its surface travels to all the edge sensors. Calculating where each bit of light entered the imager would be like determining where along a subway line a passenger got on after the train reached its final destination and all the passengers exited at once.
The solution came from analyzing attenuation of the light as it travels through the polymer; by measuring the relative brightness of light reaching the sensor array, it was possible to calculate where the light entered the film. The researchers reconstruct the image by using a technique similar to that in x-ray computed tomography (CT) scans. (This same principle has already been used in an optical input device that tracks the location of a single laser point on a screen.)
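In one dimension, the localization idea reduces to a textbook Beer-Lambert calculation: if the guided light decays exponentially with travel distance, the ratio of the intensities reaching the two opposite edges pins down the entry point, with the unknown injected intensity canceling out. A minimal sketch, assuming an attenuation coefficient `alpha` and edge readings `i_left`, `i_right` (the full 2D reconstruction uses many edge sensors and a tomography-like inversion):

```python
import math

def locate(i_left: float, i_right: float, length: float, alpha: float) -> float:
    """Recover the 1D entry position x from two edge intensities.

    Assumes Beer-Lambert attenuation:
        i_left  = I0 * exp(-alpha * x)
        i_right = I0 * exp(-alpha * (length - x))
    Taking the log of the ratio cancels the unknown I0 and yields x.
    """
    return 0.5 * (length - math.log(i_left / i_right) / alpha)

# Forward-simulate a point 80 mm from the left edge of a 210 mm film,
# then invert it (alpha and I0 are illustrative values).
L, alpha, x_true, I0 = 210.0, 0.01, 80.0, 1.0
i_l = I0 * math.exp(-alpha * x_true)
i_r = I0 * math.exp(-alpha * (L - x_true))
print(locate(i_l, i_r, L, alpha))  # recovers x ≈ 80.0
```

In the subway analogy, attenuation is the "fare meter": how faded each passenger's ticket is tells you how far they rode, even though everyone exits at once.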
Currently, the resolution of this image sensor is low (32 × 32 pixels for the first prototypes). The main reason is the low SNR of the low-cost photodiodes used. In addition, image quality at the center of the image is lower than near the borders. The researchers are planning improved prototypes that cool the photodiodes to achieve a higher SNR.
The main application the researchers envision for this new technology is in touch-free, transparent user interfaces that could seamlessly overlay a television or other display technology. This would give computer operators or video-game players full gesture control without the need for cameras or other external motion-tracking devices. The polymer sheet could also be wrapped around objects to provide them with sensor capabilities. Since the material is transparent, it’s also possible to use multiple layers that each fluoresce at different wavelengths to capture color images. Touch sensing based on frustrated total internal reflection is also a possibility.
The researchers are also considering placing the new sensor in front of a conventional high-resolution CCD sensor, which would allow two images to be recorded simultaneously at two different exposures. “Combining both would give us a high-resolution image with less overexposed or underexposed regions if scenes with a high dynamic range or contrast are captured,” Bimber says.
REFERENCE
1. A. Koppelhuber and O. Bimber, Opt. Express, 21, 4, 4796 (2013).
John Wallace | Senior Technical Editor (1998-2022)
John Wallace was with Laser Focus World for nearly 25 years, retiring in late June 2022. He obtained a bachelor's degree in mechanical engineering and physics at Rutgers University and a master's in optical engineering at the University of Rochester. Before becoming an editor, John worked as an engineer at RCA, Exxon, Eastman Kodak, and GCA Corporation.
Gail Overton | Senior Editor (2004-2020)
Gail has more than 30 years of engineering, marketing, product management, and editorial experience in the photonics and optical communications industry. Before joining the staff at Laser Focus World in 2004, she held many product management and product marketing roles in the fiber-optics industry, most notably at Hughes (El Segundo, CA), GTE Labs (Waltham, MA), Corning (Corning, NY), Photon Kinetics (Beaverton, OR), and Newport Corporation (Irvine, CA). During her marketing career, Gail published articles in WDM Solutions and Sensors magazine and traveled internationally to conduct product and sales training. Gail received her BS degree in physics, with an emphasis in optics, from San Diego State University in San Diego, CA in May 1986.