
New Technique Improves Single Sensor Cameras

New technique makes “Compressed Sensing” 50 times more efficient.

Compressed sensing is a relatively new technique for reconstructing a signal from far fewer measurements than conventional sampling would require. In a traditional camera, millions of light sensors each capture one point of the scene, and the camera builds an image by compositing the light recorded at each point (pixel). With compressed sensing, as researchers at Rice University have demonstrated, a camera can produce 2D images using only a single optical sensor.
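In the standard compressed-sensing formulation (textbook notation, not taken from the article itself), the n-pixel scene x is observed through m coded measurements y = Φx, where m is much smaller than n and each row of the matrix Φ describes one filter pattern. Because natural images are sparse in a suitable basis, x can still be recovered, typically by solving min ‖x‖₁ subject to Φx = y; a numerical sketch of this idea appears later in the article.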

The drawback of compressed sensing is that it's inefficient. You can capture an image with a single sensor, but it requires thousands of exposures, and that makes the technique unfeasible as a competitor to conventional cameras. However, researchers at MIT have developed a new technique that they believe increases the efficiency of compressed sensing 50 times over, reducing those thousands of exposures to just dozens.

One advantage of a compressed-sensing camera is that it doesn't require a lens. In a traditional camera, light entering the objective must be bent and refocused by a lens so that it lands on the sensor array at the focal point. In a camera with just a single sensor, light is received at a single point, and a single point of light cannot be blurry or out of focus, because the light always hits the sensor directly. This allows higher-resolution imagery that isn't distorted by light contamination from beyond the focal plane. The concept is the same one used in laboratory confocal microscopes, which achieve the effect with a pinhole. Another advantage is smaller, simpler imaging components.

[Figure: a sample of image qualities obtained using compressed sensing.]

“Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered,” says Guy Satat, a graduate student at MIT and an author of a new paper describing the technique. “With computational imaging, we began to ask: Is a lens necessary? Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is. The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient.”

The camera works by passing incoming light through a filter. The filter blocks some parts of the scene and passes others. The light sensor then records a single aggregate measurement of the light passing through the filter, and the measurement is repeated with a different filter. By analyzing how each filter affects the measurement, a computer can reconstruct what the image should look like.
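To make that loop concrete, here is a minimal numerical sketch in Python. It illustrates the general single-pixel recipe, not the MIT team's actual method: the scene size, the number of exposures, the random ±1 patterns, and the assumption that the sparsity level k is known are all hypothetical choices, and the solver (hard thresholding pursuit) is just one simple sparse-recovery algorithm among many.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32 * 32        # pixels in the (flattened) scene
m = 300            # exposures: one filter pattern per measurement, m << n
k = 15             # nonzero pixels in this toy scene (assumed known here)

# Toy scene: a few bright points on a dark background (a sparse image).
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(0.5, 1.0, k)

# Each row of A is one filter pattern. Physical masks pass or block light
# (0/1); the +/-1 entries here mimic differencing two complementary 0/1
# exposures, which gives the matrix better recovery properties.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x          # m single-sensor readings, far fewer than n pixels

# Reconstruction via hard thresholding pursuit: take a gradient step,
# keep the k largest entries, then least-squares fit on that support.
x_hat = np.zeros(n)
for _ in range(20):
    g = x_hat + A.T @ (y - A @ x_hat)
    support = np.argsort(np.abs(g))[-k:]
    x_hat = np.zeros(n)
    x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Real single-pixel cameras display 0/1 mask patterns on a device such as a digital micromirror array; the ±1 patterns above simply model the difference between two complementary 0/1 exposures.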

Source: IEEE

David F.
A grad student in experimental physics, David is fascinated by science, space, and technology. When not buried in lecture books, he enjoys movies, gaming, and mountain biking.

