Sensors

Depth sensors could be sensitive enough for self-driving cars

11th January 2018
Enaie Azambuja

For the past 10 years, the Camera Culture group at MIT’s Media Lab has been developing innovative imaging systems — from a camera that can see around corners to one that can read text in closed books — by using “time of flight,” an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That’s the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars. At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That’s good enough for the assisted-parking and collision-detection systems on today’s cars.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, “As you increase the range, your resolution goes down exponentially. Let’s say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you’re back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life.”

At distances of 2 meters, the MIT researchers’ system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system.

Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter. Kadambi is joined on the paper by his thesis advisor, Ramesh Raskar, an associate professor of media arts and sciences and head of the Camera Culture group.

With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it’s traveled. So light-burst length is one of the factors that determines system resolution.
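As a rough illustration of the principle just described, the sketch below converts a measured round-trip time into a distance. It is a minimal example of pulsed time-of-flight arithmetic, not code from the MIT system; the function name and values are ours.

```python
# Minimal sketch of the pulsed time-of-flight distance calculation
# described above. Illustrative only; not from the MIT paper.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, given the measured
    round-trip time of a light pulse (out and back, hence the /2)."""
    return C * round_trip_time_s / 2.0

# Example: a pulse that returns after ~13.3 nanoseconds
# corresponds to an object roughly 2 meters away.
print(tof_distance(13.34e-9))  # ~2.0 m
```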

The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today’s detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution.
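A back-of-the-envelope calculation, sketched below, shows why detector speed matters. One raw timing bin of a detector spans a fixed slice of round-trip depth; real systems interpolate within a bin (for instance, via the phase of the modulated signal) to do considerably better than one raw bin, which is how they reach centimeter scale. The numbers and function are ours, for illustration only.

```python
# Rough bound on depth per raw timing bin in a pulsed time-of-flight
# system. Illustrative arithmetic, not the authors' method.

C = 299_792_458.0  # speed of light in m/s

def depth_per_timing_bin(sample_rate_hz: float) -> float:
    """Depth spanned by one raw timing bin, accounting for the
    round trip (out and back)."""
    return C / (2.0 * sample_rate_hz)

print(depth_per_timing_bin(1e9))    # 1 GHz (modulator speed): ~0.15 m per bin
print(depth_per_timing_bin(100e6))  # 100 MHz (detector speed): ~1.5 m per bin
```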

There is, however, another imaging technique that enables higher resolution, Kadambi says. That technique is interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half — the “sample beam” — is fired into a visual scene.

The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams — the relative alignment of the troughs and crests of their electromagnetic waves — yields a very precise measure of the distance the sample beam has traveled.
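The phase-to-distance relation behind this precision can be sketched in a few lines of Python (names and values are ours, not from the paper). A measured phase offset pins down the path-length difference only modulo one wavelength, which is why interferometry is extremely precise but range-ambiguous on its own.

```python
import math

# Sketch of the interferometric phase-to-distance relation described
# above. Illustrative only; not code from the MIT system.

def path_difference(phase_rad: float, wavelength_m: float) -> float:
    """Path-length difference implied by the phase offset between
    the sample beam and the reference beam, modulo one wavelength."""
    return (phase_rad % (2.0 * math.pi)) / (2.0 * math.pi) * wavelength_m

# Example: a quarter-cycle phase shift at a 1550 nm telecom wavelength
# corresponds to a sub-micrometer path difference.
print(path_difference(math.pi / 2, 1550e-9))  # ~3.9e-7 m
```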

But interferometry requires careful synchronisation of the two light beams. “You could never put interferometry on a car because it’s so sensitive to vibrations,” Kadambi says. “We’re using some ideas from interferometry and some of the ideas from LIDAR, and we’re really combining the two here.”


Image credit: MIT.
