CMU researchers use computer vision to see around corners

Future autonomous vehicle and other machine intelligence systems might not need line-of-sight to gather incredibly detailed image data: New research from Carnegie Mellon University, the University of Toronto and University College London has devised a technique for “seeing around corners.”

The method uses specialized light sources, combined with sensors and computer vision processing, to infer or reconstruct imagery far more detailed than was previously possible, without photographing or otherwise "viewing" the subject directly.

There are some limitations: so far, the researchers have only been able to apply the technique effectively to "relatively small areas," according to CMU Robotics Institute professor Srinivasa Narasimhan.

That limitation could be mitigated by combining this technique with others used in the field of non-line-of-sight (NLOS) computer vision research. Some such techniques are already on the market: Tesla's Autopilot system (and other driver-assist technologies), for instance, uses bounced radar signals to detect vehicles beyond the car immediately ahead.

The technique used in this new study is actually similar to what happens in a LiDAR system used in many autonomous vehicle systems (though Tesla famously eschews use of laser-based vision systems in its tech stack). CMU and its partner institutions use ultrafast laser light in their system, bouncing it off a wall to light an object hidden around a corner.

Sensors then capture the reflected light when it bounces back, and the researchers measure how long the light took to return to its point of origin. By taking a number of these measurements and using information about the target object's geometry, the team was able to reconstruct the objects with remarkable accuracy and detail. The method even works through semi-obscuring materials, including heavy paper – another big benefit when it comes to its potential use in environment sensors that operate in real-world conditions.
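The core time-of-flight idea can be illustrated with a short sketch. This is not the researchers' code: the function names, the 20 ns example, and the assumption that the wall-to-object leg is symmetric are all simplifications made for illustration. The principle is simply that a measured round-trip time, multiplied by the speed of light, gives the total length of the laser-wall-object-wall-sensor bounce path.

```python
# Illustrative sketch (not the study's actual reconstruction code):
# recovering a hidden object's distance from a time-of-flight measurement.
# The pulse travels laser -> wall -> hidden object -> wall -> sensor,
# so the round-trip time encodes the total bounce-path length.

C = 299_792_458.0  # speed of light, m/s

def bounce_path_length(round_trip_time_s: float) -> float:
    """Total optical path length implied by a measured round-trip time."""
    return C * round_trip_time_s

def hidden_leg_length(round_trip_time_s: float,
                      laser_to_wall_m: float,
                      wall_to_sensor_m: float) -> float:
    """One-way wall-to-hidden-object distance, assuming (for simplicity)
    the light traverses the wall<->object segment symmetrically."""
    total = bounce_path_length(round_trip_time_s)
    # Subtract the two known legs, then halve the remaining
    # wall -> object -> wall portion of the path.
    return (total - laser_to_wall_m - wall_to_sensor_m) / 2.0

# Example: a pulse returning after ~20 ns, with 1 m known legs each way,
# implies the hidden object sits roughly 2 m from the relay wall.
print(round(hidden_leg_length(20e-9, 1.0, 1.0), 3))
```

In practice the reconstruction is far more involved, since many such measurements across the relay wall must be combined with the object's geometry, but this is the distance relationship that makes the imaging possible.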

At left, an image of a quarter scanned using non-line-of-sight imaging. At right, an image of a quarter scanned using line-of-sight imaging.
