MIT researchers use shadows to create a video of what happens off camera



In order for self-driving cars to park themselves, they’ll need to be able to see around corners. A team from MIT’s CSAIL may have a new way to do that. Using only video footage of shadows, they’ve developed an algorithm that can reconstruct video of what’s happening off camera.

In their experiment, the team filmed a pile of clutter. Off screen, someone created shadows by moving blocks and other objects. Their algorithm estimated the light transport, the way light is expected to travel through the scene, and compared that prediction to the observed shadows. It then used that information to reconstruct the off-screen video.

While the results of the work are still blurry and unrefined — the reconstructed videos show color and motion but not detail — the system could one day help self-driving cars detect what’s happening around corners or improve search-and-rescue missions in obstructed areas.

This isn’t the first time MIT has attempted to see around corners. This method improves on that earlier work because it doesn’t require laser-powered cameras, and it can reconstruct an off-screen image from any video of a scene, not just video showing changes in lighting on the floor. Next, the CSAIL team plans to improve the resolution of the reconstructed video and to test the system in uncontrolled environments.
