The Robot Report


Computer vision researchers use special light sources to see around corners

By The Robot Report Staff | June 22, 2019

Computer vision researchers have demonstrated they can use special light sources and sensors to see around corners or through gauzy filters, enabling them to reconstruct the shapes of unseen objects.

The researchers from Carnegie Mellon University, the University of Toronto and University College London said this technique enables them to reconstruct images in great detail, including the relief of George Washington’s profile on a U.S. quarter.

Ioannis Gkioulekas, an assistant professor in Carnegie Mellon’s Robotics Institute, said this is the first time researchers have been able to compute millimeter- and micrometer-scale shapes of curved objects, providing an important new component to a larger suite of non-line-of-sight (NLOS) imaging techniques now being developed by computer vision researchers.

“It is exciting to see the quality of reconstructions of hidden objects get closer to the scans we’re used to seeing for objects that are in the line of sight,” said Srinivasa Narasimhan, a professor in the Robotics Institute. “Thus far, we can achieve this level of detail for only relatively small areas, but this capability will complement other NLOS techniques.”

This work was supported by the Defense Advanced Research Projects Agency’s REVEAL program, which is developing NLOS capabilities. The research was presented at the 2019 Conference on Computer Vision and Pattern Recognition in Long Beach, California, where it received a Best Paper award.

“This paper makes significant advances in non-line-of-sight reconstruction – in essence, the ability to see around corners,” the award citation says. “It is both a beautiful paper theoretically as well as inspiring. It continues to push the boundaries of what is possible in computer vision.”

Most of what people see – and what cameras detect – comes from light that reflects off an object and bounces directly to the eye or the lens. But light also scatters in other directions, bouncing off walls and other surfaces. A faint bit of this scattered light ultimately might reach the eye or the lens, but it is washed out by more direct, powerful light sources. NLOS techniques try to extract information from scattered light – naturally occurring or otherwise – to produce images of scenes, objects or parts of objects not otherwise visible.

“Other NLOS researchers have already demonstrated NLOS imaging systems that can understand room-size scenes, or even extract information using only naturally occurring light,” Gkioulekas said. “We’re doing something that’s complementary to those approaches – enabling NLOS systems to capture fine detail over a small area.”

In this case, the researchers used an ultrafast laser to bounce light off a wall to illuminate a hidden object. By recording exactly when the laser fired its pulses, the researchers could calculate how long the light took to reach the hidden object, reflect off it, bounce off the wall on its return trip, and reach a sensor.

At left, an image of a quarter scanned using non-line-of-sight imaging. At right, an image of a quarter scanned using line-of-sight imaging. | Credit: CMU

“This time-of-flight technique is similar to that of the lidars often used by self-driving cars to build a 3D map of the car’s surroundings,” said Shumian Xin, a Ph.D. student in robotics.
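The basic time-of-flight bookkeeping described above can be sketched in a few lines. This is a simplified illustration under assumed geometry, not the authors’ actual implementation: in a three-bounce setup, the measured flight time covers the laser-to-wall leg, the round trip between the wall and the hidden object, and the wall-to-sensor leg. With the two wall legs known from calibration, the unknown wall-to-object distance falls out directly.

```python
# Illustrative sketch only: recovering the wall-to-object distance from a
# measured photon time of flight in a three-bounce NLOS setup. The function
# name and calibration parameters are assumptions for this example.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wall_to_object_distance(t_total, d_laser_to_wall, d_wall_to_sensor):
    """Total measured path = laser->wall + wall->object + object->wall
    + wall->sensor. Subtracting the two calibrated wall legs leaves the
    round trip to the hidden object; halving it gives the one-way distance."""
    total_path = C * t_total
    round_trip = total_path - d_laser_to_wall - d_wall_to_sensor
    return round_trip / 2.0

# Example: a 10 ns total flight time with 1 m calibration legs puts the
# hidden surface point roughly half a meter from the wall.
d = wall_to_object_distance(10e-9, 1.0, 1.0)
```

Each such measurement constrains the hidden surface point to lie on a sphere of that radius around the illuminated wall spot; combining many measurements is what lets the geometry of the hidden object be reconstructed.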

Previous attempts to use these time-of-flight calculations to reconstruct an image of the object have depended on the brightness of the reflections off it. But in this study, Gkioulekas said the researchers developed a new method based purely on the geometry of the object, which in turn enabled them to create an algorithm for measuring its curvature.

The researchers tested the technique using an imaging system that is effectively a lidar capable of sensing individual photons, reconstructing objects such as a plastic jug, a glass bowl, a plastic bowl and a ball bearing. They also combined this technique with an imaging method called optical coherence tomography to reconstruct images of U.S. quarters.

In addition to seeing around corners, the technique proved effective in seeing through diffusing filters, such as thick paper.

The technique thus far has been demonstrated only at short distances – a meter at most. But the researchers speculate that their technique, based on geometric measurements of objects, might be combined with other, complementary approaches to improve NLOS imaging. It might also be employed in other applications, such as seismic imaging and acoustic and ultrasound imaging.

In addition to Narasimhan, Gkioulekas and Xin, the research team included Aswin Sankaranarayanan, assistant professor in CMU’s Department of Electrical and Computer Engineering; Sotiris Nousias, a Ph.D. student in medical physics and bioengineering at University College London; and Kiriakos N. Kutulakos, a professor of computer science at the University of Toronto.

The researchers are part of a larger collaborative team, which includes researchers from Stanford University, the University of Wisconsin-Madison, the University of Zaragoza, Politecnico di Milano and the French-German Research Institute of Saint-Louis, that is developing a suite of complementary techniques for NLOS imaging.

In addition to DARPA, the National Science Foundation, the Office of Naval Research and the Natural Sciences and Engineering Research Council of Canada supported this research.

Editor’s Note: This article was republished from Carnegie Mellon University.

Copyright © 2022 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media