Active Vision

Active vision is concerned with obtaining more information from the environment by actively choosing where and how to observe it using a camera.

Perception-aware Path Planning

TRO16_Costante

Most existing work on path planning focuses on reaching a goal as fast as possible or with minimal effort; such approaches consider only the geometric structure of the environment and disregard its appearance. Vision-controlled robots, however, need to leverage the photometric information in the scene to localize themselves and estimate their ego-motion. In this work, we argue that motion planning for vision-controlled robots should be perception-aware: the robot should also favor texture-rich areas to minimize the localization uncertainty during a goal-reaching task. We therefore describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric information, when computing the uncertainty of vision-based localization during path planning.
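
To make the trade-off concrete, here is a minimal sketch of a perception-aware path cost. It illustrates the idea only and is not the paper's formulation: the texture map, the candidate paths, the toy uncertainty model, and the weight `lam` are all hypothetical.

```python
# Perception-aware path scoring (illustrative sketch, not the paper's method):
# each candidate path is penalized by its length and by the localization
# uncertainty accumulated along it, which shrinks in texture-rich map cells.
import numpy as np

def path_length(path):
    """Sum of Euclidean distances between consecutive waypoints."""
    return sum(np.linalg.norm(b - a) for a, b in zip(path[:-1], path[1:]))

def localization_uncertainty(path, texture_map, samples_per_segment=10):
    """Toy model: pose uncertainty at a point shrinks with the texture
    richness (e.g., image-gradient density) of the map cell observed there."""
    tau = 0.1  # floor so texture-less cells give large but finite uncertainty
    total = 0.0
    for a, b in zip(path[:-1], path[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment):
            x, y = (1.0 - t) * a + t * b
            total += 1.0 / (texture_map[int(x), int(y)] + tau)
    return total

def perception_aware_cost(path, texture_map, lam=0.5):
    """Trade off path length against accumulated localization uncertainty."""
    return path_length(path) + lam * localization_uncertainty(path, texture_map)

# The straight corridor (y = 0) is texture-less; the rest of the map is textured.
texture_map = np.ones((10, 10))
texture_map[:, 0] = 0.0

short_path  = np.array([[0.0, 0.0], [9.0, 0.0]])               # fast but texture-less
detour_path = np.array([[0.0, 0.0], [5.0, 6.0], [9.0, 0.0]])   # longer, texture-rich

# With lam = 0.5 the perception-aware cost favors the detour: the extra path
# length is outweighed by the much lower accumulated localization uncertainty.
best = min([short_path, detour_path],
           key=lambda p: perception_aware_cost(p, texture_map))
```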

References

TRO16_Costante

G. Costante, J. Delmerico, M. Werlberger, P. Valigi, D. Scaramuzza

Exploiting Photometric Information for Planning under Uncertainty

Springer Tracts in Advanced Robotics (International Symposium on Robotics Research, ISRR), 2017.

PDF | PDF of longer paper version (technical report) | YouTube

Information Gain Based Active Reconstruction

ICRA16_Isler

In this work, we address the next-best-view problem for active volumetric 3D reconstruction: given a partial volumetric map of the scene, where should the camera be placed next to improve the reconstruction the most? We propose an information gain formulation that quantifies, for each candidate view, the expected reduction of uncertainty in the volumetric map, taking the visibility of the voxels from that view into account, and we select the view that maximizes this gain.

Download the code from GitHub.
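
For intuition, the following is a minimal sketch of information-gain-driven view selection on a voxel occupancy grid. It shows the generic next-best-view principle rather than the paper's visibility-aware gain formulations; the grid size, the toy visibility model, and the candidate views are assumptions made for this example.

```python
# Next-best-view selection by expected information gain (illustrative sketch):
# pick the candidate view whose visible voxels carry the most uncertainty.
import numpy as np

def entropy(p):
    """Shannon entropy of a Bernoulli occupancy probability."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def visible_voxels(view, grid_shape, radius=4.0):
    """Toy visibility model: every voxel within `radius` of the view is seen.
    A real system would ray-cast through the grid and stop at occupied voxels."""
    idx = np.indices(grid_shape).reshape(len(grid_shape), -1).T
    return idx[np.linalg.norm(idx - view, axis=1) <= radius]

def information_gain(view, occupancy):
    """Expected gain: total entropy of the voxels the candidate view covers."""
    vis = visible_voxels(view, occupancy.shape)
    return entropy(occupancy[tuple(vis.T)]).sum()

# Occupancy grid: 0.5 = unknown (max entropy), values near 0/1 = already mapped.
occupancy = np.full((10, 10, 10), 0.5)
occupancy[:5] = 0.05                      # left half already observed as free

candidates = [np.array(v) for v in [(2, 5, 5), (7, 5, 5), (5, 5, 9)]]
# The view facing the unexplored half of the grid wins.
best_view = max(candidates, key=lambda v: information_gain(v, occupancy))
```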

References

ICRA16_Isler

S. Isler, R. Sabzevari, J. Delmerico, D. Scaramuzza

An Information Gain Formulation for Active Volumetric 3D Reconstruction

IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.

PDF | YouTube | Software

Active, Dense Reconstruction

YouTube video: Appearance-based Active, Monocular, Dense Reconstruction for Micro Aerial Vehicles
The estimation of the depth uncertainty makes REMODE extremely attractive for motion planning and active-vision problems. In this work, we investigate the following problem: given the image of a scene, what is the trajectory that a robot-mounted camera should follow to allow optimal dense 3D reconstruction? The solution we propose is based on maximizing the information gain over a set of candidate trajectories. To estimate the information that we expect from a camera pose, we introduce a novel formulation of the measurement uncertainty that accounts for the scene appearance (i.e., the texture in the reference view), the scene depth, and the vehicle pose. We successfully demonstrate our approach with real-time, monocular reconstruction from a small quadrotor and validate the effectiveness of our solution in both synthetic and real experiments. This is the first work on active, monocular, dense reconstruction that chooses motion trajectories to minimize the perceptual ambiguities arising from the texture of the scene.
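
As an illustration of how appearance, depth, and pose can enter a measurement-uncertainty model, here is a minimal sketch. The variance model, the `focal`, `px_noise`, and `baseline` parameters, and the candidate trajectories are hypothetical; the paper derives its own formulation.

```python
# Appearance-aware depth-uncertainty model (illustrative sketch): the variance
# of a triangulated depth measurement grows with scene depth and shrinks with
# image texture (photometric gradient) and with the triangulation baseline
# given by the vehicle pose.
import numpy as np

def depth_variance(depth, texture_gradient, baseline, focal=450.0, px_noise=1.0):
    """Toy model combining the three factors named in the text:
    - matching accuracy degrades in low-texture regions (1 / gradient),
    - triangulation error grows quadratically with depth,
    - a wider baseline (from the vehicle pose) reduces the error.
    `focal` is a hypothetical focal length in pixels."""
    pixel_sigma = px_noise / max(texture_gradient, 1e-3)  # texture-dependent matching noise
    return (depth ** 2 / (focal * max(baseline, 1e-3))) ** 2 * pixel_sigma ** 2

def trajectory_score(poses, depths, gradients, ref_pose):
    """Sum of expected measurement precisions (inverse variances) along a
    candidate trajectory; the best trajectory maximizes this score."""
    score = 0.0
    for pose in poses:
        baseline = np.linalg.norm(pose - ref_pose)
        score += sum(1.0 / depth_variance(d, g, baseline)
                     for d, g in zip(depths, gradients))
    return score

# Two hypothetical candidate trajectories around a reference view at the origin.
ref = np.zeros(3)
traj_a = [np.array([0.2 * i, 0.0, 0.0]) for i in range(1, 6)]   # small baselines
traj_b = [np.array([0.5 * i, 0.0, 0.0]) for i in range(1, 6)]   # wider baselines
depths = [2.0, 4.0, 6.0]            # sampled scene depths (meters)
gradients = [0.1, 0.8, 0.4]         # image-gradient magnitudes at those points

# The wider-baseline trajectory yields more precise depth measurements.
best = max([traj_a, traj_b], key=lambda t: trajectory_score(t, depths, gradients, ref))
```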

References

RSS14_Forster

C. Forster, M. Pizzoli, D. Scaramuzza

Appearance-based Active, Monocular, Dense Reconstruction for Micro Aerial Vehicles

Robotics: Science and Systems (RSS), Berkeley, 2014.

PDF