Department of Informatics, Robotics and Perception Group

Active Vision

Active vision is concerned with obtaining more information from the environment by actively choosing where and how to observe it using a camera.

Fisher Information Field for Active Visual Localization

For mobile robots to localize robustly, it is essential to consider the perception requirement at the planning stage. In this paper, we propose a novel representation for active visual localization. By carefully formulating the Fisher information and sensor visibility, we are able to summarize the localization information into a discrete grid, namely the Fisher information field. The information for an arbitrary pose can then be computed from the field in constant time, without costly iteration over all the 3D landmarks. Experimental results on simulated and real-world data show the great potential of our method for efficient active localization and perception-aware planning. To benefit related research, we release our implementation of the information field to the public.
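The core idea lends itself to a compact sketch. The following Python fragment is a minimal, illustrative approximation (not the paper's actual derivation, which computes the true Fisher information of the pose estimate and models visibility properly): per-landmark information contributions are pre-summed into a position grid once, so that a later query costs a single lookup instead of a pass over all landmarks. All names (voxel_size, field, localization_info) and the toy visibility and information models are assumptions for illustration.

import numpy as np

voxel_size = 0.5
landmarks = np.random.rand(1000, 3) * 10.0   # hypothetical 3D landmark map

def voxel_index(p):
    return tuple(np.floor(p / voxel_size).astype(int))

# Precompute once: per-voxel sum of a crude per-landmark information proxy.
# The paper instead derives the actual Fisher information and a smooth
# visibility model; the distance weighting here only mimics the structure.
field = {}
for ix in range(20):
    for iy in range(20):
        for iz in range(20):
            center = (np.array([ix, iy, iz]) + 0.5) * voxel_size
            d = np.linalg.norm(landmarks - center, axis=1)
            visible = d < 5.0                     # toy visibility model
            field[(ix, iy, iz)] = float(np.sum(1.0 / (1.0 + d[visible] ** 2)))

def localization_info(position):
    # O(1) query: a grid lookup replaces iterating over all landmarks.
    return field.get(voxel_index(position), 0.0)

print(localization_info(np.array([5.0, 5.0, 5.0])))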

References

Z. Zhang, D. Scaramuzza

Beyond Point Clouds: Fisher Information Field for Active Visual Localization

IEEE International Conference on Robotics and Automation (ICRA), 2019.

PDF (1 MB) | YouTube | Code (coming soon)

Perception-aware Receding Horizon Navigation for MAVs

To reach a given destination safely and accurately, a micro aerial vehicle needs to avoid obstacles and, at the same time, minimize its state estimation uncertainty. To achieve this goal, we propose a perception-aware receding horizon approach. In our method, a single forward-looking camera is used for state estimation and mapping. Using the information from the monocular state estimation and mapping system, we generate a library of candidate trajectories and evaluate them in terms of perception quality, collision probability, and distance to the goal. The trajectory to execute is then selected as the one that maximizes a reward function based on these three metrics. To the best of our knowledge, this is the first work that integrates active vision within a receding horizon navigation framework for a goal-reaching task. We demonstrate in simulation and in real-world experiments on an actual quadrotor that our active approach leads to improved state estimation accuracy in a goal-reaching task when compared to a purely reactive navigation system, especially in difficult scenes (e.g., weak texture).
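To make the selection step concrete, here is a hedged Python sketch of how a candidate trajectory library might be scored along the three axes named above. The three scoring functions and the weights are illustrative placeholders, not the calibrated reward used in the paper.

import numpy as np

def perception_quality(traj, textured_points):
    # Toy proxy: fraction of trajectory points with a textured map point nearby.
    d = np.linalg.norm(textured_points[None, :, :] - traj[:, None, :], axis=2)
    return float(np.mean(d.min(axis=1) < 3.0))

def collision_probability(traj, obstacles, radius=0.5):
    # Toy proxy: 1 if any trajectory point comes within `radius` of an obstacle.
    d = np.linalg.norm(obstacles[None, :, :] - traj[:, None, :], axis=2)
    return float(np.any(d < radius))

def goal_progress(traj, goal):
    # Negative distance from the trajectory end point to the goal.
    return -float(np.linalg.norm(traj[-1] - goal))

def best_trajectory(candidates, textured_points, obstacles, goal,
                    w_perc=1.0, w_coll=10.0, w_goal=1.0):
    def reward(traj):
        return (w_perc * perception_quality(traj, textured_points)
                - w_coll * collision_probability(traj, obstacles)
                + w_goal * goal_progress(traj, goal))
    return max(candidates, key=reward)

In a receding horizon scheme, this selection would be repeated at every replanning step as new state estimates and map points arrive.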

References

Z. Zhang, D. Scaramuzza

Perception-aware Receding Horizon Navigation for MAVs

IEEE International Conference on Robotics and Automation (ICRA), 2018.

PDF (1 MB) | Video | ICRA18 Video Pitch | PPT (PPTX, 93 MB)

Perception-aware Path Planning

While most existing work on path planning focuses on reaching a goal as fast as possible, or with minimal effort, these approaches disregard the appearance of the environment and consider only its geometric structure. Vision-controlled robots, however, need to leverage the photometric information in the scene to localize themselves and estimate their egomotion. In this work, we argue that motion planning for vision-controlled robots should be perception-aware, in that the robot should also favor texture-rich areas to minimize the localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric information, to compute the uncertainty of vision-based localization during path planning.
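As a toy illustration of the idea, the following sketch adds a texture-dependent uncertainty penalty to an ordinary path-length cost. texture_score and the weight lam are hypothetical stand-ins for the photometric information term derived in the paper.

import numpy as np

def texture_score(p, texture_map, cell=1.0):
    # Look up local texture richness in [0, 1]; 1 = richly textured region.
    ix, iy = int(p[0] // cell), int(p[1] // cell)
    return texture_map[ix, iy]

def perception_aware_cost(path, texture_map, lam=5.0):
    length = sum(np.linalg.norm(b - a) for a, b in zip(path[:-1], path[1:]))
    # Low texture -> high expected localization uncertainty -> high penalty.
    uncertainty = sum(1.0 - texture_score(p, texture_map) for p in path)
    return length + lam * uncertainty

# Example: a planner minimizing this cost prefers a slightly longer path
# through textured areas over the shortest path across a texture-less region.
texture_map = np.random.rand(10, 10)
path = [np.array([0.5, 0.5]), np.array([3.5, 2.5]), np.array([7.5, 7.5])]
print(perception_aware_cost(path, texture_map))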

References

G. Costante, J. Delmerico, M. Werlberger, P. Valigi, D. Scaramuzza

Exploiting Photometric Information for Planning under Uncertainty

Springer Tracts in Advanced Robotics (International Symposium on Robotics Research), 2017.

PDF (4 MB) | Longer paper version (technical report) | YouTube

Information Gain Based Active Reconstruction

The estimation of the depth uncertainty makes REMODE extremely attractive for motion planning and active-vision problems. In this work, we investigate the following problem: given the image of a scene, what is the trajectory that a robot-mounted camera should follow to allow an optimal dense 3D reconstruction? The solution we propose is based on maximizing the information gain over a set of candidate trajectories. To estimate the information we expect from a camera pose, we introduce a novel formulation of the measurement uncertainty that accounts for the scene appearance (i.e., the texture in the reference view), the scene depth, and the vehicle pose. We successfully demonstrate our approach in the case of real-time, monocular reconstruction from a small quadrotor and validate the effectiveness of our solution in both synthetic and real experiments. This is the first work on active, monocular, dense reconstruction that chooses motion trajectories to minimize the perceptual ambiguity arising from the texture in the scene.
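The information-gain criterion can be sketched as follows. The uncertainty model, sensing range, and all names below are drastically simplified placeholders for the texture- and depth-dependent measurement uncertainty the paper derives, assuming independent Gaussian depth estimates per voxel.

import numpy as np

def expected_measurement_var(texture, depth):
    # Toy model: weak texture and large depth -> noisier depth measurement.
    return (1.0 / max(texture, 1e-3)) * depth ** 2 * 1e-3

def information_gain(view_pos, voxel_centers, voxel_var, voxel_texture):
    gain = 0.0
    for c, var, tex in zip(voxel_centers, voxel_var, voxel_texture):
        depth = np.linalg.norm(c - view_pos)
        if depth > 8.0:                                 # outside sensing range
            continue
        meas_var = expected_measurement_var(tex, depth)
        post_var = 1.0 / (1.0 / var + 1.0 / meas_var)   # Bayesian fusion
        gain += 0.5 * np.log(var / post_var)            # entropy reduction
    return gain

def next_best_view(candidates, voxel_centers, voxel_var, voxel_texture):
    return max(candidates,
               key=lambda v: information_gain(v, voxel_centers,
                                              voxel_var, voxel_texture))

# Example: pick the viewpoint that most reduces reconstruction uncertainty.
candidates = [np.array([0.0, 0.0, 2.0]), np.array([4.0, 4.0, 2.0])]
centers = np.random.rand(50, 3) * 8.0
print(next_best_view(candidates, centers, np.full(50, 1.0), np.random.rand(50)))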

Download the code from GitHub.

References

S. Isler, R. Sabzevari, J. Delmerico, D. Scaramuzza

An Information Gain Formulation for Active Volumetric 3D Reconstruction

IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.

PDF (3 MB) | YouTube | Software

Active, Dense Reconstruction

Video: Appearance-based Active, Monocular, Dense Reconstruction for Micro Aerial Vehicles
This earlier RSS 2014 work introduced the appearance-based approach to active, monocular, dense reconstruction for micro aerial vehicles: candidate camera trajectories are scored by their expected information gain, using a measurement-uncertainty model that accounts for scene texture, depth, and vehicle pose (see the full description under the previous entry).

References

C. Forster, M. Pizzoli, D. Scaramuzza

Appearance-based Active, Monocular, Dense Reconstruction for Micro Aerial Vehicles

Robotics: Science and Systems (RSS), Berkeley, 2014.

PDF (7 MB)