Visual and Inertial Odometry

Active Exposure Control for Robust Visual Odometry in High Dynamic Range (HDR) Environments


In this paper, we propose an active exposure control method to improve the robustness of visual odometry in HDR (high dynamic range) environments. Our method determines the proper exposure time by maximizing a robust gradient-based image quality metric. The optimization is achieved by exploiting the photometric response function of the camera. Our exposure control method is evaluated in different real-world environments and outperforms both the built-in auto-exposure function of the camera and a fixed exposure time. To validate the benefit of our approach, we test different state-of-the-art visual odometry pipelines (namely, ORB-SLAM2, DSO, and SVO 2.0) and demonstrate significantly improved performance with our exposure control method in very challenging HDR environments. Datasets and code will be released soon!
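The core of the method can be pictured as follows: predict how the image would look at each candidate exposure time through the camera's photometric response function, score the prediction with a gradient-based metric, and keep the exposure time that maximizes the score. The Python sketch below only illustrates this idea and is not the paper's implementation: it assumes a hypothetical gamma-like response function, uses a plain sum of gradient magnitudes instead of the paper's robust metric, and replaces the derivative-based update with a grid search.

import numpy as np

def predicted_image(irradiance, exposure, gamma=0.7):
    # Predict pixel intensities for a candidate exposure time through a
    # hypothetical gamma-like photometric response function; the real camera
    # response would be calibrated beforehand.
    return np.clip(irradiance * exposure, 0.0, 1.0) ** gamma

def gradient_metric(img):
    # Simplified gradient-based image quality metric: sum of gradient
    # magnitudes, which drops in over- or under-exposed regions because
    # their gradients vanish.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def choose_exposure(irradiance, candidates):
    # Pick the exposure time that maximizes the metric of the predicted image
    # (a grid search stands in for the paper's derivative-based update).
    scores = [gradient_metric(predicted_image(irradiance, t)) for t in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a synthetic HDR scene whose irradiance spans several orders of magnitude.
rng = np.random.default_rng(0)
irradiance = np.exp(rng.uniform(-4.0, 1.0, size=(120, 160)))
best_t = choose_exposure(irradiance, candidates=np.geomspace(1e-3, 10.0, 50))
print("selected exposure time:", best_t)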

References


 

Z. Zhang, C. Forster, D. Scaramuzza

Active Exposure Control for Robust Visual Odometry in HDR Environments

IEEE International Conference on Robotics and Automation (ICRA), 2017.

PDF    Video

IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation


Recent results in monocular visual-inertial navigation (VIN) have shown that optimization-based approaches outperform filtering methods in terms of accuracy due to their capability to relinearize past states. However, the improvement comes at the cost of increased computational complexity. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes. The preintegration allows us to accurately summarize hundreds of inertial measurements into a single relative motion constraint. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group and carefully deals with uncertainty propagation. The measurements are integrated in a local frame, which eliminates the need to repeat the integration when the linearization point changes while leaving the opportunity for belated bias corrections. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated in a visual-inertial pipeline under the unifying framework of factor graphs. This enables the use of a structureless model for visual measurements, further accelerating the computation. The third contribution is an extensive evaluation of our monocular VIN pipeline: experimental results confirm that our system is very fast and demonstrates superior accuracy with respect to competitive state-of-the-art filtering and optimization algorithms, including off-the-shelf systems such as Google Tango.
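To make the idea concrete, below is a minimal Python sketch (not the paper's full formulation) of how gyroscope and accelerometer samples between two keyframes can be summarized into relative rotation, velocity, and position deltas, with the rotation composed on the manifold via the SO(3) exponential map. The covariance propagation, bias Jacobians, and the a-posteriori bias-correction terms that the paper derives are omitted.

import numpy as np

def exp_so3(w):
    # Exponential map from a rotation vector to a rotation matrix (Rodrigues).
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt, gyro_bias, accel_bias):
    # Summarize a sequence of IMU samples between two keyframes into
    # relative-motion deltas (rotation, velocity, position), integrated in the
    # frame of the first keyframe so they do not depend on the global state.
    # Covariance propagation and bias Jacobians are omitted in this sketch.
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - accel_bias
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv = dv + (dR @ a_corr) * dt
        dR = dR @ exp_so3((w - gyro_bias) * dt)   # compose rotation on the manifold
    return dR, dv, dp

# Toy usage: 0.5 s of samples at 200 Hz, slow yaw rotation, gravity-aligned accelerometer.
gyro = [np.array([0.0, 0.0, 0.1])] * 100
accel = [np.array([0.0, 0.0, 9.81])] * 100
dR, dv, dp = preintegrate(gyro, accel, dt=0.005,
                          gyro_bias=np.zeros(3), accel_bias=np.zeros(3))
print(np.round(dR, 3), dv, dp)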

References


C. Forster, L. Carlone, F. Dellaert, D. Scaramuzza

On-Manifold Preintegration for Real-Time Visual-Inertial Odometry

IEEE Transactions on Robotics, in press, 2016.

 

PDF    YouTube


C. Forster, L. Carlone, F. Dellaert, D. Scaramuzza

IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation

Robotics: Science and Systems (RSS), Rome, 2015.

Best Paper Award Finalist! Oral presentation, acceptance rate 4%.

PDF    Supplementary material    YouTube

SVO: Fast Semi-Direct Monocular Visual Odometry


We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need for costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high-frame-rate motion estimation brings increased robustness in scenes with little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop.
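The direct, sparse image-alignment idea behind SVO can be illustrated with a toy sketch: minimize the photometric error of small patches around sparse reference points by Gauss-Newton, using the image gradient as the Jacobian. To keep the sketch short, it estimates only a 2D image-space shift on synthetic images; the actual algorithm optimizes the full 6-DoF camera pose through the projection model and subsequently refines individual feature positions and the map.

import numpy as np

def bilinear(img, x, y):
    # Sample an image at a subpixel location (x, y) with bilinear interpolation.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ax, ay = x - x0, y - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x0 + 1] +
            (1 - ax) * ay * img[y0 + 1, x0] + ax * ay * img[y0 + 1, x0 + 1])

def align_sparse(ref, cur, points, patch=2, iters=15):
    # Gauss-Newton on the photometric error of small patches around sparse
    # reference points, over a 2D shift (a stand-in for the full SE(3)
    # sparse image alignment used in the actual pipeline).
    shift = np.zeros(2)
    offsets = [(du, dv) for du in range(-patch, patch + 1)
               for dv in range(-patch, patch + 1)]
    for _ in range(iters):
        H, b = np.zeros((2, 2)), np.zeros(2)
        for u, v in points:
            for du, dv in offsets:
                x, y = u + du + shift[0], v + dv + shift[1]
                gx = 0.5 * (bilinear(cur, x + 1, y) - bilinear(cur, x - 1, y))
                gy = 0.5 * (bilinear(cur, x, y + 1) - bilinear(cur, x, y - 1))
                J = np.array([gx, gy])            # image gradient acts as the Jacobian
                r = bilinear(cur, x, y) - ref[v + dv, u + du]
                H += np.outer(J, J)
                b += J * r
        shift -= np.linalg.solve(H, b)            # Gauss-Newton update
    return shift

# Toy usage: a smooth synthetic scene observed again with a known subpixel shift.
yy, xx = np.mgrid[0:80, 0:100].astype(float)
ref = np.sin(0.2 * xx) + np.cos(0.15 * yy)
cur = np.sin(0.2 * (xx - 1.3)) + np.cos(0.15 * (yy - 0.4))  # shifted by (1.3, 0.4) px
pts = [(u, v) for u in range(10, 90, 10) for v in range(10, 70, 10)]
print(align_sparse(ref, cur, pts))                          # ~[1.3, 0.4]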


This video shows results from a modification of the SVO algorithm that generalizes to a set of rigidly attached (not necessarily overlapping) cameras. Simultaneously, we run a CPU implementation of the REMODE algorithm on the front, left, and right cameras. Everything runs in real time on a laptop computer. Parking garage dataset courtesy of NVIDIA.

References

 


C. Forster, Z. Zhang, M. Gassner, M. Werlberger, D. Scaramuzza

SVO: Semi-Direct Visual Odometry for Monocular and Multi-Camera Systems

IEEE Transactions on Robotics, to appear, 2016.

Includes comparisons against ORB-SLAM, LSD-SLAM, and DSO, as well as a comparison of dense, semi-dense, and sparse direct image alignment.

PDF    YouTube


C. Forster, M. Pizzoli, D. Scaramuzza

SVO: Fast Semi-Direct Monocular Visual Odometry

IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.

 

PDF    YouTube    Software


M. Pizzoli, C. Forster, D. Scaramuzza

REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time

IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.

PDF    YouTube    Software

 

1-point RANSAC

Given a car equipped with an omnidirectional camera, the motion of the vehicle can be recovered purely from salient image features tracked over time. We propose the 1-point RANSAC algorithm, which runs at 800 Hz on a normal laptop. To our knowledge, this is the most efficient visual odometry algorithm.
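The reason a single correspondence suffices is the nonholonomic constraint of a wheeled vehicle: locally the motion is circular, so the relative pose between two frames is described by a single yaw angle theta, with the translation direction fixed at theta/2, and one correspondence on the normalized image plane determines theta in closed form. The Python sketch below illustrates the resulting RANSAC loop on synthetic data; the closed-form expression and the sign and axis conventions are chosen to be self-consistent within the sketch and may differ from those in the papers.

import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def essential_from_theta(theta):
    # Single-parameter circular-motion model: yaw rotation theta and
    # translation direction theta/2 in the motion plane (unit baseline,
    # since the scale is unobservable from a monocular camera).
    R12 = Rz(theta)
    t12 = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0), 0.0])
    R21, t21 = R12.T, -R12.T @ t12               # frame-1 -> frame-2 transform
    return skew(t21) @ R21

def theta_from_one_point(p1, p2):
    # One correspondence (x, y) on the normalized image plane fixes theta.
    t = -2.0 * np.arctan2(p2[1] - p1[1], p2[0] + p1[0])
    return np.arctan2(np.sin(t), np.cos(t))      # wrap to (-pi, pi]

def one_point_ransac(P1, P2, iters=30, thresh=1e-4):
    # Each hypothesis needs a single correspondence, so a handful of
    # iterations suffices; inliers are scored with the algebraic epipolar error.
    h1 = np.c_[P1, np.ones(len(P1))]
    h2 = np.c_[P2, np.ones(len(P2))]
    best_theta, best_inliers = 0.0, np.zeros(len(P1), dtype=bool)
    for i in np.random.default_rng(1).integers(0, len(P1), size=iters):
        theta = theta_from_one_point(P1[i], P2[i])
        E = essential_from_theta(theta)
        err = np.abs(np.einsum('ij,jk,ik->i', h2, E, h1))
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_theta, best_inliers = theta, inliers
    return best_theta, best_inliers

# Toy usage: synthetic planar circular motion (theta = 0.05 rad) with 30% outliers.
rng = np.random.default_rng(0)
theta_true = 0.05
R12, t12 = Rz(theta_true), np.array([np.cos(theta_true / 2), np.sin(theta_true / 2), 0.0])
X = rng.uniform([-5, -5, 2], [5, 5, 10], size=(200, 3))   # 3D points in frame 1
X2 = (X - t12) @ R12                                      # same points in frame 2
P1, P2 = X[:, :2] / X[:, 2:], X2[:, :2] / X2[:, 2:]       # normalized image points
P2[:60] = rng.uniform(-1.0, 1.0, size=(60, 2))            # corrupt 30% of the matches
theta_hat, inliers = one_point_ransac(P1, P2)
print(theta_hat, inliers.sum())                           # ~0.05, ~140 inliers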

This video shows the estimation of the vehicle motion from image features. It demonstrates the approach described in our paper, which uses the 1-point RANSAC algorithm to remove outliers. Except for the feature extraction process, the outlier removal and motion estimation steps take less than 1 ms on a normal laptop computer.

References

D. Scaramuzza and F. Fraundorfer. Visual Odometry: Part I - The First 30 Years and Fundamentals. IEEE Robotics and Automation Magazine, Volume 18, Issue 4, 2011. [ PDF ]
F. Fraundorfer and D. Scaramuzza. Visual Odometry: Part II - Matching, Robustness, Optimization, and Applications. IEEE Robotics and Automation Magazine, Volume 19, Issue 2, 2012. [ PDF ]
D. Scaramuzza. 1-Point-RANSAC Structure from Motion for Vehicle-Mounted Cameras by Exploiting Non-holonomic Constraints. International Journal of Computer Vision, Volume 95, Issue 1, 2011. [ PDF ]
D. Scaramuzza. Performance Evaluation of 1-Point-RANSAC Visual Odometry. Journal of Field Robotics, Volume 28, Issue 5, 2011. [ PDF ]
D. Scaramuzza, A. Censi, K. Daniilidis. Exploiting Motion Priors in Visual Odometry for Vehicle-Mounted Cameras with Non-holonomic Constraints. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, September 2011. [ PDF ]
L. Kneip, D. Scaramuzza, R. Siegwart. A Novel Parameterization of the Perspective-Three-Point Problem for a Direct Computation of Absolute Camera Position and Orientation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, USA, 2011. [ PDF ] [ C/C++ code ]
L. Kneip, A. Martinelli, S. Weiss, D. Scaramuzza, R. Siegwart. A Closed-Form Solution for Absolute Scale Velocity Determination Combining Inertial Measurements and a Single Feature Correspondence. IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, 2011. [ PDF ]
D. Scaramuzza, F. Fraundorfer, and M. Pollefeys. Closing the Loop in Appearance-Guided Omnidirectional Visual Odometry by Using Vocabulary Trees. Robotics and Autonomous Systems (Elsevier), Volume 58, Issue 6, June 2010. [ PDF ]
L. Kneip, D. Scaramuzza, R. Siegwart. On the Initialization of Statistical Optimum Filters with Application to Motion Estimation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, October 2010. [ PDF ]
F. Fraundorfer, D. Scaramuzza, M. Pollefeys. A Constricted Bundle Adjustment Parameterization for Relative Scale Estimation in Visual Odometry. IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, May 2010. [ PDF ]
D. Scaramuzza, L. Spinello, R. Triebel, R. Siegwart. Key Technologies for Intelligent and Safer Cars from Motion Estimation to Predictive Motion Planning. IEEE International Conference on Industrial Electronics, Bari, Italy, July 2010. [ PDF ]
D. Sabatta, D. Scaramuzza, R. Siegwart. Improved Appearance-Based Matching in Similar and Dynamic Environments Using a Vocabulary Tree. IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, May 2010. [ PDF ]
D. Scaramuzza, F. Fraundorfer, M. Pollefeys, R. Siegwart. Absolute Scale in Structure from Motion from a Single Vehicle-Mounted Camera by Exploiting Nonholonomic Constraints. IEEE International Conference on Computer Vision (ICCV 2009), Kyoto, September-October 2009. [ PDF ]
D. Scaramuzza, F. Fraundorfer, R. Siegwart. Real-Time Monocular Visual Odometry for On-Road Vehicles with 1-Point RANSAC. IEEE International Conference on Robotics and Automation (ICRA 2009), Kobe, Japan, May 2009. [ PDF ]
D. Scaramuzza, R. Siegwart. Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles. IEEE Transactions on Robotics, Volume 24, Issue 5, October 2008. [ PDF ]