In this project we develop a robotic system that can navigate an unknown
indoor environment in real time and generate a 3D point-based
reconstruction of its surroundings. Our approach relies solely on
vision. A stereo camera acquires images from different viewpoints;
feature points are detected in these images and their 3D positions are
recovered by triangulation.
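To illustrate the triangulation step, here is a minimal sketch of rectified-stereo back-projection. It assumes a pinhole camera model with focal length in pixels, baseline in metres, and a known principal point; the function name and all parameter values are illustrative assumptions, not details taken from the actual system.

```python
import numpy as np

def triangulate(disparity_px, focal_px, baseline_m, xs_px, ys_px, cx, cy):
    """Back-project matched feature points to 3D with the standard
    rectified-stereo relations: Z = f*B/d, X = (x-cx)*Z/f, Y = (y-cy)*Z/f."""
    d = np.asarray(disparity_px, dtype=float)
    Z = focal_px * baseline_m / d
    X = (np.asarray(xs_px, dtype=float) - cx) * Z / focal_px
    Y = (np.asarray(ys_px, dtype=float) - cy) * Z / focal_px
    return np.stack([X, Y, Z], axis=1)

# Example: a feature at the image centre with 20 px disparity,
# f = 400 px, baseline = 0.1 m  ->  Z = 400 * 0.1 / 20 = 2.0 m
pts = triangulate([20.0], 400.0, 0.1, [320.0], [240.0], 320.0, 240.0)
print(pts)  # [[0. 0. 2.]]
```

The same relations apply per matched feature pair, so the function is vectorized over arrays of disparities and pixel coordinates.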
3D points acquired from different positions are merged into a single
point cloud using optimization techniques. The robot uses these point
clouds as a basis for navigation: unknown parts of the scene are
explored so that, step by step, a 3D point representation of the
environment is built.
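The text above does not name the optimization technique used to merge clouds from different positions. One standard building block is the closed-form least-squares rigid alignment of corresponding points (the Kabsch/Umeyama solution via SVD); the sketch below shows that building block under the assumption of known correspondences, not the project's actual pipeline.

```python
import numpy as np

def align_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given known point correspondences (Kabsch/Umeyama, via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known rotation about Z plus a translation.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = align_rigid(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In practice correspondences between scans are not known in advance; iterative schemes such as ICP alternate between matching nearest points and applying exactly this closed-form alignment.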
At the following link we provide sample datasets for evaluation purposes.
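Returning to the exploration step: the abstract does not specify how unknown parts of the scene are selected. Frontier-based exploration is one common strategy, shown here as a hypothetical sketch on a 2D occupancy grid (0 = free, 1 = occupied, -1 = unknown); a frontier cell is a free cell adjacent to unknown space, and such cells are natural targets for the next observation.

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return (row, col) indices of free cells bordering unknown space:
    the candidate targets for frontier-based exploration."""
    g = np.asarray(grid)
    frontiers = []
    for r in range(g.shape[0]):
        for c in range(g.shape[1]):
            if g[r, c] != FREE:
                continue
            # 4-connected neighbourhood
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < g.shape[0] and 0 <= cc < g.shape[1] \
                        and g[rr, cc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0,  0]]
print(frontier_cells(grid))  # [(0, 1), (2, 2)]
```

A grid like this could be obtained by projecting the accumulated point cloud onto the floor plane and marking never-observed cells as unknown; driving the robot toward the nearest frontier and repeating expands the map step by step.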