Student Projects

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but cannot find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).

Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).

  • Camera calibration is a paramount pre-processing stage of many robotic vision applications such as 3D reconstruction, obstacle avoidance, and ego-motion estimation. The goal of this project is to develop a user-friendly single- and multi-camera calibration toolbox adapted to our robotic system (a minimal calibration sketch appears after the project list).

    More …

  • Hand-eye calibration is a paramount pre-processing stage of many robotic and augmented-reality applications. The goal of this project is to develop a user-friendly hand-eye calibration toolbox integrated with our robotic system (see the hand-eye sketch after the project list).

    More …

  • The goal of this project is to develop a visual-inertial pipeline for the Dynamic and Active-pixel Vision Sensor (DAVIS). The system will estimate the pose of the DAVIS using the event stream and IMU measurements delivered by the sensor.

    More …

  • The goal of this project is to turn an event camera into a high-speed camera by designing an algorithm to recover intensity images from the compressed event stream (a naive reconstruction sketch appears after the project list).

    More …

  • Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), are recent sensors with large potential for high-speed and high-dynamic-range robotic applications.

    More …

  • The goal of this project is to use event cameras to compute optical flow in the image plane, induced either by a camera moving through a scene or by objects moving in front of a static event camera (see the normal-flow sketch after the project list).

    More …

  • Explore an unknown space in 3D, relying only on visual-inertial odometry (with drift) and basic place recognition (but no loop closure/map optimization).

    More …

  • This project aims to improve the robustness of visual-inertial odometry using machine learning methods.

    More …

  • Title says it all

    More …

  • Build a full simulation of a decentralized multi-quadrotor SLAM system, in preparation for real-world experiments.

    More …

  • The project aims to benchmark different camera control algorithms and create related tools.

    More …

  • Learning to Plan

    More …

  • Automatic Hyperparameter Optimization

    More …

  • Morphing endows quadrotors with the capability to achieve task-specific morphologies without compromising their performance in nominal flight conditions. The ability to change their morphology can therefore widely broaden the spectrum of tasks that quadrotors can execute.

    More …

  • Morphing quadrotors are an increasingly hot topic in the field of micro aerial vehicles. One of the open questions is to find the optimal morphology to execute a given task.

    More …

  • Onboard vision is one of the most common sensing modalities for autonomous quadrotors. A state estimate from onboard vision can be intermittent, noisy, and delayed. The goal of this project is to experimentally evaluate the impact of degraded vision-based state estimation on the closed-loop performance of a quadrotor for different tasks.

    More …

  • During this project, we will develop machine-learning-based techniques to let a (real) drone learn to fly nimbly through gaps and gates while minimizing the risk of critical failures and collisions.

    More …

  • The project aims to develop an algorithm to estimate the time offset between a camera and an IMU (a simple cross-correlation sketch appears after the project list).

    More …

  • The student is expected to study how motion estimation is affected by feature selection (e.g., number of features, different feature locations). The ultimate goal will be to implement a smart feature selection mechanism in our visual odometry framework.

    More …

  • Learn depth from RGB frames and sparse depth information.

    More …

  • MPC for high-speed trajectory tracking

    More …

  • Build a custom machine learning framework tailored to event-based data.

    More …

  • Teach a drone to fly like a human pilot.

    More …

  • Develop a robust feature detection and tracking method using only events from an event camera

    More …

  • Investigate the usability of a learned visual odometry pipeline for quadrotor flight.

    More …

  • Learning representations for navigation in a self-supervised manner.

    More …

  • The goal of this project is to develop an open-source simulation framework to simulate different robot platforms equipped with event cameras (a minimal event-generation sketch appears after the project list).

    More …

  • The goal of this project is to explore well-established adaptive control techniques combined with recent advances in machine learning for high-performance quadrotor control.

    More …

  • Perception latency often limits the achievable agility of an autonomous robot: faster sensors and lower-latency processing would enable more agile robots. The goal of this project is to explore low-latency event cameras for closed-loop, high-speed quadrotor control.

    More …
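Illustrative code sketches

The Python snippets below are minimal, non-authoritative sketches for a few of the projects above. All file names, parameter values, and synthetic data are assumptions made purely for illustration; they are not the methods to be developed in the projects.

For the single- and multi-camera calibration project, the following is a minimal single-camera sketch built on OpenCV's checkerboard pipeline. The pattern size, square size, and image folder are illustrative assumptions; the actual toolbox would add multi-camera support, other calibration targets, and a user interface.

```python
# Minimal single-camera calibration sketch using OpenCV and a checkerboard
# (illustrative only; the project toolbox would go well beyond this).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)          # inner checkerboard corners (assumption)
SQUARE_SIZE = 0.025       # square size in metres (assumption)

# 3D corner coordinates in the checkerboard frame (z = 0 plane).
obj_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
obj_template *= SQUARE_SIZE

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_template)
        img_points.append(corners)

assert img_points, "no checkerboard detections found"

# Estimate intrinsics K and distortion coefficients from all detections.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error [px]:", rms)
print("Intrinsic matrix K:\n", K)
```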
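For the hand-eye calibration project, the sketch below solves the classical AX = XB problem with OpenCV's calibrateHandEye (available in OpenCV 4.1 and later). The robot and camera poses are synthetic so the script runs stand-alone; a real toolbox would read them from the robot's forward kinematics and a target-detection (e.g. PnP) pipeline.

```python
# Hand-eye calibration sketch (eye-in-hand): recover the fixed camera-to-gripper
# transform X from paired robot and camera poses by solving AX = XB.
import cv2
import numpy as np

def T(R, t):
    """Build a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

rng = np.random.default_rng(0)

def rand_R(scale):
    """Random rotation from a random axis-angle vector of the given scale."""
    return cv2.Rodrigues(scale * rng.standard_normal((3, 1)))[0]

# Ground-truth camera-to-gripper transform (the quantity we want to estimate)
# and a fixed calibration-target pose in the robot base frame (assumptions).
T_cam2gripper = T(rand_R(0.5), np.array([0.03, 0.02, 0.10]))
T_target2base = T(rand_R(0.5), np.array([0.50, 0.10, 0.00]))

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_gripper2base = T(rand_R(0.8), rng.uniform(-0.3, 0.3, 3))
    # Eye-in-hand chain: target2cam = inv(cam2gripper) @ inv(gripper2base) @ target2base
    T_target2cam = (np.linalg.inv(T_cam2gripper)
                    @ np.linalg.inv(T_gripper2base) @ T_target2base)
    R_g2b.append(T_gripper2base[:3, :3]); t_g2b.append(T_gripper2base[:3, 3].reshape(3, 1))
    R_t2c.append(T_target2cam[:3, :3]);   t_t2c.append(T_target2cam[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("true t_cam2gripper     :", T_cam2gripper[:3, 3])
print("estimated t_cam2gripper:", t_est.ravel())
```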
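For the high-speed image-reconstruction project, this sketch shows only the simplest possible idea: per-pixel integration of event polarities with a fixed contrast threshold. A real method would additionally handle noise, the unknown initial image, and regularization; the threshold value and sensor resolution are assumptions.

```python
# Naive image reconstruction from an event stream: accumulate, per pixel,
# the log-brightness changes of size C signalled by each event.
import numpy as np

def integrate_events(events, resolution=(180, 240), C=0.15):
    """events: iterable of (t, x, y, polarity) with polarity in {-1, +1}.
    Returns an approximate log-intensity image (up to the unknown initial image)."""
    log_img = np.zeros(resolution, dtype=np.float64)
    for t, x, y, p in events:
        log_img[y, x] += p * C   # each event means |delta log I| crossed threshold C
    return log_img

# Hypothetical usage with a few synthetic events:
events = [(0.001, 10, 20, +1), (0.002, 10, 20, +1), (0.003, 50, 60, -1)]
img = integrate_events(events)
print(img[20, 10], img[60, 50])
```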
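For the event-based optical-flow project, the sketch below illustrates one family of approaches: fitting a local plane to event timestamps (a "time surface") and reading the normal flow off the plane slope. The neighbourhood selection and units are illustrative assumptions, not the project's prescribed method.

```python
# Event-based normal-flow sketch via local plane fitting on the time surface.
import numpy as np

def local_normal_flow(events):
    """events: Nx3 array of (x, y, t) from a small spatio-temporal neighbourhood.
    Returns the normal flow (vx, vy) in pixels per second."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)   # fit t = a*x + b*y + c
    g2 = a * a + b * b                                   # |grad t|^2, units (s/px)^2
    if g2 < 1e-12:
        return np.zeros(2)           # flat time surface: no measurable motion
    return np.array([a, b]) / g2     # velocity along the gradient direction [px/s]

# Synthetic check: an edge moving at 100 px/s in +x triggers events whose
# timestamps grow linearly with x (t = x / 100).
xs = np.arange(0, 10, 1.0)
ev = np.column_stack([xs, np.zeros_like(xs), xs / 100.0])
print(local_normal_flow(ev))   # approximately [100, 0]
```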
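For the camera-IMU time-offset project, the sketch below shows a coarse initialization obtained by cross-correlating the angular-speed magnitude from the gyroscope with the angular speed implied by frame-to-frame camera rotations. The grid resolution, sign convention, and synthetic signals are assumptions; a full solution would refine the offset jointly with the remaining calibration states.

```python
# Coarse camera-IMU time-offset estimation by cross-correlation of angular speeds.
import numpy as np

def estimate_time_offset(t_imu, w_imu, t_cam, w_cam, max_offset=0.1, step=1e-3):
    """w_imu, w_cam: angular-speed magnitudes [rad/s]; t_*: timestamps [s].
    Returns d such that a camera sample stamped t corresponds to IMU time t + d."""
    t0 = max(t_imu[0], t_cam[0]) + max_offset
    t1 = min(t_imu[-1], t_cam[-1]) - max_offset
    t_grid = np.arange(t0, t1, step)
    a = np.interp(t_grid, t_imu, w_imu)
    a = a - a.mean()
    best_d, best_score = 0.0, -np.inf
    for d in np.arange(-max_offset, max_offset + step, step):
        b = np.interp(t_grid - d, t_cam, w_cam)     # shift camera signal by d
        b = b - b.mean()
        score = float(np.dot(a, b))
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic check: the camera clock is offset by 12 ms with respect to the IMU.
t_imu = np.arange(0, 10, 0.005)                        # 200 Hz gyroscope
w_imu = 1 + np.sin(2 * np.pi * 0.7 * t_imu)
t_cam = np.arange(0, 10, 0.033)                        # ~30 Hz frames
w_cam = 1 + np.sin(2 * np.pi * 0.7 * (t_cam + 0.012))
print(estimate_time_offset(t_imu, w_imu, t_cam, w_cam))   # approximately 0.012
```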
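For the event-camera simulation project, the sketch below shows a minimal event-generation model: threshold the log-intensity difference between successive rendered frames and spread the resulting events over the inter-frame interval. The contrast threshold and the uniform timestamp model are simplifying assumptions compared to a full simulator, which keeps per-pixel state and interpolates timestamps more carefully.

```python
# Minimal frame-to-events conversion for simulating an event camera.
import numpy as np

def events_between_frames(prev_frame, frame, t_prev, t, C=0.15, eps=1e-3):
    """Frames are float images in [0, 1]. Returns a list of (t, x, y, polarity)."""
    dlog = np.log(frame + eps) - np.log(prev_frame + eps)
    events = []
    for y, x in zip(*np.nonzero(np.abs(dlog) >= C)):
        n = int(abs(dlog[y, x]) / C)            # number of threshold crossings
        pol = 1 if dlog[y, x] > 0 else -1
        for k in range(1, n + 1):
            # spread the events uniformly over the inter-frame interval
            events.append((t_prev + k * (t - t_prev) / (n + 1), x, y, pol))
    return events

# Hypothetical usage: a bright square moving one pixel to the right.
f0 = np.zeros((64, 64)); f0[20:30, 20:30] = 1.0
f1 = np.zeros((64, 64)); f1[20:30, 21:31] = 1.0
ev = events_between_frames(f0, f1, 0.0, 0.001)
print(len(ev), "events generated")
```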