

Department of Informatics - Robotics and Perception Group

Student Projects

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at

Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).

  • Efficient Processing of Event Data for Deep Learning

    This project investigates new paradigms for low-level event data processing (from event cameras) to enable expressive and efficient feature extraction.

  • Neural-based scene reconstruction and synthesis using event cameras

    The project will focus on exploring the use of event-based cameras in neural-based scene reconstruction and synthesis, extending available approaches to event-based data.

  • Adversarial Robustness in Event-Based Neural Networks

The project will focus on studying various neural network architectures on event-based inference datasets and evaluating their performance in the presence of adversarial attacks.

  • Developing Smart Vision Assistive Technology


  • Efficient Learning-aided Visual Inertial Odometry

Online learning-aided visual inertial odometry for robust state estimation.

  • Domain Transfer between Events and Frames

In this project, the student extends upon a previous student project (published at ECCV22) and current advances from the UDA literature in order to transfer multiple tasks from frames to events. The approach should be validated on several tasks in challenging environments (night, highly dynamic scenes) to highlight the benefits of event cameras.

  • Data-driven Keypoint Extractor for Event Data

    The project aims to develop a data-driven keypoint extractor, which computes interest points for event camera data. Based on a previous student project (submitted to CVPR23), the approach will leverage neural network architectures to extract and describe keypoints in an event stream.

  • Data-driven Event Generation from Images

    In this project, the student applies concepts from current advances in image generation to create artificial events from standard frames. Multiple state-of-the-art deep learning methods will be explored in the scope of this project.
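As background for what such a generator must reproduce, the idealized event-camera model fires an event whenever the log intensity at a pixel changes by a contrast threshold. The NumPy sketch below simulates this model between pairs of frames; the function name, the threshold value, and the linear-in-time interpolation of event timestamps are illustrative assumptions, not the project's method:

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Idealized event generation: an event (x, y, t, polarity) fires each
    time the per-pixel log intensity moves by the contrast threshold C.
    Crossing times are linearly interpolated between frame timestamps."""
    ref = np.log(frames[0].astype(np.float64) + eps)  # reference log intensity
    events = []
    for frame, t0, t1 in zip(frames[1:], timestamps[:-1], timestamps[1:]):
        log_cur = np.log(frame.astype(np.float64) + eps)
        delta = log_cur - ref
        ys, xs = np.nonzero(np.abs(delta) >= C)
        for y, x in zip(ys, xs):
            pol = 1 if delta[y, x] > 0 else -1
            n = int(abs(delta[y, x]) // C)      # number of threshold crossings
            for k in range(1, n + 1):
                frac = k * C / (abs(delta[y, x]) + eps)
                events.append((x, y, t0 + frac * (t1 - t0), pol))
                ref[y, x] += pol * C            # move reference per crossing
    events.sort(key=lambda e: e[2])
    return events
```

A learning-based generator would replace this hand-crafted model with a network trained to match real event statistics, but the threshold-crossing behavior above is what it must implicitly capture.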

  • Development of Off-Board Vision System Software for Autonomous Drones

    This project aims to develop the software for an off-board vision system for an autonomous drone. The objective is to enhance the capabilities of the existing drone hardware by integrating a real-time image transmission system. The software will enable the drone camera to transmit high-quality images and videos in real-time to a remote receiver.

  • Model-based Reinforcement Learning for Autonomous Drone Racing

Recent advances in model-free Reinforcement Learning have shown superior performance in different complex tasks, such as the game of chess, quadrupedal locomotion, or even drone racing. Given a reward function, Reinforcement Learning finds an optimal policy through trial and error in simulation; the resulting policy can be deployed directly in the real world. In our lab, we have been able to outrace professional human pilots using model-free Reinforcement Learning trained solely in simulation.
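The trial-and-error loop described above can be illustrated in miniature. The sketch below is not the lab's racing pipeline: it uses a toy one-dimensional action, a hand-written reward, and the plain REINFORCE policy gradient, all chosen here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumption): the reward peaks at action 2.0; a Gaussian policy
# over a scalar action is improved by the REINFORCE gradient estimator.
target = 2.0
reward = lambda a: -(a - target) ** 2

mu, sigma, lr = 0.0, 0.5, 0.05        # policy mean, fixed stddev, step size
for _ in range(2000):
    actions = mu + sigma * rng.standard_normal(32)  # sample a batch of rollouts
    r = reward(actions)
    r = r - r.mean()                                # baseline reduces variance
    # grad of log N(a; mu, sigma) w.r.t. mu is (a - mu) / sigma^2
    mu += lr * np.mean(r * (actions - mu) / sigma**2)

# mu now sits near the reward-maximizing action 2.0
```

Real RL for drone racing replaces the scalar action with thrust/body-rate commands, the toy reward with progress along the track, and the Gaussian policy with a neural network, but the sample-evaluate-update structure is the same.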

  • Learning to calibrate an event camera

    This work will address intrinsic calibration of event cameras, a fundamental problem for application of event cameras to many computer vision tasks, by incorporating deep learning.

Event-based depth estimation

    This project will focus on event-based depth estimation using structured light systems.

  • 3D reconstruction with event cameras

This project will explore the application of event camera setups for scene reconstruction. Accurate and efficient reconstruction using event-camera setups is still an unexplored topic. This project will focus on solving the problem of 3D reconstruction using active perception with event cameras.

  • Deep learning based motion estimation from events

Optical flow estimation is the mainstay of dynamic scene understanding in robotics and computer vision. It finds application in SLAM, dynamic obstacle detection, computational photography, and beyond. However, extracting optical flow from frames is hard due to the discrete nature of frame-based acquisition. Instead, events from an event camera indirectly provide information about optical flow in continuous time. Hence, the intuition is that event cameras are the ideal sensors for optical flow estimation. In this project, you will dig deep into optical flow estimation from events. We will make use of recent innovations in neural network architectures and insights into event camera models to push the state of the art in the field.
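One classical way to see why events encode flow is contrast maximization: warping events along the correct motion compensates that motion, so the image of warped events becomes sharp. The sketch below demonstrates this on a synthetic point trajectory with a grid search over candidate flows; it is a conceptual illustration, not necessarily the (learning-based) method this project will use, and all names and values are assumptions.

```python
import numpy as np

def iwe_variance(events, flow, shape, t_ref=0.0):
    """Variance of the Image of Warped Events (IWE): warp each event
    (x, y, t, p) back to t_ref along a constant candidate flow (vx, vy)
    and accumulate counts; sharper accumulation means higher variance."""
    vx, vy = flow
    img = np.zeros(shape)
    for x, y, t, p in events:
        xw = int(round(x - vx * (t - t_ref)))
        yw = int(round(y - vy * (t - t_ref)))
        if 0 <= xw < shape[1] and 0 <= yw < shape[0]:
            img[yw, xw] += 1
    return img.var()

# Synthetic scene (assumption): a point moving with true flow (5, 0) px/s.
true_v = (5.0, 0.0)
events = [(10 + true_v[0] * t, 8.0, t, 1) for t in np.linspace(0, 1, 50)]

# Grid search over candidate horizontal flows; the true flow maximizes contrast.
candidates = [(float(v), 0.0) for v in range(0, 11)]
best = max(candidates, key=lambda f: iwe_variance(events, f, (16, 32)))
```

Deep-learning approaches replace the grid search and hand-crafted objective with a network regressing flow directly, but this sharpness-under-warping principle is the signal they exploit.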

  • Computational Photography and Videography

    Combine the complementary information from standard and event cameras to enhance images and video.

  • Localization techniques for drone racing

    Benchmark comparison of localization techniques.

  • End-to-End Vision-Based Landing

    Land a UAV safely relying on vision.

  • Drone to Drone Interaction Effects

    Use aerodynamic effects to infer the other drones’ locations.

  • Bayesian Optimization for Racing Aerial Vehicle MPC Tuning

In recent years, model predictive control (MPC), one of the most popular methods for controlling constrained systems, has benefited from advances in learning methods. Many applications have shown the potential of cross-fertilization between the two fields, e.g., autonomous drone racing and autonomous car racing. Most research effort has been dedicated to learning and improving the model dynamics; however, controller tuning, which is of crucial importance, has not been studied much.
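The idea behind Bayesian optimization for tuning is to treat each closed-loop experiment (e.g., one lap) as an expensive, noisy evaluation of an unknown cost function of the controller parameters, fit a surrogate model to the observations, and use it to pick the next parameter to try. The sketch below is a minimal one-parameter version with a Gaussian-process surrogate and a lower-confidence-bound acquisition; the synthetic "lap time" function, kernel, and constants are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a real experiment (assumption): noisy lap time as a function
# of a single controller parameter, with the true optimum at theta = 0.6.
def lap_time(theta):
    return (theta - 0.6) ** 2 + 0.01 * rng.standard_normal()

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel on scalar inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Gaussian-process posterior mean and stddev at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    v = np.linalg.solve(K, Ks)
    mu = v.T @ y
    var = np.clip(1.0 - np.einsum("ij,ij->j", Ks, v), 1e-12, None)
    return mu, np.sqrt(var)

# BO loop: refit the GP after each trial, then run the parameter that
# minimizes the lower confidence bound (exploration vs. exploitation).
X = np.array([0.1, 0.9])                       # initial tuning trials
y = np.array([lap_time(t) for t in X])
grid = np.linspace(0.0, 1.0, 200)
for _ in range(15):
    mu, sd = gp_posterior(X, y, grid)
    theta = grid[np.argmin(mu - 2.0 * sd)]     # LCB acquisition (minimizing)
    X = np.append(X, theta)
    y = np.append(y, lap_time(theta))

best_theta = X[np.argmin(y)]                   # best parameter found so far
```

In the real MPC-tuning setting, theta would be a vector of cost weights and horizon parameters, and each evaluation a full flight, which is exactly why sample-efficient surrogate-based search matters.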

  • Enhancing Event Data Processing with Irregular Time Series Modeling

    This project focuses on utilizing an advanced approach to time series modeling for efficient event data processing.

  • Exploring Multimodal Strategies for Event-Based Vision

    This project focuses on utilizing multi-purpose vision models in the realm of Event-Based Vision.

  • Vision-based State Estimation for Flying Cars
