Student Projects
To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but cannot find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).
Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).
-
Neural Architecture Knowledge Transfer for Event-based Vision
Perform knowledge distillation from Transformers to more energy-efficient neural network architectures for Event-based Vision.
-
Leveraging Long Sequence Modeling for Drone Racing
Study the application of Long Sequence Modeling techniques within Reinforcement Learning (RL) to improve autonomous drone racing capabilities.
-
Electrical Flow-Based Graph Embeddings for Event-based Vision and other downstream tasks
This project explores a novel approach to graph embeddings using electrical flow computations.
-
Better Scaling Laws for Neuromorphic Systems
This project explores and extends the novel "deep state-space models" framework by leveraging their transfer function representations.
-
Time-continuous Facial Motion Capture Using Event Cameras
Traditional facial motion capture systems, including marker-based methods and multi-camera rigs, often struggle to capture fine details such as micro-expressions and subtle wrinkles. While learning-based techniques using monocular RGB images have improved tracking fidelity, their temporal resolution remains limited by conventional camera frame rates. Event-based cameras present a compelling solution, offering superior temporal resolution without the cost and complexity of high-speed RGB cameras. This project explores the potential of event-based cameras to enhance facial motion tracking, enabling the precise capture of subtle facial dynamics over time.
-
Advancing Space Navigation and Landing with Event-Based Cameras in collaboration with the European Space Agency
In this project, you will investigate the use of event-based cameras for vision-based landing on celestial bodies such as Mars or the Moon.
-
Meta-Model-Based RL for Adaptive Flight Control
This research project aims to develop and evaluate a meta model-based reinforcement learning (RL) framework for addressing variable dynamics in flight control.
-
Vision-Based Reinforcement Learning in the Real World
We aim to learn vision-based policies in the real world using state-of-the-art model-based reinforcement learning.
-
Vision-Based World Models for Real-Time Robot Control
This project uses vision-based world models as the basis for model-based reinforcement learning, aiming at a generalizable approach for drone navigation.
-
Agile Flight of Flexible Drones in Confined Spaces
The project aims to create a controller for a challenging type of quadrotor in which the rotors are connected to the frame via flexible joints.
-
Advancing Low-Latency Processing for Event-Based Neural Networks
Design and implement efficient event-based networks to achieve low latency inference.
-
Inverse Reinforcement Learning from Expert Pilots
Use Inverse Reinforcement Learning (IRL) to learn reward functions from previous expert drone demonstrations.
-
Fine-tuning Policies in the Real World with Reinforcement Learning
Explore online fine-tuning of sub-optimal policies in the real world.
-
Vision-Based Agile Aerial Transportation
Develop a vision-based aerial transportation system using reinforcement or imitation learning.
-
Event-based Particle Image Velocimetry
When drones are operated in industrial environments, they are often flown in close proximity to large structures, such as bridges, buildings or ballast tanks. In those applications, the interactions of the induced flow produced by the drone’s propellers with the surrounding structures are significant and pose challenges to the stability and control of the vehicle. A common methodology to measure the airflow is particle image velocimetry (PIV). Here, smoke and small particles suspended in the surrounding air are tracked to estimate the flow field. In this project, we aim to leverage the high temporal resolution of event cameras to perform smoke-PIV, overcoming the main limitation of frame-based cameras in PIV setups. Applicants should have a strong background in machine learning and programming with Python/C++. Experience in fluid mechanics is beneficial but not a hard requirement.
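As a rough illustration of the cross-correlation principle behind classical frame-based PIV (the frames, window size, and displacement below are synthetic examples, not project data), the displacement of particles between two interrogation windows can be estimated from the peak of their FFT-based cross-correlation:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the particle displacement between two interrogation
    windows via FFT-based cross-correlation (classical PIV)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation: peak location encodes the shift.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so negative displacements are recovered correctly.
    return [int(p) if p <= s // 2 else int(p - s)
            for p, s in zip(peak, corr.shape)]

# Synthetic particle image shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # → [3, 5]
```

An event-based smoke-PIV pipeline would replace the frame pair with asynchronous event streams, but the underlying goal of recovering a displacement field is the same.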
-
Energy-Efficient Path Planning for Autonomous Quadrotors in Inspection Tasks
Autonomous quadrotors are increasingly used in inspection tasks, where flight time is often limited by battery capacity. This project aims to explore and evaluate state-of-the-art path planning approaches that incorporate energy efficiency into trajectory optimization.
-
Learning Rapid UAV Exploration with Foundation Models
Recent research has demonstrated significant success in integrating foundation models with robotic systems. In this project, we aim to investigate how these foundation models can enhance the vision-based navigation of UAVs. The drone will exploit semantic relationships learned from large-scale world data to actively explore and navigate unfamiliar environments. While previous research has focused primarily on ground robots, this project explores the potential of integrating foundation models with aerial robots to enhance agility and flexibility.
-
Vision-based Navigation in Dynamic Environments via Reinforcement Learning
In this project, we will develop a vision-based reinforcement learning policy for drone navigation in dynamic environments. The policy should balance two potentially conflicting objectives: keeping a visual object in view (a perceptual constraint) and avoiding obstacles to ensure safe flight.
-
Learning Robust Agile Flight via Adaptive Curriculum
This project focuses on developing robust reinforcement learning controllers for agile drone navigation using adaptive curricula. Such controllers are commonly trained with a static, pre-defined curriculum. The goal is to develop a dynamic curriculum that adapts online to the agent's performance, increasing the robustness of the resulting controllers.
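A minimal sketch of the adaptive-curriculum idea, assuming a hypothetical environment with a scalar difficulty parameter and a per-episode success signal (the class name, target rate, and step sizes below are illustrative assumptions, not the project's actual method):

```python
class AdaptiveCurriculum:
    """Adjust a scalar task difficulty online from a windowed
    success rate; all thresholds here are illustrative."""

    def __init__(self, difficulty=0.1, target=0.7, step=0.05, window=20):
        self.difficulty = difficulty  # current task difficulty in [0, 1]
        self.target = target          # desired success rate
        self.step = step              # difficulty increment per update
        self.window = window          # episodes between updates
        self.results = []

    def report(self, success: bool):
        """Record one episode outcome; update difficulty every `window` episodes."""
        self.results.append(success)
        if len(self.results) < self.window:
            return
        rate = sum(self.results) / len(self.results)
        self.results.clear()
        if rate > self.target:            # level mastered: harder tasks
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif rate < self.target - 0.3:    # level too hard: back off
            self.difficulty = max(0.0, self.difficulty - self.step)

cur = AdaptiveCurriculum()
for _ in range(20):
    cur.report(success=True)  # agent keeps succeeding
print(round(cur.difficulty, 2))  # → 0.15
```

In a static curriculum, `difficulty` would follow a fixed schedule regardless of performance; the adaptive variant ties progression directly to the agent's measured success rate.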
-
Automatic Failure Detection for Drones
Automatic failure detection is an essential topic for aerial robots as small failures can already lead to catastrophic crashes. Classical methods in fault detection typically use a system model as a reference and check that the observed system dynamics are within a certain error margin. In this project, we want to explore sequence modeling as an alternative approach that feeds all available sensor data into a neural network. The network will be pre-trained on simulation data and finetuned on real-world flight data. Such a machine learning-based approach has significant potential because neural networks are very good at picking up patterns in the data that are hidden/invisible to hand-crafted detection algorithms.
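The classical residual check described above can be sketched in a few lines (the state vectors and threshold are hypothetical values for illustration):

```python
import numpy as np

def residual_fault_detector(x_pred, x_meas, threshold):
    """Classical model-based fault detection: flag a fault when the
    measured state deviates from the model prediction by more than a
    fixed error margin. The threshold is an illustrative assumption."""
    residual = np.linalg.norm(x_meas - x_pred)
    return residual > threshold

# Nominal flight: measurement close to the model prediction.
x_pred = np.array([1.0, 0.0, -9.81])
print(residual_fault_detector(x_pred, x_pred + 0.01, threshold=0.5))  # → False

# Rotor failure: large unexpected deviation from the model.
x_fail = x_pred + np.array([0.0, 0.0, 4.0])
print(residual_fault_detector(x_pred, x_fail, threshold=0.5))  # → True
```

The project would replace this hand-tuned margin with a learned sequence model that ingests all available sensor data, pre-trained in simulation and finetuned on real flights.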