To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).
Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).
This project investigates new paradigms for low-level event data processing (from event cameras) to enable expressive and efficient feature extraction.
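As one concrete example of low-level event processing, the sketch below builds an exponentially decaying time surface, a classic hand-crafted event representation; the (x, y, t, polarity) array layout and the decay constant are assumptions for illustration, not a prescription for the project.

```python
import numpy as np

def time_surface(events, height, width, t_ref, tau=50e-3):
    """Exponentially decaying time surface from an event stream.

    `events` is assumed to be an (N, 4) array of (x, y, t, polarity)
    rows sorted by time -- a common but not universal layout.
    """
    surface = np.zeros((2, height, width))          # one channel per polarity
    last_t = np.full((2, height, width), -np.inf)   # latest timestamp per pixel
    for x, y, t, p in events:
        last_t[int(p > 0), int(y), int(x)] = t
    # Recently active pixels decay toward 1, stale ones toward 0.
    mask = np.isfinite(last_t)
    surface[mask] = np.exp(-(t_ref - last_t[mask]) / tau)
    return surface
```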
The project will focus on exploring the use of event-based cameras in neural-based scene reconstruction and synthesis, extending available approaches to event-based data.
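For intuition, one common way to supervise a neural scene representation with events is to require that the change in rendered log intensity between two timestamps match the camera's contrast threshold times the signed event count per pixel. A minimal sketch, with hypothetical function and argument names:

```python
import torch

def event_supervision_loss(logI_t0, logI_t1, event_count, C=0.2):
    """logI_t0, logI_t1: rendered log-intensity images at times t0 and t1.
    event_count: per-pixel signed sum of event polarities in (t0, t1].
    C: contrast threshold of the event camera (assumed value)."""
    predicted_change = logI_t1 - logI_t0
    measured_change = C * event_count
    return torch.mean((predicted_change - measured_change) ** 2)
```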
The project will focus on studying various neural network architectures for inference on event-based datasets and evaluating their performance in the presence of adversarial attacks.
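A minimal example of the kind of attack to be studied is the one-step FGSM perturbation below, applied to a dense event representation such as a voxel grid; the classifier interface is assumed for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, voxel_grid, label, eps=0.05):
    """One-step FGSM on a dense event representation; a sketch,
    not a full attack suite. `model` maps the grid to class logits."""
    voxel_grid = voxel_grid.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(voxel_grid), label)
    loss.backward()
    # Step in the direction that increases the loss.
    return (voxel_grid + eps * voxel_grid.grad.sign()).detach()
```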
Developing Smart Vision Assistive Technology
Online learning-aided visual inertial odometry for robust state estimation
In this project, the student builds on a previous student project (published at ECCV22) and on recent advances in the unsupervised domain adaptation (UDA) literature to transfer multiple tasks from frames to events. The approach should be validated on several tasks in challenging environments (night, highly dynamic scenes) to highlight the benefits of event cameras.
The project aims to develop a data-driven keypoint extractor, which computes interest points for event camera data. Based on a previous student project (submitted to CVPR23), the approach will leverage neural network architectures to extract and describe keypoints in an event stream.
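A possible starting point is a small fully convolutional network that maps an event voxel grid to a per-pixel keypoint score map and a dense descriptor map, as sketched below; the architecture is purely illustrative and not the one from the CVPR23 submission.

```python
import torch
import torch.nn as nn

class EventKeypointNet(nn.Module):
    """Toy detector/descriptor head on an event voxel grid with
    `bins` temporal channels. Illustrative architecture only."""
    def __init__(self, bins=5, desc_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(bins, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.detector = nn.Conv2d(64, 1, 1)           # keypoint logit per pixel
        self.descriptor = nn.Conv2d(64, desc_dim, 1)  # descriptor per pixel

    def forward(self, voxel_grid):
        feat = self.backbone(voxel_grid)
        return self.detector(feat), self.descriptor(feat)
```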
In this project, the student applies concepts from current advances in image generation to create artificial events from standard frames. Multiple state-of-the-art deep learning methods will be explored in the scope of this project.
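The simplest baseline in this direction, used by simulators such as ESIM, emits an event whenever the per-pixel log-intensity change since the last event crosses a contrast threshold. A naive numpy sketch follows; the threshold value and the frame-midpoint timestamps are simplifying assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.15, eps=1e-6):
    """Generate (x, y, t, polarity) events from a list of grayscale
    frames by thresholding per-pixel log-intensity changes."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t0, t1 in zip(frames[1:], timestamps[:-1], timestamps[1:]):
        log_f = np.log(frame.astype(np.float64) + eps)
        while True:
            diff = log_f - log_ref
            ys, xs = np.where(np.abs(diff) >= C)
            if len(xs) == 0:
                break
            pol = np.sign(diff[ys, xs])
            t = 0.5 * (t0 + t1)          # crude: midpoint of the interval
            events += [(x, y, t, p) for x, y, p in zip(xs, ys, pol)]
            log_ref[ys, xs] += pol * C   # move reference toward current frame
    return np.array(events)
```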
This project aims to develop the software for an off-board vision system for an autonomous drone. The objective is to enhance the capabilities of the existing drone hardware by integrating a real-time image transmission system. The software will enable the drone's camera to transmit high-quality images and video in real time to a remote receiver.
Recent advances in model-free Reinforcement Learning have shown superior performance in complex tasks such as the game of chess, quadrupedal locomotion, and even drone racing. Given a reward function, Reinforcement Learning is able to find an optimal policy through trial and error in simulation that can then be deployed directly in the real world. In our lab, we have been able to outrace professional human pilots using model-free Reinforcement Learning trained solely in simulation.
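To make the trial-and-error idea concrete, the sketch below shows a typical progress-based racing reward (progress toward the next gate minus a control-effort penalty); the weights and the exact formulation are assumptions for illustration, not the lab's actual reward.

```python
import numpy as np

def racing_reward(pos, prev_pos, gate_center, action,
                  k_prog=1.0, k_act=1e-3):
    """Reward = progress toward the next gate minus action penalty.
    All coefficients are illustrative assumptions."""
    progress = (np.linalg.norm(prev_pos - gate_center)
                - np.linalg.norm(pos - gate_center))
    action_penalty = k_act * np.sum(np.square(action))
    return k_prog * progress - action_penalty
```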
This work will address the intrinsic calibration of event cameras, a fundamental prerequisite for applying event cameras to many computer vision tasks, by incorporating deep learning.
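As a point of reference, a classical pipeline first reconstructs intensity frames from events (e.g., with a learned reconstruction network) and then runs standard chessboard calibration on them, as in the OpenCV sketch below (8-bit grayscale reconstructions assumed); a deep-learning approach would replace or refine parts of this pipeline.

```python
import cv2
import numpy as np

def calibrate_from_reconstructions(frames, board=(9, 6), square=0.025):
    """Standard chessboard calibration on frames reconstructed from
    events. `frames` are assumed 8-bit grayscale; `square` is the
    chessboard square size in meters (assumed)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in frames:
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, frames[0].shape[::-1], None, None)
    return K, dist
```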
This project will focus on event-based depth estimation using structured light systems.
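In a rectified projector-camera setup, depth follows from the standard triangulation relation Z = f·b / disparity, where the events triggered by the moving illumination provide the camera-projector correspondences. A minimal sketch, assuming a rectified geometry:

```python
import numpy as np

def depth_from_disparity(x_cam, x_proj, f, baseline):
    """Rectified projector-camera triangulation: depth Z = f * b / d,
    with focal length f in pixels and baseline b in meters.
    (x_cam, x_proj) correspondences come from the timing of events
    triggered by the scanning light pattern."""
    disparity = x_cam - x_proj
    return f * baseline / np.maximum(disparity, 1e-6)
```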
This project will explore the application of event-camera setups for scene reconstruction. Accurate and efficient reconstruction using event-camera setups is still a largely unexplored topic. This project will focus on solving the problem of 3D reconstruction using active perception with event cameras.
Optical flow estimation is a mainstay of dynamic scene understanding in robotics and computer vision. It finds application in SLAM, dynamic obstacle detection, computational photography, and beyond. However, extracting optical flow from frames is hard due to the discrete nature of frame-based acquisition. Events from an event camera, in contrast, indirectly provide information about optical flow in continuous time, which makes event cameras ideal sensors for optical flow estimation. In this project, you will dig deep into optical flow estimation from events. We will make use of recent innovations in neural network architectures and insights into event camera models to push the state of the art in the field.
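One model-based baseline to start from is contrast maximization: warp events along a candidate flow, accumulate them into an image, and score the candidate by the sharpness (variance) of that image. A brute-force sketch, assuming events stored as (x, y, t) rows and a single global flow:

```python
import numpy as np

def contrast(events, flow, t_ref, height, width):
    """Variance of the image of warped events (IWE) for one flow
    candidate; higher contrast means better motion compensation."""
    x = events[:, 0] - flow[0] * (events[:, 2] - t_ref)
    y = events[:, 1] - flow[1] * (events[:, 2] - t_ref)
    xi = np.clip(np.round(x).astype(int), 0, width - 1)
    yi = np.clip(np.round(y).astype(int), 0, height - 1)
    iwe = np.zeros((height, width))
    np.add.at(iwe, (yi, xi), 1.0)   # accumulate warped events
    return iwe.var()

def best_flow(events, candidates, t_ref, height, width):
    """Grid search over flow candidates; gradient-based or learned
    estimators replace this in practice."""
    return max(candidates,
               key=lambda f: contrast(events, f, t_ref, height, width))
```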
Combine the complementary information from standard and event cameras to enhance images and video.
Benchmark comparison of localization techniques.
Land a UAV safely relying on vision.
Use aerodynamic effects to infer the other drones’ locations.
In recent years, model predictive control, one of the most popular methods for controlling constrained systems, has benefited from advances in learning methods. Many applications, such as autonomous drone racing and autonomous car racing, have shown the potential of cross-fertilization between the two fields. Most research effort has been dedicated to learning and improving the model dynamics; however, controller tuning, which is of crucial importance, has received far less attention.
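To illustrate the tuning problem, the sketch below performs black-box random search over MPC cost weights in log space (so the weights stay positive); `evaluate` is a hypothetical function that runs the closed loop in simulation and returns a scalar cost. Bayesian optimization or policy gradients are common drop-in replacements for the search.

```python
import numpy as np

def tune_mpc_weights(evaluate, n_iters=50, dim=4, sigma=0.1, seed=0):
    """Random-search tuning of MPC cost weights.
    evaluate(weights) -> closed-loop cost (assumed interface)."""
    rng = np.random.default_rng(seed)
    log_w = np.zeros(dim)                  # start from unit weights
    best_cost = evaluate(np.exp(log_w))
    for _ in range(n_iters):
        cand = log_w + sigma * rng.standard_normal(dim)
        cost = evaluate(np.exp(cand))
        if cost < best_cost:               # keep the better candidate
            log_w, best_cost = cand, cost
    return np.exp(log_w), best_cost
```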
This project focuses on applying advanced time-series modeling techniques to efficient event data processing.
This project focuses on applying multi-purpose vision models to event-based vision.
Vision-based State Estimation for Flying Cars