Department of Informatics: Robotics and Perception Group

Student Projects

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but cannot find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).

Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).

  • Model-based Reinforcement Learning for Autonomous Drone Racing

    Recent advances in model-free Reinforcement Learning have shown superior performance in complex tasks such as the game of chess, quadrupedal locomotion, and even drone racing. Given a reward function, Reinforcement Learning can find the optimal policy through trial and error in simulation, and the resulting policy can then be deployed directly in the real world. In our lab, we have outraced professional human pilots using model-free Reinforcement Learning trained solely in simulation.
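
    As a toy illustration of this trial-and-error loop, the sketch below runs REINFORCE on a hypothetical 1D gate-passing task; ToyGateEnv, the linear-Gaussian policy, and all constants are invented stand-ins, not our simulator or training setup.

    ```python
    # Minimal REINFORCE sketch: learn a feedback gain that steers the
    # state x toward a "gate" at x = 0, purely from reward.
    import numpy as np

    rng = np.random.default_rng(0)

    class ToyGateEnv:
        def reset(self):
            self.x = rng.uniform(-1.0, 1.0)
            return self.x
        def step(self, action):
            # Bounded toy dynamics keep the example numerically stable.
            self.x = np.clip(self.x + 0.1 * np.clip(action, -1, 1), -2.0, 2.0)
            return self.x, -abs(self.x)            # reward: closeness to the gate

    w, sigma, lr, baseline = 0.0, 0.5, 0.01, 0.0   # linear-Gaussian policy

    for episode in range(500):
        env = ToyGateEnv()
        x = env.reset()
        grad, G = 0.0, 0.0
        for t in range(20):
            a = w * x + sigma * rng.standard_normal()  # sample action
            grad += ((a - w * x) / sigma**2) * x       # d log pi(a|x) / dw
            x, r = env.step(a)
            G += r                                     # episode return
        w += lr * grad * (G - baseline)   # REINFORCE update with baseline
        baseline += 0.1 * (G - baseline)  # running average of returns

    print(f"learned gain w = {w:.2f}")    # typically negative: steer x to 0
    ```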

  • 3D reconstruction with event cameras

    This project will explore the application of event-camera setups to scene reconstruction. Accurate and efficient reconstruction with event-camera setups is still an unexplored topic. The project will focus on solving the problem of 3D reconstruction using active perception with event cameras.

  • High-speed drone flight with spiking neural networks

    This project investigates the deployment and evaluation of SNN models on real drones, identifying and addressing the potential sim-to-real gap stemming from differences between the simulation and the real world.
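
    For readers unfamiliar with SNNs, the sketch below shows a leaky integrate-and-fire (LIF) layer, the basic building block of such models; the weights, decay, and threshold are illustrative assumptions, not the project's networks.

    ```python
    # Minimal leaky integrate-and-fire (LIF) layer in discrete time.
    import numpy as np

    def lif_step(v, spikes_in, weights, decay=0.9, threshold=1.0):
        """One time step: leak, integrate synaptic input, fire, reset."""
        v = decay * v + weights @ spikes_in          # leak + integrate
        spikes_out = (v >= threshold).astype(float)  # fire where threshold crossed
        v = v * (1.0 - spikes_out)                   # reset neurons that fired
        return v, spikes_out

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.5, size=(4, 8))      # 8 inputs -> 4 neurons
    v = np.zeros(4)
    for t in range(10):                              # random input spike trains
        v, out = lif_step(v, (rng.random(8) < 0.3).astype(float), weights)
        print(t, out)
    ```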

  • Vision-based Dynamic Obstacle Avoidance

    Dynamic obstacle avoidance is a grand challenge in vision-based drone navigation. The classical mapping-planning-control pipeline can struggle with dynamic objects, since maps and plans become outdated as obstacles move.

  • Reinforcement Learning for Drone Racing

  • Learning features for efficient deep reinforcement learning

    Recent work has shown that it is possible to learn temporally and geometrically aligned keypoints given only videos.

  • Advancing Augmented Reality Helmets for motorcyclists and racecars: Independence through Self-Localization

  • What can Large Language Models offer to Event-based Vision?

    This project focuses on integrating Large Language Models into Event-based Computer Vision.

  • Foundation Models for Event-based Segmentation

    This project involves the design, implementation, and validation of well-known foundation models (CLIP and the Segment Anything Model, SAM) in the context of event-based segmentation.
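
    One possible starting point, sketched below under our own assumptions: events are rendered into an image-like histogram and fed to the official segment_anything predictor. The checkpoint path, event-array layout, and point prompt are placeholders.

    ```python
    # Probe SAM with an event histogram rendered as an RGB image.
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    def events_to_image(events, height, width):
        """events: (N, 4) array of (x, y, t, polarity) -> uint8 RGB histogram."""
        img = np.zeros((height, width, 3), dtype=np.float32)
        x, y, p = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 3]
        np.add.at(img[:, :, 0], (y[p > 0], x[p > 0]), 1.0)    # positive -> red
        np.add.at(img[:, :, 2], (y[p <= 0], x[p <= 0]), 1.0)  # negative -> blue
        return (255.0 * img / max(img.max(), 1e-6)).astype(np.uint8)

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed path
    predictor = SamPredictor(sam)
    events = np.load("events.npy")             # assumed (N, 4) event recording
    predictor.set_image(events_to_image(events, 480, 640))
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),   # prompt: a click near the object
        point_labels=np.array([1]))            # 1 = foreground point
    print(masks.shape, scores)
    ```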

  • Event-based occlusion removal

    Unwanted camera occlusions, such as debris, dust, raindrops, and snow, can severely degrade the performance of computer-vision systems. Dynamic occlusions are particularly challenging because of the continuously changing pattern. This project aims to leverage the unique capabilities of event-based vision sensors to address the challenge of dynamic occlusions. By improving the reliability and accuracy of vision systems, this work could benefit a wide range of applications, from autonomous driving and drone navigation to environmental monitoring and augmented reality.

  • Low Latency Occlusion-aware Object Tracking

  • HDR NeRF: Neural Scene Reconstruction in Low Light

    Implicit scene representations, particularly Neural Radiance Fields (NeRF), have significantly advanced scene reconstruction and synthesis, surpassing traditional methods in creating photorealistic renderings from sparse images. However, the potential of integrating these methods with advanced sensor technologies that measure light at the granularity of a photon remains largely unexplored. These sensors, known for their exceptional low-light sensitivity and high dynamic range, could address the limitations of current NeRF implementations in challenging lighting conditions, offering a novel approach to neural-based scene reconstruction.
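
    For context, the sketch below implements the core NeRF volume-rendering step (alpha compositing densities and colors along a camera ray) with random stand-in values; in a real NeRF, sigmas and rgbs come from an MLP queried at sample points.

    ```python
    # Alpha compositing along a single camera ray, as in standard NeRF.
    import numpy as np

    def render_ray(sigmas, rgbs, deltas):
        """sigmas: (S,) densities, rgbs: (S, 3) colors, deltas: (S,) spacings."""
        alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each sample
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
        weights = alphas * trans                 # contribution of each sample
        return (weights[:, None] * rgbs).sum(axis=0)

    rng = np.random.default_rng(0)
    S = 64                                        # samples along the ray
    color = render_ray(rng.uniform(0, 2, S),      # stand-in densities
                       rng.uniform(0, 1, (S, 3)), # stand-in colors
                       np.full(S, 0.05))          # uniform sample spacing
    print(color)                                  # rendered RGB for this ray
    ```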

  • Data-driven Keypoint Extractor for Event Data

    This project focuses on enhancing camera pose estimation by exploring a data-driven approach to keypoint extraction, leveraging recent advancements in frame-based keypoint extraction techniques.
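
    One conceivable design is sketched below: a small CNN predicts a per-pixel keypoint heatmap from a two-channel event histogram, and the top-k peaks become keypoints. The architecture and tensor shapes are our illustrative assumptions, not the project's final design.

    ```python
    # Keypoint heatmap head on an event histogram, with top-k peak picking.
    import torch
    import torch.nn as nn

    class EventKeypointNet(nn.Module):
        def __init__(self, in_channels=2):  # positive/negative event counts
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1))        # per-pixel keypoint logit
        def forward(self, x):
            return self.net(x)

    def top_keypoints(heatmap, k=100):
        """heatmap: (1, 1, H, W) logits -> (k, 2) pixel coordinates (x, y)."""
        h, w = heatmap.shape[-2:]
        scores, idx = heatmap.flatten().topk(k)
        return torch.stack([idx % w, idx // w], dim=1), scores

    model = EventKeypointNet()
    hist = torch.randn(1, 2, 180, 240)      # stand-in event histogram
    kpts, scores = top_keypoints(model(hist))
    print(kpts.shape)                       # (100, 2) keypoint locations
    ```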

  • Domain Transfer between Events and Frames for Motor Policies

    The goal of this project is to develop a shared embedding space for events and frames, enabling the training of a motor policy on simulated frames and deployment on real-world event data.
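
    A common recipe for such shared embedding spaces, sketched here as an assumption rather than the project's chosen method, is a symmetric InfoNCE loss that pulls paired event/frame features together; the linear encoders below are placeholders for real CNNs.

    ```python
    # Symmetric InfoNCE alignment of event and frame embeddings.
    import torch
    import torch.nn.functional as F

    event_encoder = torch.nn.Linear(512, 128)  # placeholder encoders
    frame_encoder = torch.nn.Linear(512, 128)

    def infonce(event_feat, frame_feat, temperature=0.07):
        e = F.normalize(event_encoder(event_feat), dim=-1)
        f = F.normalize(frame_encoder(frame_feat), dim=-1)
        logits = e @ f.t() / temperature       # (B, B) similarity matrix
        targets = torch.arange(e.shape[0])     # i-th event pairs with i-th frame
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    loss = infonce(torch.randn(32, 512), torch.randn(32, 512))
    loss.backward()                            # gradients reach both encoders
    print(float(loss))
    ```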

  • Multimodal Fusion for Enhanced Neural Scene Reconstruction Quality

    The project aims to explore how prior 3D information can assist in reconstructing fine details in NeRFs, and how high-temporal-resolution data can enhance modeling of scene and camera motion.

  • Efficient Neural Scene Reconstruction with Event Cameras

    This project seeks to leverage the sparse nature of events to accelerate the training of radiance fields.

  • Data-driven Event Generation from Images

    In this project, the student will apply concepts from recent advances in image generation to create artificial events from standard frames. Multiple state-of-the-art deep learning methods will be explored within the scope of the project.
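
    As a reference point for such learned generators, the classical contrast-threshold model is sketched below: a pixel emits an event whenever its log intensity changes by more than a threshold C. The threshold value and the toy frames are illustrative.

    ```python
    # Contrast-threshold event generation between two grayscale frames.
    import numpy as np

    def events_between_frames(frame0, frame1, C=0.2):
        """frame0/frame1: images in [0, 1]. Returns (x, y, polarity) arrays."""
        diff = np.log(frame1 + 1e-6) - np.log(frame0 + 1e-6)
        n = np.floor(np.abs(diff) / C).astype(int)  # events fired per pixel
        ys, xs = np.nonzero(n)
        counts = n[ys, xs]
        pol = np.sign(diff[ys, xs]).astype(int)
        return np.repeat(xs, counts), np.repeat(ys, counts), np.repeat(pol, counts)

    rng = np.random.default_rng(0)
    f0 = rng.random((4, 4))                                  # toy frame pair
    f1 = np.clip(f0 + rng.normal(0, 0.2, (4, 4)), 0.0, 1.0)
    x, y, p = events_between_frames(f0, f1)
    print(len(x), "events")
    ```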

  • Navigating on Mars

    The first-ever Mars helicopter, Ingenuity, flew over texture-poor terrain on which RANSAC was unable to find inliers (https://spectrum.ieee.org/mars-helicopter-ingenuity-end-mission). Navigating Martian terrain poses significant challenges due to its unique and often featureless landscape, compounded by factors such as dust storms, a lack of distinct textures, and extreme environmental conditions. The absence of prominent landmarks and the homogeneity of the surface can severely disrupt optical navigation systems, reducing the accuracy of localization and path planning.
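
    The sketch below illustrates the failure mode with a minimal RANSAC estimating a 2D translation between matched keypoints; on featureless terrain there are few distinctive correspondences, so no hypothesis gathers enough inliers. The data and tolerance are invented for illustration.

    ```python
    # Minimal RANSAC: estimate a 2D translation from noisy correspondences.
    import numpy as np

    def ransac_translation(src, dst, iters=100, tol=2.0):
        """src/dst: (N, 2) matched keypoints. Returns best translation, inliers."""
        rng = np.random.default_rng(0)
        best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            i = rng.integers(len(src))       # minimal sample: one match
            t = dst[i] - src[i]              # hypothesized translation
            inliers = np.linalg.norm(src + t - dst, axis=1) < tol
            if inliers.sum() > best_inliers.sum():
                best_t, best_inliers = t, inliers
        return best_t, best_inliers

    rng = np.random.default_rng(1)
    src = rng.uniform(0, 100, (50, 2))
    dst = src + np.array([5.0, -3.0])              # true shift
    dst[::5] += rng.uniform(-30, 30, (10, 2))      # corrupt 10 matches
    t, inliers = ransac_translation(src, dst)
    print(t, inliers.sum(), "inliers")
    ```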

  • IMU-centric Odometry for Drone Racing and Beyond

  • Gaussian Splatting Visual Odometry

  • Autonomous Drone Navigation via Learning from YouTube Videos

    Inspired by how humans learn, this project aims to explore the possibility of learning flight patterns, obstacle avoidance, and navigation strategies by simply watching drone flight videos available on YouTube.

  • Learning Rapid UAV Exploration with Foundation Models

    Recent research has demonstrated significant success in integrating foundation models with robotic systems. In this project, we aim to investigate how these foundation models can enhance vision-based UAV navigation. The drone will utilize semantic relationships learned from world-scale data to actively explore and navigate through unfamiliar environments. While previous research has focused primarily on ground-based robots, our project explores the potential of integrating foundation models with aerial robots to enhance agility and flexibility.

  • Vision-based Navigation in Dynamic Environment via Reinforcement Learning

    In this project, we will develop a vision-based reinforcement learning policy for drone navigation in dynamic environments. The policy must balance two potentially conflicting objectives: maximizing the visibility of a visual target as a perceptual constraint, and avoiding obstacles to ensure safe flight.
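
    A hedged sketch of such a two-term reward follows; the weights, state fields, and safety radius are our illustrative assumptions, not the project's actual reward design.

    ```python
    # Reward balancing target visibility against obstacle clearance.
    import numpy as np

    def reward(target_px, img_center, dist_to_obstacle,
               w_percep=1.0, w_safe=2.0, safe_radius=1.0):
        # Perception term: penalize the target drifting from the image center.
        percep = -np.linalg.norm(np.asarray(target_px) - np.asarray(img_center))
        # Safety term: penalize only when closer to an obstacle than safe_radius.
        safety = -max(0.0, safe_radius - dist_to_obstacle)
        return w_percep * percep + w_safe * safety

    # Target slightly off-center, obstacle 0.4 m away (inside the safe radius).
    print(reward(target_px=(340, 260), img_center=(320, 240), dist_to_obstacle=0.4))
    ```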

  • Vision-Based Autonomous Drone Recovery Using Reinforcement Learning

    This project is focused on developing a vision-only flight recovery system for autonomous drones. A critical capability for autonomous drones is to recover safely from any unstable state. This project explores the potential of using reinforcement learning to enable a drone to transition from an unstable to a stable state, using only vision sensors. The challenge lies in creating a system that not only stabilizes the drone but also ensures it can safely land in various unforeseen scenarios.

  • Segmentation and Object Detection in Neural Radiance Fields (NeRFs) for Enhanced 3D Scene Understanding

    This master thesis project focuses on advancing 3D scene understanding through the integration of segmentation and object detection techniques within Neural Radiance Fields (NeRFs).

  • Exploring Multimodal Strategies for Event-Based Vision

    This project focuses on utilizing multi-purpose vision models in the realm of Event-Based Vision.

  • Drone Racing End-to-end policy learning: from features to commands

    This project focuses on using RL to learn quadrotor policies that fly at high speed on complex tracks, directly from features.

  • Autonomously traversing ship manholes using end-to-end vision-based control

    Develop an end-to-end learning-based approach for autonomous drone navigation through ship ballast-tank manholes, incorporating both real and simulated training data. The project emphasizes speed, a high success rate, and safety when flying through the confined spaces of ship interiors.

  • Foundation Model for Drone Navigation in Confined Spaces

    This master thesis project centers on the development of a foundation model for drone navigation within confined spaces such as ballast tanks of ships.