August 25, 2023
IROS2023 Workshop: Learning Robot Super Autonomy
Do not miss our IROS2023 Workshop: Learning Robot Super Autonomy! The workshop features an incredible
lineup of speakers, and we will have a best paper award with prize money.
Check out the agenda and join the presentations at our
workshop website.
Organized by Giuseppe Loianno and Davide Scaramuzza.
August 15, 2023
Scientifica - come and see our drones!
Our lab will open the doors of its large drone testing arena on August 30th, 14:00h. Bring your family
and friends to learn more about drones and watch an autonomous drone race. If you are interested, please
register here!
August 14, 2023
New Senior Scientist
We welcome Harmish Khambhaita as our new Senior Scientist. He obtained his Ph.D. in Toulouse and
previously worked, among other positions, at ANYbotics as the Autonomy and Perception Lead.
July 28, 2023
Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing
We tackle the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
We use contrastive learning to extract robust feature representations from the input images
and leverage a learning-by-cheating framework for training a neural network policy.
For more information, check out our IROS23 paper and video.
July 28, 2023
Our Science Robotics 2021 paper wins prestigious Chinese award!
We are truly honored to receive the prestigious Frontiers of Science Award in the category Robotics
Science and Systems, presented on July 16th, 2023 at the International Congress of Basic Science in
the Great Hall of the People in Beijing, for our Science Robotics 2021 paper "Learning High-Speed
Flight in the Wild"! Congratulations to the entire team: Antonio Loquercio, Elia Kaufmann, Rene
Ranftl, Matthias Mueller, and Vladlen Koltun. Many thanks to the award committee, and congratulations to the other winners too.
Paper, open-source code,
and video.
July 04, 2023
Our paper on Authorship Attribution through Deep Learning accepted at PLOS ONE
We are excited to announce that our paper on authorship attribution for research papers has just been
published in PLOS
ONE. We developed a transformer-based AI that achieves over 70% accuracy on the newly created,
largest-to-date authorship-attribution dataset, with over 2000 authors. For more information, check out
our
PDF and open-source
code.
July 03, 2023
Video Recordings of the 4th International
Workshop on Event-Based Vision at CVPR 2023 available!
The recordings of the 4th international workshop on event-based vision at CVPR 2023 are available here.
The event was co-organized by Guillermo Gallego, Davide Scaramuzza, Kostas Daniilidis, Cornelia
Fermüller, and Davide Migliore.
June 21, 2023
Microgravity induces overconfidence in perceptual decision-making
We are excited to present our paper on the effects of microgravity
on perceptual decision-making published in Nature Scientific Reports.
PDF
YouTube
Dataset
June 20, 2023
HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO
We are excited to present our new RSS paper on state and disturbance estimation for flying vehicles. We
propose a hybrid dynamics model that combines a point-mass vehicle model with a learning-based component
that captures complex aerodynamic effects. We include our hybrid dynamics model in an optimization-based
VIO system that estimates external disturbance acting on the robot as well as the robot's state. HDVIO
improves the motion and external force estimation compared to the state-of-the-art.
For more information, check out our
paper and
video.
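The model/disturbance split above can be sketched in a few lines. This is purely illustrative, not the HDVIO code: the 1D-world point-mass term, the drag-like stand-in for the learned component, and all function names are assumptions.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity

def point_mass_accel(thrust, mass):
    # nominal point-mass model: specific thrust (here along world z for brevity)
    return np.array([0.0, 0.0, thrust / mass]) + G

def learned_residual(velocity):
    # placeholder for the learning-based component; a simple linear
    # drag-like term stands in for the trained network
    return -0.1 * velocity

def disturbance_estimate(meas_accel, thrust, mass, velocity):
    # whatever acceleration the hybrid model cannot explain is
    # attributed to an external disturbance
    predicted = point_mass_accel(thrust, mass) + learned_residual(velocity)
    return meas_accel - predicted
```

In the actual system this residual would enter the optimization-based VIO as an estimated state rather than being computed pointwise.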
June 13, 2023
Our CVPR Paper is Featured in Computer Vision News
Our CVPR highlight and award-candidate work "Data-driven Feature Tracking for Event Cameras" is
featured on Computer Vision News. Find out more and read the complete interview with the authors Nico
Messikommer, Mathias Gehrig and Carter Fang here!
June 13, 2023
DSEC-Detection Dataset Release
We release a new dataset for event- and frame-based object
detection, DSEC-Detection based on the DSEC dataset, with aligned frames, events and object tracks. For
more details visit the dataset website.
PDF
YouTube
Dataset
Code
June 08, 2023
Our PhD student Manasi Muglikar is awarded UZH Candoc Grant
Manasi, PhD student in our lab, is awarded the UZH Candoc Grant 2023 for her outstanding research!
Congratulations!
Check out her latest work on event-based vision here.
May 13, 2023
Training Efficient Controllers via Analytic Policy Gradient
In systems with limited compute, such as aerial vehicles, an accurate controller that is efficient at
execution time
is imperative. We propose an Analytic Policy Gradient (APG) method to tackle this problem. APG exploits
the availability of differentiable simulators by training a
controller offline with gradient descent on the tracking error. Our proposed method outperforms both
model-based and model-free RL
methods in terms of tracking error. Concurrently, it achieves similar performance to MPC while requiring
more than an order of magnitude less computation time.
Our work provides insights into the potential of APG as a promising control method for robotics.
PDF
YouTube
Code
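The core APG idea, differentiating the tracking error through the simulator and descending that gradient, can be illustrated on a toy 1D point mass with a hand-derived forward-mode gradient. Everything below (the double-integrator "simulator", the PD-style gains, the horizon) is a simplified assumption, not the paper's quadrotor setup.

```python
def rollout_with_grads(k1, k2, p_ref=1.0, dt=0.05, steps=60):
    # differentiable "simulator": Euler-integrated 1D point mass with
    # controller u = -k1*(p - p_ref) - k2*v; sensitivities of (p, v)
    # w.r.t. the gains are propagated alongside the state (forward mode)
    p, v = 0.0, 0.0
    dp, dv = [0.0, 0.0], [0.0, 0.0]   # d(p)/d(k1,k2), d(v)/d(k1,k2)
    loss, grad = 0.0, [0.0, 0.0]
    for _ in range(steps):
        err = p - p_ref
        loss += err * err                      # tracking error
        grad[0] += 2.0 * err * dp[0]
        grad[1] += 2.0 * err * dp[1]
        u = -k1 * err - k2 * v
        du = [-err - k1 * dp[0] - k2 * dv[0],  # du/dk1
              -v - k1 * dp[1] - k2 * dv[1]]    # du/dk2
        p, v = p + dt * v, v + dt * u
        for i in range(2):
            dp[i], dv[i] = dp[i] + dt * dv[i], dv[i] + dt * du[i]
    return loss, grad

# offline training: gradient descent on the tracking error
loss0, grad0 = rollout_with_grads(0.5, 0.5)
k1, k2 = 0.5, 0.5
for _ in range(100):
    loss, grad = rollout_with_grads(k1, k2)
    k1 -= 5e-4 * grad[0]
    k2 -= 5e-4 * grad[1]
```

In practice the controller is a neural network and the gradients come from a differentiable simulator's autodiff, but the training loop has the same shape.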
May 10, 2023
We are hiring
We have multiple openings for a Scientific Research Manager, PhD students, and Postdocs in Reinforcement
Learning for Agile Vision-based Navigation and in Computer Vision with Standard Cameras and Event Cameras.
Job descriptions and how to apply:
https://rpg.ifi.uzh.ch/positions.html
May 09, 2023
NCCR Robotics Documentary
Check out this amazing 45-minute documentary on YouTube about
the story of twelve years of groundbreaking robotics research by the Swiss National Competence Center of
Research in Robotics (NCCR Robotics). The documentary summarizes all the key achievements, from
assistive technologies that allowed patients with completely paralyzed legs to walk again to legged and
flying robots with self-learning capabilities for disaster mitigation to educational robots used by
thousands of children worldwide! Congrats to all NCCR Robotics members who have made this possible! And
congratulations to the coordinator, Dario Floreano, and his management team! We are very proud to have
been part of this! NCCR Robotics will continue to operate in four different projects. Check out this article to learn more.
May 04, 2023
Code Release: Tightly coupling global position measurements in VIO
We are excited to release fully open-source our code to tightly fuse global positional measurements in
visual-inertial odometry (VIO)!
Our code integrates global positional measurements, for example GPS, in SVO Pro, a sliding-window optimization-based
VIO that uses the SVO frontend. We leverage the IMU preintegration theory to efficiently include the
global position measurements in the VIO problem formulation. Our system outperforms the loosely-coupled
approach in terms of absolute trajectory error by up to 50%, with a negligible increase in computational
cost.
For more information, have a look at our paper and code.
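The preintegration idea can be sketched in one dimension (assumed notation, not the released code): IMU samples between two keyframes are summarized once into relative velocity/position deltas, so a global position factor can be evaluated without re-integrating raw measurements.

```python
def preintegrate(accels, dt):
    # summarize raw accelerations between keyframes i and j into
    # relative velocity and position deltas (1D, bias-free sketch)
    dv, dp = 0.0, 0.0
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
    return dv, dp

def gps_position_residual(p_i, v_i, accels, dt, p_gps):
    # predicted keyframe-j position from state i plus the preintegrated
    # delta; the estimator would minimize this residual
    T = len(accels) * dt
    _, dp = preintegrate(accels, dt)
    return p_gps - (p_i + v_i * T + dp)
```

The key point is that `preintegrate` is computed once per keyframe pair, regardless of how often the optimizer re-linearizes the problem.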
May 03, 2023
We win the ICRA Agile Movements Workshop Poster Award
Congratulations to Yunlong Song for winning the ICRA "Agile Movements: Animal Behaviour, Biomechanics, and Robot Devices" workshop poster award with his work "Fly fast with Reinforcement Learning".
April 25, 2023
Our work was selected as a CVPR Award Candidate
We are honored that our 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
paper "Data-driven Feature Tracking for Event Cameras" was selected as an award candidate.
Congratulations to all collaborators!
PDF
YouTube
Code
April 17, 2023
Neuromorphic Optical Flow and Real-time Implementation with Event Cameras (CVPRW 2023)
We present a new spiking neural network (SNN) architecture that significantly improves optical flow
prediction accuracy while reducing complexity, making it ideal for real-time applications in edge
devices and robots. By leveraging event-based vision and SNNs, our solution achieves high-speed optical
flow prediction with nearly two orders of magnitude less complexity, without compromising accuracy. This
breakthrough paves the way for efficient real-time deployments in various computer vision pipelines.
For more information, have a look at our paper.
April 13, 2023
Our Master student Asude Aydin wins the UZH Award for her Master Thesis
Asude Aydin, who did her Master thesis, A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency
Visual Perception, at RPG,
has received the UZH Award 2023 for her outstanding work.
Check out her paper here, which is based on her Master thesis.
April 11, 2023
Event-based Shape from Polarization
We introduce a novel shape-from-polarization technique using an event camera (accepted at CVPR 2023).
Our setup consists of a linear polarizer rotating at high speed in front of an event camera.
Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities
at multiple polarizer angles.
Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the
MAE by 25% on synthetic and real-world datasets.
For more information, have a look at our paper.
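For intuition: once relative intensities at several polarizer angles are available, the polarization state can be recovered with a standard sinusoidal (Malus's-law) fit. The snippet below shows this generic step only; it is not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_polarization(angles, intensities):
    # least-squares fit of I(theta) = a + b*cos(2*theta) + c*sin(2*theta),
    # the Malus's-law response of a rotating linear polarizer
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a, b, c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    aolp = 0.5 * np.arctan2(c, b)   # angle of linear polarization
    dolp = np.hypot(b, c) / a       # degree of linear polarization
    return aolp, dolp
```

The angle of linear polarization is what shape-from-polarization methods then relate to surface normals.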
April 07, 2023
Recurrent Vision Transformers for Object Detection with Event Cameras (CVPR 2023)
We introduce a novel efficient and highly-performant object detection backbone for event-based vision.
Through extensive architecture study, we find that vision transformers can be combined with recurrent
neural networks to effectively extract spatio-temporal features for object detection.
Our proposed architecture can be trained from scratch on publicly available real-world data to reach
state-of-the-art performance while reducing inference time by up to six times compared to prior work.
For more information, have a look at our paper and code.
April 3, 2023
Data-driven Feature Tracking for Event Cameras
We are excited to announce that our paper on Data-driven Feature Tracking for Event Cameras was accepted
at CVPR 2023. In this work, we introduce the first data-driven feature tracker for event cameras, which
leverages low-latency events to track features detected in a grayscale frame. Our data-driven tracker
outperforms existing approaches in relative feature age by up to 130% while also achieving the lowest
latency.
For more information, check out our
paper,
video
and
code.
April 3, 2023
Autonomous Power Line Inspection with Drones via Perception-Aware MPC
We are excited to present our new work on autonomous power line inspection with drones using
perception-aware model predictive control (MPC). We propose an MPC that tightly couples perception and
action. Our controller generates commands that maximize the visibility of the power lines while, at the
same time, safely avoiding the power masts. For power line detection, we propose a lightweight
learning-based detector that is trained only on synthetic data and is able to transfer zero-shot to
real-world power line images.
For more information, check out our
paper and
video.
April 3, 2023
RPG and LINA Project featured in RSI
In the recent news broadcast by RSI, our lab is featured for its efforts in developing and boosting
research on civil applications for drones. The LINA project at the Dübendorf airport is making its
infrastructure available to researchers and industry to facilitate the testing and development of
hardware and software for autonomous flying systems.
RSI [IT]
April 1, 2023
New PhD Student
We welcome Nikola Zubić as a new PhD student in our lab!
March 30, 2023
Event-based Agile Object Catching with a Quadrupedal Robot
This work exploits the low-latency advantages of event cameras for agile object catching with
a quadrupedal robot. We use the event camera to estimate the trajectory of the object, which
is then caught using an RL-trained policy. Our robot catches objects at up to 15 m/s with an 83% success
rate.
For more information, have a look at our ICRA 2023 paper,
video and open-source code.
March 27, 2023
A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception
This work proposes a hybrid model combining Spiking Neural Networks (SNN) and classical Artificial
Neural Networks (ANN) to optimize power efficiency and latency in edge devices. The hybrid ANN-SNN model
overcomes state transients and state decay issues while maintaining high temporal resolution, low
latency, and low power consumption. In the context of 2D and 3D human pose estimation, the method
achieves an 88% reduction in power consumption with only a 4% decrease in performance compared to fully
ANN counterparts, and a 74% lower error compared to SNNs.
For more information, have a look at our paper.
March 10, 2023
HILTI-SLAM Challenge 2023
RPG and HILTI are organizing the ICRA2023 HILTI SLAM Challenge! Instructions here.
The HILTI SLAM Challenge dataset is a real-life, multi-sensor dataset with accurate ground
truth to advance the state of the art in highly accurate state estimation in challenging
environments. Participants will be ranked by the completeness of their trajectories and by
the achieved accuracy.
HILTI is a multinational company that offers premium
products and services for professionals on construction sites around the globe. Behind this
vast catalog is a global team of 30,000 members of 133 different nationalities, located in more
than 120 countries.
March 09, 2023
LINA Testing Facility at Dübendorf Airport
UZH Magazin releases a news article about our research on autonomous drones and our new testing facility
at Dübendorf Airport that enables researchers to develop autonomous systems such as drones and
ground-based robots from idea to marketable product. Read the article
in English or
in German. More information about the
LINA project can be found
here.
March 7, 2023
Our Master student Fang Nan wins ETH Medal for Best Master Thesis
Fang Nan, who did his Master thesis, Nonlinear MPC for Quadrotor
Fault-Tolerant Control, at RPG,
has received the ETH Medal 2023 and the Willi Studer Prize for his outstanding work.
Check out his RAL 2022 paper here, which is based
on his Master thesis.
March 2, 2023
Learning Perception-Aware Agile Flight in Cluttered Environments
We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in
cluttered environments.
Our method combines imitation learning and reinforcement learning by leveraging a privileged
learning-by-cheating framework.
For more information, check out our ICRA23
paper
or this
video.
March 2, 2023
Weighted Maximum Likelihood for Controller Tuning
We present our new ICRA23 paper that leverages a probabilistic Policy Search method, Weighted Maximum
Likelihood (WML), to automatically learn the optimal objective for MPCC. The data efficiency provided by
the use of a model-based approach in the loop allows us to directly train in a high-fidelity simulator,
which in turn makes our approach able to transfer zero-shot to the real world.
For more information, check out our ICRA23
paper
and
video.
March 2, 2023
User-Conditioned Neural Control Policies for Mobile Robotics
We present our new paper that leverages a feature-wise linear modulation layer to condition neural
control policies for mobile robotics. We demonstrate in simulation and in real-world experiments that a
single control policy can achieve close to time-optimal flight performance across the entire performance
envelope of the robot, reaching up to 60 km/h and 4.5 g in acceleration. The ability to guide a learned
controller during task execution has implications beyond agile quadrotor flight, as conditioning the
control policy on human intent helps safely bring learning-based systems out of the well-defined
laboratory environment into the wild.
For more information, check out our ICRA23 paper
and video.
February 28, 2023
Learned Inertial Odometry for Autonomous Drone Racing
We are excited to present our new RA-L paper on state estimation for autonomous drone racing. We propose a
learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality
for autonomous drone racing tasks. The core idea of our system is to couple a model-based filter, driven
by the inertial measurements, with a learning-based module that has access to the control commands.
For more information, check out our
paper,
video, and
code.
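The filter/learning split can be caricatured in a few lines (assumed interfaces; the scalar state and the constant-output network stub are purely illustrative, not the paper's implementation):

```python
def fuse(v, accel, dt, v_net, gain=0.1):
    # model-based propagation driven by the IMU, followed by a
    # correction toward the learned module's velocity pseudo-measurement
    v_pred = v + accel * dt
    return v_pred + gain * (v_net - v_pred)

# a biased IMU alone would drift by 0.2 m/s over this 1 s window;
# the stubbed "network" output (true velocity = 0) keeps the error bounded
v = 0.0
for _ in range(100):
    v = fuse(v, accel=0.2, dt=0.01, v_net=0.0)
```

The real system uses a full model-based filter and a network that consumes control commands, but the drift-bounding role of the learned pseudo-measurement is the same.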
February 15, 2023
Agilicious: Open-Source and Open-Hardware Agile Quadrotor for Vision-Based Flight
We are excited to present Agilicious, a co-designed hardware and software framework tailored to
autonomous, agile quadrotor flight. It is completely open-source and open-hardware and supports both
model-based and neural-network-based controllers. Also, it provides high thrust-to-weight and
torque-to-inertia ratios for agility, onboard vision sensors, GPU-accelerated compute hardware for
real-time perception and neural-network inference, a real-time flight controller, and a versatile
software stack. In contrast to existing frameworks, Agilicious offers a unique combination of flexible
software stack and high-performance hardware. We compare Agilicious with prior works and demonstrate it
on different agile tasks, using both model-based and neural-network-based controllers.
Our demonstrators include trajectory tracking at up to 5 g and 70 km/h in a motion-capture system, and
vision-based acrobatic flight and obstacle avoidance in both structured and unstructured environments
using solely onboard perception. Finally, we demonstrate its use for hardware-in-the-loop simulation in
virtual-reality environments. Thanks to its versatility, we believe that Agilicious supports the next
generation of scientific and industrial quadrotor research.
For more details check our paper, video and webpage.
January 17, 2023
Event-based Shape from Polarization
We introduce a novel shape-from-polarization technique using an event camera.
Our setup consists of a linear polarizer rotating at high speed in front of an event camera.
Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities
at multiple polarizer angles.
Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the
MAE by 25% on synthetic and real-world datasets.
For more information, have a look at our paper.
January 11, 2023
Survey on Autonomous Drone Racing
We present our survey on Autonomous Drone Racing, which covers the latest developments in agile flight for
both model-based and learning-based approaches. We include extensive coverage of
drone racing competitions, simulators, open-source software, and state-of-the-art approaches for
flying autonomous drones at their limits!
For more information, see our paper.