Open Source Robotics

NCCR Robotics publishes open-source software and datasets; please see below for a list and links to where they can be downloaded.

RoboGen

RoboGen™ is an open source platform for the co-evolution of robot bodies and brains. It has been designed with a primary focus on evolving robots that can be easily manufactured via 3D printing and a small set of low-cost, off-the-shelf electronic components. It features an evolution engine and a physics simulation engine. Additionally, it includes utilities for generating design files of body components for use with a 3D printer, and for compiling neural-network controllers to run on an Arduino microcontroller board. Read more and download.
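
To illustrate the co-evolution idea in the abstract, the following Python sketch shows a minimal evolutionary loop in which a genome encoding both a body and a controller is mutated, evaluated, and selected. The genome layout, mutation operators, and fitness function here are placeholders for illustration only and do not reflect RoboGen's actual encoding or API.

    import random

    # Hypothetical genome: a list of body-part labels plus controller weights.
    # This is an illustrative stand-in, not RoboGen's genome encoding.
    def random_genome():
        body = [random.choice(["core", "hinge", "wheel", "brick"]) for _ in range(5)]
        brain = [random.uniform(-1.0, 1.0) for _ in range(10)]
        return {"body": body, "brain": brain}

    def mutate(genome):
        child = {"body": list(genome["body"]), "brain": list(genome["brain"])}
        if random.random() < 0.3:                        # body mutation
            i = random.randrange(len(child["body"]))
            child["body"][i] = random.choice(["core", "hinge", "wheel", "brick"])
        i = random.randrange(len(child["brain"]))        # brain mutation
        child["brain"][i] += random.gauss(0.0, 0.1)
        return child

    def evaluate(genome):
        # Placeholder fitness: in RoboGen this would come from the physics
        # simulation (e.g. distance travelled by the simulated robot).
        return sum(genome["brain"]) - genome["body"].count("brick")

    # Simple (mu + lambda)-style evolutionary loop.
    population = [random_genome() for _ in range(20)]
    for generation in range(50):
        offspring = [mutate(random.choice(population)) for _ in range(20)]
        population = sorted(population + offspring, key=evaluate, reverse=True)[:20]

    print("best fitness:", evaluate(population[0]))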

The Zurich Urban Micro Aerial Vehicle Dataset for Appearance-based Localization, Visual Odometry, and SLAM

This is the world’s first dataset recorded on board a camera-equipped Micro Aerial Vehicle (MAV) flying within urban streets at low altitude (5-15 meters above the ground). The 2 km dataset consists of time-synchronized high-resolution aerial images, GPS and IMU sensor data, ground-level Street View images, and ground-truth data. The dataset is ideal for evaluating and benchmarking appearance-based topological localization, monocular visual odometry, simultaneous localization and mapping (SLAM), and online 3D reconstruction algorithms for MAVs in urban environments. Read more.
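
As a small illustration of working with time-synchronized streams like these, the sketch below associates each image timestamp with the nearest GPS and IMU sample using numpy. The timestamps, rates, and array names are assumptions for illustration and do not reflect the dataset's actual file layout.

    import numpy as np

    # Hypothetical timestamps (seconds); in practice these would be loaded
    # from the dataset's log files, whose exact layout is not shown here.
    image_t = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # camera at 20 Hz
    gps_t   = np.array([0.00, 0.20])                      # GPS at 5 Hz
    imu_t   = np.arange(0.0, 0.21, 0.01)                  # IMU at 100 Hz

    def nearest_index(query_t, stream_t):
        """Index of the stream sample closest in time to each query timestamp."""
        idx = np.searchsorted(stream_t, query_t)
        idx = np.clip(idx, 1, len(stream_t) - 1)
        left, right = stream_t[idx - 1], stream_t[idx]
        return np.where(query_t - left <= right - query_t, idx - 1, idx)

    gps_for_image = nearest_index(image_t, gps_t)
    imu_for_image = nearest_index(image_t, imu_t)
    print(gps_for_image, imu_for_image)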


The Event-Camera Dataset and Simulator

This is the world’s first collection of datasets recorded with an event-based camera for high-speed robotics. The data also include intensity images, inertial measurements, and ground truth from a motion-capture system (Scaramuzza and Delbruck Labs). Read more.
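
The sketch below shows how per-event data of this kind can be handled. It assumes a plain-text file with one event per line in the form "timestamp x y polarity" and a DAVIS240-sized sensor; both are assumptions to be checked against the files you actually download.

    import numpy as np

    def load_events(path):
        """Load events stored as 'timestamp x y polarity' per line (assumed layout)."""
        data = np.loadtxt(path)
        t   = data[:, 0]                # seconds
        x   = data[:, 1].astype(int)    # pixel column
        y   = data[:, 2].astype(int)    # pixel row
        pol = data[:, 3].astype(int)    # 1 = brightness increase, 0 = decrease
        return t, x, y, pol

    def accumulate_frame(x, y, pol, width=240, height=180):
        """Accumulate signed event counts into an image (DAVIS240-sized by default)."""
        frame = np.zeros((height, width), dtype=np.int32)
        np.add.at(frame, (y, x), np.where(pol > 0, 1, -1))
        return frame

    # Example usage (file name is a placeholder):
    # t, x, y, pol = load_events("events.txt")
    # img = accumulate_frame(x, y, pol)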


Information Gain Based Active Reconstruction Framework

The Information Gain Based Active Reconstruction Framework is a modular, robot-agnostic software package for performing next-best-view planning for volumetric object reconstruction using a range sensor (Scaramuzza Lab). Read more.
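
To convey the next-best-view idea, here is a simplified Python sketch that scores candidate sensor poses by how many still-unknown voxels they would observe and picks the best one. The voxel grid, visibility test, and gain definition are deliberately crude stand-ins for the framework's actual information-gain formulations.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy volumetric map: 0 = unknown, 1 = already observed.
    grid = np.zeros((20, 20, 20), dtype=np.uint8)
    grid[5:15, 5:15, 0:5] = 1        # pretend the bottom of the object is mapped

    def visible_voxels(view, grid, radius=8.0):
        """Very crude visibility model: all voxels within `radius` of the view point."""
        idx = np.argwhere(grid >= 0)                 # all voxel indices
        dists = np.linalg.norm(idx - view, axis=1)
        return idx[dists < radius]

    def information_gain(view, grid):
        """Number of currently unknown voxels the view would cover."""
        vis = visible_voxels(view, grid)
        return np.sum(grid[vis[:, 0], vis[:, 1], vis[:, 2]] == 0)

    # Candidate views sampled around the object (placeholder sampling strategy).
    candidates = rng.uniform(0, 20, size=(50, 3))
    gains = [information_gain(v, grid) for v in candidates]
    best_view = candidates[int(np.argmax(gains))]
    print("next best view:", best_view, "expected gain:", max(gains))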


Fisheye and Catadioptric Synthetic Datasets for Visual Odometry

We provide two synthetic scenes (a vehicle moving in a city, and a flying robot hovering in a confined room). For each scene, three different optics were used (perspective, fisheye, and catadioptric) with the same sensor, keeping the image resolution constant (Scaramuzza Lab). Read more.


Indoor Dataset of Quadrotor with Down-Looking Camera

This dataset contains the raw images, IMU measurements, and ground-truth poses of a quadrotor flying a circular trajectory in an office-sized environment. Download Dataset.

REMODE: Real-time, Probabilistic, Monocular, Dense Reconstruction

REMODE is a novel method to estimate dense and accurate depth maps from a single moving camera. Download code.
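
As a rough illustration of the recursive, per-pixel depth estimation that dense monocular reconstruction relies on, the sketch below fuses noisy depth measurements for one pixel by inverse-variance weighting. REMODE's actual probabilistic filter is more elaborate (it models outlier measurements explicitly), so treat this purely as a conceptual stand-in, not the method's implementation.

    # Recursive fusion of noisy depth measurements for a single pixel
    # (inverse-variance weighting of Gaussians). REMODE's real filter also
    # maintains an inlier probability to reject outlier measurements.

    def fuse(depth, var, z, z_var):
        """Fuse the running estimate (depth, var) with a measurement (z, z_var)."""
        new_var = 1.0 / (1.0 / var + 1.0 / z_var)
        new_depth = new_var * (depth / var + z / z_var)
        return new_depth, new_var

    depth, var = 2.0, 1.0        # initial guess: 2 m with large uncertainty
    measurements = [(1.45, 0.05), (1.52, 0.04), (1.48, 0.03)]   # (depth, variance)

    for z, z_var in measurements:
        depth, var = fuse(depth, var, z, z_var)

    print(f"fused depth: {depth:.3f} m, variance: {var:.4f}")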


SVO: Semi-direct Visual Odometry

SVO is a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. Download code.


ROS Driver and Calibration Tool for the Dynamic Vision Sensor (DVS)

The RPG DVS ROS Package allows the use of the Dynamic Vision Sensor (DVS) within the Robot Operating System (ROS). It also contains a calibration tool for intrinsic and stereo calibration using a blinking pattern. Download code.
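
A minimal rospy node consuming the driver's event stream might look like the sketch below. The topic name /dvs/events and the dvs_msgs/EventArray message type follow the package's usual conventions but are assumptions that should be verified against the version you install.

    #!/usr/bin/env python
    # Minimal ROS node that counts DVS events per EventArray message.
    # Topic and message names are assumptions based on the rpg_dvs_ros package.
    import rospy
    from dvs_msgs.msg import EventArray

    def on_events(msg):
        n_on = sum(1 for e in msg.events if e.polarity)
        rospy.loginfo("received %d events (%d ON, %d OFF)",
                      len(msg.events), n_on, len(msg.events) - n_on)

    if __name__ == "__main__":
        rospy.init_node("dvs_event_counter")
        rospy.Subscriber("/dvs/events", EventArray, on_events, queue_size=10)
        rospy.spin()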


A Monocular Pose Estimation System based on Infrared LEDs

Mutual localization is a fundamental component for multi-robot missions. Our monocular pose estimation system consists of multiple infrared LEDs and a camera with an infrared-pass filter. The LEDs are attached to the robot that we want to track, while the observing robot is equipped with the camera.

The code with instructions on how to use it is hosted on GitHub.
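
Conceptually, the pipeline detects the bright LED blobs in the infrared image and solves a perspective-n-point problem against the known LED positions on the target robot. The sketch below illustrates that idea with OpenCV; the threshold, LED geometry, camera intrinsics, and correspondence ordering are placeholders, and this is not the released system's implementation.

    import cv2          # assumes OpenCV >= 4
    import numpy as np

    # Known 3D positions of the infrared LEDs on the target robot, in its
    # body frame (metres). These values are placeholders for illustration.
    led_positions = np.array([
        [ 0.10,  0.00, 0.00],
        [-0.10,  0.00, 0.00],
        [ 0.00,  0.10, 0.00],
        [ 0.00, -0.10, 0.00],
        [ 0.00,  0.00, 0.05],
    ], dtype=np.float64)

    # Placeholder pinhole intrinsics of the observing camera.
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0,   0.0,   1.0]])
    dist = np.zeros(5)

    def detect_led_blobs(ir_image):
        """Find bright blob centroids in the IR image (threshold is illustrative)."""
        _, binary = cv2.threshold(ir_image, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centers.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        return np.array(centers, dtype=np.float64)

    def estimate_pose(image_points):
        """Solve PnP given image points ordered to match led_positions."""
        ok, rvec, tvec = cv2.solvePnP(led_positions, image_points, K, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        return (rvec, tvec) if ok else None

In the real system, establishing which detected blob corresponds to which LED is a key part of the problem; the sketch simply assumes that ordering is known.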


Torque Control of a KUKA youBot Arm

Existing control schemes for the KUKA youBot arm, such as directly controlling joint positions or velocities, are not suited for close tracking of end effector trajectories. A torque controller, based on the dynamical model of the youBot arm, was implemented to overcome this limitation. Complementary to the controller, a framework to automatically generate trajectories was developed.
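
The controller follows the standard computed-torque (inverse dynamics) structure, tau = M(q) * (qdd_ref + Kd * de + Kp * e) + C(q, qd) * qd + g(q), where e is the joint-position error and qdd_ref the reference acceleration. A minimal Python sketch of that control law is shown below; the dynamics terms M, C, and g are placeholder functions standing in for the youBot's identified dynamical model, and the gains are illustrative.

    import numpy as np

    N_JOINTS = 5    # the KUKA youBot arm has 5 joints

    # Placeholder dynamics terms; in the real controller these come from the
    # identified dynamical model of the youBot arm.
    def M(q):      return np.eye(N_JOINTS)                  # inertia matrix
    def C(q, qd):  return np.zeros((N_JOINTS, N_JOINTS))    # Coriolis/centrifugal
    def g(q):      return np.zeros(N_JOINTS)                # gravity torques

    Kp = np.diag([50.0] * N_JOINTS)    # illustrative gains
    Kd = np.diag([5.0] * N_JOINTS)

    def computed_torque(q, qd, q_des, qd_des, qdd_des):
        """Inverse-dynamics control law for joint-space trajectory tracking."""
        e  = q_des - q
        ed = qd_des - qd
        qdd_ref = qdd_des + Kd @ ed + Kp @ e
        return M(q) @ qdd_ref + C(q, qd) @ qd + g(q)

    # Example call with zero state and a small desired offset on joint 1:
    tau = computed_torque(np.zeros(N_JOINTS), np.zeros(N_JOINTS),
                          np.array([0.1, 0, 0, 0, 0]), np.zeros(N_JOINTS),
                          np.zeros(N_JOINTS))
    print(tau)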

The code with instructions on how to use it is hosted on GitHub. Details are provided in the Master Thesis of Benjamin Keiser.

Authors: Benjamin Keiser, Matthias Faessler, Elias Mueggler

Reference

B. Keiser, E. Mueggler, M. Faessler, D. Scaramuzza. Torque Control of a KUKA youBot Arm. Master Thesis, University of Zurich, September 2013. [ PDF ]


Dataset: Air-Ground Matching of Airborne images with Google Street View data

Matching airborne images to ground-level ones is a challenging problem: extreme changes in viewpoint and scale occur between the aerial Micro Aerial Vehicle (MAV) images and the ground-level images, in addition to the challenges already present in ground visual search algorithms used in UGV applications, such as illumination changes, lens distortion, seasonal variation of the vegetation, and scene changes between the query and the database images.

Our dataset consists of image data captured with a small quadrocopter flying in the streets of Zurich (up to 15 meters above the ground), along a path of 2 km, including: (1) aerial MAV images, (2) ground-level Google Street View images, (3) a ground-truth confusion matrix, and (4) GPS data (geotags) for every database image. Read more or download the dataset.
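
For evaluation, appearance-based matching is often summarized as a similarity (confusion) matrix between every aerial query image and every Street View database image, which can then be compared against the ground-truth matrix. The sketch below computes such a matrix from per-image global descriptors; the descriptors are random placeholders, not the method actually evaluated on this dataset.

    import numpy as np

    # Hypothetical global descriptors, one row per image (e.g. a bag-of-words
    # or CNN embedding; the descriptors used with this dataset may differ).
    rng = np.random.default_rng(0)
    aerial_desc = rng.normal(size=(100, 256))   # 100 MAV images
    street_desc = rng.normal(size=(400, 256))   # 400 Street View images

    # Cosine-similarity confusion matrix: rows = aerial queries, cols = database.
    a = aerial_desc / np.linalg.norm(aerial_desc, axis=1, keepdims=True)
    s = street_desc / np.linalg.norm(street_desc, axis=1, keepdims=True)
    similarity = a @ s.T

    # Top-1 retrieval: best-matching Street View image for each aerial query.
    best_match = np.argmax(similarity, axis=1)
    print(similarity.shape, best_match[:10])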


Perspective 3-Point (P3P) Algorithm

The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of a camera in the world reference frame from three 2D-3D point correspondences. Most solutions attempt to first solve for the position of the points in the camera reference frame, and then compute the point aligning transformation between the camera and the world frame. In contrast, this work proposes a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent backsubstitution directly leads to the corresponding camera poses with respect to the world reference frame. The superior computational efficiency is particularly suitable for any RANSAC-outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution. Read more or download the C/C++ code.
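
As a quick way to experiment with P3P in practice, OpenCV ships a generic P3P solver that returns the up-to-four candidate poses from three correspondences. The sketch below uses it as a stand-in (it is not the authors' implementation) and assumes OpenCV 3.3 or newer; the point coordinates and intrinsics are illustrative values.

    import cv2
    import numpy as np

    # Three known 3D points in the world frame (metres); values are illustrative.
    object_points = np.array([[0.0, 0.0, 0.0],
                              [1.0, 0.0, 0.0],
                              [0.0, 1.0, 0.0]], dtype=np.float64)

    # Their observed 2D projections in the image (pixels); values are illustrative.
    image_points = np.array([[320.0, 240.0],
                             [420.0, 245.0],
                             [325.0, 140.0]], dtype=np.float64)

    # Placeholder pinhole intrinsics (no lens distortion).
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0,   0.0,   1.0]])
    dist = np.zeros(5)

    # solveP3P returns every geometrically valid solution (up to four); a fourth
    # correspondence or a RANSAC loop is normally used to disambiguate them.
    n_solutions, rvecs, tvecs = cv2.solveP3P(object_points, image_points, K, dist,
                                             flags=cv2.SOLVEPNP_P3P)
    for rvec, tvec in zip(rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)
        print("camera pose candidate:\nR =", R, "\nt =", tvec.ravel())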


OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab

OCamCalib is an omnidirectional camera calibration toolbox for Matlab (Windows, MacOS, and Linux) for catadioptric and fisheye cameras with a field of view of up to 195 degrees.
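
The toolbox fits Scaramuzza's polynomial camera model, in which a pixel's distance from the image center is mapped by a polynomial to the axial component of its viewing ray. The Python sketch below back-projects a pixel to a ray direction using that model; the polynomial coefficients, image center, and the omission of the affine distortion term are placeholders and simplifications, so use the values produced by your own calibration.

    import numpy as np

    # Placeholder calibration values; real values come from running OCamCalib.
    poly_coeffs = np.array([-180.0, 0.0, 1.5e-3, -2.0e-7, 6.0e-10])   # a0..a4
    center = np.array([512.0, 384.0])                                 # (u0, v0) px

    def pixel_to_ray(pixel):
        """Back-project a pixel to a viewing-ray direction with the polynomial
        model f(rho) = a0 + a1*rho + ... + a4*rho^4 (affine distortion ignored)."""
        u, v = pixel[0] - center[0], pixel[1] - center[1]
        rho = np.hypot(u, v)
        z = np.polyval(poly_coeffs[::-1], rho)   # polyval wants highest power first
        ray = np.array([u, v, z])
        return ray / np.linalg.norm(ray)

    print(pixel_to_ray((700.0, 400.0)))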

Code, tutorials, and datasets can be found here.

Author: Davide Scaramuzza

More Resources