We present an accurate, efficient, and robust pose estimation system based on infrared LEDs mounted on a target object and observed by a camera equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. We release our implementation as open-source software.
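The following is a minimal sketch, under assumed values, of the final estimation stage described in this abstract: solving P3P from LED-to-image correspondences with OpenCV and refining the pose by minimizing the reprojection error. The LED layout, intrinsics, and detections are hypothetical placeholders, not values from the paper, and this is not the released implementation.

```python
import numpy as np
import cv2

# 3D LED positions in the target-object frame (metres); hypothetical layout.
led_points = np.array([[ 0.10,  0.00, 0.00],
                       [ 0.00,  0.12, 0.00],
                       [-0.10,  0.00, 0.00],
                       [ 0.00, -0.12, 0.02]], dtype=np.float64)
# Matched 2D detections in the infrared image (pixels); hypothetical values.
image_points = np.array([[412.3, 301.7],
                         [356.1, 240.2],
                         [298.6, 305.4],
                         [356.8, 368.9]], dtype=np.float64)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume an already-undistorted image

# OpenCV's P3P solver needs exactly four correspondences.
ok, rvec, tvec = cv2.solvePnP(led_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_P3P)
if ok:
    # Nonlinear refinement that minimizes the reprojection error.
    rvec, tvec = cv2.solvePnPRefineLM(led_points, image_points, K, dist, rvec, tvec)
    print("rotation vector:", rvec.ravel(), "translation:", tvec.ravel())
```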
Posted on: June 16, 2014
We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need for costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high-frame-rate motion estimation brings increased robustness in scenes with little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.
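As a rough illustration of the semi-direct idea (not the SVO codebase), the sketch below evaluates the photometric error of sparse 3D points reprojected under a candidate relative pose; in the actual method such residuals are minimized over the pose, typically over small patches and in a coarse-to-fine scheme. The pinhole projection and the function names are assumptions.

```python
import numpy as np

def photometric_residuals(ref_img, cur_img, pts_3d, ref_px, T_cur_ref, K):
    """Intensity differences between reference pixels and their reprojections."""
    residuals = []
    R, t = T_cur_ref[:3, :3], T_cur_ref[:3, 3]
    for X, (u_ref, v_ref) in zip(pts_3d, ref_px):
        Xc = R @ np.asarray(X) + t            # point in the current camera frame
        u, v, w = K @ Xc                      # pinhole projection (assumed model)
        u, v = u / w, v / w
        if 0 <= int(v) < cur_img.shape[0] and 0 <= int(u) < cur_img.shape[1]:
            residuals.append(float(cur_img[int(v), int(u)]) -
                             float(ref_img[int(v_ref), int(u_ref)]))
    return np.array(residuals)
```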
Posted on: June 16, 2014
The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor that produces asynchronous events as luminance changes are perceived by its pixels, makes it possible to build a sensing pipeline with a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide absolute grayscale values but only changes in luminance, and because its output is a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera that provides the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion.
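The sketch below shows only the asynchronous event data model and a common DVS preprocessing step, accumulating the events received since the previous CMOS frame into a signed event image; the paper's estimator instead updates the relative-displacement estimate per individual event, which is not reproduced here.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond resolution on a DVS)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate_events(events, t_frame, t_now, height, width):
    """Sum event polarities per pixel over the interval since the last CMOS frame."""
    img = np.zeros((height, width), dtype=np.float32)
    for ev in events:
        if t_frame <= ev.t < t_now:
            img[ev.y, ev.x] += ev.polarity
    return img
```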
Posted on: June 16, 2014
In this paper, we solve the problem of estimating dense and accurate depth maps from a single moving camera. A probabilistic depth measurement is carried out in real time on a per-pixel basis, and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress. Our contribution is a novel approach to depth map computation that combines Bayesian estimation and recent developments in convex optimization for image processing. We demonstrate that our method outperforms state-of-the-art techniques in terms of accuracy, while exhibiting high efficiency in memory usage and computing power. We call our approach REMODE (REgularized MOnocular Depth Estimation), and the CUDA-based implementation runs at 30 Hz on a laptop computer.
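As a simplified stand-in for the per-pixel Bayesian update described above, the sketch below fuses Gaussian depth measurements recursively and gates outliers by the running uncertainty; the paper's filter additionally models outlier measurements explicitly and is followed by the convex regularization step, neither of which is shown here.

```python
import numpy as np

def update_depth(mu, sigma2, z, tau2, gate=3.0):
    """Recursive Gaussian fusion of a new depth measurement z with variance tau2."""
    if mu is None:                        # first measurement initializes the filter
        return z, tau2
    if abs(z - mu) > gate * np.sqrt(sigma2 + tau2):
        return mu, sigma2                 # reject as an erroneous estimation
    s2 = 1.0 / (1.0 / sigma2 + 1.0 / tau2)
    m = s2 * (mu / sigma2 + z / tau2)
    return m, s2
```

Pixels whose variance drops below a threshold would be treated as converged and passed to the regularized depth map.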
Posted on: June 16, 2014
We propose a new method for the localization of a Micro Aerial Vehicle (MAV) with respect to a ground robot. We solve the problem of registering the 3D maps computed by the robots using different sensors: a dense 3D reconstruction from the MAV monocular camera is aligned with the map computed from the depth sensor on the ground robot. Once aligned, the dense reconstruction from the MAV is used to augment the map computed by the ground robot, extending it with the information conveyed by the aerial views. The overall approach is novel, as it builds on recent developments in live dense reconstruction from moving cameras to address the problem of air-ground localization. The core of our contribution is a novel algorithm integrating dense reconstructions from monocular views, Monte Carlo localization, and an iterative pose refinement. In spite of the radically different vantage points from which the maps are acquired, the proposed method achieves high accuracy where appearance-based, state-of-the-art approaches fail. Experimental validation in indoor and outdoor scenarios reported an accuracy in position estimation of 0.08 meters and real-time performance. This demonstrates that our new approach effectively overcomes the limitations imposed by the difference in sensors and vantage points that negatively affect previous techniques relying on matching visual features.
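For orientation on the Monte Carlo localization component named above, here is a generic predict/weight/resample step; in the actual method the particle weight comes from how well the MAV's dense reconstruction aligns with the ground robot's map, which is abstracted here as a hypothetical `alignment_likelihood` function.

```python
import numpy as np

def mcl_step(particles, motion, alignment_likelihood, motion_noise=0.05):
    """One predict/weight/resample step; particles is an (N, 3) array of (x, y, yaw)."""
    # Predict: apply the odometry increment with additive Gaussian noise.
    particles = particles + motion + np.random.normal(0.0, motion_noise, particles.shape)
    # Weight: score each pose hypothesis by the map-alignment likelihood.
    weights = np.array([alignment_likelihood(p) for p in particles], dtype=np.float64)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```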
Posted on: June 16, 2014
At the current state of the art, the agility of an autonomous flying robot is limited by its sensing pipeline, because the relatively high latency and low sampling frequency limit the aggressiveness of the control strategies that can be implemented. To obtain more agile robots, we need faster sensing pipelines. A Dynamic Vision Sensor (DVS) is a very different sensor from a normal CMOS camera: rather than providing discrete frames, the sensor output is a sequence of asynchronous, timestamped events, each describing a change in the perceived brightness at a single pixel. The latency of such sensors can be measured in microseconds, thus offering the theoretical possibility of creating a sensing pipeline whose latency is negligible compared to the dynamics of the platform. However, to use these sensors we must rethink the way we interpret visual data. This paper presents a method for low-latency pose tracking using a DVS and Active LED Markers (ALMs), which are LEDs blinking at high frequency (>1 kHz). The sensor's time resolution allows distinguishing different frequencies, thus avoiding the need for data association. This approach is compared to traditional pose tracking based on a CMOS camera. The DVS performance is not affected by fast motion, unlike the CMOS camera, which suffers from motion blur.
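A minimal sketch of the frequency discrimination that removes the data-association step: the blink rate at a pixel is estimated from the intervals between successive same-polarity events and matched to the nearest known marker frequency. The marker frequencies and tolerance below are hypothetical, not values from the paper.

```python
import numpy as np

def classify_marker(event_times, marker_freqs_hz=(1000.0, 1500.0, 2000.0), tol_hz=100.0):
    """Match a pixel's blink rate (from same-polarity event timestamps, in seconds)
    to the nearest known marker frequency, or return None."""
    if len(event_times) < 3:
        return None
    intervals = np.diff(np.sort(event_times))   # one blink period per same-polarity interval
    f_obs = 1.0 / np.median(intervals)
    best = min(marker_freqs_hz, key=lambda f: abs(f - f_obs))
    return best if abs(best - f_obs) < tol_hz else None
```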
Posted on: June 16, 2014
This paper presents a framework for collaborative localization and mapping with multiple Micro Aerial Vehicles (MAVs) in unknown environments. Each MAV estimates its motion individually using an onboard, monocular visual odometry algorithm. The system of MAVs acts as a distributed preprocessor that streams only features of selected keyframes and relative-pose estimates to a centralized ground station. The ground station creates an individual map for each MAV and merges them together whenever it detects overlaps. This allows the MAVs to express their positions in a common, global coordinate frame. The key to real-time performance is the design of data structures and processes that allow multiple threads to concurrently read and modify the same map. The presented framework is tested in both indoor and outdoor environments with up to three MAVs. To the best of our knowledge, this is the first work on real-time collaborative monocular SLAM and the first to be applied to MAVs.
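The sketch below illustrates, in a generic way, the concurrency pattern mentioned above (several threads reading and modifying the same map): writers take a lock, readers copy a snapshot. It is not the paper's data structure, which is designed for finer-grained concurrent access.

```python
import threading

class SharedMap:
    """A keyframe map safely shared between mapping and map-merging threads."""
    def __init__(self):
        self._keyframes = {}            # keyframe id -> (features, pose)
        self._lock = threading.Lock()

    def add_keyframe(self, kf_id, features, pose):
        with self._lock:                # writers take the lock
            self._keyframes[kf_id] = (features, pose)

    def snapshot(self):
        with self._lock:                # readers copy a consistent view
            return dict(self._keyframes)
```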
Posted on: June 16, 2014
We tackle the problem of globally localizing a camera-equipped micro aerial vehicle flying within urban environments for which a Google Street View image database exists. To avoid the caveats of current image-search algorithms in the case of severe viewpoint changes between the query and the database images, we propose to generate virtual views of the scene, which exploit the air-ground geometry of the system. To limit the computational complexity of the algorithm, we rely on a histogram-voting scheme to select the best putative image correspondences. The proposed approach is tested on a 2 km image dataset captured with a small quadrotor flying in the streets of Zurich. The success of our approach shows that our new air-ground matching algorithm can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and seasonal variations, thus outperforming conventional visual place-recognition approaches.
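A simplified illustration of histogram voting for candidate selection: each tentative feature match casts a vote for its database image, and only the top-scoring images are passed on to more expensive verification. The exact binning used in the paper may differ; this is the generic form of the scheme.

```python
from collections import Counter

def select_candidates(tentative_matches, top_k=5):
    """tentative_matches: iterable of (query_feature_id, database_image_id) pairs."""
    votes = Counter(db_img for _, db_img in tentative_matches)
    return [img for img, _ in votes.most_common(top_k)]
```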
Posted on: June 16, 2014
This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-viewpoint camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that if the camera undergoes a random uniform motion, then the pairwise correlation of any pixel pair is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from nonmetric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how do we do so?). We show that the observability depends both on the local geometric properties (curvature) and on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from nonmetric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pinhole, fish-eye, omnidirectional), and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
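As a hedged sketch of the measurement step together with a classical (metric, Euclidean) MDS baseline: the pairwise luminance correlations are computed directly, and a standard double-centering embedding is shown for comparison. The paper's actual problem is harder, since the similarities are an unknown function of the spherical distances and the embedding lives on the visual sphere.

```python
import numpy as np

def pairwise_correlation(luminance):
    """luminance: (num_pixels, num_frames) array of per-pixel intensity signals."""
    return np.corrcoef(luminance)

def classical_mds(similarity, dim=3):
    """Euclidean embedding from a similarity matrix, treating (1 - similarity) as distance."""
    d2 = (1.0 - similarity) ** 2
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ d2 @ J                      # double centering
    w, v = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]
    return v[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```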
Posted on: June 16, 2014
Programmable self-assembly of chained modules holds potential for the automatic shape formation of morphologically adapted robots. However, current systems are limited to modules of uniform rigidity, which restricts the range of obtainable morphologies and thus the functionalities of the system. To address this limitation, we previously introduced soft cells as modules that can be given different mechanical softness pre-settings. We showed that such a system can obtain a higher diversity of morphologies compared to state-of-the-art systems, and we illustrated the system's potential by demonstrating the self-assembly of complex morphologies. In this paper, we extend our previous work and present an automatic method that exploits our system's capabilities in order to find a linear chain of soft cells that self-folds into a target 2-D shape.
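As a minimal geometric sketch of the object being searched for, the function below unrolls a linear chain of cells, each contributing a segment and a fold angle, into the 2-D curve it would form; the paper's search over cell softness and ordering is not shown, and the lengths and angles are purely illustrative.

```python
import numpy as np

def fold_chain(segment_lengths, fold_angles_rad):
    """Return the 2-D vertex positions of a chain folded by the given joint angles."""
    pts = [np.zeros(2)]
    heading = 0.0
    for length, angle in zip(segment_lengths, fold_angles_rad):
        heading += angle
        pts.append(pts[-1] + length * np.array([np.cos(heading), np.sin(heading)]))
    return np.array(pts)
```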
Posted on: May 19, 2014