The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments towards this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine’s actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices and when ErrP decoding is combined with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches.
Posted on: July 4, 2014
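To make the time-locked, single-trial decoding mentioned above concrete, here is a minimal sketch assuming a generic pipeline (not the authors' specific one): EEG is epoched around each feedback event, the post-feedback samples serve as features, and a shrinkage-regularized LDA separates error from correct trials. All data below are placeholders.

```python
# Hypothetical single-trial ErrP decoding sketch; data are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def epoch(eeg, events, fs, tmin=0.2, tmax=0.6):
    """Cut a window tmin..tmax s after each event from eeg (channels x samples)."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, e + lo:e + hi].ravel() for e in events])

fs = 256
eeg = np.random.randn(8, 60 * fs)            # placeholder EEG: 8 channels, 60 s
events = np.arange(fs, 59 * fs, 2 * fs)      # placeholder feedback onsets (sample indices)
labels = np.arange(len(events)) % 2          # placeholder labels: 1 = error, 0 = correct

X = epoch(eeg, events, fs)                   # one feature vector per trial
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```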
Legged robots come in a range of sizes and capabilities. By combining these robots into heterogeneous teams, joint locomotion and perception tasks can be achieved by utilizing the diversified features of each robot. In this work we present a framework for using a heterogeneous team of legged robots to detect slippery terrain. StarlETH, a large and highly capable quadruped, uses the VelociRoACH as a novel remote probe to detect regions of slippery terrain. StarlETH localizes the team using internal state estimation. To classify slippage of the VelociRoACH, we develop several Support Vector Machines (SVMs) based on data from both StarlETH and VelociRoACH. By combining the team’s information about the motion of the VelociRoACH, a classifier was built that could detect slippery spots with 92% (125/135) accuracy using only four features.
Posted on: June 30, 2014
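A rough sketch of the classification step follows (the four features below are illustrative placeholders, not necessarily the features used in the paper): an SVM is trained on motion features of the VelociRoACH and evaluated on held-out trials.

```python
# Hypothetical slip classifier; features and data are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Four placeholder motion features (e.g. forward velocity, heading drift,
# vibration energy, commanded-vs-observed stride ratio) -- illustrative only.
X = rng.normal(size=(n, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=n) > 0).astype(int)  # 1 = slip

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```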
In this paper we study the automatic synthesis of robotic controllers for the coordinated movement of multiple mobile robots. The algorithm used to learn the controllers is a noise-resistant version of Particle Swarm Optimization (PSO), which is applied in two different settings: centralized and distributed learning. In centralized learning, every robot runs the same controller and the performance is evaluated with a global metric. In distributed learning, robots run different controllers and the performance is evaluated independently on each robot with a local metric. Our results from learning in simulation show that it is possible to learn a cooperative task in a fully distributed way employing a local metric, and we validate the simulations with real robot experiments in which the best solutions from distributed and centralized learning achieve similar performance.
Posted on: June 27, 2014
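The difference between the two settings can be sketched as follows (a minimal abstraction assuming a generic PSO update and a toy fitness, not the actual simulation): in the centralized case one candidate controller is copied to every robot and scored with a single global metric, while in the distributed case each robot evaluates its own candidate with a local metric.

```python
# Toy abstraction of centralized vs. distributed evaluation; not the actual simulation.
import numpy as np

def simulate_team(controllers):
    """Placeholder multi-robot run; returns one local score per robot."""
    return [-float(np.sum(c ** 2)) for c in controllers]   # toy fitness, higher is better

def evaluate_centralized(candidate, n_robots=4):
    # Every robot runs the same controller; the score is one global (team) metric.
    return float(np.mean(simulate_team([candidate] * n_robots)))

def evaluate_distributed(candidates):
    # Each robot runs a different candidate; each gets its own local metric.
    return simulate_team(candidates)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO velocity/position update (assumed standard form)."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

print(evaluate_centralized(np.array([0.5, -0.2])))
print(evaluate_distributed([np.array([0.5, -0.2]), np.array([0.1, 0.1])]))
```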
Population-based learning techniques have proven effective in dealing with noise and are thus promising tools for the optimization of robotic controllers, which have inherently noisy performance evaluations. This article discusses how the results and guidelines derived from tests on benchmark functions can be extended to the fitness distributions encountered in robotic learning. We show that the large-amplitude noise found in robotic evaluations is disruptive to the initial phases of the learning process of Particle Swarm Optimization (PSO). Under these conditions, neither increasing the population size nor increasing the number of iterations is an efficient strategy to improve the performance of the learning. We also show that PSO is more sensitive to good spurious evaluations of bad solutions than to bad evaluations of good solutions, i.e., there is a non-symmetric effect of noise on the performance of the learning.
Posted on: June 27, 2014
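One common way to make PSO noise-resistant, consistent with the asymmetry described above, is to re-evaluate personal bests and aggregate their fitness so that a single lucky evaluation cannot permanently promote a bad solution. The sketch below shows this generic scheme (an assumption, not necessarily the exact variant used in the article).

```python
# Generic noise-resistant bookkeeping for a personal best (assumed scheme).
import numpy as np

class NoisyBest:
    def __init__(self, position, fitness):
        self.position = position
        self.mean_fitness = fitness      # running average of noisy evaluations
        self.n_evals = 1

    def reevaluate(self, noisy_fitness_fn):
        # Re-evaluating and averaging damps spurious good evaluations of bad solutions.
        f = noisy_fitness_fn(self.position)
        self.n_evals += 1
        self.mean_fitness += (f - self.mean_fitness) / self.n_evals

    def challenge(self, position, fitness):
        # A candidate replaces the best only if its (noisy) fitness beats the
        # aggregated estimate; fitness is maximized here.
        if fitness > self.mean_fitness:
            self.position, self.mean_fitness, self.n_evals = position, fitness, 1

noisy_f = lambda p: -float(np.sum(p ** 2)) + np.random.normal(scale=0.5)
best = NoisyBest(np.zeros(2), noisy_f(np.zeros(2)))
for _ in range(10):
    best.reevaluate(noisy_f)
print(best.mean_fitness)
```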
Programmable self-assembly of chained robotic systems holds potential for the automatic construction of complex robots from a minimal set of building blocks. However, current robotic platforms are limited to modules of uniform rigidity, which results in a limited range of obtainable morphologies and thus functionalities of the system. To address this limitation, we investigate the role of softness in a programmed self-assembling chain system. We rely on a model system consisting of “soft cells” as modules that can be preset to different levels of mechanical softness. Starting from a linear chain configuration, the system self-folds into a target morphology based on the intercellular interactions. We systematically investigate the influence of the mechanical softness of the individual cells on the self-assembly process. We also test the hypothesis that a mixed distribution of cells of different softness enhances the diversity of achievable morphologies at a given resolution compared to systems with modules of uniform rigidity. Finally, we illustrate the potential of our system by the programmable self-assembly of complex and curvilinear morphologies that state-of-the-art systems can only achieve by significantly increasing their number of modules.
Posted on: June 23, 2014
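As a highly simplified illustration of why mixed softness can widen the set of reachable morphologies (an assumed abstraction, not the paper's physical model): treat each cell as a joint whose maximum bend angle is set by its softness, and check whether a target morphology, discretized into per-joint angles, stays within those limits.

```python
# Assumed abstraction: softness sets a cell's maximum bend angle.
import numpy as np

SOFTNESS_LIMIT_DEG = {"stiff": 15.0, "medium": 45.0, "soft": 90.0}   # illustrative values

def reachable(target_angles_deg, cell_softness):
    """True if every required joint angle is within the corresponding cell's limit."""
    limits = np.array([SOFTNESS_LIMIT_DEG[s] for s in cell_softness])
    return bool(np.all(np.abs(target_angles_deg) <= limits))

# A curvilinear target: a gentle arc followed by a tight hook.
target = np.array([10.0, 10.0, 20.0, 70.0, 80.0])
print(reachable(target, ["stiff"] * 5))                                 # uniform chain: False
print(reachable(target, ["stiff", "stiff", "medium", "soft", "soft"]))  # mixed chain: True
```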
Due to technological limitations, robot actuators are often designed for specific tasks with narrow performance goals, whereas a wide range of outputs and behaviours is necessary for robots to operate autonomously in uncertain, complex environments. We present a design framework that employs dynamic couplings in the form of brakes and clutches to increase the performance and diversity of linear actuators. The couplings are used to switch between a diverse range of discrete modes of operation within a single actuator. We also provide a design solution for miniaturized couplings that use dry friction to produce rapid switching and high braking forces. The couplings are designed so that, once engaged or disengaged, no extra energy is consumed. We apply the design framework and coupling design to a linear series elastic actuator (SEA) and show that this relatively simple implementation increases the performance of the standard design and adds new behaviours to it. Through a number of performance tests we show that the actuator switches rapidly between a high- and a low-impedance output mode, that its spring can be charged to produce short bursts of high output power, and that it has additional passive and rigid modes that consume no power once activated. Robots using actuators from this design framework would see a vast increase in their behavioural diversity and improvements in their performance not yet possible with conventional actuator design.
Posted on: June 18, 2014
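The mode-switching idea can be sketched as a table of coupling states (the mode names follow the behaviours listed above, but the mapping to clutch/brake states is an assumption, not the actual hardware schematic):

```python
# Mode table; the coupling layout and mode-to-state mapping are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class CouplingState:
    motor_clutch_engaged: bool   # couples the motor to the spring/output train
    output_brake_engaged: bool   # locks the output

# Once a dry-friction coupling is (dis)engaged it holds its state without power,
# so remaining in any mode costs no extra energy.
MODES = {
    "high_impedance":  CouplingState(motor_clutch_engaged=True,  output_brake_engaged=False),
    "low_impedance":   CouplingState(motor_clutch_engaged=False, output_brake_engaged=False),
    "spring_charging": CouplingState(motor_clutch_engaged=True,  output_brake_engaged=True),
    "rigid_hold":      CouplingState(motor_clutch_engaged=False, output_brake_engaged=True),
}

print("to charge the spring:", MODES["spring_charging"])
```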
Several design strategies for rehabilitation robotics have aimed to improve patients’ experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients’ intentions and supports the intended actions. A ‘virtual kitchen’ scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system’s usability and potential therapeutic benefits.
Posted on: June 18, 2014
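A minimal sketch of gaze-based intention detection (a simple dwell-time rule, which is an assumption; the paper's detector and its sensor fusion are not reproduced here): when gaze samples stay on one virtual-kitchen object for longer than a threshold, that object's action is reported as the intended one so the robot can support it.

```python
# Dwell-time rule (assumed); thresholds are illustrative.
from collections import deque

DWELL_S = 0.8          # assumed dwell threshold
SAMPLE_RATE_HZ = 60    # assumed eye-tracker rate

class DwellDetector:
    def __init__(self):
        self.window = deque(maxlen=int(DWELL_S * SAMPLE_RATE_HZ))

    def update(self, gazed_object):
        """Feed the object under the current gaze sample; return the intention or None."""
        self.window.append(gazed_object)
        if len(self.window) == self.window.maxlen and len(set(self.window)) == 1:
            return self.window[0]        # stable gaze -> intended object/action
        return None

detector = DwellDetector()
intention = None
for sample in ["cup"] * 48:              # 48 samples = 0.8 s at 60 Hz
    intention = detector.update(sample)
print("detected intention:", intention)
```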
It is still not known how the ‘rudimentary’ movements of fetuses and infants are transformed into the coordinated, flexible and adaptive movements of adults. In addressing this important issue, we consider a behavior that has been perennially viewed as a functionless by-product of a dreaming brain: the jerky limb movements called myoclonic twitches. Recent work has identified the neural mechanisms that produce twitching as well as those that convey sensory feedback from twitching limbs to the spinal cord and brain. In turn, these mechanistic insights have helped inspire new ideas about the functional roles that twitching might play in the self-organization of spinal and supraspinal sensorimotor circuits. Striking support for these ideas is coming from the field of developmental robotics: when twitches are mimicked in robot models of the musculoskeletal system, the basic neural circuitry undergoes self-organization. Mutually inspired biological and synthetic approaches promise not only to produce better robots, but also to solve fundamental problems concerning the developmental origins of sensorimotor maps in the spinal cord and brain.
Posted on: June 18, 2014
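The self-organization claim can be illustrated with a toy model (Hebbian correlation learning is an assumed mechanism here, not a summary of the reviewed robot models): random, discrete twitch commands drive a simulated body, and correlating each command with its sensory feedback gradually builds a sensorimotor map.

```python
# Toy model; Hebbian learning is an assumed mechanism for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_motor, n_sensor = 6, 6
# Unknown body mapping from motor units to sensory feedback (to be discovered).
body = np.roll(np.eye(n_motor), shift=2, axis=1) + 0.05 * rng.random((n_motor, n_sensor))

W = np.zeros((n_motor, n_sensor))                 # learned sensorimotor map
for _ in range(2000):
    twitch = np.zeros(n_motor)
    twitch[rng.integers(n_motor)] = 1.0           # a single, discrete twitch
    feedback = body.T @ twitch + 0.05 * rng.random(n_sensor)
    W += np.outer(twitch, feedback)               # Hebbian accumulation

# Each motor unit ends up most strongly associated with "its" sensor.
print(np.argmax(W, axis=1))
```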
Decoding the user’s intention from non-invasive EEG signals is a challenging problem. In this paper, we study the feasibility of predicting the goal for controlling a robot arm in self-paced reaching movements, i.e., spontaneous movements that do not require an external cue. Our proposed system continuously estimates the goal throughout a trial, starting before the movement onset, by online classification, and generates optimal trajectories for driving the robot arm to the estimated goal. Experiments using EEG signals of one healthy subject (right arm) yield smooth reaching movements of the simulated 7-degree-of-freedom KUKA robot arm in a planar center-out reaching task, with approximately 80% accuracy of reaching the actual goal.
Posted on: June 18, 2014
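A sketch of the decode-and-drive loop (the classifier is a placeholder and the minimum-jerk profile is an assumed choice for the "optimal trajectories", not the paper's exact components): the goal estimate is refreshed from each new EEG window and the simulated arm is driven smoothly toward the currently most probable goal.

```python
# Decoder is a placeholder; minimum-jerk is an assumed trajectory choice.
import numpy as np

GOALS = np.array([[0.3, 0.0], [0.0, 0.3], [-0.3, 0.0], [0.0, -0.3]])  # planar center-out targets

def minimum_jerk(start, goal, t, T):
    """Minimum-jerk position at time t of a point-to-point movement of duration T."""
    s = np.clip(t / T, 0.0, 1.0)
    return start + (goal - start) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def decode_goal(eeg_window):
    """Placeholder online decoder: returns a probability per goal."""
    scores = np.random.rand(len(GOALS))           # stands in for classifier output
    return scores / scores.sum()

start, t, T = np.zeros(2), 0.0, 1.0
for step in range(20):                            # one control tick per new EEG window
    probs = decode_goal(eeg_window=None)
    goal = GOALS[np.argmax(probs)]                # goal estimate is refreshed every tick
    t += T / 20
    pos = minimum_jerk(start, goal, t, T)         # smooth drive toward the current estimate
print("final end-effector position:", pos)        # ends at the last estimated goal
```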
This paper presents a new intrinsic calibration method that allows us to calibrate a generic single-viewpoint camera just by waving it around. From the video sequence obtained while the camera undergoes random motion, we compute the pairwise time correlation of the luminance signal for a subset of the pixels. We show that if the camera undergoes a random uniform motion, then the pairwise correlation of any pixel pair is a function of the distance between the pixel directions on the visual sphere. This leads to formalizing calibration as a problem of metric embedding from nonmetric measurements: we want to find the disposition of pixels on the visual sphere from similarities that are an unknown function of the distances. This problem is a generalization of multidimensional scaling (MDS) that has so far resisted a comprehensive observability analysis (can we reconstruct a metrically accurate embedding?) and a solid generic solution (how do we do so?). We show that the observability depends both on the local geometric properties (curvature) and on the global topological properties (connectedness) of the target manifold. We show that, in contrast to the Euclidean case, on the sphere we can recover the scale of the point distribution, therefore obtaining a metrically accurate solution from nonmetric measurements. We describe an algorithm that is robust across manifolds and can recover a metrically accurate solution when the metric information is observable. We demonstrate the performance of the algorithm for several cameras (pin-hole, fish-eye, omnidirectional), and we obtain results comparable to calibration using classical methods. Additional synthetic benchmarks show that the algorithm performs as theoretically predicted for all corner cases of the observability analysis.
Posted on: June 16, 2014
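A simplified sketch of the embedding step (with a strong assumption: here the monotone map from luminance correlation to angular distance is taken as known, whereas the paper treats it as unknown and solves the harder nonmetric problem): on the unit sphere the matrix of cosines of pairwise angular distances has rank 3, so the pixel directions can be recovered, up to a global rotation, from its top eigenvectors.

```python
# Simplified spherical embedding: assumes the correlation-to-distance map is known.
import numpy as np

rng = np.random.default_rng(1)
true_dirs = rng.normal(size=(200, 3))
true_dirs /= np.linalg.norm(true_dirs, axis=1, keepdims=True)    # pixel directions on the sphere

D = np.arccos(np.clip(true_dirs @ true_dirs.T, -1.0, 1.0))       # pairwise angular distances
G = np.cos(D)                                                    # rank-3 "Gram" matrix on the sphere
vals, vecs = np.linalg.eigh(G)
emb = vecs[:, -3:] * np.sqrt(vals[-3:])                          # top-3 factorization
emb /= np.linalg.norm(emb, axis=1, keepdims=True)                # project back onto the unit sphere

# The embedding matches the true directions up to a global rotation:
D_rec = np.arccos(np.clip(emb @ emb.T, -1.0, 1.0))
print("max pairwise-angle error (rad):", float(np.abs(D_rec - D).max()))
```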