Steady State Visually Evoked Potentials (SSVEP) are signals produced in the occipital part of the brain when someone gazes at a light flickering at a fixed frequency. These signals have been used for Brain Machine Interfacing (BMI), where one or more stimuli are presented and the system has to detect which stimulus the user is attending to. It has been proposed that the SSVEP signal is produced by superposition of Visually Evoked Potentials (VEP), but no model has demonstrated this. We propose a model of the SSVEP signal as a superposition of the responses to the rising and falling edges of the stimulus, which can be computed for different frequencies. The model is based on the phase between the stimulus and the SSVEP signal, assuming that this phase is stable over time. We fit the model for four volunteers who gazed at stimuli at frequencies of 9 Hz, 11 Hz, 13 Hz and 15 Hz, with duty cycles of 20%, 35%, 50%, 65% and 80%. The model parameters for each volunteer were found using data from the Oz electrode and a genetic algorithm. The proposed model is useful for finding the best duty cycle of the stimulus, and it could help in selecting a stimulus code other than a square wave. The model only considers one frequency at a time, but the results suggest that a more generic model could be found.
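As an illustration only (not the authors' implementation), the superposition model can be sketched as follows: each rising and each falling edge of the flickering stimulus evokes a template response, and the predicted SSVEP trace is the sum of these templates shifted to the edge times. The damped-sinusoid template and its parameters below are assumptions made for the sketch; in the paper the parameters are fitted per volunteer with a genetic algorithm on Oz-electrode data.

import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed for the sketch)

def edge_response(t, amp, decay, freq, phase):
    # Hypothetical VEP-like template evoked by a single stimulus edge (causal).
    return amp * np.exp(-decay * np.clip(t, 0, None)) * np.sin(2 * np.pi * freq * t + phase) * (t >= 0)

def ssvep_model(duration, stim_freq, duty_cycle, rise_params, fall_params):
    # Superpose one template per rising edge and one per falling edge of the stimulus.
    t = np.arange(0.0, duration, 1.0 / FS)
    period = 1.0 / stim_freq
    signal = np.zeros_like(t)
    for k in range(int(duration * stim_freq)):
        signal += edge_response(t - k * period, *rise_params)                         # rising edge
        signal += edge_response(t - k * period - duty_cycle * period, *fall_params)   # falling edge
    return t, signal

# Example: predicted trace for a 13 Hz stimulus with a 35% duty cycle.
t, y = ssvep_model(2.0, 13.0, 0.35, (1.0, 20.0, 10.0, 0.0), (0.6, 20.0, 10.0, np.pi))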
Posted on: December 20, 2016
Small-winged drones can face highly varied aerodynamic requirements, such as high manoeuvrability for flight among obstacles and high wind resistance for constant ground speed against strong headwinds, which cannot all be optimally addressed by a single aerodynamic profile. Several bird species solve this problem by changing the shape of their wings to adapt to different aerodynamic requirements. Here, we describe a novel morphing wing design composed of artificial feathers that can rapidly modify its geometry to fulfil different aerodynamic requirements. We show that a fully deployed configuration enhances manoeuvrability, while a folded configuration offers low drag at high speeds and is beneficial in strong headwinds. We also show that asymmetric folding of the wings can be used for roll control of the drone. The aerodynamic performance of the morphing wing is characterized in simulations and wind tunnel measurements, and validated in outdoor flights with a small drone.
Posted on: December 20, 2016
Research in brain-computer interfaces (BCIs) has achieved impressive progress towards implementing assistive technologies for restoration or substitution of lost motor capabilities, as well as supporting technologies for able-bodied subjects. Notwithstanding this progress, effective translation of these interfaces from proof-of-concept prototypes into reliable applications remains elusive. As a matter of fact, most current BCI systems cannot be used independently for long periods of time by their intended end-users. Multiple factors that impair achieving this goal have already been identified. However, it is not clear how they affect overall BCI performance or how they should be tackled. This is worsened by the publication bias whereby only positive results are disseminated, preventing the research community from learning from its errors. This paper is the result of a workshop held at the 6th International BCI meeting in Asilomar. We summarize here the discussion on concrete research avenues and guidelines that may help overcome common pitfalls and make BCIs a useful alternative communication device.
Posted on: December 18, 2016
Posted on: November 21, 2016
Dielectric elastomer actuators (DEAs), a soft actuator technology, hold great promise for biomimetic underwater robots. The high voltages required to drive DEAs can, however, make them challenging to use in water. This paper demonstrates a method to create DEA-based biomimetic swimming robots that operate reliably even in conductive liquids. We ensure the insulation of the high-voltage DEA electrodes without degrading actuation performance by laminating silicone layers. A fish and a jellyfish robot were fabricated and tested in water. The fish robot has a length of 120 mm and a mass of 3.8 g. The jellyfish robot has a diameter of 61 mm and a mass of 2.6 g. The measured swimming speeds for a periodic 3 kV drive voltage were 8 mm/s for the fish robot and 1.5 mm/s for the jellyfish robot.
Posted on: October 27, 2016
Search and rescue, autonomous construction, and many other semi-autonomous multi-robot applications can benefit from proximal interactions between an operator and a swarm of robots. Most research on proximal interaction is based on explicit communication techniques such as gesture and speech. This study proposes a new implicit proximal communication technique to approach the problem of robot selection. We use electroencephalography (EEG) signals to select the robot at which the operator is looking. This is achieved using the steady-state visually evoked potential (SSVEP), a repeatable neural response to a regularly blinking visual stimulus that varies predictably with the blinking frequency. In our experiments, each robot was equipped with LEDs blinking at a different frequency, and the operator's SSVEP neural response was extracted from the EEG signal to detect and select the robot without requiring any conscious action by the user. This study systematically investigates several parameters affecting the SSVEP neural response: the blinking frequency of the LED, the distance between the robot and the operator, and the color of the LED. Based on these parameters, we study two signal processing approaches and critically analyze their performance on 10 subjects controlling a set of physical robots. Our results show that despite numerous artifacts, it is possible to achieve a recognition rate higher than 85% on some subjects, while the average over the ten subjects was 75%.
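A minimal sketch of the selection step, assuming a generic spectral-power criterion (the paper compares two signal processing approaches of its own; this stand-in only illustrates the idea of picking the LED frequency with the strongest SSVEP response):

import numpy as np

def select_robot(eeg, fs, robot_freqs, harmonics=2, bandwidth=0.5):
    # Return the index of the robot whose LED blinking frequency shows the
    # highest spectral power in the occipital EEG segment.
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    scores = []
    for f0 in robot_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):  # include harmonics of the SSVEP response
            band = (freqs > h * f0 - bandwidth) & (freqs < h * f0 + bandwidth)
            score += power[band].sum()
        scores.append(score)
    return int(np.argmax(scores))

# Example: three robots blinking at 9, 11 and 13 Hz.
# selected = select_robot(eeg_segment, fs=512, robot_freqs=[9.0, 11.0, 13.0])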
Posted on: October 14, 2016
Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence leading to fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a-posteriori bias correction in analytic form. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modelling effort leads to accurate state estimation in real-time, outperforming state-of-the-art approaches.
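As a sketch of the idea, in the notation commonly used for on-manifold IMU preintegration (the gyroscope measurement, gyroscope bias, discrete-time noise and exponential map onto SO(3) below follow that convention and are not taken from the abstract itself), the rotation measurements between keyframes i and j are compounded into a single relative rotation increment:

\[
\Delta R_{ij} \doteq R_i^\top R_j
  = \prod_{k=i}^{j-1} \mathrm{Exp}\!\left(\left(\tilde{\boldsymbol{\omega}}_k - \mathbf{b}^{g}_k - \boldsymbol{\eta}^{gd}_k\right)\Delta t\right),
\]

with analogous summations defining the preintegrated velocity and position increments \(\Delta \mathbf{v}_{ij}\) and \(\Delta \mathbf{p}_{ij}\). Because \(\mathrm{Exp}\) keeps the increment on SO(3), the rotation noise can be treated consistently when deriving the MAP estimator and the analytic Jacobians mentioned in the abstract.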
Posted on: September 30, 2016
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with a known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (i) its ability to respond to scene edges, which naturally provide semi-dense geometric information without any pre-processing, and (ii) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
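The back-projection step at the core of this kind of method can be sketched in a few lines. This is an illustration under assumptions, not the authors' code: the fronto-parallel plane sweep in a reference view, the pose convention ((R, t) mapping reference coordinates to each camera), and the final thresholding step are choices made for the sketch.

import numpy as np

def emvs_vote(events, poses, K, depth_planes, vol_shape):
    # events: iterable of (x, y, pose_idx); poses[i] = (R, t) mapping reference
    # coordinates to camera i. Returns a vote volume of shape (H, W, D) over the
    # reference-view depth planes.
    H, W, D = vol_shape
    K_inv = np.linalg.inv(K)
    n = np.array([0.0, 0.0, 1.0])  # fronto-parallel planes Z_ref = z
    volume = np.zeros(vol_shape, dtype=np.float32)
    # Precompute, for every pose and depth plane, the homography camera -> reference view.
    homographies = {}
    for i, (R, t) in enumerate(poses):
        for k, z in enumerate(depth_planes):
            H_ref_to_cam = K @ (R + np.outer(t, n) / z) @ K_inv
            homographies[(i, k)] = np.linalg.inv(H_ref_to_cam)
    for x, y, i in events:
        p = np.array([x, y, 1.0])
        for k in range(D):
            u, v, w = homographies[(i, k)] @ p   # transfer the event onto plane z_k
            if w <= 0:
                continue
            u, v = int(round(u / w)), int(round(v / w))
            if 0 <= u < W and 0 <= v < H:
                volume[v, u, k] += 1.0           # one vote along the back-projected ray
    return volume

# Per reference pixel, taking the argmax over depth and keeping only well-supported
# maxima yields a semi-dense depth map.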
Posted on: September 30, 2016
We consider the problem of performing rapid training of a terrain classifier in the context of a collaborative robotic search and rescue system. Our system uses a vision-based flying robot to guide a ground robot through unknown terrain to a goal location by building a map of terrain class and elevation. However, due to the unknown environments present in search and rescue scenarios, our system requires a terrain classifier that can be trained and deployed quickly, based on data collected on the spot. We investigate the effect of training set size and complexity on training time and accuracy, for both feature-based and convolutional neural network classifiers in this scenario. Our goal is to minimize the deployment time of the classifiers in our terrain mapping system within acceptable classification accuracy tolerances; we are therefore not concerned with training a classifier that generalizes well, only one that works well in this particular environment. We demonstrate that we can launch our aerial robot, gather data, train a classifier, and begin building a terrain map after only 60 seconds of flight.
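As a purely illustrative sketch of the size-versus-time trade-off studied here (the random-forest classifier, the feature and label arrays, and the split below are stand-ins, not the pipeline of the paper):

import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def timing_curve(features, labels, sizes):
    # features: (N, F) per-patch terrain features; labels: (N,) terrain classes.
    X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3)
    results = []
    for n in sizes:
        clf = RandomForestClassifier(n_estimators=50)
        start = time.time()
        clf.fit(X_train[:n], y_train[:n])   # train on the first n samples only
        elapsed = time.time() - start
        acc = clf.score(X_test, y_test)     # accuracy on held-out patches
        results.append((n, elapsed, acc))
    return results

# e.g. timing_curve(patch_features, patch_labels, sizes=[100, 500, 1000, 5000])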
Posted on: September 30, 2016
Soft actuators made from elastomeric active materials have widespread potential in a variety of applications, ranging from assistive wearable technologies for biomedical rehabilitation or assistance with activities of daily living, to bioinspired and biomimetic systems, gripping and manipulation of fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design, together with a design tool, that produces actuators with enhanced mechanical performance and manufacturability for these applications. Our numerical models, developed using the finite element method, can predict the actuator behavior at large mechanical strains, allowing efficient design iterations for system optimization. We validate the numerical models against experimental results from two distinct actuator prototypes (a linear and a bending actuator), including free displacement and blocked force. An extensive investigation of the mechanical performance of soft actuators with varying geometric parameters demonstrates the practical applicability of the design tool and the robustness of the actuator hardware design for diverse soft robotic systems and a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
Posted on: September 27, 2016