The CoWriter activity involves a child in a rich and complex interaction in which he has to teach handwriting to a robot. The robot must convince the child that it needs his help and that it actually learns from his lessons. To keep the child engaged, the robot must learn at the right rate: not too fast, otherwise the child has no opportunity to improve his own skills, and not too slow, otherwise he may lose trust in his ability to improve the robot's skills. We tested this approach in real pedagogic/therapeutic contexts with children in difficulty, over repeated long sessions (40-60 min). Through 3 different case studies, we explored and refined experimental designs and algorithms so that the robot adapts to each child's difficulties and promotes their motivation and self-confidence. We report positive observations, suggesting the children's commitment to helping the robot and their realization that they were good enough to act as teachers, overcoming their initial low confidence in handwriting.
Posted on: February 22, 2016
Measuring "how much the human is in the interaction" (the level of engagement) is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one such indirect measure. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is "with" the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We present the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator) to the on-line computation of with-me-ness. We also report on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.
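To make the on-line computation concrete, here is a minimal sketch (in Python, not the authors' implementation) of how a with-me-ness score can be derived from an attention estimator: the fraction of time the human's estimated focus of attention falls on targets relevant to the current step of the task. The function and target labels are hypothetical.

```python
# Minimal sketch (not the authors' implementation): with-me-ness as the
# fraction of time the human's estimated focus of attention falls on
# targets that are relevant to the current step of the task.

def with_me_ness(attention_samples, relevant_targets):
    """attention_samples: list of (timestamp, target_label) pairs from a
    head-pose-based attention estimator; relevant_targets: set of labels
    considered 'with the robot' (e.g. robot face, shared workspace)."""
    if not attention_samples:
        return 0.0
    on_task = sum(1 for _, target in attention_samples if target in relevant_targets)
    return on_task / len(attention_samples)

# Example: 3 of 4 attention samples are on task-relevant targets -> 0.75
samples = [(0.0, "robot"), (0.1, "workspace"), (0.2, "window"), (0.3, "robot")]
print(with_me_ness(samples, {"robot", "workspace"}))
```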
Posted on: February 22, 2016
In social robotics, robots need to be understood by humans, especially in collaborative tasks where mutual knowledge must be shared. For instance, in an educational scenario, learners share their knowledge and must adapt their behaviour to make sure they are understood by others. Learners display behaviours that show their understanding, and teachers adapt to ensure that the learners acquire the required knowledge. This ability requires a model of one's own mental states as perceived by others: "has the human understood that I (the robot) need this object for the task, or should I explain it once again?" In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in educational contexts.
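As a purely illustrative sketch of what a second-order mutual model might look like in code (the names and structure below are hypothetical assumptions, not the architecture discussed in the paper), the robot can keep both a model of the human and a model of the human's model of the robot, and query the latter to decide whether to repeat an explanation.

```python
# Illustrative sketch only: a toy representation of first- and second-order
# mutual models. 'first_order' holds what the robot believes about the human;
# 'second_order' holds what the robot believes the human believes about the robot.

robot_model = {
    "first_order": {           # robot's model of the human
        "knows_task_goal": True,
    },
    "second_order": {          # robot's model of the human's model of the robot
        "human_thinks_robot_needs_object": False,
    },
}

def should_explain_again(model):
    # Repeat the explanation if, in the robot's estimate, the human has not
    # yet understood that the robot needs the object.
    return not model["second_order"]["human_thinks_robot_needs_object"]

print(should_explain_again(robot_model))  # True -> explain once again
```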
Posted on: February 22, 2016
We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors that allow such platforms to fly by themselves. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real-time on a microcontroller and enables autonomous flight. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. In this method, we introduce the translational optic-flow direction constraint, which uses the optic-flow direction but not its scale to correct for inertial sensor drift during changes of direction. This solution requires comparatively much simpler electronics and sensors and works in environments of any geometry. Here we describe the implementation and performance of the method on a hovering robot equipped with eight 0.65 g optic-flow sensors, and show that it can be used for closed-loop control of various motions.
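The following is a hedged sketch of the core idea behind a direction-only correction, not the paper's actual estimator: because the translational optic-flow field constrains only the direction of motion, an inertially predicted velocity can be nudged toward the measured direction while its magnitude (which is unobservable from direction alone) is left untouched. The function name, gain, and blending scheme are illustrative assumptions.

```python
# Hedged sketch (not the paper's filter): a direction-only velocity correction.
# The optic-flow-derived unit vector 'measured_dir' constrains only the
# *direction* of translation, so the inertially predicted velocity is blended
# toward that direction while its magnitude is kept.

import numpy as np

def direction_correction(v_pred, measured_dir, gain=0.1):
    """v_pred: velocity predicted by integrating inertial measurements, shape (3,).
    measured_dir: unit vector of translation direction inferred from the
    translational optic-flow field, shape (3,). gain: small correction strength."""
    speed = np.linalg.norm(v_pred)
    if speed < 1e-6:
        return v_pred
    pred_dir = v_pred / speed
    # Blend predicted and measured directions, then re-normalize.
    new_dir = (1 - gain) * pred_dir + gain * measured_dir
    new_dir /= np.linalg.norm(new_dir)
    return speed * new_dir   # magnitude unchanged: only the direction is corrected

v = direction_correction(np.array([1.0, 0.2, 0.0]), np.array([1.0, 0.0, 0.0]))
print(v)
```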
Posted on: February 1, 2016
This paper presents the results of a study on the effect of in-series compliance on the locomotion of a simulated 8-DoF Lola-OP Modular Snake Robot with added compliant elements. We explore whether there is an optimal stiffness for a given gait, for a given terrain type, or across several gaits and terrains (i.e. a good "general-purpose" stiffness). Compliance was simulated using ball joints with eight different levels of stiffness. Two snake locomotion gaits (rolling and sidewinding) were tested over flat ground and three different types of rough terrain. We performed grid search and Particle Swarm Optimization to identify the locomotion parameters leading to fast locomotion, and analyzed the best candidates in terms of locomotion speed and energy efficiency (cost of transport). Contrary to our expectations, we did not observe a clear trend that would favor the use of compliant elements over rigid structures. For sidewinding, compliant and stiff elements led to comparable performance. For the rolling gait, the general rule seems to be "the stiffer, the better".
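For reference, the cost of transport used here to compare gaits is the standard dimensionless metric of energy consumed per unit weight per unit distance travelled; a minimal sketch with hypothetical example numbers is shown below.

```python
# Standard (dimensionless) cost-of-transport metric used to compare gaits:
# energy consumed per unit weight per unit distance travelled.

def cost_of_transport(energy_joules, mass_kg, distance_m, g=9.81):
    return energy_joules / (mass_kg * g * distance_m)

# Hypothetical example: 120 J spent by a 2 kg robot to travel 5 m
print(cost_of_transport(120.0, 2.0, 5.0))  # ~1.22
```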
Posted on: January 5, 2016
This paper studies existing direct transcription methods for trajectory optimization in robot motion planning. These methods have proven favorable for planning dynamically feasible motions for high-dimensional robots with complex dynamics. An important disadvantage, however, is the increased size and complexity of the associated multivariate nonlinear programming problem (NLP). Due to this complexity, preliminary results suggest that these methods are not suitable for online motion planning of high degree-of-freedom (DOF) robots. Furthermore, there is insufficient evidence about the successful use of these approaches on real robots. To gain deeper insight into the performance of trajectory optimization methods, we analyze the influence of the choice of transcription technique as well as of the NLP solver on the run time. There are different alternatives for the problem transcription, mainly determined by the selection of the integration rule. In this study, these alternatives are evaluated with a focus on robotics, measuring the performance of the methods in terms of computational time, quality of the solution, sensitivity to open parameters, and complexity of the problem. Additionally, we compare two optimization methodologies, namely Sequential Quadratic Programming (SQP) and Interior Point Methods (IPM), which are used to solve the transcribed problem. As a performance measure, as well as a verification of using trajectory optimization on real robots, we present hardware experiments performed on an underactuated, non-minimum-phase, ball-balancing robot with a 10-dimensional state space and a 3-dimensional input space. The benchmark tasks solved with the real robot take into account path constraints and actuation limits. These experiments constitute one of very few examples of full-state trajectory optimization applied to real hardware.
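To illustrate what a direct transcription looks like, the sketch below writes out the defect constraints of a trapezoidal integration rule, one transcription alternative of the kind compared in such studies. It is a generic textbook formulation, not the paper's solver setup, and the function names are assumptions.

```python
# Illustrative sketch: defect constraints of a trapezoidal direct transcription.
# The NLP decision variables are the discretized states xs and inputs us;
# the solver (e.g. an SQP or IPM method) drives these defects to zero so that
# the trajectory satisfies the dynamics between knot points.

import numpy as np

def trapezoidal_defects(xs, us, h, dynamics):
    """xs: (N+1, nx) state trajectory, us: (N+1, nu) input trajectory,
    h: time step, dynamics: f(x, u) -> dx/dt.
    Returns the (N, nx) defect constraints."""
    defects = []
    for k in range(len(xs) - 1):
        f_k = dynamics(xs[k], us[k])
        f_k1 = dynamics(xs[k + 1], us[k + 1])
        defects.append(xs[k + 1] - xs[k] - 0.5 * h * (f_k + f_k1))
    return np.array(defects)
```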
Posted on: December 17, 2015
Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), do not output a sequence of video frames like standard cameras, but a stream of asynchronous events. An event is triggered when a pixel detects a change of brightness in the scene, and it contains the location, sign, and precise timestamp of the change. The high dynamic range and temporal resolution of the DVS, which is on the order of microseconds, make it a very promising sensor for high-speed applications such as robotics and wearable computing. However, due to the fundamentally different structure of the sensor's output, new algorithms that exploit its high temporal resolution and asynchronous nature are required. In this paper, we address ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor. The DVS pose trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines and is optimized according to the observed events. We evaluate our method using datasets acquired from sensor-in-the-loop simulations and onboard a quadrotor performing flips. The results are compared to ground truth, showing the good performance of the proposed technique.
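As a simplified illustration of the continuous-time idea (not the paper's cumulative cubic B-spline parameterization of rigid-body motions), the sketch below represents a trajectory by a handful of control poses, interpolates translation with a cubic spline and rotation with Slerp, and queries the pose at the microsecond timestamp of an individual event. The knot values are made up.

```python
# Simplified sketch of a continuous-time pose trajectory: translation from a
# cubic spline, rotation from Slerp, queried at an individual event timestamp.
# (The paper optimizes a smooth curve in the space of rigid-body motions;
# this is only a toy stand-in to show why a continuous-time representation
# matches the asynchronous, microsecond-resolution event stream.)

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

knot_times = np.array([0.0, 0.1, 0.2, 0.3])
knot_positions = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.05, 0], [0.3, 0.1, 0]])
knot_rotations = Rotation.from_euler("z", [0, 10, 20, 30], degrees=True)

position_spline = CubicSpline(knot_times, knot_positions)   # smooth translation
rotation_interp = Slerp(knot_times, knot_rotations)         # interpolated rotation

t_event = 0.12345  # an event timestamp, far finer than any frame rate
print(position_spline(t_event), rotation_interp(t_event).as_euler("z", degrees=True))
```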
Posted on: December 17, 2015
Recent results in monocular visual-inertial navigation (VIN) have shown that optimization-based approaches outperform filtering methods in terms of accuracy due to their capability to relinearize past states. However, the improvement comes at the cost of increased computational complexity. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes. The preintegration allows us to accurately summarize hundreds of inertial measurements into a single relative motion constraint. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group and carefully deals with uncertainty propagation. The measurements are integrated in a local frame, which eliminates the need to repeat the integration when the linearization point changes, while leaving the opportunity for belated bias corrections. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the use of a structureless model for visual measurements, further accelerating the computation. The third contribution is an extensive evaluation of our monocular VIN pipeline: experimental results confirm that our system is very fast and demonstrates superior accuracy with respect to competitive state-of-the-art filtering and optimization algorithms, including off-the-shelf systems such as Google Tango.
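A minimal sketch of the preintegration idea is given below; it omits the uncertainty propagation and bias Jacobians that are central contributions of the paper, and the function signature is an assumption. IMU samples between two keyframes are summarized into relative rotation, velocity, and position increments expressed in the first keyframe's frame.

```python
# Minimal sketch of IMU preintegration between two keyframes (covariance
# propagation and bias Jacobians omitted): accumulate relative rotation,
# velocity and position increments in the frame of the first keyframe.

import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(gyro, accel, dt):
    """gyro, accel: (N, 3) bias-corrected IMU samples; dt: sample period.
    Returns (delta_R, delta_v, delta_p) relative to the first keyframe;
    gravity is accounted for later, when the increments are composed with
    the world-frame states in the factor graph."""
    delta_R = Rotation.identity()
    delta_v = np.zeros(3)
    delta_p = np.zeros(3)
    for w, a in zip(gyro, accel):
        delta_p += delta_v * dt + 0.5 * delta_R.apply(a) * dt**2
        delta_v += delta_R.apply(a) * dt
        delta_R = delta_R * Rotation.from_rotvec(w * dt)
    return delta_R, delta_v, delta_p
```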
Posted on: December 17, 2015