About NIFTi
NIFTi is about human-robot cooperation: teams of robots and humans doing tasks together, interacting to reach a shared goal. NIFTi looks at how the robot can bear the human in mind. Literally. When determining what to do or say next in human-robot interaction, and when and how to do it. NIFTi puts the human factor into cognitive robots, and into human-robot team interaction in particular. Each year, NIFTi evaluates its systems together with several USAR organizations: rescue personnel team up with NIFTi robots to carry out realistic missions in real-life training areas.



PDFs of PUBLIC Year 3 deliverables

PDFs of all the NIFTi deliverables due in Year 3. These are the PUBLIC versions; in some deliverables, not-yet-published work may not be included. Please contact the authors should you be interested.

File NIFTi Year 3 Summary
Provides a summary of the results achieved in NIFTi in Year 3. The project focused on "human-assisted joint exploration" of a low-speed car-train accident at a train station. We approached the problem from the viewpoint of a human-robot team, involving several humans at a remote command post, one in-field human to pilot a UAV, and a UGV (operated from the remote command post).
File DR 1.3.3: Acquisition of human-compatible hybrid maps of semi-structured environments
Search and Rescue tasks are tackled in NIFTi by a team of robots and human operators. They take place in specific environments outside research laboratories and require good communication between robot and users, which relies on a common understanding of space. Following the reviewers' comments, the focus of this year was on operators and their needs. This report presents the results of Year 3 development in Work Package 1 on spatio-temporal modeling for situation awareness and, more specifically, the work following DR1.2.2 to achieve consistent hybrid representations of USAR sites (MS1.4). We developed a full 3D localization and mapping system for the NIFTi UGVs, designed and validated according to operator needs. We also improved our topological segmentation tools to accommodate user inputs. Finally, functional mapping is now based on a bi-directional connection between 3D mapping and functional projections from conceptual knowledge. The latter includes reasoning about (flexible) robot morphology when establishing in-situ affordances.
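The deliverable itself is the authoritative source for the mapping pipeline. As a purely illustrative sketch, the rigid-alignment step that point-cloud-based 3D localization systems commonly build on (the Kabsch/SVD solution for matched point sets; not necessarily what NIFTi used) can be written as:

```python
import numpy as np

def kabsch_align(src, dst):
    """Find the rigid transform (R, t) mapping matched 3D points
    `src` onto `dst` in the least-squares sense (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Iterating this step with re-matching of nearest neighbours gives the classic ICP scheme used by many scan-registration systems.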
File DR 2.3.3: Bi-directional cooperation of low-level vision modules and higher level control
This report presents the results of WP2 for the third year of NIFTi. Vision plays an essential role in robot perception and also serves as the primary source of information for robot tele-operation. We focused on vision for shared situation awareness (MS2.3). We contributed machine learning methods for image understanding, object detection, navigation and orientation, and human/fire detection. We developed an algorithm for measuring the terrain profile and applied it to automatic flipper control. A machine learning approach was adopted for terrain classification from vibrations. We added thermal imaging to the sensory field and further advanced visual streaming for tele-operation.
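As an illustrative sketch only (the features and classifier actually used in the deliverable are not specified here), terrain classification from vibration windows can be approached by extracting a couple of simple signal features and training a nearest-centroid classifier:

```python
import numpy as np

def vib_features(signal, fs=100.0):
    """Two simple features from a window of vertical-acceleration
    samples: RMS amplitude and fraction of spectral energy above 10 Hz.
    (Feature choice and the 10 Hz cut are illustrative assumptions.)"""
    sig = signal - signal.mean()
    rms = np.sqrt(np.mean(sig ** 2))
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    hi = spec[freqs > 10.0].sum() / (spec.sum() + 1e-12)
    return np.array([rms, hi])

class NearestCentroid:
    """Classify a feature vector by its nearest class centroid."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in set(y)}
        return self
    def predict(self, X):
        return [min(self.centroids_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

Labeled windows recorded on known terrain types would serve as training data; each new window is then assigned the label of the closest centroid in feature space.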
File DR 3.3.3: Adaptive situated HRI for human-assisted navigation
This report presents the results of WP3 for NIFTi Year 3. In Year 3, WP3 extended the human-robot team setup to include an in-field human team member. This resulted in geographic distribution of both robot and human team members. WP3 developed different versions of mobile interfaces to facilitate multi-modal communication between the in-field rescuer and the rest of the remotely located human team members. The effects of this distribution on the nature of communication within the team were investigated in the setting of human-assisted exploration of a large-scale area (train accident use case). The mobile interface was integrated into the overall NIFTi setup for multi-view operational and tactical communication. The multi-modal operational interfaces were developed further to facilitate more advanced 3D and qualitative interaction with exploration information. This setup was deployed during the end-user evaluations at FDDO in December 2012.
File DR 4.3.2: Theory and evaluation of working agreement method and HRI-adaptation to different contexts
This report presents the results of WP4 for the third year of NIFTi. The overall objective of WP4 is to improve joint exploration by decreasing the cognitive task load of the human workers and optimizing the robots' operational deployment. This objective results in core UI design and evaluation activities aiming at theoretically and empirically founded solutions to support humans' situation awareness and joint human-robot performance. This support instantiates working agreements for shared situation awareness and task load allocation, which should be adaptable to the current operational context. In Year 3, the work in WP4 focused on four core functions of such adaptive support. For the first function, real-time operator task load assessment, the load model was refined, parameterized, implemented and tested. The second function concerns the setting of working agreements for the robot's level of team membership. A first experiment provided requirements on the communication and task load that should drive the actual setting of robot autonomy. The third function centers on adapting the tactical display to the momentary user needs and context. An agent-based architecture was developed that enables real-time decisions on (adaptive) information presentation, i.e., context-sensitive "Right Messages at the Right Moment in the Right Modality" (called situated (RM)³). A first implementation was tested in the end-user evaluation. The fourth function focuses on selective attention, the modeling of visual search and task switching. This WP provided preliminary computational models for (a) attention in real-world scenarios and (b) shifting and inhibition controls.
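As a purely hypothetical illustration of the first function (the deliverable's actual load model is not reproduced here), a scalar task-load estimate could combine a normalized event rate with task-switch frequency; every weight, cap, and threshold below is invented for the example:

```python
def task_load(events_per_min, task_switches, window_s=60.0,
              w_events=0.6, w_switches=0.4):
    """Hypothetical task-load estimate in [0, 1]: weighted mix of
    normalized event rate and task-switch frequency. All constants
    are illustrative, not taken from the DR 4.3.2 model."""
    e = min(events_per_min / 10.0, 1.0)             # 10 events/min saturates
    s = min(task_switches / (window_s / 10.0), 1.0)  # 1 switch per 10 s saturates
    return w_events * e + w_switches * s

def defer_low_priority(load, threshold=0.7):
    """Example adaptation rule: hold back low-priority messages
    when the estimated load exceeds a threshold."""
    return load > threshold
```

Such an estimate could feed the adaptive presentation layer, deferring low-priority messages when the operator is saturated.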
File DR 5.3.5: Flexible planning with time constraints and compatibilities
This document describes the progress of the research on flexible planning with time constraints and compatibilities performed by the NIFTi Consortium. The research reported in this document concerns WP5 in the second half of Year 3 of the NIFTi project. Planned work is introduced and the actual work is discussed, highlighting the relevant achievements and how these contribute to the current state of the art and to the aims of the project.
File DR 6.3.4: User interaction and trajectory planning in unstructured environment based on 3D perceptual data: principle and evaluation
One main objective of NIFTi is to explore a disaster area using the NIFTi UGV. The robot hardware has been designed to fulfill this task, but autonomous navigation in unstructured environments is still an open research topic. In this report, we present several approaches to trajectory planning and autonomous navigation. First, using only the omnicamera, it is possible to navigate based on visual homing. Then, continuing the work presented in DR6.2.3, we present a complete trajectory planning and execution framework applied to the NIFTi UGV. Finally, we discuss another approach dedicated to stair climbing.
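The report is the authoritative source for the planning framework; as a generic illustration of the kind of grid-based search that trajectory planners commonly build on (not NIFTi's actual planner), a minimal 4-connected A* over an occupancy grid looks like:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (truthy = obstacle).
    Returns the cell path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    parents, g = {}, {start: 0}
    while frontier:
        _, cost, cur, parent = heapq.heappop(frontier)
        if cur in parents:
            continue                          # already expanded
        parents[cur] = parent
        if cur == goal:                       # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                nxt, ncost = (nr, nc), cost + 1
                if ncost < g.get(nxt, float("inf")):
                    g[nxt] = ncost
                    heapq.heappush(frontier, (ncost + h(nxt), ncost, nxt, cur))
    return None
```

A real planner for a tracked UGV would of course work over traversability costs derived from 3D terrain data rather than a binary grid, and smooth the resulting path for execution.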
File DR 7.3.5: Integration and end-user evaluation for in-field joint exploration planning
This document describes the updates and integration of the third prototype of the NIFTi robot, and its evaluation for the train accident scenario, performed at the training centre of FDDO in Dortmund, Germany.
File DR 8.3.5: Proceedings of the NIFTi summer school Yr3
DR8.3.5 describes the second NIFTi summer school, a joint event consisting of the 'Vision and Sports Summer School 2012' (5 days, August 27 - 31, 2012) and the 'Computer Vision for Mobile Robots Workshop 2012' (2 days, September 1 - 2, 2012). The goal of this NIFTi summer school was two-fold. First, to provide the participants with theoretical knowledge at an expert level: during the 5 days of the 'Vision and Sports Summer School 2012', students were given lectures on state-of-the-art computer vision and pattern recognition techniques by the invited experts. The lectures covered theory and methodology applicable in mobile robotics and hence suitable for the purposes of NIFTi. Second, to give the participants an opportunity to test the theory and apply it in practice during the 2-day 'Computer Vision for Mobile Robots Workshop 2012'. Workshop participants were divided into teams and, in the form of a competition, worked intensively on robot control algorithms based on computer vision. The competition motivated participants to propose original solutions and evaluate them experimentally on the NIFTi platform. About 50 international students attended the lectures; 20 of them then participated in the workshop.