
NIFTi Year 2 results

NIFTi investigates what it takes to make a robot a good team player. What should it try to do, say, and see, and how should it do all that, so that it can best support its human team members under varying circumstances? In Year 2, we continued R&D towards more autonomous, more collaborative team-player robots: we moved to a full human-robot team setup, within the setting of the tunnel accident use case.

Overview of the NIFTi Year 2 results

In NIFTi, we take the view that human performance should drive these considerations. A robot should understand how factors like cognitive load and stress influence humans: how they see the environment, what they can (or cannot) do, and what they need to know. The robot should understand this, and adapt its behavior accordingly. In short, NIFTi puts the human factor into cognitive systems. NIFTi takes this to the level of the entire socio-technical system in which a human-robot team works together. Humans, robots, and intelligent interfaces all mesh and adapt to provide optimal tactical and operational performance for building up situation awareness in a geographically distributed team.
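
As a rough illustration of what such adaptation could look like, the sketch below maps a cognitive-load estimate onto interaction settings. The load measure, thresholds, and policy fields are invented for illustration; they are not NIFTi's actual mechanism.

```python
# Minimal sketch: adapting robot communication and autonomy to an estimate
# of the operator's cognitive load. Thresholds and policy names below are
# hypothetical, chosen only to illustrate the idea.

from dataclasses import dataclass

@dataclass
class OperatorState:
    cognitive_load: float  # 0.0 (idle) .. 1.0 (overloaded), e.g. from task-load metrics

def select_interaction_policy(state: OperatorState) -> dict:
    """Pick how verbose the robot should be and how much it decides alone."""
    if state.cognitive_load > 0.7:
        # Overloaded operator: act more autonomously, report only essentials.
        return {"autonomy": "high", "dialogue": "terse", "confirmations": False}
    if state.cognitive_load > 0.4:
        # Moderate load: share control, summarize rather than narrate.
        return {"autonomy": "shared", "dialogue": "summaries", "confirmations": True}
    # Low load: keep the human in the loop for every decision.
    return {"autonomy": "low", "dialogue": "detailed", "confirmations": True}

print(select_interaction_policy(OperatorState(cognitive_load=0.8)))
```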

In NIFTi we focus on human-robot teaming in Urban Search & Rescue (USAR). We work with end users who stand to benefit directly from working with such robots. Our robots are to assist people in assessing disaster situations that are too dangerous or too difficult for humans to enter right away. These are the settings in which we want to make human-robot teaming work: circumstances which are difficult, stressful, and tiring for the humans, and which are neither human-friendly nor robot-friendly.
We believe there is only one way to really tackle this issue, and that is by going in there. In NIFTi we adopt a user-centric design methodology. Together with the end users involved in NIFTi, we formulate design requirements. We gather data in the field, and field-test our components. We then run end user trials and experiments in high-fidelity scenarios to see how well our approaches work. And the next year, we do it all again.

Year 1: Brief overview

In Year 1, we focused on experimenting with a single-operator, single-robot setup in a tunnel accident use case. The use case was formulated together with our end users. The operator was located at a remote control post, and interacted with the robot using a multi-modal Operator Control Unit (OCU).

In parallel to using a commercially available UGV platform (an ActivMedia P3-AT equipped with a 2D laser and a Ladybug omnidirectional camera), we started the development of the NIFTi UGV platform and a micro-UAV. The design of the NIFTi UGV involved close cooperation with the NIFTi end users. The UAV is to act as a remote sensor for the UGV operator. Throughout Year 1 we ran experiments at various end user training sites, including the Scuola di Formazione Operativa (SFO) training site of the Vigili del Fuoco in Montelibretti (Italy), and the training school of the Fire Department of Dortmund (FDDO) in Dortmund (Germany).

These experiments included recording human behavior: How do professional first responders explore a disaster site? We analyzed what they attended to, where they went, and how they described situations to people outside the disaster site. Using these analyses, we built a first integrated system with enough semantic understanding of a situation to mimic such behavior. The underlying motivation: this could make robot behavior more transparent to a user, and thus make the robot easier to work with.
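
To make the idea concrete, here is a minimal sketch of semantics-driven exploration: candidate viewpoints are ranked the way we observed responders prioritizing them, with salient objects inspected before open space. The labels, weights, and scoring rule are illustrative assumptions, not the system's actual model.

```python
# Illustrative sketch of semantics-driven exploration. Salience weights
# and the distance discount are hypothetical values for this example.

SALIENCE = {"possible_victim": 1.0, "car": 0.8, "obstacle": 0.4, "corridor": 0.2}

def next_waypoint(candidates):
    """candidates: list of (semantic label, distance in m); prefer salient, nearby ones."""
    def score(candidate):
        label, dist = candidate
        # Semantic salience, discounted by the cost of traveling there.
        return SALIENCE.get(label, 0.1) / (1.0 + dist)
    return max(candidates, key=score)

# A nearby car can outrank a distant possible victim under this scoring:
print(next_waypoint([("corridor", 2.0), ("car", 6.0), ("possible_victim", 12.0)]))
```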

Year 1: Lessons learnt

We ended Year 1 with a pilot study in which end users operated a robot using the NIFTi OCU. The end users used the OCU to tele-operate the robot. The OCU displayed a 2D map of the environment, and provided a video stream from the robot's omnidirectional camera. Next to the OCU, end users recorded their situation awareness on a separate map (a whiteboard).

Operating the robot under realistic field conditions turned out to be harder than expected. Problems included insufficient battery power, limited network coverage, insufficient network bandwidth (e.g. to provide a >8 Hz frame rate for video), and synchronization issues among distributed processes.
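
A back-of-the-envelope calculation shows why the video frame rate was a bottleneck. The resolution and compression figures below are assumptions for illustration, not the actual camera settings used in the field.

```python
# Rough bandwidth estimate for one compressed video stream at the minimum
# frame rate named above. Resolution and JPEG ratio are assumed values.

frame_w, frame_h = 640, 480   # assumed per-frame resolution (pixels)
bytes_per_pixel = 3           # RGB
jpeg_ratio = 20               # assumed compression factor
fps = 8                       # the minimum frame rate from the text

frame_bytes = frame_w * frame_h * bytes_per_pixel / jpeg_ratio
mbit_per_s = frame_bytes * fps * 8 / 1e6
print(f"~{mbit_per_s:.1f} Mbit/s per video stream at {fps} Hz")  # ~2.9 Mbit/s
```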

At the same time, users were able to build accurate situation awareness of the disaster site, even in the presence of smoke. Further analysis of their operational behavior revealed that, even in tele-operation, they followed exploration patterns similar to those we had observed for first responders in the field, matching the robot's semantic understanding of the environment. Stress and cognitive task load stayed within moderate ranges. Looking at the entire socio-technical system, though, one thing became very clear: NIFTi would need to move away from the single-operator, single-robot setup, and support multiple humans and multiple robots collaborating in a team.

Year 2: Overview

Our goal for Year 2 was to investigate human-assisted joint exploration: How could a user take a more supervisory role, sharing control with a robot capable of more autonomy in exploring the disaster site?

Following up on the lessons from Year 1, we put this goal in the context of a human-robot team. A team of users would be located at a remote command post, performing roles such as Mission Commander (tactical coordination), UAV Mission Specialist (assessing and communicating situation awareness given the UAV video stream), or UGV Operator. These users would interact over radio with an in-field UAV Operator (flying the UAV in line-of-sight), and with the robots.
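
A minimal sketch of this team structure follows, with the role names from the text. The routing rule (field-to-post traffic passing via the Mission Commander) is an illustrative assumption about how such radio traffic could be coordinated.

```python
# Hypothetical data model of the Year 2 team setup described above.

from dataclasses import dataclass

@dataclass
class RadioMessage:
    sender: str
    addressee: str
    content: str

COMMAND_POST = {"Mission Commander", "UAV Mission Specialist", "UGV Operator"}
IN_FIELD = {"UAV Operator"}

def route(msg: RadioMessage) -> list:
    """Return the hops a message takes; field<->post traffic goes via the commander."""
    if (msg.sender in IN_FIELD or msg.addressee in IN_FIELD) \
            and "Mission Commander" not in (msg.sender, msg.addressee):
        return [msg.sender, "Mission Commander", msg.addressee]
    return [msg.sender, msg.addressee]

print(route(RadioMessage("UAV Operator", "UGV Operator", "smoke ahead in tunnel")))
```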

New to the field in Year 2 were the NIFTi UGV and a new UAV prototype. Both were deployed for the first time at the NIFTi Joint Exercises (NJEx), which we organized at the FDDO training school in July 2011. NJEx 2011 brought together researchers and end users, who formed human-robot teams and performed a variety of missions together in complex environments. The purpose was twofold: to study (1) the dynamics of human-robot teams when they are geographically distributed, and (2) how they communicate with each other to build up situation awareness and coordinate joint exploration.

During NJEx, the UGV and the UAV were fully tele-operated. Improved OCUs for the UGV and UAV provided feedback to the remote command post. System communication used a new wireless network setup, which proved robust and provided sufficient bandwidth for running multiple robots in parallel (incl. video streaming).

We gained several insights at NJEx, and through analysis of the behavior and communication patterns of the individual teams. Effective teams consolidated situation awareness at the command post, and provided coordination to the overall exploration effort. Situation awareness coming directly from the field, or suggestions for alternative actions, would typically only be accepted if the remotely located team was able to ground these in their own tactical-operational picture. Team effectiveness was closely related to team coherence, i.e. everybody playing his or her assigned role. Finally, at NJEx we also observed variations in stress (detectable in spoken communication), for example in situations which were difficult to assess or operate in.

We took these initial insights on board for the developments leading up to the end user evaluations in December 2011, held at the SFO training site. The end user evaluations at SFO focused on the UGV Operator, within the entire context of a human-robot team. The UGV Operator was located at a remote command post, and interacted there directly with an experienced end user doubling as Mission Commander and UAV Mission Specialist. The UGV Operator had access to an OCU for the UGV, and a multi-modal GUI for maintaining team situation awareness. The information on the UGV Operator's team situation awareness interface was synchronized with an identical interface maintained by the Mission Commander. Additionally, an OCU with a video stream from the UAV was available to the end user doubling as Mission Specialist. In the field, a UAV Operator controlled the UAV when instructed over radio by the Mission Commander to observe particular situations.
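
The synchronization between the two team situation awareness interfaces can be pictured as a small publish-subscribe loop. The sketch below is a toy version under that assumption; the actual NIFTi GUIs used their own middleware.

```python
# Toy publish-subscribe model of two synchronized situation awareness
# interfaces: every annotation is pushed to all registered views.

class SharedSituationMap:
    def __init__(self):
        self.annotations = {}  # annotation id -> data
        self.views = []        # registered interfaces (update callbacks)

    def register(self, on_update):
        self.views.append(on_update)

    def annotate(self, ann_id, data):
        self.annotations[ann_id] = data
        for notify in self.views:  # push the change to every interface
            notify(ann_id, data)

shared = SharedSituationMap()
shared.register(lambda i, d: print(f"UGV Operator GUI: {i} -> {d}"))
shared.register(lambda i, d: print(f"Mission Commander GUI: {i} -> {d}"))
shared.annotate("victim-1", {"pos": (12.5, 3.0), "status": "unconfirmed"})
```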

For more details

For more details on results, progress, and plans, see the Year 2 Summary: [ PDF ]

Feel free to contact the NIFTi project coordinator, Geert-Jan M. Kruijff.

PDFs of the PUBLIC versions of the Year 2 deliverables are here: [ FOLDER ]
