About NIFTi
NIFTi is about human-robot cooperation: teams of robots and humans doing tasks together, interacting to reach a shared goal. NIFTi looks at how a robot can literally bear the human in mind when determining what to do or say next in human-robot interaction, and when and how to do it. NIFTi puts the human factor into cognitive robots, and into human-robot team interaction in particular. Each year, NIFTi evaluates its systems together with several USAR organizations: rescue personnel team up with NIFTi robots to carry out realistic missions in real-life training areas.

End user evaluations 2011

We have just successfully finished this year's end user evaluations at the SFO training area of the Vigili del Fuoco in Montelibretti (Italy). The evaluations were set in the "tunnel accident" use case: a team of three people used a semi-autonomous UGV and a tele-operated UAV to explore the disaster site. The human team members performed the roles of Mission Commander and UGV Operator (both at a remote command post), and UAV Operator (in-field). The evaluation focused on the situation awareness of the UGV Operator, and on the ways in which the Operator and the UGV collaborated.


The roles of Mission Commander and UAV Operator were played by experienced NIFTi'ers, so only the UGV Operator, the experimental subject, varied across evaluation runs. Each run took about 3.5 to 4 hours. In the end, we were able to gather evaluation results for 7 subjects.

The UAV was fully tele-operated during the evaluation. The UGV was semi-autonomous (planning, mapping, observation, and part of its communication), but included a Wizard-of-Oz setup to bridge between speech recognition and command interpretation. In the background, unbeknownst to the evaluation subject, a human Wizard interpreted what was said and translated it (through a button-click) into the appropriate command interpretation for the robot. Any ensuing action planning, feedback planning and generation, etc. was performed autonomously by the system.
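The Wizard-of-Oz bridge described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual NIFTi implementation: the command names, data structures, and function are all hypothetical. The point is the architecture: the wizard hears the operator's free speech and clicks a button that maps it onto one of a fixed set of command interpretations, which the autonomous system then acts on.

```python
# Hypothetical sketch of a Wizard-of-Oz bridge between speech and
# command interpretation. All names here are illustrative, not NIFTi's.
from dataclasses import dataclass

# A fixed command vocabulary the robot's planner understands (assumed).
COMMANDS = {
    "drive_forward": "Move ahead along the current path",
    "stop": "Halt all motion",
    "look_at": "Point the camera at the indicated object",
    "explore": "Autonomously explore the surrounding area",
}

@dataclass
class CommandEvent:
    """One wizard button-click, logged alongside the raw utterance."""
    utterance: str   # what the operator actually said
    command: str     # the interpretation the wizard selected
    timestamp: float # mission time of the click (seconds)

def wizard_interpret(utterance: str, button: str, now: float) -> CommandEvent:
    """Translate a wizard's button-click into a command event.

    Downstream action planning and feedback generation would then run
    autonomously on the returned command, as described in the text.
    """
    if button not in COMMANDS:
        raise ValueError(f"Unknown command button: {button}")
    return CommandEvent(utterance=utterance, command=button, timestamp=now)

# Example: the operator says "go and have a look at that car",
# and the wizard clicks the 'look_at' button.
event = wizard_interpret("go and have a look at that car", "look_at", now=12.5)
print(event.command)  # look_at
```

Logging the raw utterance next to the selected command is what makes the setup useful for later analysis: it records exactly how free speech was mapped onto the robot's command repertoire.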

The data we collected during this week include:

  • for the UGV: robot operation data (rosbags), audio data (spoken feedback / running commentaries from robot), Wizard-of-Oz control data (communication)
  • for the UGV Operator: facial expression data, audio data (speech commands to robot; spoken feedback from robot), biometric data (heart-rate), subjective rating of cognitive load, pre- and post-evaluation questionnaires
  • for the remotely located human team members: video recording of the mission-part of the evaluation session
  • for the on-site located team members: video recording of the mission-part of the evaluation session
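One way to picture how these streams fit together per evaluation run is a simple manifest. The sketch below is purely illustrative: the field names mirror the list above, but the structure and file layout are assumptions, not the project's actual data organization.

```python
# Hypothetical per-run data manifest mirroring the streams listed above.
from dataclasses import dataclass, field

@dataclass
class RunData:
    subject_id: int
    ugv_rosbags: list = field(default_factory=list)   # robot operation data (rosbags)
    ugv_audio: list = field(default_factory=list)     # spoken feedback / commentary
    woz_log: str = ""                                 # Wizard-of-Oz control data
    operator_video: str = ""                          # facial expression recording
    operator_biometrics: str = ""                     # heart-rate data
    questionnaires: dict = field(default_factory=dict)  # pre- and post-evaluation
    command_post_video: str = ""                      # remote team members
    in_field_video: str = ""                          # on-site team members

# Example: registering the streams for one (hypothetical) subject.
run = RunData(subject_id=1, ugv_rosbags=["run01.bag"])
run.questionnaires["pre"] = "subject01_pre.csv"
print(run.subject_id)  # 1
```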

Over the next months, we will be analyzing this data for a variety of aspects, including human-robot team communication, autonomous behavior "vs." user acceptance, and the role of running commentary / spoken feedback in making robot behavior more transparent.

Link to Italian press release from Vigili del Fuoco: [ WWW ]  
