About NIFTi
NIFTi is about human-robot cooperation: teams of robots and humans carrying out tasks together, interacting to reach a shared goal. NIFTi looks at how a robot can literally bear the human in mind when determining what to do or say next in human-robot interaction, and when and how to do it. NIFTi puts the human factor into cognitive robots, and into human-robot team interaction in particular. Each year, NIFTi evaluates its systems together with several USAR organizations: rescue personnel team up with NIFTi robots to carry out realistic missions in real-life training areas.



NIFTi Year 1 results

During Year 1, NIFTi developed the basis for human-robot collaboration in USAR environments. Together with end-users, we defined use cases and requirements for the robot platforms to be deployed. S&T development was set in the context of human-instructed exploration of a USAR disaster site. We made progress on all four project objectives: functional environment modeling, situated cognitive user modeling, adaptive interaction, and planning/execution with adaptive morphology.

Overview of the NIFTi Year 1 results

NIFTi aims to develop a theory of how robots can support humans in a team, performing task-oriented missions together. We place our efforts in the domain of Urban Search & Rescue (USAR). Characteristic of this domain is that humans work under difficult circumstances, which seriously affects their performance. Any robot that is to support humans thus needs to understand how human performance can vary dynamically under such circumstances, and use that understanding to adapt its behavior to ensure optimal support for the humans in the team as the mission unfolds. This requires knowing more about how situations affect human performance, and how this impacts collaboration. At the same time, any adaptation the robot applies needs to be balanced against what the robot itself needs to do to perform its own tasks.

The NIFTi objectives address these different issues in balancing operational and cooperation demands. Objective 1 focuses on modeling situation awareness in the robot so that it can take into account where actions can be performed in the environment, and under what conditions. Objective 2 takes this functional view on the environment and connects it to user modeling. Its aim is to investigate how this view helps a robot estimate operator performance, and what the operator is attending to (or is likely to attend to) in a situation. Objective 3 builds on the dynamics of building situation awareness and of operator performance to determine how team communication between robots and humans should adapt to optimally fit the current task context and performance conditions. Finally, Objective 4 considers how to balance these cooperation demands against operational adaptivity. We look at this as a combination of flexibility in cooperative planning and execution, and adaptation of task execution to a morphology optimal for performing that task in the given situation.

NIFTi follows a roadmap to guide its S&T development. The roadmap defines milestones for achieving the project objectives. These milestones formulate increasingly sophisticated integrated theories and systems with specific, measurable capabilities -- underlining NIFTi's strong focus on integration. The development towards these milestones is iterative. For Year 1, the milestone was to achieve human-instructed exploration. We used the tunnel accident use case to provide a realistic setup for human-instructed exploration: a lorry lost its load of barrels and other materials in a tunnel, causing an accident involving multiple cars.

  • Human-instructed exploration: The objective is to develop an integrated cognitive robot system for human-instructed exploration. Cooperation involves a human instructing a robot how to explore an environment of NIST USAR Yellow level. The robot can balance tele-operation with autonomous navigation (shared control) to jointly explore the environment. It keeps up a running commentary of what it sees. Central issues for evaluating the human factor in this setting focus on user modeling: assessing variation in task load and attention as situation awareness is built up from the robot's experience. The team has a time limit on exploring the disaster area.

Within this scenario, we made the following progress on our objectives: 

  • Functional environment modeling: We have developed and tested a first approach to functional modeling, based on a combination of novel methods for 2D/3D mapping and topological segmentation, online-learnable object tracking and detection, and ontological inference. The approach makes it possible to project from a detected landmark back into the map where the robot should be positioned relative to that landmark when it is to perform or support a specific action (e.g. looking into a car).
  • Situated cognitive user models: We have developed a model of cognitive task load for humans operating with robots in a USAR context, and gathered baseline data on the human factors driving performance during operation of a tele-operated robot and of a semi-autonomous robot.
  • Adaptive human-robot interaction: We have developed a novel approach to reference processing in situated dialogue for human-robot teaming, based on field data gathered in NIFTi; investigated communication strategies in human-robot teaming for USAR; and gathered experimental data on Operator Control Unit (OCU) performance and models for adaptivity of information presentation in OCUs.
  • Planning, execution and adaptive morphology: Together with the end users involved in NIFTi we have developed requirements and designs for new UGV and UAV platforms for use in NIFTi, and we have developed new approaches for skill learning and planning with skill hierarchies. 
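To illustrate the functional-mapping idea in the first bullet, the sketch below projects from a detected landmark back into the map: given the landmark's pose, it computes a candidate vantage pose from which the robot could perform an action such as looking into a car. This is a minimal geometric sketch under assumed conventions; the function name, the fixed standoff distance, and the planar pose representation are all hypothetical and not taken from the NIFTi system.

```python
import math

def vantage_pose(landmark_x, landmark_y, landmark_yaw, standoff=1.5):
    """Return an (x, y, heading) pose in the map frame at a fixed standoff
    in front of a landmark, facing it -- e.g. a spot from which to look
    into a car. Illustrative sketch only, not the NIFTi implementation."""
    # Position: step `standoff` metres out along the landmark's facing direction.
    x = landmark_x + standoff * math.cos(landmark_yaw)
    y = landmark_y + standoff * math.sin(landmark_yaw)
    # Heading: turn to face back toward the landmark.
    heading = math.atan2(landmark_y - y, landmark_x - x)
    return x, y, heading

# Car at the map origin, facing along +x; the robot stands 1.5 m in front,
# heading rotated pi radians so it faces the car.
print(vantage_pose(0.0, 0.0, 0.0))
```

In a full system such a geometric rule would be one output of the ontological inference step: the ontology associates an action (e.g. "look into") with a spatial relation to the object class, and the map grounds that relation in metric coordinates.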

Throughout Year 1 we have followed a user-centric design methodology, closely involving end users in the design, testing, and evaluation of approaches and systems. Tests and trials always took place under realistic circumstances, at one of the training sites we have access to.

For more details, see the Year 1 publishable summary (PDF).
