Complex Task Learning from Verbal Instructions

  • Existing research on teaching robots by demonstration or verbal instruction focuses on learning tasks with mainly sequential constraints, building representations that encode steps to be executed in order. In practice, robot tasks may involve more complex dependencies. Some parts of a task may be executed in any order (e.g., adding the ingredients for cookies), allowing multiple valid ways of performing the task; other parts must follow a specific order (e.g., adding the ingredients before mixing); and still other parts can be achieved through entirely different paths of execution (e.g., a recipe could use whole wheat, white, or almond flour). Such tasks are difficult for a human to teach by demonstration: to capture the different ways in which a task may be executed, a learning system may need multiple demonstrations of the same task. In contrast, conjunctions allow these complex dependencies to be conveyed efficiently in a single verbal command (see the representation sketch after the publication list below).
  • This work presents a novel approach to robot task learning from language-based instructions, focused on increasing the complexity of the task representations that can be taught through verbal instruction. We developed a framework for directly mapping a complex verbal instruction to an executable task representation from a single training experience. The method handles two types of complexity: 1) instructions that use conjunctions to convey complex execution constraints (alternative paths of execution, sequential or non-ordering constraints, and hierarchical structure), and 2) instructions that use prepositions and multiple adjectives to specify action/object parameters relevant to the task. Specific algorithms were developed for handling conjunctions, adjectives, and prepositions, and for translating the parsed instructions into parameterized, executable task representations (see the parsing sketch after the publication list below).
  • Monica Nicolescu, Natalie Arnold, Janelle Blankenburg, David Feil-Seifer, Santosh Banisetty, Mircea Nicolescu, Andrew Palmer, Thor Monteverde, "Learning of Complex-Structured Tasks from Verbal Instruction", Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pages 1-8, Toronto, Canada, October 2019.
  • Janelle Blankenburg, Santosh Balajee Banisetty, Seyed Pourya Hoseini Alinodehi, Luke Fraser, David Feil-Seifer, Monica Nicolescu, Mircea Nicolescu, "A Distributed Control Architecture for Collaborative Multi-Robot Task Allocation", Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pages 1-8, Birmingham, UK, November 2017.
  • Luke Fraser, Banafsheh Rekabdar, Monica Nicolescu, Mircea Nicolescu, David Feil-Seifer, George Bebis, "A Compact Task Representation for Hierarchical Robot Control", Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pages 1-8, Cancun, Mexico, November 2016.
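To make the constraint types concrete, here is a minimal sketch of a hierarchical task representation with sequential (SEQ), non-ordering (AND), and alternative-path (OR) nodes. The node kinds, class names, and the cookie example are illustrative assumptions for exposition, not the actual data structures used in the publications above.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Primitive:
    action: str   # e.g., "add"
    obj: str      # e.g., "white flour"

@dataclass
class Node:
    # "SEQ": children execute in order; "AND": children may execute
    # in any order; "OR": exactly one child path is executed.
    kind: str
    children: List[Union["Node", Primitive]] = field(default_factory=list)

# Hypothetical encoding of a cookie-making task: ingredient additions
# are unordered (AND), the flour choice is an alternative path (OR),
# and mixing must come after all additions (SEQ).
cookies = Node("SEQ", [
    Node("AND", [
        Primitive("add", "sugar"),
        Primitive("add", "butter"),
        Node("OR", [
            Primitive("add", "whole wheat flour"),
            Primitive("add", "white flour"),
            Primitive("add", "almond flour"),
        ]),
    ]),
    Primitive("mix", "batter"),
])
```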
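Building on the sketch above, the following toy parser illustrates how conjunctions in an instruction could map to those constraint types: "then" introduces a sequential constraint, "and" a non-ordering constraint, and "or" an alternative path. This is a simplified sketch assuming flat, pre-segmented clauses; the actual system parses full natural language and also extracts prepositions and adjectives as action/object parameters.

```python
import re

def parse_clause(clause: str) -> Primitive:
    # Hypothetical simplification: first word is the action verb,
    # the remainder is the object phrase.
    verb, _, rest = clause.partition(" ")
    return Primitive(verb, rest)

def parse_instruction(text: str) -> Node:
    # "then" imposes sequential ordering between segments.
    steps = [s.strip() for s in re.split(r"\bthen\b", text)]
    seq_children: list = []
    for step in steps:
        # "or" within a segment yields alternative paths.
        alternatives = [a.strip() for a in re.split(r"\bor\b", step)]
        if len(alternatives) > 1:
            seq_children.append(
                Node("OR", [parse_clause(a) for a in alternatives]))
            continue
        # "and" within a segment yields non-ordering constraints.
        parts = [p.strip() for p in re.split(r"\band\b", step)]
        if len(parts) > 1:
            seq_children.append(
                Node("AND", [parse_clause(p) for p in parts]))
        else:
            seq_children.append(parse_clause(step))
    return Node("SEQ", seq_children)

# Yields SEQ[ AND[add sugar, add butter], mix batter ].
task = parse_instruction("add the sugar and add the butter then mix the batter")
```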

Teaching a task with complex execution constraints [making a sandwich]

Teaching a task with complex execution constraints [making tea]

Teaching a high-level task based on prior sub-tasks

Teaching a task with constraints using prepositions and adjectives

  • Designing Collaborator Robots for Highly-Dynamic Multi-Human, Multi-Robot Teams, Office of Naval Research, PI (Co-PIs: Mircea Nicolescu, David Feil-Seifer), Amount: $656,511, April 1, 2016 – March 31, 2019.