Distributed Architecture for Human-Robot Teams

  • In this project we address the problem of designing collaborator robots that can operate effectively as members of multi-human, multi-robot teams. The solutions designed for this purpose are applicable to multiple domains: military (groups of human soldiers and robots), manufacturing (mixed human-robot assembly environments), or service robotics (humans and robots co-existing in physical worlds). Such heterogeneous, multi-agent teams pose different challenges from single-human, single-robot collaboration. In particular, successful interaction and collaboration among agents working in close proximity requires several key capabilities: i) Awareness: the agents need to be continuously aware of their surroundings (e.g., what the other agents' intentions are and what types of interactions they are currently engaged in: who is working with whom, who is speaking to whom, etc.); ii) Flexibility in control: the agents should be prepared to engage in multiple interactions while switching back and forth between different tasks and roles (e.g., subordinate, peer, supervisor); iii) Sociability: the agents should be able to communicate using verbal cues, either to receive direct commands or instructions from human users, or to express their own goals and intentions to their human collaborators; and iv) Adaptability: the agents should remember and learn from their past experiences, in a lifelong learning process.
  • In our approach, we take the view that the main objective of a robot should not simply be to achieve its own goals, but rather to be an effective member of its team. Toward this end, this project focuses on fundamental research into how a robot's perception, control, social behavior, and learning systems need to be designed and interconnected for multi-human, multi-robot collaborative domains. Consequently, we will develop and integrate into a unified system: i) a perceptual system that provides the robot with the awareness needed to engage in and sustain dynamic, long-term interactions; ii) architectural control structures that enable fast task/role switching and task allocation; iii) natural language communication skills for sociable interactions; and iv) learning tools for adaptable robot teammates.
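As a rough illustration of the hierarchical control structures mentioned above (point ii), the sketch below shows one way activation can spread through a task hierarchy so that the most urgent executable behavior is selected next. This is only a minimal sketch: the class names, the decay scheme, and the selection rule are invented for this example and are not taken from the project's actual architecture.

```python
# Minimal sketch of activation spreading over a hierarchical task network.
# All names and parameters here are illustrative assumptions.

class TaskNode:
    def __init__(self, name, children=None, executable=False):
        self.name = name
        self.children = children or []   # ordered subtasks
        self.executable = executable     # leaf behavior the robot can run
        self.done = False
        self.activation = 0.0

    def spread(self, activation, decay=0.8):
        """Propagate activation top-down; earlier unfinished subtasks get more."""
        self.activation = activation
        boost = activation
        for child in self.children:
            if not child.done:
                child.spread(boost, decay)
                boost *= decay  # later siblings receive less activation

    def most_active_leaf(self):
        """Return the executable, unfinished leaf with the highest activation."""
        leaves, stack = [], [self]
        while stack:
            node = stack.pop()
            if node.executable and not node.done:
                leaves.append(node)
            stack.extend(node.children)
        return max(leaves, key=lambda n: n.activation) if leaves else None

# Toy hierarchy: "make tea" decomposes into ordered leaf behaviors.
boil = TaskNode("boil-water", executable=True)
cup = TaskNode("get-cup", executable=True)
pour = TaskNode("pour", executable=True)
tea = TaskNode("make-tea", children=[boil, cup, pour])

tea.spread(1.0)
print(tea.most_active_leaf().name)  # boil-water has the highest activation
boil.done = True
tea.spread(1.0)
print(tea.most_active_leaf().name)  # activation now favors get-cup
```

Because completed subtasks are skipped during spreading, re-running the spread after each behavior finishes naturally moves the team through the task in order, and the same mechanism extends to multiple robots bidding on the most active unclaimed leaf.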
  • Janelle Blankenburg, Mariya Zagainova, Stephen M. Simmons, Gabrielle Talavera, Monica Nicolescu, David Feil-Seifer, "Human-Robot Collaboration and Dialogue for Fault Recovery on Hierarchical Tasks", Proceedings of the International Conference on Social Robotics (ICSR), Colorado, October 2020.
  • Muhammed Tawfiq Chowdhury, Shuvo Kumar Paul, Monica Nicolescu, Mircea Nicolescu, David Feil-Seifer, Sergiu Dascalu, "Computation of Suitable Grasp Pose for Usage of Objects Based on Predefined Training and Real Time Pose Estimation", Proceedings of the International Conference on Autonomic and Autonomous Systems – Special Track on Vision and Learning for Robotic Applications, pages 91-96, Lisbon, Portugal, September-October 2020.
  • Shuvo Kumar Paul, Muhammed Tawfiq Chowdhury, Mircea Nicolescu, Monica Nicolescu, David Feil-Seifer, "Object Detection and Pose Estimation from RGB and Depth Data for Real-time, Adaptive Robotic Grasping", Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, pages 1-10, Las Vegas, Nevada, July 2020.
  • Bashira Akter Anima, Mariya Zagainova, S. Pourya Hoseini Alinodehi, Muhammed Tawfiq Chowdhury, Janelle Blankenburg, Monica Nicolescu, David Feil-Seifer, Mircea Nicolescu, "Collaborative Human-Robot Hierarchical Task Execution with an Activation Spreading Architecture", Proceedings of the International Conference on Social Robotics (best paper award finalist), pages 301-310, Madrid, Spain, November 2019.
  • Seyed Pourya Hoseini Alinodehi, Janelle Blankenburg, Mircea Nicolescu, Monica Nicolescu, David Feil-Seifer, "An Active Robotic Vision System with a Pair of Moving and Stationary Cameras", Proceedings of the International Symposium on Visual Computing, pages 184-195, Lake Tahoe, Nevada, October 2019.
  • Seyed Pourya Hoseini Alinodehi, Mircea Nicolescu, Monica Nicolescu, "Active Object Detection Through Dynamic Incorporation of Dempster-Shafer Fusion for Robotic Applications", Proceedings of the International Conference on Vision, Image and Signal Processing, pages 1-7, Las Vegas, Nevada, August 2018.
  • Seyed Pourya Hoseini Alinodehi, Mircea Nicolescu, Monica Nicolescu, "Handling Ambiguous Object Recognition Situations in a Robotic Environment via Dynamic Information Fusion", Proceedings of the IEEE International Conference on Cognitive and Computational Aspects of Situation Management, pages 56-62, Boston, Massachusetts, June 2018.
  • Janelle Blankenburg, Santosh Balajee Banisetty, Seyed Pourya Hoseini Alinodehi, Luke Fraser, David Feil-Seifer, Monica Nicolescu, Mircea Nicolescu, "A Distributed Control Architecture for Collaborative Multi-Robot Task Allocation", Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pages 1-8, Birmingham, UK, November 2017.
  • Luke Fraser, Banafsheh Rekabdar, Monica Nicolescu, Mircea Nicolescu, David Feil-Seifer, George Bebis, "A Compact Task Representation for Hierarchical Robot Control", Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pages 1-8, Cancun, Mexico, November 2016.
  • Luke Fraser, Banafsheh Rekabdar, Monica Nicolescu, Mircea Nicolescu, David Feil-Seifer, "A Hierarchical Control Architecture for Robust and Adaptive Collaborative Robot Task Execution", Proceedings of Planning for Human-Robot Interaction: Shared Autonomy and Collaborative Robotics – RSS Workshop, pages 1-3, Ann Arbor, Michigan, June 2016.

Single robot performing a task for making tea.

Two robots collaborating in making a sandwich and tea.

Collaborative task execution by a heterogeneous robot team.

Collaborative human-robot task execution using detected human intent.

Automatic object detection and grasp points.

Handling occlusions using Dempster-Shafer fusion.

Handling occlusions using Bayesian fusion.

Improved object recognition with multiple camera views.

Collaborative human-robot execution with fault handling through dialog.

Human-robot task execution, coordinated through dialog.

Heterogeneous human multi-robot team.

Real-time object pose detection with specialized grasping.

Planner automatically selects the secondary camera viewpoint.

Active vision system integrated with task execution.

Real-time grasping in human-robot collaboration.
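The occlusion-handling results above rely on Dempster-Shafer fusion of evidence from multiple cameras. The sketch below shows only the generic Dempster rule of combination, not the project's specific implementation; the mass values and object labels are invented for illustration.

```python
# Generic Dempster-Shafer combination of two mass functions whose focal
# elements are frozensets of hypotheses. Illustrative example only.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        w = wa * wb
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w
        else:
            conflict += w  # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    # Renormalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two camera views give partially conflicting evidence about an occluded object.
cam1 = {frozenset({"cup"}): 0.6, frozenset({"cup", "bowl"}): 0.4}
cam2 = {frozenset({"cup"}): 0.5, frozenset({"bowl"}): 0.3,
        frozenset({"cup", "bowl"}): 0.2}
fused = dempster_combine(cam1, cam2)
# Fusion concentrates belief on "cup" while discounting the conflicting mass.
```

Representing focal elements as sets (rather than single labels) is what lets each camera express ignorance, e.g. "cup or bowl", which is the property that makes Dempster-Shafer fusion attractive under occlusion.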

  • Designing Collaborator Robots for Highly-Dynamic Multi-Human, Multi-Robot Teams, Office of Naval Research, PI (Co-PIs: Mircea Nicolescu, David Feil-Seifer), Amount: $656,511, April 1, 2016 – March 31, 2019.