
Intent Understanding

Motivation: Understanding intent is an important aspect of communication among people and an essential component of the human cognitive system. This capability is particularly relevant in situations that involve collaboration among multiple agents or the detection of potential threats. In surveillance and military applications, it is critical to infer the intent of relevant agents in the environment from their current actions, before any attack strategies are finalized. In collaboration between humans and robots, having robots detect the intent of human actions enables implicit communication, which is typical of collaboration between humans, and enhances the quality of people's interaction with robots. The problem is also relevant for detecting threatening situations in naval domains.

Objectives: The goal of this project is to design an integrated system for automatic behavior modeling, which provides effective detection of intentions, before any actions have been finalized. The proposed approach relies on a novel formulation of Hidden Markov Models (HMMs), which allows a robot to understand the intent of other agents by virtually assuming their place and detecting their potential intentions based on the current situation. This allows the system to recognize the intent of observed actions before they have been completed, thus enabling helpful responses or preemptive actions for defense. The system's capability to observe and analyze the current scene employs novel vision-based techniques for target detection and tracking, using a non-parametric recursive modeling approach. This research is validated in human-robot interaction scenarios and in a naval simulation domain.
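
To make the approach concrete, the sketch below shows the core of HMM-based intent recognition on a partial observation sequence: one small HMM per candidate intent, each scored with the (rescaled) forward algorithm, and the best-scoring intent reported before the activity completes. All parameters, state counts, and the observation alphabet are invented for illustration; they are not the project's actual models.

```python
import numpy as np

class IntentHMM:
    """Discrete-observation HMM scored with the rescaled forward algorithm."""
    def __init__(self, pi, A, B):
        self.pi = np.asarray(pi, dtype=float)  # initial state distribution
        self.A = np.asarray(A, dtype=float)    # A[i, j] = P(next state j | state i)
        self.B = np.asarray(B, dtype=float)    # B[i, o] = P(observation o | state i)

    def log_likelihood(self, obs):
        """log P(obs | model); valid for a partial (prefix) sequence."""
        alpha = self.pi * self.B[:, obs[0]]
        log_l = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ self.A) * self.B[:, o]
            c = alpha.sum()
            log_l += np.log(c)
            alpha /= c                         # rescale to avoid underflow
        return log_l

def detect_intent(models, partial_obs):
    """Report the intent whose model best explains the actions seen so far."""
    return max(models, key=lambda name: models[name].log_likelihood(partial_obs))

# Toy observation alphabet: 0 = loiter, 1 = turn, 2 = accelerate toward target.
models = {
    "transit": IntentHMM([0.9, 0.1], [[0.9, 0.1], [0.2, 0.8]],
                         [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]),
    "attack":  IntentHMM([0.5, 0.5], [[0.7, 0.3], [0.1, 0.9]],
                         [[0.2, 0.3, 0.5], [0.1, 0.2, 0.7]]),
}
print(detect_intent(models, [2, 2, 1, 2]))  # -> attack
```

Because the forward variable is updated incrementally, the same scoring runs at every time step, which is what allows an intent to be flagged before the observed action sequence is finished.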

Videos:

  • Robot interacting with a human user based on detected intentions in a "homework" scenario [MPG] (18.9Mb)

  • Robot interacting with a human user based on detected intentions in an "eating" scenario [MPG] (40.1Mb)

  • Robot interacting with a human user based on detected intentions (a history of past events is used as context to disambiguate high-level intentions) [MOV] (48.9Mb)

  • Detecting a human's intentions using a history of past events as context for disambiguation [MOV] (35.1Mb)

  • Detecting a human's intentions in a "homework" scenario [MOV] (9.8Mb)

  • Detecting a human's intentions in an "eating" scenario [MOV] (10.3Mb)

  • Robot responds to detecting a theft [MOV] (4.4Mb)

  • Robot responds to detecting a person leaving unattended baggage [MOV] (2.4Mb)

  • Robot interacting with a human user based on detected intentions [MOV] (66.3Mb)

  • Detecting meeting, passing, and following of two people; the system detects the transitions between the activities (sequence 1) [MOV] (4.6Mb)

  • Detecting meeting, passing, and following of two people; the system detects the transitions between the activities (sequence 2) [MOV] (4.9Mb)

  • Detecting a blockade by a group of small boats [MOV] (9.9Mb)

  • Detecting a hammer and anvil attack by a group of small boats [MOV] (15.5Mb)

  • Detecting deceptive behavior (hiding behind other boats) [MOV] (16.3Mb)

  • Detecting deceptive behavior (hiding behind other boats) [MOV] (7.6Mb)

  • Detecting threatening intentions in a complex scenario in San Diego Harbor (boats going in formation, potential attacks) [MOV] (102.7Mb)

  • Detecting threatening intentions in a complex scenario in open water (boats forming groups, splitting, hiding and attacks) [MOV] (56.6Mb)

  • Detecting threatening intentions in a complex scenario in the Straits of Hormuz (small boats harassing and attacking destroyer in channel) [MOV] (100.4Mb)

Support:
This work is supported by the Office of Naval Research awards N00014-06-1-0611, N00014-12-1-0860, N00014-09-1-1121, and by the National Science Foundation EPSCoR Ring True III award EPS0447416.


Learning by Demonstration

Motivation: While recent advances in robotics research bring robots closer to entering our daily lives, real-world use of autonomous robots remains very limited. One of the main reasons is that designing robot controllers is still typically done by people who specialize in programming robots: the lack of accessible methods for robot programming restricts the use of robots to people with programming skills. The motivation of this project is to provide algorithms that enable non-expert users to design robot controllers for their specific needs, thus facilitating the integration of robots into people's daily lives.

Objectives: The goal of this project is to develop algorithms for the automated generation of robot controllers from demonstration and interaction with human users. The main research questions pertain to the investigation, design, and implementation of: (1) an autonomous robot control architecture that supports task knowledge acquisition from user-provided demonstrations, (2) algorithms for robot learning by demonstration that facilitate the training of robot assistants by non-specialist users, and (3) quantitative evaluation metrics that provide objective means for assessing the performance of human-robot interaction in the context of robot teaching by demonstration. The proposed robot control architecture will create the infrastructure for complex task learning and will provide a new representation for multiple action-selection mechanisms. The learning by demonstration algorithms will use a novel approach for interpreting a user's demonstration, based on particle filtering that identifies superpositions of multiple concurrent activities. In addition, generalization algorithms will use inductive learning methods to capture and represent variations in task execution strategies. User feedback will allow for refinement of learned tasks, through verbal instructions or teleoperation interventions. The quantitative evaluation metrics will provide objective measures for the proposed interactive learning approach and could also serve as more general tools for the broader field of HRI.
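
As one illustration of the particle-filtering idea, the sketch below tracks a distribution over which primitive behaviors are concurrently active during a demonstration: each particle is a binary activity hypothesis, weighted by how well the blended output of its active behaviors predicts the demonstrated command. The primitive behaviors, noise model, and blending rule are placeholders, not the project's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primitives: each maps a sensor snapshot to a commanded velocity.
BEHAVIORS = {
    "go_to_goal": lambda s: s["goal_dir"],
    "avoid":      lambda s: -s["obstacle_dir"],
}
NAMES = list(BEHAVIORS)

def predict_command(active, sensors):
    """Blend (average) the outputs of the behaviors hypothesized as active."""
    outs = [BEHAVIORS[n](sensors) for n, on in zip(NAMES, active) if on]
    return np.mean(outs, axis=0) if outs else np.zeros(2)

def step(particles, weights, sensors, demo_cmd, flip_p=0.05, sigma=0.3):
    """One bootstrap-filter update over binary activity hypotheses."""
    # Transition: each behavior may switch on/off with small probability.
    flips = rng.random(particles.shape) < flip_p
    particles = np.where(flips, ~particles, particles)
    # Measurement: Gaussian likelihood of the demonstrated command.
    for i, p in enumerate(particles):
        err = np.linalg.norm(demo_cmd - predict_command(p, sensors))
        weights[i] *= np.exp(-0.5 * (err / sigma) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

n = 200
particles = rng.random((n, len(NAMES))) < 0.5       # random initial hypotheses
weights = np.full(n, 1.0 / n)
sensors = {"goal_dir": np.array([1.0, 0.0]), "obstacle_dir": np.array([0.0, 1.0])}
demo_cmd = np.array([0.5, -0.5])                    # consistent with both active
particles, weights = step(particles, weights, sensors, demo_cmd)
print("P(behavior active):", (particles * weights[:, None]).sum(axis=0))
```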

Videos:

Generalization experiments based on ordering of task steps: robot mixing cookie ingredients (Pink/Red bowl (right) - Flour, Yellow bowl (middle) - Sugar, Green bowl (left) - Chocolate Chips, Blue bowl (front) - Mixing bowl). For details of the approach and experiments see the [scripts description] and [AIMSA_12]; a minimal sketch of this kind of ordering generalization appears after the video list below.

  • Scenario 1 - training [MOV] (4Mb)

  • Scenario 1 - testing (robot performing learned task) [MOV] (18.5Mb)

  • Scenario 1 - testing (robot performing learned task with helper); the robot takes into account the steps performed by the helper and does not do them again [MOV] (21.3Mb)

  • Scenario 2 - training example 1 [MOV] (4.9Mb)

  • Scenario 2 - testing (robot performing learned task) [MOV] (18.8Mb)

  • Scenario 2 - training example 2 [MOV] (6.8Mb)

  • Scenario 2 - testing (robot performing learned task) [MOV] (18.6Mb)

  • Scenario 3 - training [MOV] (5Mb)

  • Scenario 3 - testing (robot performing learned task) [MOV] (18.9Mb)

  • Scenario 4 - training [MOV] (4.7Mb)

  • Scenario 4 - testing (robot performing learned task) [MOV] (18.6Mb)

  • Scenario 6 - training [MOV] (4.7Mb)

  • Scenario 6 - testing (robot performing learned task) [MOV] (19Mb)

  • Scenario 7 - training [MOV] (6.9Mb)

  • Scenario 7 - testing (robot performing learned task) [MOV] (21.6Mb)
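
As referenced above, here is a minimal sketch of ordering generalization under one simplifying assumption (it is not the published algorithm): a precedence constraint between two steps is kept only if it holds in every demonstration, and unconstrained steps may be executed in either order.

```python
from itertools import combinations

def induce_partial_order(demos):
    """Induce a partial order over task steps from several demonstrations.

    Assumes every demonstration contains the same set of steps.
    """
    steps = set(demos[0])
    constraints = set()
    for a, b in combinations(steps, 2):
        if all(d.index(a) < d.index(b) for d in demos):
            constraints.add((a, b))   # a must precede b
        elif all(d.index(b) < d.index(a) for d in demos):
            constraints.add((b, a))   # b must precede a
    return constraints                # unlisted pairs may be reordered freely

# Two demonstrations of the cookie task with the dry ingredients swapped:
demos = [
    ["flour", "sugar", "chips", "mix"],
    ["sugar", "flour", "chips", "mix"],
]
print(induce_partial_order(demos))
# flour/sugar end up unordered, so the robot may add them in either order,
# while "mix" is constrained to come last.
```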

Simultaneous learning of behavior fusion and sequencing. Additional training is provided after the initial demonstration, showing on-line refinement of learned skills. For details of the approach see [IJSR_08], [ICDL_07], [WDIs_06] and [RO-MAN_06]; a minimal sketch of the fusion-and-sequencing idea appears after the video list below.

  • Learning a left following behavior [MOV] (46.9Mb)

  • Learning a circling behavior [MOV] (60.9Mb)

  • Learning a sequence of fused behaviors [MOV] (30.6Mb)

  • Learning a composition (regular expression) of fused behaviors [MOV] (48.7Mb)
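
As referenced above, the fusion-and-sequencing idea can be sketched as follows: a fused behavior is a weighted superposition of primitive outputs, and a learned task is a sequence of such weight vectors, each paired with a termination predicate. The weights, primitives, and I/O hooks below are illustrative; see [IJSR_08] for the actual learning method.

```python
import numpy as np

def fused_command(behaviors, weights, sensors):
    """Weighted superposition of primitive behavior outputs."""
    cmds = np.array([b(sensors) for b in behaviors])
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * cmds).sum(axis=0) / w.sum()

def run_sequence(behaviors, sequence, get_sensors, send_cmd):
    """Execute fused behaviors in order; advance when each goal predicate holds."""
    for weights, done in sequence:
        sensors = get_sensors()
        while not done(sensors):
            send_cmd(fused_command(behaviors, weights, sensors))
            sensors = get_sensors()

# Toy check of the fusion step alone (sensors unused by these stand-ins):
behaviors = [lambda s: np.array([1.0, 0.0]),   # e.g. follow wall on the left
             lambda s: np.array([0.0, 1.0])]   # e.g. keep standoff distance
print(fused_command(behaviors, [0.7, 0.3], sensors=None))  # -> [0.7 0.3]
```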

Support:
This work is supported by the National Science Foundation under Award IIS-0546876 and a UNR Junior Faculty Award.


Affordable Human Behavior Modeling

Motivation: Live-fire military training exercises are expensive in terms of personnel, ordnance, fuel, and environmental damage. Virtual training technologies and the Navy and US Marine Corps' family of tactical decision-making simulations (TDSs) provide tools for trainees to plan and execute operational plans in a force-on-force environment and, through after-action review, to gain feedback about the effectiveness of their planning and decision making. However, TDSs require the participation and coordination of instructors (experts) and many human players, because the capability to automatically control a realistic, competent opposing force is largely non-existent. Furthermore, tactical decision-making simulations are not designed to address more strategic decision making. Thus, the inability of current technology to provide a competitive, realistic opposing force compromises the goal of inexpensive, anytime, anywhere training, especially for strategic decision making.

Objective: The goal of this project is to develop a computational approach to building effective training systems for virtual simulation environments. Our proposed solution is to develop intelligent, autonomous controllers that drive the behavior of each boat in the virtual training environment. To increase the system's efficiency, we provide a mechanism for creating such controllers from the demonstration of a navigation expert, using a simple programming interface. In addition, our approach addresses two significant and related challenges: the realism of the behavior exhibited by the automated boats and their real-time response to changes in the environment.

Support:
This work is supported by the Office of Naval Research under grant number N00014-05-1-0709.


Multi-Robot Control

Motivation: Service and security applications can greatly benefit from the use of multi-robot teams, in that tasks can be accomplished faster and more reliably. The major challenges we address in our research are the methods of communication and control that enable multiple robots to cooperate in achieving a task.

Objective: The goal of this project is to develop an architecture with the following main features: 1) it allows for heterogeneous teams, such that robots with different capabilities can cooperate to achieve complex tasks, 2) it enables self-organization in task allocation, performed through a market-based publish/subscribe mechanism, 3) it provides sensor-sharing capabilities across robots, 4) it has low computational overhead, 5) it is platform independent, and 6) it is robust to changes in the environment and the team.
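
As a concrete illustration of the market-based mechanism in feature 2), the sketch below runs a single-task auction: a task is published with its required capabilities, each subscribed robot bids its estimated cost, and the lowest bidder wins. The cost function and class layout are assumptions for illustration, not the architecture's actual middleware.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required: set       # capabilities a bidder must have
    location: tuple

@dataclass
class Robot:
    name: str
    capabilities: set
    position: tuple

    def bid(self, task):
        """Return an estimated cost, or None if the task type is unsupported."""
        if not task.required.issubset(self.capabilities):
            return None
        dx = self.position[0] - task.location[0]
        dy = self.position[1] - task.location[1]
        return (dx * dx + dy * dy) ** 0.5   # cost = travel distance

def auction(task, robots):
    """Collect bids from all subscribers and award the task to the cheapest."""
    bids = [(r.bid(task), r) for r in robots]
    bids = [(cost, r) for cost, r in bids if cost is not None]
    return min(bids, key=lambda b: b[0])[1] if bids else None

robots = [Robot("r1", {"red"}, (0, 0)), Robot("r2", {"red", "green"}, (5, 5))]
print(auction(Task("t1", {"green"}, (1, 1)), robots).name)  # -> r2
```

Because allocation emerges from local bids rather than a central planner, a failed robot simply stops bidding and its pending tasks can be re-auctioned, which supports the self-organization and robustness properties listed above.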

Videos:

  • Distributed auction-based task allocation with a heterogeneous robot group; "red" or "green" robots can attend to "red", "green", or "red-and-green" tasks [MOV] (22.9Mb).

  • Building a map of the environment [MOV] (13.8Mb).

  • Robot coordination in a robot failure scenario; the failing robot uses another robot's sensor data to navigate around a corner [MOV] (17.8Mb).
Support: This work is supported by the Office of Naval Research under grant number N00014-05-1-0525.


Human-Robot Interaction

Motivation: Advances in robotics research bring robots closer to real world applications. Although robots have become increasingly capable, productive interaction is still restricted to specialists in the field. A major challenge in designing robots for real-world applications is to enable natural and accessible interaction between robots and nontechnical users, while ensuring long-term, robust performance in complex environments without the direct control of a human operator.

Objective: The goal of this project is to develop a control architecture that provides robots with social awareness, allowing them to monitor their surroundings for other social agents (robots or people), detect their need for interaction, and respond appropriately. Our architecture provides means for long-term autonomy, enabling robots to manage a large repertoire of tasks over extended periods. Additionally, our system is designed for realistic assistive applications, in which multiple people may simultaneously compete for the robot's assistance. The contribution of this work is a framework that addresses two key issues in human-robot interaction: awareness of the environment and other agents, and long-term interaction with multiple users.
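
One piece of this design that lends itself to a small sketch is servicing multiple simultaneous requests by priority. The queue below is a minimal, assumed policy (priority with FIFO tie-breaking), not the architecture's actual action-selection mechanism.

```python
import heapq
import itertools

class AssistanceQueue:
    """Orders pending requests by priority; equal priorities are served FIFO."""
    def __init__(self):
        self._heap = []
        self._count = itertools.count()   # tie-breaker preserving arrival order

    def request(self, user, task, priority):
        # Lower number = more urgent.
        heapq.heappush(self._heap, (priority, next(self._count), user, task))

    def next_task(self):
        if not self._heap:
            return None
        _, _, user, task = heapq.heappop(self._heap)
        return user, task

q = AssistanceQueue()
q.request("user_a", "deliver item", priority=2)
q.request("user_b", "guide to exit", priority=1)   # more urgent, arrives later
print(q.next_task())  # -> ('user_b', 'guide to exit')
```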

Videos:

  • Long-term interaction with a service robot; tasks are serviced based on priority, using different postures [MOV] (18.9Mb).

Support: This work is supported by the National Science Foundation under Award IIS-0546876 and by the Office of Naval Research under grant number N00014-05-1-0525.


Pattern Recognition for Microcantilever Arrays

Motivation: Recent years have seen notable success in the development of a new class of chemical and biological sensors: microfabricated cantilever sensor arrays actuated at their resonance frequencies and functionalized with polymer coatings. The major advantages of these miniature sensors are their small size, fast response, remarkably high sensitivity, and the many possibilities for achieving high selectivity through customized combinations of polymer coatings. The devices are inexpensive, portable, and able to operate in various environments, such as vacuum, air, and liquids. Their applications are almost countless, spanning scientific research in physics, chemistry, biochemistry, biology, and genetics, the food and beverage industry, the perfume industry, pharmacology, medicine, environmental monitoring, and, most recently, national security. However, despite the remarkable achievements in fabricating microcantilever sensor arrays, creating an accurate and reliable pattern recognition algorithm as part of the sensory system remains an essential and not yet completely solved problem. Most pattern analysis algorithms used with cantilever sensor arrays to date are highly customized, ad hoc algorithms: they often lack generality and cannot easily be carried from one set of experimental data to another.

Objective: The main goal of this project is to develop pattern recognition algorithms that can serve as a reliable detection system for the sensory data obtained from experiments with a microfabricated cantilever sensor array, after a feature extraction procedure. Five pattern recognition algorithms were developed for this research; these, together with an open-source implementation of a sixth algorithm (multiclass SVMs), were tested on benchmark data sets and on the collected sensory data. The results show that kernel-based algorithms have the greatest potential for use with microfabricated cantilever sensor arrays in detection systems: four of the six algorithms produced highly accurate classification results on the cantilever sensor array data.
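
As an illustration of the kernel-based direction found most promising, the sketch below trains an RBF-kernel multiclass SVM on synthetic stand-ins for cantilever-array feature vectors (one feature per coated cantilever is an assumption; real features would come from the measured resonance responses).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 8        # assumption: 8 differently coated cantilevers
analytes = ["water", "ethanol", "acetone"]

# Synthetic data: each analyte produces a characteristic response pattern + noise.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_features))
               for i in range(len(analytes))])
y = np.repeat(analytes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # multiclass handled internally
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```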

Support: This work is supported by the National Science Foundation EPSCoR Ring True III award EPS0447416.


Autonomous Outdoor Navigation

Motivation: Robotic tasks in real-world applications typically involve temporal sequences and representations of data not available directly from sensors, which suggests that representations and deliberative components are essential in a robot's control architecture. The majority of real-world robot tasks require not only the reliable real-time characteristics of reactive behavior, but also guided transitions from state to state and the ability to make decisions based on abstract data acquired a priori, from sources such as maps. This poses significant challenges for the development of controllers for robots that interact directly with the physical environment. A principal challenge is dealing with the uncertainty introduced by noisy sensors. Pure deliberation falters because even seemingly simple environments cannot be modeled accurately or quickly enough to guide a robot reliably in real time. Reactive architectures can deal with such uncertainties, but are limited to relatively simple tasks.

Objective: The goal of this project is to develop an architecture that addresses the above challenges and demonstrates a flexible method for building robot controllers, allowing for both reactive and deliberative components. Our solution is to consider a controller as a composition of agents with shared representations and inter-agent communication mechanisms, similar to behaviors in the behavior-based design paradigm. Agents can be composed of other agents in a nested fashion and use the implementation (reactive or deliberative) that is most appropriate for their task. Thus, the contribution of our work is a control architecture with the following main features: 1) it enables modular, flexible, incremental design of controllers for complex, sequential tasks, which can easily be extended or improved, and 2) it integrates reaction and deliberation within the same representational framework, using deliberation only as necessary. This approach was used to develop a controller for a custom robot and was validated in an outdoor navigation and retrieval task.
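
The nested-agent composition can be sketched as follows; the agent names and the first-answer arbitration rule are illustrative assumptions, not the architecture's actual interfaces.

```python
class Agent:
    """Uniform interface shared by reactive, deliberative, and composite agents."""
    def act(self, world):
        raise NotImplementedError

class AvoidObstacles(Agent):            # reactive: maps sensing directly to action
    def act(self, world):
        return "turn_away" if world.get("obstacle_near") else None

class PlanToWaypoint(Agent):            # deliberative: consults an a priori plan/map
    def __init__(self, planner):
        self.planner = planner
    def act(self, world):
        return self.planner(world)      # e.g. next leg of a route from a map

class Navigate(Agent):                  # composite: agents nested inside an agent
    def __init__(self, children):
        self.children = children        # ordered by priority
    def act(self, world):
        for child in self.children:
            action = child.act(world)
            if action is not None:      # first agent with an answer wins
                return action
        return "stop"

nav = Navigate([AvoidObstacles(), PlanToWaypoint(lambda w: "head_north")])
print(nav.act({"obstacle_near": True}))   # reactive layer preempts the plan
print(nav.act({}))                        # otherwise the deliberative plan runs
```

Deliberation is consulted only when no reactive child responds, matching the principle of using deliberation only as necessary.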

Support: This work is supported by the National Science Foundation EPSCoR Ring True III Award EPS0447416 and by NASA under Awards NSHE-07-34, NSHE-07-35, and NSHE-07-56.


Bioinformatics

Motivation: Traditional methods obtain a microorganism's DNA by culturing it individually. Recent advances in genomics have led to methods that procure DNA from more than one organism sampled directly from its natural habitat. Assembling these genomes is a crucial step irrespective of how the DNA is obtained.

Objective: The goal of this project is to develop a fuzzy method for multiple genome sequence assembly, applicable both to cultured genomes (a single organism) and to environmental genomes (multiple organisms). Our proposed solution is a sequence assembler based on fuzzy logic, which tolerates inexactness and errors in fragment matching and can thereby improve the assembly. The method applies fuzzy classification, using modified fuzzy weighted averages, to assign fragments to the different organisms within an environmental genome population, with DNA-based signatures such as GC content and nucleotide frequencies as classification features.
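
A minimal sketch of the classification step, under strong simplifying assumptions (GC content as the only signature, Gaussian membership functions with assumed per-organism profiles; the project's modified fuzzy weighted averages are not reproduced here):

```python
import math

def gc_content(fragment):
    """Fraction of G and C nucleotides in a DNA fragment."""
    frag = fragment.upper()
    return (frag.count("G") + frag.count("C")) / len(frag)

def membership(x, center, width):
    """Gaussian fuzzy membership of a signature value in an organism class."""
    return math.exp(-((x - center) / width) ** 2)

# Assumed GC-content profiles (center, width) for two organisms in the sample.
organisms = {"organism_A": (0.35, 0.05), "organism_B": (0.60, 0.05)}

def classify(fragment):
    """Soft-assign a fragment to organisms; degrees of membership sum to 1."""
    gc = gc_content(fragment)
    mu = {name: membership(gc, c, w) for name, (c, w) in organisms.items()}
    total = sum(mu.values())
    return {name: m / total for name, m in mu.items()}

print(classify("ATATGCATTTAAGCAT"))   # low GC -> almost entirely organism_A
```

The soft memberships, rather than hard assignments, are what allow tolerance of inexact fragment signatures during assembly.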

Support: This work is supported by the National Science Foundation EPSCoR Ring True III award EPS0447416.