Intent Understanding
Motivation: Understanding intent is an important aspect of communication among people and
an essential component of the human cognitive system. This capability is
particularly relevant for situations that involve collaboration among multiple
agents or detection of situations that can pose a particular threat. In
surveillance and military applications, it is critical to infer the intent of
relevant agents in the environment from their current actions, before any
attack strategies are finalized. In collaboration between humans and robots,
having the robots detect the intent of human actions enables the use of
implicit communication, typical of collaboration between humans, and enhances the
quality of humans' interaction with robots. This problem is also relevant for detecting threatening situations in naval domains.
Objectives: The goal of this project is to
design an integrated system for automatic behavior modeling, which
provides effective detection of intentions, before any actions have been finalized.
The proposed approach relies
on a novel formulation of Hidden Markov Models (HMMs), which allows a robot to
understand the intent of other agents by virtually assuming their place and
detecting their potential intentions based on the current situation. This allows
the system to recognize the intent of observed actions before they have been
completed, thus enabling helpful responses or preemptive actions for defense. The system's capability
to observe and analyze the current scene employs novel vision-based techniques
for target detection and tracking, using a non-parametric recursive modeling
approach. This research is validated in human-robot interaction scenarios and in a naval simulation domain.
Videos:
- Robot interacting with a human user based on detected intentions in a "homework" scenario [MPG] (18.9Mb)
- Robot interacting with a human user based on detected intentions in an "eating" scenario [MPG] (40.1Mb)
- Robot interacting with a human user based on detected intentions (history of past events is used as context for disambiguation of high-level intentions) [MOV] (48.9Mb)
- Detecting a human's intentions using history of past events as context for disambiguation [MOV] (35.1Mb)
- Detecting a human's intentions in a "homework" scenario [MOV] (9.8Mb)
- Detecting a human's intentions in an "eating" scenario [MOV] (10.3Mb)
- Robot responds to detecting a theft [MOV] (4.4Mb)
- Robot responds to detecting a person leaving an unattended baggage [MOV] (2.4Mb)
- Robot interacting with a human user based on detected intentions [MOV] (66.3Mb)
- Detecting meeting, passing, following of two people; the system detects the transitions between the activities (sequence 1) [MOV] (4.6Mb)
- Detecting meeting, passing, following of two people; the system detects the transitions between the activities (sequence 2) [MOV] (4.9Mb)
- Detecting a blockade by a group of small boats [MOV] (9.9Mb)
- Detecting a hammer and anvil attack by a group of small boats [MOV] (15.5Mb)
- Detecting deceptive behavior (hiding behind other boats) [MOV] (16.3Mb)
- Detecting deceptive behavior (hiding behind other boats) [MOV] (7.6Mb)
- Detecting threatening intentions in a complex scenario in San Diego Harbor (boats going in formation, potential attacks) [MOV] (102.7Mb)
- Detecting threatening intentions in a complex scenario in open water (boats forming groups, splitting, hiding and attacks) [MOV] (56.6Mb)
- Detecting threatening intentions in a complex scenario in the Straits of Hormuz (small boats harassing and attacking destroyer in channel) [MOV] (100.4Mb)
Publications:
- Banafsheh Rekabdar, Monica Nicolescu, Richard Kelley, Mircea Nicolescu, "Unsupervised Learning of Spatio-Temporal Patterns Using Spike Timing Dependent Plasticity", to appear in the Proceedings of the Seventh Annual Conference on Artificial General Intelligence, Quebec City, Canada, August 2014. [PDF]
- Richard Kelley, Alireza Tavakkoli, Chris King, Amol Ambardekar, Liesl Wigand, Monica Nicolescu, Mircea Nicolescu, "Intent Recognition for Human-Robot Interaction", in Plan, Activity, and Intent Recognition, Gita Sukthankar, Christopher Geib, Hung Hai Bui, David Pynadath and Robert Goldman Editors, Elsevier, pp. 343-365, 2013. [Link]
- Richard Kelley, Alireza Tavakkoli, Chris King, Amol Ambardekar, Monica Nicolescu, Mircea Nicolescu, "Context-Based Bayesian Intent Recognition", in IEEE Transactions on Autonomous Mental Development, 4(3), 215-225, 2012. [PDF] [LINK]
- Richard Kelley, Liesl Wigand, Brian Hamilton, Katie Browne, Monica Nicolescu, Mircea Nicolescu, "Deep Networks for Predicting Human Intent with Respect to Objects", in Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, 2012. [Link]
- Richard Kelley, Amol Ambardekar, Liesl Wigand, Monica Nicolescu, Mircea Nicolescu, "Point Clouds and Range Images for Intent Recognition and Human-Robot Interaction", in Proceedings of the Advanced Reasoning with Depth Cameras Workshop (in conjunction with the Robotics: Science and Systems Conference), Los Angeles, California, June 2011. [PDF]
- Richard Kelley, Christopher King, Amol Ambardekar, Monica Nicolescu, Mircea Nicolescu and Alireza Tavakkoli, "Integrating Context into Intent Recognition Systems", in the "7th International Conference on Informatics in Control, Automation and Robotics", pages 315-320, Funchal, Madeira, Portugal, June, 2010. [PDF]
- Richard Kelley, Alireza Tavakkoli, Christopher King, Monica Nicolescu, Mircea Nicolescu, "Understanding Activities and Intentions for Human-Robot Interaction", in Advances in Human-Robot Interaction, Daisuke Chugo (editor), In-Tech, pages 288-305, February 2010. [PDF]
- Richard Kelley, Monica Nicolescu, Mircea Nicolescu, Sushil Louis, "An Evolutionary Approach to Maximum Likelihood Estimation for Generative Stochastic Models", the 40th International Symposium on Robotics, Barcelona, Spain, March 10-13, 2009. [PDF]
- Richard Kelley, Christopher King, Alireza Tavakkoli, Mircea Nicolescu, Monica
Nicolescu, George Bebis, "An Architecture for Understanding Intent Using a Novel
Hidden Markov Formulation", International Journal of Humanoid
Robotics, Special Issue on Cognitive Humanoid Robots, vol. 5, no. 2, pages
1-22, 2008. [PDF]
- Richard Kelley, Alireza Tavakkoli, Christopher King, Monica Nicolescu, Mircea Nicolescu, George
Bebis, "Understanding Human Intentions via Hidden Markov Models in Autonomous
Mobile Robots", Proceedings of the ACM/IEEE International Conference on
Human-Robot Interaction, pages 367-374, Amsterdam, Netherlands, March 2008. [PDF]
- Alireza Tavakkoli, Richard Kelley, Christopher King, Mircea Nicolescu, Monica
Nicolescu, George Bebis, "A Vision-Based Architecture for Intent Recognition",
Proceedings of the International Symposium on Visual Computing, pages 173-182,
Lake Tahoe, Nevada, November 2007. [PDF]
Support:
This work is supported
by the Office of Naval Research awards N00014-06-1-0611, N00014-12-1-0860, N00014-09-1-1121, and by the
National Science Foundation EPSCoR Ring True III award EPS0447416.
Learning by Demonstration
Motivation: While recent advances in robotics research bring
robots closer to entering our daily lives, real-world uses of autonomous
robots are very limited. One of the main reasons for this is that
designing robot controllers is still usually done by people specialized
in programming robots: the lack of accessible methods for robot
programming restricts the use of robots solely to people with programming
skills. The motivation of this project is to provide algorithms that
would enable non-expert users to design robot controllers for their
specific needs, thus facilitating the integration of robots in people’s
daily lives.
Objectives: The goal of this project is to develop algorithms for
automated generation of robot controllers from demonstration and
interaction with human users. The main research questions of this project
pertain to the investigation, design, and implementation of: (1) an
autonomous robot control architecture that provides support for task
knowledge acquisition from user provided demonstration, (2) algorithms
for robot learning by demonstration that facilitate training of robot
assistants by non-specialist users, (3) quantitative evaluation metrics
that provide objective means for assessing the performance of human-robot
interaction in the context of robot teaching by demonstration.
The proposed robot control architecture will create the infrastructure
for complex task learning and will provide
a new representation for multiple action selection mechanisms. The
learning by demonstration algorithms will use a novel approach for
interpreting a user’s demonstration, based on particle filtering that
identifies superpositions of multiple concurrent activities. In addition,
generalization algorithms will use inductive learning methods to capture
and represent variations in task execution strategies. User feedback will
allow for refinement of learned tasks, through verbal instructions or
teleoperation interventions. The quantitative evaluation metrics will
provide objective measures for the proposed interactive learning
approach and could also serve as more general tools for the broader
field of HRI.
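To make the particle-filtering idea concrete, here is a minimal sketch (with made-up primitive behaviors and a 1-D command space) of estimating the fusion weights under which concurrent activities are superimposed in a demonstration: each particle is a candidate weight vector, scored by how well its blended prediction matches the demonstrated command.

```python
import math
import random

# Hypothetical sketch: estimate how concurrent primitive behaviors are
# superimposed in a demonstration. Each particle is a candidate fusion-weight
# vector over the primitives; particles whose blended prediction matches the
# demonstrated command survive resampling. All data here is synthetic.

def estimate_fusion_weights(behavior_outputs, demonstrated,
                            n_particles=500, sigma=0.1):
    """behavior_outputs: per-step list of outputs, one per primitive behavior.
    demonstrated: the observed command at each step (1-D for simplicity)."""
    k = len(behavior_outputs[0])
    particles = []
    for _ in range(n_particles):               # random convex combinations
        w = [random.random() for _ in range(k)]
        s = sum(w)
        particles.append([x / s for x in w])
    for outs, y in zip(behavior_outputs, demonstrated):
        # importance weight: Gaussian likelihood of each particle's error
        scores = [math.exp(-(sum(wi * oi for wi, oi in zip(w, outs)) - y) ** 2
                           / (2 * sigma ** 2)) for w in particles]
        particles = random.choices(particles, weights=scores, k=n_particles)
    # posterior mean fusion weight for each behavior
    return [sum(p[i] for p in particles) / n_particles for i in range(k)]
```

In the project itself the filter operates over richer activity models; this sketch only shows the score-and-resample loop that identifies a superposition of behaviors.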
Videos:
Generalization experiments based on ordering of task steps: robot mixing cookie ingredients (Pink/Red bowl (right) - Flour, Yellow bowl (middle) - Sugar, Green bowl (left) - Chocolate Chips, Blue bowl (front) - Mixing bowl). For details of the approach and experiments see the [scripts description] and [AIMSA_12].
- Scenario 1 - training [MOV] (4Mb)
- Scenario 1 - testing (robot performing learned task) [MOV] (18.5Mb)
- Scenario 1 - testing (robot performing learned task with helper); the robot takes into account the steps performed by the helper and does not do them again [MOV] (21.3Mb)
- Scenario 2 - training example 1 [MOV] (4.9Mb)
- Scenario 2 - testing (robot performing learned task) [MOV] (18.8Mb)
- Scenario 2 - training example 2 [MOV] (6.8Mb)
- Scenario 2 - testing (robot performing learned task) [MOV] (18.6Mb)
- Scenario 3 - training [MOV] (5Mb)
- Scenario 3 - testing (robot performing learned task) [MOV] (18.9Mb)
- Scenario 4 - training [MOV] (4.7Mb)
- Scenario 4 - testing (robot performing learned task) [MOV] (18.6Mb)
- Scenario 6 - training [MOV] (4.7Mb)
- Scenario 6 - testing (robot performing learned task) [MOV] (19Mb)
- Scenario 7 - training [MOV] (6.9Mb)
- Scenario 7 - testing (robot performing learned task) [MOV] (21.6Mb)
Simultaneous learning of behavior fusion and sequencing. Additional training is provided after the initial demonstration, showing on-line refinement of learned skills. For details of the approach see [IJSR_08], [ICDL_07], [WDIs_06] and [RO-MAN_06].
- Learning a left following behavior [MOV] (46.9Mb)
- Learning a circling behavior [MOV] (60.9Mb)
- Learning a sequence of fused behaviors [MOV] (30.6Mb)
- Learning a composition (regular expression) of fused behaviors [MOV] (48.7Mb)
Publications:
- Liesl Wigand, Monica Nicolescu, Mircea Nicolescu, "A Developmental Approach to Concept Learning", Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pages 337-344, Reykjavik, Iceland, July 2013. [PDF] [Link]
- Katie Browne, Monica Nicolescu, "Learning to Generalize from Demonstrations", in Proceedings of AIMSA 2012 Workshop on Advances in Robot Learning and Human-Robot Interaction, 2012. [PDF]
- Amol Ambardekar, Alireza Tavakkoli, Mircea Nicolescu, and Monica Nicolescu, "A Developmental Framework for Visual Learning in Robotics", in the 2010 International Conference on Image Processing, Computer Vision, & Pattern Recognition, July 12-15, Las Vegas, USA, 2010. [PDF]
- Saul Byberg Reed, Tyson Richard Curtis Reed, Sergiu Dascalu, Monica Nicolescu, "Recursive, Hyperspherical Behavioral Learning for Robotic Control", in "World Automation Congress", Kobe, Japan, September 19-22, 2010. [PDF] [LINK]
- Monica Nicolescu, Odest Chadwicke Jenkins, Adam Olenderski, Eric Fritzinger, "Learning Behavior Fusion from Observation", Interaction Studies, Special Issue on Robot and Human Interactive Communication, vol. 9, no. 2, pages 319-352, 2008. [PDF]
- Monica Nicolescu, Odest Chadwicke Jenkins, Austin Stanhope, "Fusing Robot Behaviors for Human-Level Tasks", in Proceedings, IEEE International Conference on Development and Learning (ICDL 07), London, UK, July 11-13, 2007. [PDF]
- Monica Nicolescu, Odest Chadwicke Jenkins, Adam Olenderski, "Learning Behavior Fusion Estimation from Demonstration", in Proceedings, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 06), Hatfield, UK, pages 340-345, September 6-8, 2006. [Link]
- Monica Nicolescu, Odest Chadwicke Jenkins, Adam Olenderski, "Behavior Fusion Estimation for Robot Learning from Demonstration", in Proceedings, IEEE 2006 Workshop on Distributed Intelligent Systems (DIS06), Prague, Czech Republic, pages 31-36, June 15-16, 2006. [PDF]
- Monica Nicolescu, Maja J Matarić, "Task Learning Through Imitation and Human-Robot Interaction", in Models and Mechanisms of Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions, Kerstin Dautenhahn and Chrystopher Nehaniv Eds., pages 407-424, 2006. [PDF]
- Monica Nicolescu, Maja J Matarić, "Natural Methods for Robot Task Learning: Instructive Demonstration, Generalization and Practice", in Proceedings, Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, Melbourne, AUSTRALIA, July 14-18, 2003 (best student paper nomination). [PDF]
- Monica Nicolescu, Maja J Matarić, "Linking Perception and Action in a Control Architecture for Human-Robot Domains", Proceedings, Thirty-Sixth Hawaii International Conference on System Sciences (HICSS-36), Hawaii, USA, January 6-9, 2003 (best paper award). [PDF]
- Monica Nicolescu, Maja J Matarić, "A Hierarchical Architecture for Behavior-Based Robots", Proceedings, First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 227-233, Bologna, ITALY, July 15-19, 2002. [PDF]
- Monica Nicolescu, Maja J Matarić, "Experience-Based Representation Construction: Learning from Human and Robot Teachers", Proceedings, IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 740-745, Maui, Hawaii, USA, October 29 - November 3, 2001. [PS], [PDF]
- Monica Nicolescu, Maja J Matarić, "Experience-Based Learning of Task Representations from Human-robot Interaction", Proceedings, IEEE International Symposium on Computational Intelligence in Robotics and Automation, pages 463-468, Banff, Alberta, CANADA, July 29 - August 1, 2001. [PS], [PDF]
- Monica Nicolescu, Maja J Matarić, "Learning and Interacting in Human-Robot Domains", Special Issue of IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 31, No. 5, pages 419-430, Chelsea C. White and Kerstin Dautenhahn Eds., September, 2001. [PS], [PDF]
Support:
This work is supported by the National Science Foundation under Award IIS-0546876 and a UNR Junior Faculty Award.
Affordable Human Behavior Modeling
Motivation: Live-fire military training exercises are expensive in
terms of personnel, ordnance, fuel, and environmental damage. Virtual training technologies
and the Navy and US Marine Corps’ family of tactical decision making
simulations (TDSs) provide training tools for trainees to plan and
execute operational plans in a force-on-force environment and through
after action review, gain feedback about the effectiveness of their
planning and decision making. However, TDSs require the participation
and coordination of instructors (experts) and many human players because
the capability for automatically controlling a realistic, competent
opposing force is relatively non-existent. Furthermore, tactical
decision making simulations are not designed to address more strategic
decision making. Thus, the inability of current technology to provide a
competitive, realistic, opposing force compromises the goal of
inexpensive, anytime, anywhere training, especially for strategic
decision making.
Objective: The goal of this project is to develop a computational
approach for building effective training systems for virtual simulation
environments. Our proposed solution is to develop intelligent,
autonomous controllers that drive the behavior of each boat in the
virtual training environment. To increase the system’s efficiency we
provide a mechanism for creating such controllers from the
demonstration of a navigation expert, using a simple programming
interface. In addition, our approach addresses two significant and
related challenges: the realism of behavior exhibited by the automated
boats and their real-time response to changes in the environment.
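As a rough illustration of behavior-based boat control, the fragment below blends the heading corrections voted by primitive behaviors using scalar weights. In the project such weights would be derived from the navigation expert's demonstration; the behaviors and numbers here are invented for the example.

```python
# Sketch of behavior fusion for an autonomous boat: each primitive behavior
# votes for a heading correction (in degrees), and the controller outputs the
# weight-blended vote. Behaviors and weights are illustrative only.

def fused_heading(world, behaviors):
    """behaviors: list of (weight, fn); fn maps world state to a heading
    correction in degrees. Returns the normalized weighted blend."""
    total = sum(w for w, _ in behaviors)
    return sum(w * fn(world) for w, fn in behaviors) / total

# Example primitives: obstacle avoidance vs. steering toward a goal bearing.
avoid_obstacle = lambda w: 30.0 if w["obstacle"] else 0.0
seek_goal = lambda w: w["goal_bearing"]
```

With an obstacle present and the goal 10 degrees to port, `fused_heading({"obstacle": True, "goal_bearing": -10.0}, [(1.0, avoid_obstacle), (2.0, seek_goal)])` blends the two votes into a single rudder command, which is what gives the automated boats smooth, realistic motion rather than abrupt mode switches.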
Publications:
- Siming Liu, Sushil J. Louis, and Monica Nicolescu, "Comparing Heuristic Search Methods for Finding Effective Group Behaviors in RTS Game", in 2013 IEEE Congress on Evolutionary Computation (CEC), pages 1371-1378, 2013. [PDF] [Link]
- Siming Liu, Sushil J. Louis, and Monica Nicolescu, "Using Cigar for Finding Effective Group Behaviors in RTS Game", in 2013 IEEE Conference on Computational Intelligence in Games (CIG), pages 1-8. IEEE, 2013. [PDF] [Link]
- Monica Nicolescu, Ryan Leigh, Adam Olenderski, Sushil Louis, Sergiu Dascalu, Chris Miles, Juan Quiroz, Ryan Aleson, "A Training Simulation System with Realistic Autonomous Ship Control", Computational Intelligence, Special Issue on Artificial Intelligence Methods for Ambient Intelligence, vol. 23, no. 4, pages 497-516, November 2007. [PDF]
- Adam Olenderski, Monica Nicolescu, Sushil Louis, "A Behavior-Based Architecture for Realistic Autonomous Ship Control", in Proceedings, IEEE Symposium on Computational Intelligence and Games (CIG06), Reno, NV, USA, May 22-24, 2006. [PDF]
Support:
This work is supported by the Office of Naval Research under grant number N00014-05-1-0709.
Multi-Robot Control
Motivation: Service and security applications can greatly benefit
from the use of multi-robot teams, in that the tasks can be accomplished
faster and more reliably. The major challenges we address in our research
are the communication and control methods that enable multiple robots to
cooperate in achieving a task.
Objective: The goal of this project is to develop an architecture
with the following main features:
1) it allows for heterogeneous teams, such that robots with different
capabilities can cooperate to achieve complex tasks, 2) it enables
self-organization in task allocation, which is performed through a
market-based publish-subscribe mechanism, 3) it provides sensor-sharing
capabilities across robots, 4) it has low computational overhead, 5) it
is platform independent, and 6) it is robust to environment and team
changes.
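The market-based allocation of feature 2 can be sketched as a single-round auction: a task announcement is published to all subscribed robots, each robot whose capabilities match the announcement bids its estimated cost, and the lowest bidder wins. The robots, capability names, and distance-based cost model below are invented for the example.

```python
# Sketch of market-based, publish-subscribe task allocation: announce each
# task, collect bids from capability-matched robots, award the task to the
# lowest bidder. Capabilities and the cost model are illustrative assumptions.

def _cost(a, b):
    """Bid: straight-line travel distance from the robot to the task."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def allocate(tasks, robots):
    """tasks: list of (task_name, required_capability, position).
    robots: dict name -> {"caps": set of capabilities, "pos": (x, y)}.
    Returns dict task_name -> winning robot name (None if no bidder)."""
    assignment = {}
    for name, cap, pos in tasks:
        # only robots subscribed to this capability place a bid
        bids = {r: _cost(info["pos"], pos)
                for r, info in robots.items() if cap in info["caps"]}
        winner = min(bids, key=bids.get) if bids else None
        assignment[name] = winner
        if winner is not None:
            robots[winner]["pos"] = pos   # winner relocates to service the task
    return assignment
```

This mirrors the heterogeneous-team behavior in the videos: a "red" robot bids only on "red" tasks, a "green" robot only on "green" tasks, and tasks with no capable bidder go unassigned.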
Videos:
- Distributed auction based task allocation with a heterogeneous robot group; "red" or "green" robots can attend to "red", "green" or "red-and-green" tasks. [MOV] (22.9Mb).
- Building a map of the environment [MOV] (13.8Mb).
- Robot coordination in a robot failure scenario; the failing robot uses another robot's sensor data to navigate around a corner [MOV] (17.8Mb).
Support: This work is supported by the
Office of Naval Research under grant number
N00014-05-1-0525.
Human-Robot Interaction
Motivation: Advances in robotics research bring robots closer to
real world applications. Although robots have become increasingly capable,
productive interaction is still restricted to specialists in the field. A
major challenge in designing robots for real-world applications is to enable
natural and accessible interaction between robots and nontechnical users,
while ensuring long-term, robust performance in complex environments without
the direct control of a human operator.
Objective:
The goal of this project is to develop a control architecture that provides robots with social-awareness,
allowing them to monitor their surroundings for other social agents
(robots or people), detect their need for interaction, and respond
appropriately. Our architecture provides means for long-term autonomy,
enabling robots to manage a large repertoire of tasks over extended
periods. Additionally, our system is designed for realistic assistive
applications, where multiple people may be simultaneously competing for
the robot’s assistance. The contribution of this work is our framework
that addresses two key issues in human-robot interaction: awareness of
the environment and other agents, and long-term interaction with
multiple users.
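When multiple people compete for the robot's assistance, the scheduling core can be as simple as a priority queue: detected requests are enqueued with a priority, and the robot always services the most urgent (then oldest) pending request. The class below is a hypothetical sketch of that idea, not the project's actual architecture.

```python
import heapq

# Hypothetical sketch of servicing competing assistance requests: each
# detected request is queued with a priority; the robot services the most
# urgent request first, breaking ties by arrival order.

class AssistanceQueue:
    def __init__(self):
        self._heap = []
        self._arrival = 0

    def request(self, user, priority):
        """Queue a detected request; lower priority number = more urgent."""
        heapq.heappush(self._heap, (priority, self._arrival, user))
        self._arrival += 1

    def next_user(self):
        """User the robot should assist next, or None when it is idle."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```

In the full system, priorities would be derived from the detected need for interaction rather than supplied by the users themselves.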
Videos:
- Long-term interaction with a service robot; tasks are serviced based on priority, using different postures [MOV] (18.9Mb).
Publications:
- Bradford Towle Jr, and Monica Nicolescu, "An Auction Behavior-Based Robotic Architecture for Service Robotics", in Intelligent Service Robotics, 1-18, 2014. [PDF] [LINK]
- Bradford Towle, Monica N. Nicolescu, "Incorporating a Reusable Human Robot Interface with an Auction Behavior-Based Robotic Architecture", in IEEE, ACM 4th International Workshop on Collaborative Robots and Human Robot Interaction, pp. 203 - 209, 2013. [PDF]
[Link]
- Bradford Towle, Monica Nicolescu, "Real-World Implementation of an Auction Behavior-Based Robotic Architecture (ABBRA)", in Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications (TePRA), 2012. [Link]
- Bradford A. Towle Jr. and Monica Nicolescu, "Applying Dynamic Conditions to the Auction Behavior-Based Robotic Architecture", in Proceedings of International Conference on Artificial Intelligence, Las Vegas, NV, July 18-21, 2011. [PDF]
- Richard Kelley, Monica Nicolescu, Mircea Nicolescu, "Grammar-Based Robot Control", Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, Budapest, Hungary, pages 1153-1154, May, 2009. [PDF] [LINK]
- Christopher King, Xavier Palathingal, Monica Nicolescu, Mircea Nicolescu, "A Vision-Based Architecture for Long-Term Human-Robot Interaction", in Proceedings, the IASTED International Conference on Human Computer Interaction, Chamonix, France, March 14-16, 2007.[PDF]
- Christopher King, Xavier Palathingal, Monica Nicolescu, Mircea Nicolescu,
"A Control Architecture for Long-Term Autonomy of Robotic Assistants",
Proceedings of the International Symposium on Visual Computing, pages 375-384,
Lake Tahoe, Nevada, November 2007. [PDF]
Support: This work is supported
by the National Science Foundation under Award IIS-0546876 and by the Office of Naval Research under grant number N00014-05-1-0525.
Pattern Recognition for Microcantilever Arrays
Motivation: Recent years have witnessed noticeable success
in the development of a new class of chemical and biological sensors:
microfabricated cantilever sensor arrays actuated at their resonance
frequencies and functionalized by polymer coatings. The major advantages
of such miniature sensors are their small size, fast response,
remarkably high sensitivity, and the many possibilities for reaching
high selectivity via customized combinations of polymer coatings. These
devices are inexpensive, portable, and able to operate in
various environments, such as vacuum, air, and liquids. The applications
of microfabricated cantilever sensor arrays are almost
countless, spanning scientific research in physics,
chemistry, biochemistry, biology, and genetics, the food and beverage
industry, the perfume industry, pharmacology, medicine, environmental
monitoring, and, most recently, national security. However, despite the remarkable
achievements in fabricating microcantilever sensor arrays, creating
an accurate and reliable pattern recognition algorithm as part of the
sensory system remains an essential and not yet completely solved
problem. Most pattern analysis algorithms used with
cantilever sensor arrays today are highly customized, ad hoc algorithms:
they often lack generality and cannot be easily carried from one set of
experimental data to another.
Objective: The main goal of this project is to develop pattern recognition algorithms
that can serve as a reliable detection system for the
specific sensory data obtained in experiments with a
microfabricated cantilever sensor array, after a feature-extraction
procedure. Five different pattern recognition algorithms were
developed for this research; these, together with an open-source
implementation of a sixth algorithm (multiclass SVMs), were
tested on benchmark data sets and on collected sensory data. The
results show that kernel-based algorithms have the greatest potential
for use with microfabricated cantilever sensor arrays in
detection systems: four of the six algorithms
produced high-accuracy classification results on the
cantilever sensor array data.
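To give a flavor of the kernel-based approach (though not the project's actual algorithms), the sketch below classifies a cantilever-array response by summed RBF-kernel similarity to labeled training responses; the feature vectors and analyte labels are synthetic.

```python
import math

# Illustrative kernel-based classification of cantilever-array responses:
# each sample is a vector of per-cantilever features (e.g. frequency shifts),
# and a query is assigned the analyte label whose training samples it is
# most kernel-similar to. Data and labels below are synthetic.

def rbf(x, y, gamma=0.5):
    """Radial basis function kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(sample, train, gamma=0.5):
    """train: list of (feature_vector, analyte_label) pairs.
    Returns the label with the highest total kernel similarity."""
    scores = {}
    for vec, label in train:
        scores[label] = scores.get(label, 0.0) + rbf(sample, vec, gamma)
    return max(scores, key=scores.get)
```

A proper multiclass SVM learns per-sample coefficients instead of weighting all training samples equally, but the role of the kernel as a similarity measure over response vectors is the same.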
Publications:
Support: This work is supported by the National Science Foundation EPSCoR
Ring True III award EPS0447416.
Autonomous Outdoor Navigation
Motivation: Robotic tasks for real-world applications
typically involve temporal sequences and representations of data not
available directly from sensors, which suggests that representations and
deliberative components are essential for a robot’s control architecture.
The majority of real-world robot tasks require not only the reliable
real-time characteristics of reactive behavior, but also guided transitions
from state-to-state, and the ability to make decisions based on abstract
data acquired a priori, from sources such as maps. This poses significant
challenges for the development of controllers for robots that interact
directly with the physical environment. A principal challenge is in dealing
with the uncertainty introduced by noisy sensors. Pure deliberation falters
in that even seemingly simple environments cannot be entirely represented by
models accurately or quickly enough to guide a robot reliably in real-time.
Reactive architectures are able to deal with uncertainties, but are limited
to relatively simple tasks.
Objective: The goal of this project is to develop an architecture
that addresses the above challenges and demonstrates a flexible method
for building robot controllers, while allowing for both reactive and
deliberative components. Our solution is to consider a controller as a
composition of agents, with similar representations and inter-agent
communication mechanisms, similar to behaviors in the behavior-based
design paradigm. The agents can be composed of other agents in a nested
fashion and use the implementation (reactive or deliberative) that is
most appropriate for their task. Thus, the contribution of our work is a
control architecture that has the following main features: 1) it enables
modular, flexible, incremental design of controllers to execute complex,
sequential tasks, which can be easily extended or improved and 2) it
integrates reaction and deliberation within the same representational
framework, using deliberation only as necessary. This approach is used
to develop a controller on a custom robot, and was validated in an
outdoor navigation and retrieval task.
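The nested-agent composition can be sketched as follows: an agent is either a reactive primitive (a sensing-to-action function) or a sequence of sub-agents, and sequences can nest, so reactive and deliberative components share one representation. The agent names and world-state format below are invented for the example.

```python
# Sketch of the nested-agent idea: a controller is an agent; an agent is a
# reactive primitive or a composition of sub-agents, nested arbitrarily.
# Agent names and the world-state format are illustrative.

class Primitive:
    """A reactive agent: maps the sensed world state directly to an action."""
    def __init__(self, name, act):
        self.name, self.act = name, act

    def run(self, world):
        return [self.act(world)]

class Sequence:
    """A composite agent: runs its sub-agents in order; sub-agents may
    themselves be primitives or further sequences."""
    def __init__(self, name, children):
        self.name, self.children = name, children

    def run(self, world):
        actions = []
        for child in self.children:
            actions.extend(child.run(world))
        return actions
```

A retrieval mission can then be assembled incrementally, e.g. `Sequence("mission", [Sequence("retrieve", [goto, grasp]), return_home])`, with each piece free to be reactive or deliberative internally.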
Publications:
Support: This work is supported by the National Science Foundation EPSCoR Ring True III Award EPS0447416 and by NASA under Awards NSHE-07-34, NSHE-07-35, and NSHE-07-56.