U.S. Department of Energy
Office of Scientific and Technical Information

Learning procedural planning knowledge in complex environments

Conference ·
OSTI ID: 430876
Author affiliation: Univ. of Michigan, Ann Arbor, MI (United States)
Autonomous agents operating in complex and rapidly changing environments can improve their task performance if they update and correct their world model over the agent's lifetime. Existing research on this problem falls into two classes. The first class, reinforcement learners, uses weak inductive methods to directly modify an agent's procedural execution knowledge. These systems are robust in dynamic and complex environments, but they generally do not support planning or the pursuit of multiple goals, and their weak methods make learning slow. The second class, theory revision systems, learns declarative planning knowledge through stronger methods that use explicit reasoning to identify and correct errors in the agent's domain knowledge. However, these methods are generally applicable only to agents with instantaneous actions in fully sensed domains.
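The first class described above can be illustrated with a minimal tabular Q-learning sketch: reward feedback directly adjusts the agent's action-selection (procedural) knowledge, with no explicit domain theory or planning. The corridor environment, state/action encoding, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical 5-state corridor: the agent starts at state 0 and
# receives reward 1.0 only upon reaching the rightmost state.
N_STATES = 5
ACTIONS = [-1, +1]            # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Environment transition: clamp to the corridor, reward at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q-table: the agent's procedural knowledge, updated in place.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection (the "weak" inductive bias).
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            # Temporal-difference update: no reasoning about *why* an
            # action failed, only a numeric adjustment toward the target.
            best_next = max(q[(nxt, a2)] for a2 in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# Greedy policy for the non-terminal states.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

Note how the update rule never consults a model of the domain; this is exactly why such learners remain robust in dynamic environments yet cannot support deliberate planning or explain their own errors, in contrast to the theory revision systems discussed above.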
OSTI ID:
430876
Report Number(s):
CONF-960876--
Country of Publication:
United States
Language:
English

Similar Records

An agent architecture with on-line learning of both procedural and declarative knowledge
Conference · 1996 · OSTI ID:466431

Procedural knowledge
Journal Article · October 1986 · Proc. IEEE (United States) · OSTI ID:7190032

A Study on Efficient Reinforcement Learning Through Knowledge Transfer
Journal Article · 2022 · Federated and Transfer Learning, Adaptation, Learning, and Optimization · OSTI ID:1987999