Summary: Autonomous Agents that Learn to Better Coordinate
Andrew Garland and Richard Alterman
Volen Center for Complex Systems
Brandeis University, Waltham, Massachusetts 02454
{aeg,alterman}@cs.brandeis.edu
Brandeis University Technical Report CS-TR-03-237
July, 2003
To appear in Autonomous Agents and Multi-Agent Systems
Abstract
A fundamental difficulty faced by groups of agents that work together is how to efficiently coordinate their efforts. This coordination problem is both ubiquitous and challenging, especially in environments where autonomous agents are motivated by personal goals.

Previous AI research on coordination has developed techniques that allow agents to act efficiently from the outset based on common built-in knowledge, or to learn to act efficiently when the agents are not autonomous. The research described in this paper builds on those efforts by developing distributed learning techniques that improve coordination among autonomous agents.

The techniques presented in this work encompass agents who are heterogeneous, who do not have complete built-in common knowledge, and who cannot coordinate solely by observation. An agent learns from her experiences so that her future behavior more accurately reflects

Source: Alterman, Richard - Computer Science Department, Center for Complex Systems, Brandeis University
Garland, Andrew - Computer Science Department, Brandeis University

 

Collections: Computer Technologies and Information Sciences