Analysis of an Evolutionary Reinforcement Learning Method in a Multiagent Domain
Jan Hendrik Metzen, Mark Edgington, Yohannes Kassahun, Frank Kirchner
In Proceedings of the Seventh International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS '08), Estoril, Portugal, May 12-16, 2008, pages 291-298.
Abstract:
Many multiagent problems comprise subtasks which can be considered as reinforcement learning (RL) problems. Several methods have been proposed for solving such RL problems. In addition to classical temporal difference (TD) methods, evolutionary algorithms (EAs) are among the most promising approaches. The relative performance of these approaches in certain subdomains (e.g. multiagent learning) of the general RL problem remains an open question. Besides theoretical analysis, benchmarks are one of the most important tools for comparing different RL methods in certain problem domains. A recently proposed multiagent RL benchmark problem is the Keepaway benchmark, which is based on the RoboCup Soccer Simulator. This benchmark is one of the most challenging multiagent learning problems because its state space is continuous and high-dimensional, and both the sensors and actuators are noisy. In this paper we analyze the performance of the neuroevolutionary approach called Evolutionary Acquisition of Neural Topologies (EANT) in the Keepaway benchmark, and compare the results obtained using EANT with the results of other algorithms tested on the same benchmark.