Towards Efficient Online Reinforcement Learning Using Neuroevolution
Jan Hendrik Metzen, Mark Edgington, Yohannes Kassahun, Frank Kirchner
In Proceedings of the 10th Genetic and Evolutionary Computation Conference (GECCO-2008), July 12-16, 2008, Atlanta, GA, pages 1425-1426, 2008.

Abstract:

For many complex Reinforcement Learning (RL) problems with large and continuous state spaces, neuroevolution has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in offline learning settings, where the training and evaluation phases of the system are separated. In contrast, for online RL tasks, the actual performance of a system matters during its learning phase. In these tasks, neuroevolutionary systems are often impaired by their purely exploratory nature, meaning that they usually do not use (i.e., exploit) their knowledge of a single individual's performance to improve performance during learning. In this paper, we describe modifications that significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies (EANT) and discuss the results obtained on the Mountain Car benchmark.
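The exploration/exploitation trade-off described in the abstract can be illustrated with a toy sketch (this is not the paper's EANT method): a minimal hill-climbing loop on a one-dimensional surrogate reward, in which a fraction of episodes re-run the best individual found so far ("exploit") instead of evaluating a freshly mutated candidate ("explore"). All names and the reward function here are hypothetical stand-ins for an actual RL benchmark such as Mountain Car.

```python
import random

def episode_reward(param):
    """Hypothetical stand-in for one RL episode; reward peaks at param = 2.0."""
    return -abs(param - 2.0)

def online_return(num_generations=200, exploit_fraction=0.5):
    """Run a simple (1+1)-style evolutionary loop and return the cumulative
    reward accrued *during* learning -- the quantity that matters in the
    online setting.  With exploit_fraction = 0.0 the loop is purely
    exploratory: every episode evaluates a new mutant, so known-good
    behaviour is never re-used."""
    best_param = random.uniform(-5.0, 5.0)
    best_reward = float("-inf")
    total = 0.0
    for _ in range(num_generations):
        if best_reward > float("-inf") and random.random() < exploit_fraction:
            # Exploit: act with the best individual found so far.
            total += episode_reward(best_param)
        else:
            # Explore: evaluate a mutated candidate and keep it if better.
            candidate = best_param + random.gauss(0.0, 1.0)
            reward = episode_reward(candidate)
            total += reward
            if reward > best_reward:
                best_param, best_reward = candidate, reward
    return total
```

In this sketch, once a near-optimal individual has been found, exploratory episodes keep paying a mutation-noise penalty on every evaluation, so a strategy that mixes in exploitation tends to accrue a higher cumulative online reward over a long run, which mirrors the motivation stated in the abstract.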



© DFKI GmbH
Last modified: 06.09.2016