EANT+KALMAN: An Efficient Reinforcement Learning Method for Continuous State Partially Observable Domains
Yohannes Kassahun, José de Gea Fernández, Jan Hendrik Metzen, Mark Edgington, Frank Kirchner
Editors: Andreas Dengel, K. Berns, Thomas Breuel, Thomas Roth-Berghofer
In KI 2008: Advances in Artificial Intelligence (KI-08), September 23-26, 2008, Kaiserslautern, Springer, Lecture Notes in Artificial Intelligence, volume 5243, pages 241-248, 2008.

Abstract:

In this contribution we present an extension of a neuroevolutionary method called Evolutionary Acquisition of Neural Topologies (EANT) [11] that allows the evolution of solutions taking the form of an agent for Partially Observable Markov Decision Processes (POMDPs) [8]. The proposed solution cascades a Kalman filter [10] (state estimator) with a feed-forward neural network. The extension (EANT+KALMAN) has been tested on the double pole balancing without velocity benchmark, achieving significantly better results than the results of other algorithms published to date.
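To make the cascaded architecture concrete, the sketch below shows a generic linear Kalman filter whose state estimate is fed into a feed-forward neural network that produces the control action. This is only an illustrative assumption of how such a cascade can be wired up: the matrices A, C, Q, R, the fixed single-hidden-layer topology, and the function names are not taken from the paper, and in EANT the network structure and weights would be evolved rather than fixed as here.

```python
import numpy as np

class KalmanFilter:
    """Linear Kalman filter used as a state estimator: it reconstructs hidden
    state components (e.g. velocities) from partial observations (e.g. cart
    position and pole angles).  Model matrices are illustrative placeholders."""

    def __init__(self, A, C, Q, R, x0, P0):
        self.A, self.C, self.Q, self.R = A, C, Q, R   # dynamics, observation, noise covariances
        self.x, self.P = x0, P0                        # state estimate and its covariance

    def step(self, y):
        # Predict step: propagate the estimate through the linear dynamics.
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        # Update step: correct the prediction with the new partial observation y.
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)
        self.x = self.x + K @ (y - self.C @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.C) @ self.P
        return self.x


def feedforward_policy(weights, x_hat):
    """Single-hidden-layer feed-forward network standing in for the evolved
    controller; maps the state estimate to a control output in [-1, 1]."""
    W1, b1, W2, b2 = weights
    h = np.tanh(W1 @ x_hat + b1)
    return np.tanh(W2 @ h + b2)


def act(kf, weights, observation):
    """The cascade: the Kalman filter estimates the full state from the
    partial observation, and the network maps the estimate to an action."""
    x_hat = kf.step(observation)
    return feedforward_policy(weights, x_hat)
```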

