Learning Graph-based Representations for Continuous Reinforcement Learning Domains
In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2013), 23.9.-27.9.2013, Prague, Springer Verlag GmbH, pages 81-96, Sep/2013.
Graph-based domain representations have been used in discrete reinforcement learning domains as a basis for, e.g., autonomous skill discovery and representation learning. These abilities are also highly relevant for learning in domains with structured, continuous state spaces, as they allow decomposing complex problems into simpler ones and reduce the burden of hand-engineering features. However, since graphs are inherently discrete structures, extending these approaches to continuous domains is not straightforward.
We argue that graphs should be seen as discrete, generative models of continuous domains. Based on this intuition, we define the likelihood of a graph for a given set of observed state transitions and derive a heuristic method entitled FIGE that allows learning graph-based representations of continuous domains with high likelihood. Based on FIGE, we present a new skill discovery approach for continuous domains. Furthermore, we show that representation learning can be considerably improved by using FIGE.
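To make the "graph as generative model" intuition concrete, the following is a minimal, hypothetical sketch (not the paper's actual definition): graph nodes are positions in the continuous state space, each edge (u, v) is treated as an isotropic Gaussian generator of transitions (s, s') around its endpoint positions, and each observed transition is credited to its best-matching edge. All names, the bandwidth parameter sigma, and the max-over-edges assignment are illustrative assumptions.

```python
# Hypothetical sketch of a graph-as-generative-model likelihood.
# Node positions, edge set, and the bandwidth sigma are assumptions
# for illustration, not taken from the paper.
import numpy as np

def log_likelihood(nodes, edges, transitions, sigma=0.5):
    """Log-likelihood of observed transitions (s, s') under a graph whose
    edges (u, v) emit transitions via isotropic Gaussians centered on the
    endpoint node positions; each transition is credited to its best edge."""
    nodes = np.asarray(nodes, dtype=float)
    d = nodes.shape[1]
    # Log-normalizer of a d-dimensional isotropic Gaussian with std sigma.
    const = -d * np.log(sigma * np.sqrt(2.0 * np.pi))
    total = 0.0
    for s, s2 in transitions:
        best = -np.inf
        for u, v in edges:
            # log N(s; nodes[u], sigma^2 I) + log N(s'; nodes[v], sigma^2 I)
            lp = (2.0 * const
                  - np.sum((s - nodes[u]) ** 2) / (2.0 * sigma ** 2)
                  - np.sum((s2 - nodes[v]) ** 2) / (2.0 * sigma ** 2))
            best = max(best, lp)
        total += best
    return total

# Toy example: a two-node graph and one transition along its single edge.
nodes = [[0.0, 0.0], [1.0, 0.0]]
edges = [(0, 1)]
transitions = [(np.array([0.1, 0.0]), np.array([0.9, 0.1]))]
print(log_likelihood(nodes, edges, transitions))
```

Under such a formulation, a method like FIGE would search for node positions and edges that make the observed transitions probable.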