Learning Parameters of Linear Models in Compressed Parameter Space
Yohannes Kassahun, Hendrik Wöhrle, Alexander Fabisch, Marc Tabie
Editors: Alessandro E. Villa, Włodzisław Duch, Péter Érdi, Francesco Masulli, Günther Palm
In Artificial Neural Networks and Machine Learning – ICANN 2012, Springer, series Lecture Notes in Computer Science, volume 7553, pages 108–115, 2012. ISBN: 978-3-642-33265-4.

Abstract:

We present a novel method for reducing training time by learning the parameters of a model in a compressed parameter space. In the compressed parameter space, the parameters of the model are represented by fewer parameters, so training can be faster; after training, the full parameters of the model can be recovered from the compressed representation. We show that for supervised learning, learning the parameters of a model in compressed parameter space is equivalent to learning the parameters of the model in compressed input space. We apply our method to a supervised learning domain and show that a solution can be obtained much faster than by learning in the uncompressed parameter space. For reinforcement learning, we show empirically that directly searching the parameters of a policy in compressed parameter space accelerates learning.
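The equivalence stated in the abstract can be illustrated for a linear model: if the full weight vector is written as w = Bα for a fixed basis B and a low-dimensional α, then fitting Xw ≈ y in parameter space is the same least-squares problem as fitting α on the compressed inputs XB. The sketch below is only an illustration of that idea, not the paper's implementation; the choice of a random basis B and all dimensions are assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 200, 10, 500  # full parameter dim, compressed dim, number of samples

# Hypothetical fixed basis mapping compressed parameters to full parameters
# (an assumption for this sketch; the paper may use a different basis).
B = rng.standard_normal((d, k)) / np.sqrt(d)

# Synthetic regression data whose true weights lie in the span of B
w_true = B @ rng.standard_normal(k)
X = rng.standard_normal((n, d))
y = X @ w_true

# Learning in compressed parameter space == least squares on compressed inputs X @ B:
# only k parameters are estimated instead of d.
alpha, *_ = np.linalg.lstsq(X @ B, y, rcond=None)

# After training, generate the full model parameters from the compressed ones.
w = B @ alpha
print(np.allclose(X @ w, y, atol=1e-6))
```

Because only k ≪ d coefficients are estimated, the normal-equations system shrinks from d×d to k×k, which is the source of the speed-up the abstract refers to.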

Keywords:

Compressed Sensing; Supervised Learning; Reinforcement Learning

Links:

http://dx.doi.org/10.1007/978-3-642-33266-1_14


© DFKI GmbH
last updated 28.02.2023