Active Contextual Entropy Search
Jan Hendrik Metzen
In 8th Workshop on Optimization for Machine Learning (OPT 2015), Montreal, 11 December 2015.

Abstract:

Contextual policy search allows adapting robotic movement primitives to different situations. For instance, a locomotion primitive might be adapted to different terrain inclinations or desired walking speeds. Such an adaptation is often achievable by modifying a small number of hyperparameters; however, learning on actual robotic systems is typically restricted to a small number of trials. Bayesian optimization, which is well suited to these conditions, has recently been proposed as a sample-efficient means for contextual policy search. In this work, we extend entropy search, a particular kind of Bayesian optimization, so that it can be used for active contextual policy search, where the learning system selects those tasks during training in which it expects to learn the most.
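The sketch below is a minimal, hypothetical illustration of an active contextual Bayesian optimization loop in the spirit of the abstract, not the entropy-search criterion from the paper itself. A Gaussian process (via scikit-learn, assumed available) models the return over (context, parameter) pairs; the predictive standard deviation is used as a simple stand-in for the expected information gain when actively selecting the next context, and a UCB acquisition then picks the policy parameters for that context. The function names (`rollout`, `active_contextual_bo`) and the toy return function are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def rollout(context, param, rng):
    """Toy stand-in for executing a movement primitive on the robot:
    the return is highest when the parameter matches the context."""
    return -(param - context) ** 2 + 0.05 * rng.standard_normal()


def active_contextual_bo(n_trials=30, n_grid=50, seed=0):
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    contexts = np.linspace(-1.0, 1.0, n_grid)  # e.g. terrain inclinations
    params = np.linspace(-1.0, 1.0, n_grid)    # policy hyperparameters

    X, y = [], []  # observed (context, parameter) pairs and returns
    for _ in range(n_trials):
        if len(X) < 3:
            # Warm-up: too little data, sample a random trial.
            s, p = rng.choice(contexts), rng.choice(params)
        else:
            gp.fit(np.array(X), np.array(y))
            # Predictive mean/std on the full (context, parameter) grid.
            grid = np.array([(s, p) for s in contexts for p in params])
            mean, std = gp.predict(grid, return_std=True)
            mean = mean.reshape(n_grid, n_grid)
            std = std.reshape(n_grid, n_grid)
            # Stage 1: active context selection - choose the context whose
            # optimum is most uncertain (proxy for expected learning progress).
            ci = np.argmax(std.max(axis=1))
            # Stage 2: choose parameters for that context via a UCB acquisition.
            pi = np.argmax(mean[ci] + 2.0 * std[ci])
            s, p = contexts[ci], params[pi]
        X.append([s, p])
        y.append(rollout(s, p, rng))
    return np.array(X), np.array(y)


if __name__ == "__main__":
    X, y = active_contextual_bo()
    print("best observed return:", y.max())
```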

Files:

20161209_Active_Contextual_Entropy_Search.pdf

