Learning the optimal state-feedback using deep networks
Carlos Sánchez-Sánchez, Dario Izzo, Daniel Hennes
In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI-2016), 06.12.-09.12.2016, Athens, Greece, IEEE, Dec/2016.

Abstract:

We investigate the use of deep artificial neural networks to approximate the optimal state-feedback control of continuous-time, deterministic, non-linear systems. The networks are trained in a supervised manner using trajectories generated by solving the optimal control problem via the Hermite-Simpson transcription method. We find that deep networks are able to represent the optimal state-feedback with high accuracy and precision well outside the training area. We consider non-linear dynamical models under different cost functions that result in both smooth and discontinuous (bang-bang) optimal control solutions. In particular, we investigate the inverted pendulum swing-up and stabilization, a multicopter pin-point landing and a spacecraft free landing problem. Across all domains, we find that deep networks significantly outperform shallow networks in their ability to build an accurate functional representation of the optimal control. In the case of spacecraft and multicopter landing, deep networks are able to achieve safe landings consistently even when starting well outside of the training area.
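A minimal sketch of the supervised learning setup described in the abstract is shown below: a feedforward network is fit by regression to (state, control) pairs sampled from precomputed optimal trajectories. The state and control dimensions, network architecture, hyperparameters and the placeholder data are illustrative assumptions, not the paper's exact configuration; in the paper the training pairs come from solving the optimal control problem with a Hermite-Simpson transcription.

```python
import torch
import torch.nn as nn

state_dim, control_dim = 6, 3          # assumed dimensions, e.g. a landing model

# Placeholder for state-control pairs extracted from optimal trajectories
# (in the paper these come from a Hermite-Simpson transcription solver);
# random tensors stand in for the real data here.
states = torch.randn(10_000, state_dim)
controls = torch.randn(10_000, control_dim)

# Deep fully connected network approximating the optimal state-feedback u*(x).
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, control_dim),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), controls)   # regress onto optimal controls
    loss.backward()
    optimizer.step()
```

Once trained, the network can be queried online as a state-feedback controller, mapping the current state directly to a control action without re-solving the optimal control problem.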

Files:

20161220_Learning_the_optimal_state-feedback_using_deep_networks.pdf

Links:

https://ieeexplore.ieee.org/document/7850105


© DFKI GmbH
Last modified on 27.02.2023