Neural networks are useful for evolving the control systems of agents. They provide a straightforward mapping from sensors to motors, allowing them to represent directly either the policy (the controller) or the value function to be learned. On standard benchmark problems, combinations of neural networks with evolutionary methods (neuroevolution) have been shown to outperform traditional reinforcement learning methods in many problem domains, especially those that are non-deterministic and only partially observable.
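To make the basic idea concrete, the sketch below evolves the weights of a small sensor-to-motor policy network with a simple mutation-and-selection loop. The network sizes, the toy fitness function, and the (mu, lambda)-style selection scheme are illustrative assumptions for this sketch, not the specific methods covered in the talk.

```python
import numpy as np

# Minimal neuroevolution sketch: evolve the weights of a small feedforward
# policy network that maps sensor readings directly to motor commands.
# The fitness function below is a toy stand-in for an agent's performance
# in its environment.

rng = np.random.default_rng(0)

N_SENSORS, N_HIDDEN, N_MOTORS = 4, 8, 2
N_WEIGHTS = N_SENSORS * N_HIDDEN + N_HIDDEN * N_MOTORS

def policy(weights, sensors):
    """Map sensor inputs to motor outputs through one hidden layer."""
    w1 = weights[:N_SENSORS * N_HIDDEN].reshape(N_SENSORS, N_HIDDEN)
    w2 = weights[N_SENSORS * N_HIDDEN:].reshape(N_HIDDEN, N_MOTORS)
    hidden = np.tanh(sensors @ w1)
    return np.tanh(hidden @ w2)

def fitness(weights):
    """Toy episode: reward motor outputs that track the mean sensor value."""
    total = 0.0
    for _ in range(20):
        sensors = rng.uniform(-1, 1, N_SENSORS)
        motors = policy(weights, sensors)
        total -= np.sum((motors - sensors.mean()) ** 2)
    return total

# Simple mutation-and-selection loop over the weight vector.
POP_SIZE, N_PARENTS, N_GENERATIONS, SIGMA = 40, 10, 50, 0.1
population = rng.normal(0, 1, (POP_SIZE, N_WEIGHTS))

for gen in range(N_GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-N_PARENTS:]]
    # Next generation: mutated copies of the best individuals.
    children = parents[rng.integers(0, N_PARENTS, POP_SIZE)]
    population = children + rng.normal(0, SIGMA, children.shape)
    print(f"generation {gen:2d}  best fitness {scores.max():.3f}")
```

In practice, the toy fitness function would be replaced by episodes of agent-environment interaction, and more sophisticated neuroevolution methods also evolve the network topology rather than just its weights.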
In this talk, the principles of neuroevolutionary methods will be discussed using examples from both fully and partially observable domains.