Robotics: Science and Systems V
Policy search via the signed derivative
J. Z. Kolter and A. Y. Ng
Abstract:
We consider policy search for reinforcement learning: learning policy parameters, for some fixed policy class, that optimize the performance of a system. In this paper, we propose a novel policy gradient method based on an approximation we call the Signed Derivative; the approximation is based on the intuition that it is often very easy to guess the direction in which control inputs affect future state variables, even if we do not have an accurate model of the system. The resulting algorithm is very simple, requires no model of the environment, and we show that it can outperform standard stochastic estimators of the gradient; indeed, we show that the Signed Derivative algorithm can perform as well as the true (model-based) policy gradient, but without knowledge of the model. We evaluate the algorithm's performance on one simulated task and two real-world tasks (driving an RC car along a specified trajectory, and jumping onto obstacles with a quadruped robot) and in all cases achieve good performance after very little training.
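To make the intuition behind the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation or experimental setup): a 1-D double-integrator tracking task with a linear feedback policy, where the policy gains are updated using a gradient in which the true Jacobian of future state with respect to control is replaced by its guessed sign (+1 here, since pushing the control up pushes future position up).

```python
# Hypothetical sketch of the signed-derivative idea on a toy 1-D double integrator.
# All names, dynamics, and constants are illustrative assumptions, not from the paper.
import numpy as np

dt = 0.1
T = 50
alpha = 1e-3          # learning rate
theta = np.zeros(2)   # policy gains: u = -theta @ [pos_err, vel]
target = 1.0          # desired position

def rollout(theta):
    """Simulate one episode; return tracking errors, policy features, controls."""
    pos, vel = 0.0, 0.0
    errs, feats = [], []
    for _ in range(T):
        feat = np.array([pos - target, vel])
        u = -theta @ feat
        # double-integrator dynamics (the learner never uses this model)
        vel += u * dt
        pos += vel * dt
        errs.append(pos - target)
        feats.append(feat)
    return np.array(errs), np.array(feats)

# Signed derivative: assert only that sign(d pos_{t+1} / d u_t) = +1,
# without knowing the magnitude or any other model information.
SIGN = +1.0

for it in range(200):
    errs, feats = rollout(theta)
    grad = np.zeros_like(theta)
    for t in range(T - 1):
        # chain rule for a quadratic tracking cost: the true d err_{t+1} / d u_t
        # is replaced by SIGN; d u_t / d theta = -feat_t for the linear policy
        grad += errs[t + 1] * SIGN * (-feats[t])
    theta -= alpha * grad

print("learned gains:", theta)
```

In this toy setting the sign-based update moves the gains in the same direction as the true model-based gradient would, which is the property the abstract highlights; the actual paper evaluates the method on much richer systems (an RC car and a quadruped robot).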
Bibtex:
@INPROCEEDINGS{Kolter-RSS-09,
  AUTHOR    = {J. Z. Kolter AND A. Y. Ng},
  TITLE     = {Policy search via the signed derivative},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2009},
  ADDRESS   = {Seattle, USA},
  MONTH     = {June},
  DOI       = {10.15607/RSS.2009.V.027}
}