
Policy gradient methods for robotics

2006

Conference Paper

The acquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots are ever to leave precisely pre-structured environments. However, to date only a few existing reinforcement learning methods have been scaled to the domains of high-dimensional robots such as manipulator, legged, or humanoid robots. Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview of learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm.
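The likelihood-ratio ("vanilla") policy gradient that this line of work builds on can be illustrated with a minimal sketch. This is not the algorithm evaluated in the paper; it is a generic REINFORCE update with a running reward baseline, applied to a hypothetical two-armed bandit with a softmax policy (task, learning rate, and episode count are all illustrative choices):

```python
import math
import random

random.seed(0)

def softmax(theta):
    """Convert logits to action probabilities."""
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def reward(action):
    """Hypothetical two-armed bandit: arm 1 pays more on average."""
    return random.gauss(1.0 if action == 1 else 0.2, 0.1)

theta = [0.0, 0.0]   # policy parameters: one logit per action
alpha = 0.1          # learning rate (illustrative choice)
baseline = 0.0       # running reward baseline to reduce gradient variance

for episode in range(2000):
    probs = softmax(theta)
    a = 0 if random.random() < probs[0] else 1
    r = reward(a)
    baseline += 0.01 * (r - baseline)
    # REINFORCE update: grad of log pi(a) w.r.t. logits is one_hot(a) - probs,
    # scaled by the baseline-corrected reward.
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += alpha * (r - baseline) * grad

probs = softmax(theta)
```

After training, the policy concentrates its probability mass on the higher-paying arm; the baseline subtraction is the standard variance-reduction step that makes such updates practical on noisier, higher-dimensional problems.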

Author(s): Peters, J. and Schaal, S.
Book Title: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems
Pages: 2219-2225
Year: 2006

Department(s): Autonomous Motion, Empirical Inference
Bibtex Type: Conference Paper (inproceedings)

DOI: 10.1109/IROS.2006.282564
Event Name: IROS 2006
Event Place: Beijing, China

Cross Ref: p2655
Note: clmc
URL: http://www-clmc.usc.edu/publications/P/peters-IROS2006.pdf

BibTeX

@inproceedings{Peters_PIICIRS_2006,
  title = {Policy gradient methods for robotics},
  author = {Peters, J. and Schaal, S.},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems},
  pages = {2219--2225},
  year = {2006},
  note = {clmc},
  crossref = {p2655},
  doi = {10.1109/IROS.2006.282564},
  url = {http://www-clmc.usc.edu/publications/P/peters-IROS2006.pdf}
}