Abstract
This is a crash course on applying reinforcement learning to train policies that balance real legged robots. We first review the necessary basics: partially observable Markov decision processes, value functions, and the goal of reinforcement learning. We then focus on policy optimization: REINFORCE, the policy gradient, and proximal policy optimization (PPO). After some practical advice on training with PPO, we turn to techniques for training real-robot policies from simulation data: domain randomization, simulation augmentation, and reward shaping.
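For reference, the two objectives that the policy-optimization part builds up to can be stated as follows (standard formulations of the policy gradient and of PPO's clipped surrogate loss; notation may differ slightly from the slides):

$$
\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t \right],
\qquad
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[ \min\!\left( r_t(\theta)\, \hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, \hat{A}_t \right) \right],
$$

where \( r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\mathrm{old}}}(a_t \mid s_t) \) is the probability ratio, \( \hat{A}_t \) an advantage estimate, and \( \epsilon \) the clipping parameter. REINFORCE corresponds to the first expression with the sampled return in place of the advantage estimate.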
Content
Slides
Lab notebook
Source of teaching material (CC-BY-4.0 license)
Example
On Linux, you can train and run the open-source PPO balancer for Upkie wheeled bipeds:
$ git clone https://github.com/upkie/ppo_balancer.git
$ cd ppo_balancer
$ conda env create -f environment.yaml
$ conda activate ppo_balancer
$ make show_training
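The repository ships its own training script and Makefile targets. As a rough sketch of what PPO training on a Gymnasium environment looks like with Stable-Baselines3 (the environment id and hyperparameters below are placeholders, not the balancer's actual settings):

```python
# Minimal PPO training sketch with Stable-Baselines3.
# Illustration only: the ppo_balancer repository defines its own
# environment, hyperparameters and training entry point.
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder environment; a real run would use the Upkie simulation
# environment instead of this classic-control task.
env = gym.make("Pendulum-v1")

model = PPO(
    "MlpPolicy",         # feedforward actor-critic network
    env,
    learning_rate=3e-4,  # placeholder hyperparameters
    n_steps=2048,
    batch_size=64,
    verbose=1,
)
model.learn(total_timesteps=100_000)  # roll out, then optimize the clipped objective
model.save("ppo_policy")              # the saved policy can be reloaded with PPO.load(...)
```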