Here are outlines and teaching materials for classes I'm teaching at École normale supérieure and Master MVA.
2024
-
Reinforcement learning for legged robots
Stéphane Caron. Fall 2024 class at Master MVA, Mines de Paris and École normale supérieure, Paris.
This is a crash course on applying reinforcement learning to train policies that balance real legged robots. We first review the necessary basics: partially-observable Markov decision processes, value functions, and the goal of reinforcement learning. We then focus on policy optimization: REINFORCE, policy gradients and proximal policy optimization (PPO). We finally focus on techniques to train real-robot policies from simulation data: domain randomization, simulation augmentation, teacher-student distillation and reward shaping.
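To give a feel for the policy-optimization part of the syllabus, here is a minimal sketch of the REINFORCE update on a toy two-armed bandit (an illustrative example, not code from the course). The policy is a softmax over two logits, and each update ascends the estimated gradient of expected reward, i.e. reward times the gradient of the log-probability of the sampled action:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)            # logits of a softmax policy over two arms
arm_means = np.array([0.0, 1.0])  # arm 1 pays more on average
alpha = 0.1                    # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for episode in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = arm_means[a] + rng.normal(scale=0.1)
    # For a softmax policy, grad of log pi(a) w.r.t. theta is one-hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi  # REINFORCE: ascend r * grad log pi

print(softmax(theta))  # probability mass should concentrate on arm 1
```

After training, the policy selects the higher-paying arm with high probability; the course builds from this basic estimator toward the variance-reduction and clipping ideas behind PPO.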
2023
-
Modeling and control of legged locomotion
Stéphane Caron. Fall 2023 class at École normale supérieure, Paris.
The objective of this lecture is to understand the physics of balancing and how to leverage it to design locomotion controllers.
-
Reinforcement learning for legged robots
Stéphane Caron. Fall 2023 class at Master MVA and École normale supérieure, Paris.
This is a crash course on applying reinforcement learning to train policies that balance real legged robots. We first review the necessary basics: partially-observable Markov decision processes, value functions, and the goal of reinforcement learning. We then focus on policy optimization: REINFORCE, policy gradients and proximal policy optimization (PPO). After some practical advice on training with PPO, we finally focus on techniques to train real-robot policies from simulation data: domain randomization, simulation augmentation and reward shaping.
-
Robotics - Master MVA
Stéphane Caron, Justin Carpentier, Silvère Bonnabel and Pierre-Brice Wieber. Fall 2023 class at Master MVA, Paris.
A large part of the recent progress in robotics has gone hand in hand with advances in machine learning, optimization and computer vision. The objective of this lecture is to introduce the general conceptual tools behind these advances and show how they have enabled robots to perceive the world and perform tasks ranging from factory automation to highly-dynamic saltos or mountain hikes. The course covers modeling and simulation of robotic systems, motion planning, inverse problems for motion control, optimal control, and reinforcement learning. It also includes practical exercises with state-of-the-art robotics libraries, and a broader reflection on our responsibilities when it comes to doing research and innovation in robotics.