Robotics - Master MVA - Fall 2025

Silvère Bonnabel, Stéphane Caron, Justin Carpentier, Ajay Sathya and Pierre-Brice Wieber. Fall 2025 course at Master MVA, Paris.

Abstract

A large part of the recent progress in robotics has gone hand in hand with advances in machine learning, optimization and computer vision. The objective of this course is to introduce the general conceptual tools behind these advances and to show how they have enabled robots to perceive the world and perform tasks well beyond factory automation, from highly dynamic saltos to mountain hikes. The course covers modeling and simulation of robotic systems, motion planning, inverse problems for motion control, optimal control, and reinforcement learning. It also includes practical exercises with state-of-the-art robotics libraries, and a broader reflection on our responsibilities when it comes to doing research and innovation in robotics.

Materials

Assignment notebooks for this class are available on GitHub:

Assignments on GitHub

Lecture materials marked below with an open-book icon 📖 link directly to the corresponding lecture page on this site. Those marked with a closed-book icon 📕 are password-protected for enrolled MVA students. To request those, you can reach out to the corresponding lecturer directly.

1. Introduction to robotics

This first lecture is a general introduction to the modeling of robotic systems. We review basic notions of control theory to describe the evolution of dynamical systems and introduce standard robot dynamics concepts.

📕 Slides: Introduction to robotics

Lecturer: Justin Carpentier.

2. Kinematics and rigid transformations

Robotics is about producing motion. We now dive into the mathematical representation of robots (articulated systems of rigid bodies) and their motions (relative transforms and generalized velocities of these rigid bodies).

📖 Materials: Kinematics and rigid transformations

Lecturers: Silvère Bonnabel and Stéphane Caron.

Topics:

  • Rotations: SO(3)
  • Angular velocities: so(3)
  • Rigid-body transforms: SE(3)
  • Rigid-body velocities: se(3)
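As a small illustration of these Lie-group notions, here is a sketch (using NumPy; not the course's own code) of the exponential map from so(3) to SO(3) via the Rodrigues formula:

```python
import numpy as np

def hat(w):
    """Map a vector in R^3 to its skew-symmetric matrix in so(3)."""
    wx, wy, wz = w
    return np.array([
        [0.0, -wz,  wy],
        [ wz, 0.0, -wx],
        [-wy,  wx, 0.0],
    ])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    K = hat(w / theta)  # normalized axis, in matrix form
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A rotation of pi/2 about the z-axis maps the x-axis to the y-axis.
R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))
print(R @ np.array([1.0, 0.0, 0.0]))  # ≈ [0, 1, 0]
```

The same construction extends to SE(3) by pairing the rotation with a translation term, which is how rigid-body displacements are represented in practice.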

3. Simulation

This lecture covers the simulation of robotic systems: how the equations of motion of articulated rigid bodies are formulated and integrated numerically over time, and how contacts between the robot and its environment are modeled and resolved.

📕 Slides: Simulation

Lecturer: Justin Carpentier.
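To give a feel for what a simulator does at its core, here is a minimal sketch (not taken from the lecture) of integrating a one-degree-of-freedom system, a frictionless pendulum, with semi-implicit Euler, a scheme widely used in physics engines because it keeps the energy error bounded over long rollouts:

```python
import numpy as np

# Frictionless pendulum: theta_ddot = -(g / l) * sin(theta), integrated with
# semi-implicit (symplectic) Euler.
g, l = 9.81, 1.0   # gravity [m/s^2] and pendulum length [m]
dt = 1e-3          # integration time step [s]

theta, omega = 1.0, 0.0  # initial angle [rad] and angular velocity [rad/s]
for _ in range(2000):    # simulate 2 seconds
    omega += dt * (-(g / l) * np.sin(theta))  # velocity update first...
    theta += dt * omega                       # ...then position, with the new velocity
print(round(theta, 3), round(omega, 3))
```

Full-fledged simulators generalize this loop to trees of rigid bodies with constraints, but the structure (compute accelerations, then integrate) is the same.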


4. Perception and estimation

In this lecture, we will start by briefly describing the sensors that robots use to perceive their environment and self-localize in it, namely IMUs, cameras, LiDAR point clouds, and absolute position measurements (GPS outdoors, motion capture indoors). We will introduce the sensor fusion problem for dynamical systems and its optimal solution in the linear case: the Kalman filter. We will then turn to the nonlinear case and its tools: the EKF, the invariant EKF, and factor graphs.

As practical exercises, we will start with simple wheeled-robot localization in 2D, then move to the principles behind the recent contact-aided invariant EKF for legged robots, as well as simultaneous localization and mapping (SLAM) and the MSCKF for visual-inertial odometry (VIO).

📕 Slides: Perception and estimation

Lecturer: Silvère Bonnabel.

Topics:

  • Estimation theory
  • Kalman filtering, Invariant Kalman filtering
  • SLAM, visual inertial odometry
  • Sensors (inertial, visual)

References:

  • State Estimation for Robotics, Barfoot (2017).
  • Bayesian filtering and smoothing, Särkkä (2013).
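To illustrate the linear-case fusion mentioned above, here is a minimal scalar Kalman filter sketch (illustrative only; the noise variances are made up):

```python
import numpy as np

# Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k observed
# through z_k = x_k + v_k.
rng = np.random.default_rng(0)
Q, R = 1e-4, 0.25    # process and measurement noise variances (assumed known)

x_true = 0.0         # simulated ground truth
x_hat, P = 0.0, 1.0  # estimate and its error variance
for _ in range(200):
    x_true += rng.normal(scale=np.sqrt(Q))     # true state drifts
    z = x_true + rng.normal(scale=np.sqrt(R))  # noisy measurement
    P += Q                    # predict: a random walk only grows the variance
    K = P / (P + R)           # Kalman gain
    x_hat += K * (z - x_hat)  # update: blend prediction and measurement
    P *= 1.0 - K              # posterior variance shrinks
print(round(x_hat, 3), round(P, 4))
```

The multivariate filter replaces these scalars with matrices, and the EKF and invariant EKF covered in the lecture linearize the same two-step loop around the current estimate.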

5. Motion planning

This lecture is about motion planning, the problem of finding feasible continuous motions between two robot configurations that may be far apart or require careful execution, such as navigating between obstacles. We will recall the concepts of configuration space and workspace, then discuss state-of-the-art sampling-based algorithms. We will cover the cases of non-holonomic vehicles and manipulation. In the tutorial session, we will implement motion planning algorithms in a robotic arm scenario.

📕 Slides: Motion planning

Lecturer: Stéphane Caron.

Topics:

  • Configuration space
  • Randomized algorithms: PRM and RRT
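As a toy illustration of sampling-based planning, here is a sketch of a basic RRT in a 2D unit square with a single disk obstacle (all names and parameters are illustrative assumptions, not the lecture's implementation):

```python
import numpy as np

# Toy RRT: grow a tree from `start` by steering the nearest node one step
# toward each random sample, rejecting extensions that hit the obstacle.
rng = np.random.default_rng(42)
start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
obstacle_center, obstacle_radius = np.array([0.5, 0.5]), 0.2
step, max_iters = 0.05, 4000

nodes = np.zeros((max_iters + 1, 2))
parents = np.zeros(max_iters + 1, dtype=int)  # parents[i]: index of node i's parent
nodes[0], n = start, 1
for _ in range(max_iters):
    # Sample a random configuration, with a 10% bias toward the goal.
    sample = goal if rng.random() < 0.1 else rng.random(2)
    i = int(np.argmin(np.linalg.norm(nodes[:n] - sample, axis=1)))
    direction = sample - nodes[i]
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        continue
    new = nodes[i] + step * direction / dist  # steer one step toward the sample
    if np.linalg.norm(new - obstacle_center) < obstacle_radius:
        continue  # extension collides with the obstacle: reject it
    nodes[n], parents[n] = new, i
    n += 1
    if np.linalg.norm(new - goal) < step:  # close enough to the goal
        break

# Backtrack from the last added node to the root to recover the path.
path, i = [], n - 1
while i != 0:
    path.append(nodes[i])
    i = parents[i]
path.append(nodes[0])
path.reverse()
print(len(path), "waypoints")
```

Real planners work in higher-dimensional configuration spaces and use proper collision checkers, but the sample-extend-reject loop is the same idea behind RRT.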

6. Optimal control

In this lecture, we will introduce optimal control. We will review its fundamental principles (the Pontryagin maximum principle and the Hamilton-Jacobi-Bellman equations) and their use in numerical applications (constrained optimization, differential dynamic programming).

📕 Slides: Optimal control

Lecturer: Justin Carpentier.

Topics:

  • Optimal control and calculus of variations
  • Pontryagin principles and Hamilton-Jacobi-Bellman equations
  • Trajectory optimization
  • Differential dynamic programming
  • Model predictive control
  • Distinction between optimal control and model predictive control

References:

  • Calculus of variations and optimal control theory: a concise introduction, Liberzon (2011).
  • Contrôle optimal: théorie & applications, Trélat (2005).
  • Model predictive control: theory, computation, and design, Rawlings, Mayne & Diehl (2017).
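To connect dynamic programming with trajectory optimization, here is a sketch (illustrative, not from the lecture) of finite-horizon LQR for a double integrator, solved by the backward Riccati recursion:

```python
import numpy as np

# Finite-horizon LQR: x_{k+1} = A x_k + B u_k, quadratic stage cost
# x^T Q x + u^T R u, solved backward in time with quadratic value functions.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # position-velocity dynamics
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])  # state cost
R = np.array([[0.01]])   # control cost
N = 100                  # horizon length

# Backward pass: V_k(x) = x^T P_k x, with feedback gains K_k.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()  # gains[k] is the feedback gain at time step k

# Forward rollout from an initial offset: u_k = -K_k x_k drives x to the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(round(float(np.linalg.norm(x)), 6))  # small residual norm
```

Differential dynamic programming generalizes this recursion to nonlinear dynamics by re-linearizing around the current trajectory at each iteration.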

7. Reinforcement learning for legged robots

In this lecture, we will outline recent breakthroughs of reinforcement learning in real-robot locomotion and manipulation. We will step through the technical decisions in training pipelines, and describe the state-of-the-art toolbox for transferring simulation-trained policies to real robots.

📖 Materials: Reinforcement learning for legged robots

Lecturer: Stéphane Caron.

Topics:

  • Partially-observable Markov decision process (POMDP)
  • Goal of reinforcement learning
  • Model, policy, value function
  • Policy optimization: REINFORCE, policy gradient, PPO
  • Application to robotics: domain randomization, Markov property, "rewArt"
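To make the policy-gradient idea concrete, here is a minimal REINFORCE sketch on a two-armed bandit with a softmax policy and a running-average baseline (all parameters are illustrative, not the lecture's pipeline):

```python
import numpy as np

# REINFORCE on a two-armed bandit: update the policy logits along the
# score-function estimator (r - baseline) * grad log pi(a).
rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.8])  # arm 1 pays more on average
logits = np.zeros(2)
alpha, baseline = 0.1, 0.0

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
    a = rng.choice(2, p=probs)
    r = true_rewards[a] + rng.normal(scale=0.1)    # noisy reward
    # For a softmax policy, grad log pi(a) = one_hot(a) - probs.
    grad_log_pi = -probs.copy()
    grad_log_pi[a] += 1.0
    logits += alpha * (r - baseline) * grad_log_pi  # REINFORCE ascent step
    baseline += 0.01 * (r - baseline)               # running-average baseline

probs = np.exp(logits) / np.exp(logits).sum()
print(np.round(probs, 3))
```

PPO and the other methods listed above refine this same gradient estimator with value-function baselines and clipped updates so that it scales to high-dimensional locomotion policies.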

8. Responsible robotics

What is ethics, how does it work, and what are your obligations when it comes to doing research and innovation in robotics? After a bit of history and a review of the major aspects of responsible robotics, we will work through examples such as self-driving vehicles.

📕 Slides: Responsible robotics

Lecturer: Pierre-Brice Wieber.

Topics:

  • Human agency and oversight
  • Technical robustness and safety
  • Environmental and societal well-being
  • Accountability

Discussion

You can subscribe to this Discussion's atom feed to stay tuned.

  • Alouane:

    Hello, I wonder whether it is possible to have a copy of the courses you are providing on this website. Actually, I need those courses to widen my knowledge in robotics. Thanks!

    • Stéphane:

      Thank you for your interest. The materials marked with the open-book icon 📖 are accessible directly on this site, while those marked with the closed-book icon 📕 are password-protected for enrolled MVA students. If you'd like access to those, you can reach out to the corresponding lecturer directly. (I have updated the Materials section at the top to reflect this, thank you for your feedback.)

Feel free to post a comment by e-mail using the form below. Your e-mail address will not be disclosed.

📝 You can use Markdown with $\LaTeX$ formulas in your comment.

By clicking the button below, you agree to the publication of your comment on this page.
