Robotics – Master MVA

Welcome to the web page for the robotics course of the MVA master's program. The 2025-2026 edition of this course will be taught by Silvère Bonnabel, Stéphane Caron, Justin Carpentier, Ajay Sathya (Teaching Assistant) and Pierre-Brice Wieber.

Recent advances in robotics have gone hand in hand with breakthroughs in machine learning, optimization and computer vision. This course aims to equip students with the foundational conceptual tools that underpin these developments, demonstrating how they enable robots to perceive the world and execute tasks, ranging from factory automation to highly-dynamic saltos or mountain hikes. The course covers modeling and simulation of robotic systems, motion planning, inverse problems for motion control, optimal control, and reinforcement learning. This knowledge will be complemented by hands-on exercises with state-of-the-art robotics libraries, as well as a broader reflection on our responsibilities when it comes to doing research and innovation in robotics.

Next lectures

Lectures take place on Thursday mornings:

Date     | Time     | Where       | Topic                | Teacher | TA
11/12/25 | 9am–12pm | Inria Paris | Final poster session | All     | -

Materials

1. Introduction to robotics

This first lecture is a general introduction to the modeling of robotic systems. We review basic notions of control theory to describe the evolution of dynamical systems and introduce standard robot dynamics concepts.
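
The notion of a dynamical system evolving under a control input can be made concrete in a few lines of code. Below is a minimal sketch (illustrative only, not part of the course materials) that integrates the state equation of a simple pendulum with an explicit Euler step; the model parameters are arbitrary.

```python
import numpy as np

def pendulum_dynamics(x, u, g=9.81, l=1.0, m=1.0):
    """State x = [theta, theta_dot]; input u = torque at the joint."""
    theta, theta_dot = x
    theta_ddot = (u - m * g * l * np.sin(theta)) / (m * l**2)
    return np.array([theta_dot, theta_ddot])

def integrate_euler(x, u, dt=0.01):
    """One explicit-Euler step of the state equation x_dot = f(x, u)."""
    return x + dt * pendulum_dynamics(x, u)

x = np.array([0.1, 0.0])  # small initial angle, at rest
for _ in range(100):      # simulate one second with zero torque
    x = integrate_euler(x, u=0.0)
```

For a small initial angle, the state oscillates around the stable equilibrium at the bottom.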

Contents:

Topics:

2. Kinematics

Robotics is about producing motion. We now dive into the mathematical representation of robots (articulated systems of rigid bodies) and their motions (relative transforms and generalized velocities of these rigid bodies).
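
Relative transforms compose by matrix multiplication. As a simplified planar illustration (a sketch, not course material), here is the forward kinematics of a two-link arm built from homogeneous transforms in SE(2):

```python
import numpy as np

def planar_transform(theta, x, y):
    """Homogeneous transform in SE(2): rotation by theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def forward_kinematics(q, l1=1.0, l2=1.0):
    """End-effector pose of a 2R planar arm as a product of transforms."""
    T1 = planar_transform(q[0], 0.0, 0.0)   # shoulder joint
    T2 = planar_transform(q[1], l1, 0.0)    # elbow joint, offset by link 1
    T_tip = planar_transform(0.0, l2, 0.0)  # tip frame, offset by link 2
    return T1 @ T2 @ T_tip

T = forward_kinematics([np.pi / 2, 0.0])
# arm pointing straight up: tip at (0, 2)
```

The same product-of-transforms structure carries over to SE(3) for spatial robots.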

Contents:

Topics:

References:

3. Inverse kinematics

We start our review of standard inverse problems in robotics with inverse kinematics, the problem of computing configuration-space motions from workspace motions. This corresponds to, for instance, moving a robotic arm to reach a given object. We introduce the concept of task functions and show how to cast differential inverse kinematics as a quadratic program. In the tutorial, we will put these notions to use with Pinocchio.
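
To give a flavor of differential inverse kinematics without depending on Pinocchio, here is a hypothetical numpy sketch for a two-link planar arm: the task error is mapped to joint velocities by a damped least-squares solve, which is the unconstrained special case of the quadratic program.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Tip position of a 2R planar arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    """Task Jacobian d(tip)/dq."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_step(q, target, gain=1.0, damping=1e-6):
    """Damped least-squares solution of min_v ||J v - gain * error||^2."""
    error = target - fk(q)
    J = jacobian(q)
    v = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ (gain * error))
    return q + 0.1 * v  # integrate the joint velocity with dt = 0.1

q = np.array([0.5, 0.5])
target = np.array([1.2, 0.8])
for _ in range(200):
    q = ik_step(q, target)
```

Iterating this step drives the tip to the (reachable) target; the QP formulation additionally handles joint limits and multiple tasks.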

In a short excursion, we also take a peek at model predictive control for legged locomotion, also cast as a quadratic program.

Contents:

Topics:

References:

4. Simulation

In this course, we will introduce optimal control and dynamics simulation. We will review the fundamental principles (Pontryagin maximum principle and Hamilton-Jacobi-Bellman equations) and their derivation in the context of numerical applications (constrained optimization, differential dynamic programming).
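
As a minimal illustration of dynamics simulation (an illustrative sketch, not course material), here is the semi-implicit Euler integrator commonly used in physics engines, applied to a mass-spring system; its key property is that the energy stays bounded over long horizons.

```python
import numpy as np

def semi_implicit_euler(q, v, force, m=1.0, dt=0.01):
    """Semi-implicit (symplectic) Euler: update the velocity first, then the
    position using the *new* velocity. Favored in simulators for stability."""
    v_next = v + dt * force(q) / m
    q_next = q + dt * v_next
    return q_next, v_next

k = 10.0                      # spring stiffness
spring = lambda q: -k * q     # linear restoring force

q, v = 1.0, 0.0
energies = []
for _ in range(10_000):       # 100 seconds of simulation
    q, v = semi_implicit_euler(q, v, spring)
    energies.append(0.5 * v**2 + 0.5 * k * q**2)
```

Explicit Euler would make the energy grow without bound on the same system, which is why game and robotics engines prefer symplectic schemes.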

Contents:

Topics:

References:

5. Perception and estimation

In this lecture, we will start by briefly describing the sensors that are used by robots to perceive their environment and self-localize in it, namely IMUs, cameras, point clouds, and absolute position measurements (GPS outdoors, motion capture indoors). We will introduce the sensor fusion problem for dynamical systems and its optimal solution in the linear case: the Kalman filter. We will then turn to the nonlinear case and its tools: EKF, Invariant EKF, factor graphs.
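
The predict/update structure of the Kalman filter fits in a few lines. The scalar example below is an illustrative sketch with made-up noise levels, not the course's tutorial code.

```python
import numpy as np

def kalman_step(x, P, u, y, A=1.0, B=1.0, C=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    Model: x_next = A x + B u + w,  y = C x + v,  w ~ N(0, Q), v ~ N(0, R)."""
    # Predict: propagate the estimate and its variance through the dynamics
    x_pred = A * x + B * u
    P_pred = A * P * A + Q
    # Update: blend the prediction with the measurement via the Kalman gain
    K = P_pred * C / (C * P_pred * C + R)
    x_new = x_pred + K * (y - C * x_pred)
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
true_x, x_hat, P = 0.0, 0.0, 1.0
for _ in range(100):
    u = 0.1
    true_x = true_x + u + rng.normal(0.0, 0.1)  # true (noisy) dynamics
    y = true_x + rng.normal(0.0, np.sqrt(0.1))  # noisy measurement
    x_hat, P = kalman_step(x_hat, P, u, y)
```

The EKF covered in the lecture follows the same cycle with the dynamics and measurement models linearized at the current estimate.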

As practical exercises, we will start with simple wheeled-robot localization in 2D, then move on to the principles behind the recent contact-aided invariant EKF for legged robots, as well as simultaneous localization and mapping (SLAM) and the MSCKF for visual-inertial odometry (VIO).

Contents:

Topics:

References:

6. Motion planning

This lecture is about motion planning, the problem of finding feasible continuous motions between two robot configurations that may be quite far away or require careful execution, such as navigating between obstacles. We will recall the concepts of configuration space and workspace, then discuss state-of-the-art sampling-based algorithms. We will cover the cases of non-holonomic vehicles and manipulation. In the tutorial session, we will implement motion planning algorithms on a robotic arm scenario.
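
Sampling-based planners are short to sketch. The toy RRT below (illustrative only, not the tutorial implementation) grows a tree in the unit square around a disk obstacle; for brevity it only collision-checks new nodes, whereas a full planner also checks the connecting edges.

```python
import numpy as np

def rrt(start, goal, collision_free, n_iters=5000, step=0.1, seed=0):
    """Minimal RRT in the unit square: extend the tree toward random samples
    and stop once a node lands within one step of the goal."""
    rng = np.random.default_rng(seed)
    nodes, parents = [np.array(start, dtype=float)], [0]
    for _ in range(n_iters):
        sample = rng.uniform(0.0, 1.0, size=2)
        nearest = min(range(len(nodes)),
                      key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if collision_free(new):  # node check only; a full planner checks the edge
            nodes.append(new)
            parents.append(nearest)
            if np.linalg.norm(new - goal) < step:
                return nodes, parents
    return None

# Obstacle: a disk of radius 0.2 centered in the square
free = lambda p: np.linalg.norm(p - np.array([0.5, 0.5])) > 0.2
result = rrt(start=(0.1, 0.1), goal=np.array([0.9, 0.9]), collision_free=free)
```

Following the `parents` indices back from the last node recovers the planned path.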

Contents:

Topics:

References:

7. Optimal control

In this course, we will introduce optimal control and dynamics simulation. We will review the fundamental principles (Pontryagin maximum principle and Hamilton-Jacobi-Bellman equations) and their derivation in the context of numerical applications (constrained optimization, differential dynamic programming).
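
The dynamic-programming viewpoint can be illustrated with the finite-horizon linear-quadratic regulator, whose backward Riccati recursion is a few lines of numpy. This is a generic sketch on a double integrator, not material from the lecture.

```python
import numpy as np

def lqr_backward(A, B, Q, R, Q_T, horizon):
    """Finite-horizon LQR by dynamic programming: the backward Riccati
    recursion yields time-varying feedback gains K_t with u_t = -K_t x_t."""
    P = Q_T
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains))  # gains[0] applies at t = 0

# Double integrator: state = (position, velocity), input = force
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
gains = lqr_backward(A, B, Q, R, Q_T=np.eye(2), horizon=100)

x = np.array([[1.0], [0.0]])  # start 1 m from the origin, at rest
for K in gains:
    x = A @ x - B @ (K @ x)   # closed-loop rollout
```

Differential dynamic programming generalizes this recursion to nonlinear dynamics by re-linearizing along the current trajectory.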

Contents:

Topics:

References:

8. Reinforcement learning for legged robots

In this lecture, we will outline recent breakthroughs of reinforcement learning in real-robot locomotion and manipulation. We will step through the technical decisions in training pipelines, and describe the state-of-the-art toolbox for transferring simulation-trained policies to real robots.
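
The policy-gradient idea at the core of these pipelines can be illustrated on a toy problem. The sketch below is illustrative only (real pipelines typically use PPO-style algorithms on full robot simulators): REINFORCE with a running-average baseline on a two-armed bandit.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

rng = np.random.default_rng(0)
means = np.array([0.2, 0.8])  # expected reward of each arm
logits = np.zeros(2)          # policy parameters
baseline, lr = 0.0, 0.1

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)
    reward = means[a] + rng.normal(0.0, 0.1)
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)  # running-average baseline
    # REINFORCE: gradient of log pi(a) w.r.t. logits is one_hot(a) - probs
    grad_logp = -probs
    grad_logp[a] += 1.0
    logits += lr * advantage * grad_logp

probs = softmax(logits)
```

After training, the policy concentrates its probability mass on the better arm.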

Contents:

Topics:

References:

9. Responsible robotics

What is ethics, how does it work, and what are your obligations when it comes to doing research and innovation in robotics? After a bit of history and a review of major aspects of responsible robotics, we'll work through examples such as self-driving vehicles.

Contents:

Topics:

References:

Evaluation

Evaluation for this class will be based on weekly homework (20%) and either a project or an article study (80%).

Homework

Six homework assignments will be handed out and started in tutorial (TP) sessions. Lecturers will help everyone get started during those sessions; the tutorials can then be finished as homework. Tutorials are due on the Wednesday (Paris time) preceding the next lecture. They can all be found in the 2025_MVA_Robotics_Exercises repository on GitHub.

To return your solution:

The best 5 out of 6 assignments will be used to compute the final homework score. Some assignments have bonus questions that do not affect the grade (/3) of the individual assignment but add to an independent pool of bonus points. If at least one bonus point is scored by the end of the course, the final grade will be rounded up using the ceiling function.
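
For concreteness, here is one possible reading of this rule as code; the exact aggregation (in particular whether the ceiling applies to the homework average or to the overall grade) is an assumption made for illustration, not an official formula.

```python
import math

def homework_grade(scores, bonus_points):
    """Hypothetical homework score: average of the best 5 of 6 assignment
    scores (each out of 3), rounded up with the ceiling function if at
    least one bonus point was earned."""
    best_five = sorted(scores, reverse=True)[:5]
    grade = sum(best_five) / 5
    if bonus_points >= 1:
        grade = math.ceil(grade)
    return grade

# Example: dropping the lowest score (1) gives (3 + 3 + 2.5 + 2.5 + 2) / 5 = 2.6
grade = homework_grade([3, 2.5, 2, 3, 1, 2.5], bonus_points=1)
```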

Final evaluation

The final evaluation consists of either a project or an article study, conducted by pairs of students (i.e., groups of two). We cannot accommodate individual projects, as our limited number of evaluators would be insufficient for the number of projects this would generate. Additionally, groups of three or more cannot be accommodated due to the challenges of assessing individual contributions in larger teams.

Deliverables for the final evaluation are a small report and a poster. The poster will be presented to teachers, researchers and PhD students at the final poster session. You will need to print it beforehand, e.g. in A1 or A0 format, and bring it on that day. We will provide tape and a space to hang posters. The report can be sent afterwards: the deadline for report submission is December 19th, 2025.

Projects: In projects, you select a topic of interest. A base list of topics is available below, ranging from well-known to cutting-edge research works. You can pick/adapt from this list, or come up with your own proposal (e.g. build your own robot and implement on it one of the methods studied in class).

Here is the list of project topics for the 2025-2026 edition of the course:

  1. Reinforcement learning for wheeled-bipedal push recovery
  2. Model predictive control for bipedal locomotion
  3. Learning pendulum swing ups from demonstrations

Article studies: In article studies, you read and report on a research paper from the list of articles linked below. We strongly encourage a dash of creativity: you should be critical of the work you read, try to reproduce it (e.g. in simulation) to identify shortcomings or limitations of the assumptions made in the paper, and propose next steps to overcome them. Examples of next steps include extending a proof, implementing another feature, trying the solution in a different context, etc.

Here is the list of articles for the 2025-2026 edition of the course.

Registering your project: send an e-mail to Stéphane once you have formed your team and decided on a project/article. We will keep track and match you with evaluators for the poster session.