Open loop and closed loop model predictive control

There are two ways model predictive control (MPC) has been applied to legged locomotion so far: open loop and closed loop MPC. In both cases, a model predictive control (numerical optimization) problem is derived from a model of the system and solved, yielding a sequence of actions that can be used to drive the actual system. Open loop and closed loop MPC differ in what they do with the first action of this sequence.

Open loop model predictive control

Open loop MPC is a motion planning approach where the plan is "unrolled" progressively. Rather than being sent to the real robot, MPC outputs are fed to an integrator that uses the same forward dynamics model from which the MPC problem itself was derived; for instance, \(\bfx_{k+1} = \bfA \bfx_k + \bfB \bfu_k\) in linear model predictive control. The output state of this integrator then becomes the initial MPC state for the next control cycle:

Figure: open loop model predictive control.
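This loop can be sketched in a few lines. The sketch below is a minimal illustration, not code from any released controller: it assumes a hypothetical `solve_mpc` function (here faked by a simple state-feedback rule so the example runs; a real implementation would solve a quadratic program over a receding horizon) and a toy double-integrator model for \(\bfA\) and \(\bfB\).

```python
import numpy as np

# Toy discrete-time double integrator, just for illustration:
# state x = [position, velocity], input u = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def solve_mpc(x_init):
    """Stand-in for a real MPC solver: returns a sequence of inputs.

    Faked here by a proportional-derivative rule so the sketch runs;
    a real implementation would solve a QP over a preview horizon.
    """
    u_first = -np.array([[1.0, 1.4]]) @ x_init
    return [u_first]

x = np.array([1.0, 0.0])  # initial state
for _ in range(50):
    u_seq = solve_mpc(x)
    # Open loop MPC: the next initial state comes from the model itself,
    # x_{k+1} = A x_k + B u_k, not from a measurement of the real robot.
    x = A @ x + B @ u_seq[0]

print(np.round(x, 3))  # state driven toward the origin by the unrolled plan
```

Note how the loop never observes the real system: the plan feeds back on its own predictions.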

One of the first breakthroughs in humanoid walking, the preview control method proposed by (Kajita et al., 2003), followed this open loop approach. It was later shown by (Wieber, 2006) to be equivalent to linear model predictive control. These seminal papers don't mention the integrator directly, but open loop MPC is how the generated center of mass trajectories were executed in practice on the HRP-2 and HRP-4 humanoids. This is explicit in code released more recently, such as the LIPM walking controller from CNRS or the centroidal control collection from AIST.

Closed loop model predictive control

Closed loop MPC is initialized from the latest observation:

Figure: closed loop model predictive control.
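The change from the open loop sketch is small but significant: the MPC problem is re-initialized from a measurement of the real system at every cycle. The sketch below reuses the same toy double-integrator model and hypothetical `solve_mpc` stand-in as above (neither is from any released controller), and adds a noisy `observe` function to mimic state estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def solve_mpc(x_init):
    # Stand-in solver (hypothetical): a real one would solve a QP.
    return [-np.array([[1.0, 1.4]]) @ x_init]

def observe(x_true):
    # Noisy measurement of the robot's state, mimicking state estimation.
    return x_true + rng.normal(scale=0.01, size=2)

x_true = np.array([1.0, 0.0])
for _ in range(50):
    # Closed loop MPC: the problem is initialized from the latest
    # observation rather than from the model's own prediction.
    x_measured = observe(x_true)
    u = solve_mpc(x_measured)[0]
    x_true = A @ x_true + B @ u  # the real system evolves on its own
```

Feedback now happens through the MPC problem itself, at the cost of feeding it imperfect measurements.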

Observations, even when they are filtered, are subject to uncertainty and measurement errors, which raises new questions and edge cases to handle compared to open loop MPC. For instance, what if the MPC problem has state constraints \(\bfC \bfx_k \leq \bfe\), but the initial state does not satisfy \(\bfC \bfx_0 \leq \bfe\)?
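One common workaround, sketched below, is to relax the violated constraints by a slack equal to the measured violation, so that the first MPC problem remains feasible. This is a toy illustration of the idea under made-up constraint values, not the specific method of any of the papers cited here.

```python
import numpy as np

# Toy state constraint C x <= e (hypothetical bounds, for illustration):
# here, |position| <= 0.5.
C = np.array([[1.0, 0.0], [-1.0, 0.0]])
e = np.array([0.5, 0.5])

def relax_constraints(x0, C, e):
    """Relax violated rows of C x <= e so the measured initial state x0
    satisfies them, returning a new right-hand side e'."""
    violation = np.maximum(C @ x0 - e, 0.0)
    return e + violation

x0 = np.array([0.7, 0.0])  # measured state outside the constraint set
e_relaxed = relax_constraints(x0, C, e)
assert np.all(C @ x0 <= e_relaxed)  # the relaxed problem accepts x0
print(e_relaxed)
```

Alternatives include projecting the measured state back onto the constraint set, or penalizing slack variables in the cost function so that constraints are only softened when necessary.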

This question was encountered by (Bellicoso et al., 2017) in the case of ZMP constraints during quadrupedal locomotion. Closed loop MPC was also followed by (Di Carlo et al., 2018) to control the base position and orientation of a walking quadruped via contact forces.

Pros and cons

A benefit of open loop MPC, compared to its closed loop counterpart, is that it makes it easier to enforce guarantees such as recursive feasibility, that is, the guarantee that if the current MPC problem is feasible, then the next MPC problem (after integration) will be feasible as well. This is an important property in practice to make sure that the robot does not run "out of plan" while walking, which is dangerous if its current state is not a static equilibrium.

Open loop MPC only generates a reference state, and is therefore commonly cascaded with a walking stabilizer to implement feedback from the observed state. The main drawback of this approach is that a stabilizer is often by design more short-sighted than a model predictive controller, so that the combined system may not be general enough to find more complex recovery strategies (re-stepping, crouching, side stepping, ...) that closed loop MPC can discover.

To go further

Open loop MPC is described in the walking pattern generation tutorial. A comparison between open loop, closed loop, and a further variant called "robust closed loop" MPC for bipedal walking is carried out in (Villa and Wieber, 2017).

In this short overview, we described one question that arises from measurement errors in the initial state, but we didn't dig into the question of measurement uncertainty. This point, as well as other sources of uncertainty, can be taken into account in the more general framework of stochastic model predictive control.



© Stéphane Caron — All content on this website is under the CC BY 4.0 license.