Abstract
Real robots that walk in the field today rely on the Linear Inverted Pendulum Mode (LIPM) for walking control. Strictly speaking, the LIPM requires the robot's center of mass to lie in a plane, which is valid for walking on flat surfaces but becomes inexact over more general terrains. In this talk, we will see how to extend the LIPM to 3D walking, opening up old but refreshed questions on the analysis and control of bipeds. Technically, we will encounter a nonlinear control problem that we address by model predictive control based on a quasi-convex optimization problem. We will see how the resulting controller works on the HRP-4 humanoid robot.
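As a reminder (this is the standard LIPM equation, not specific to this talk): with the center of mass constrained to a plane at constant height h above the ground, the horizontal dynamics reduce to the linear equation

$$\ddot{c} = \frac{g}{h}\,(c - z),$$

where c is the horizontal position of the center of mass, z the ZMP and g the gravitational acceleration. As soon as the height is allowed to vary, the factor g/h becomes state-dependent and the dynamics are no longer linear.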
Content
- Slides of NTU version (more detailed)
- Slides of QUT version (more high-level)
References
- Balance problem with height variations (ICRA 2018)
- Walking trajectory generation with height variations
Discussion
Attendee #1
Posted on
What is your vector of design variables?
Stéphane
Posted on
Sorry for this gap in the talk, as I didn't want to step too much into the details of the derivation (these are available in condensed and unabridged versions). The design variable is actually a function rather than a plain vector. It is not obvious from the equations we saw, but this quantity appears quite frequently in the structure of the problem. This draws another connection with TOPP, as TOPP-RA defines the same quantity.
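For readers who have not encountered TOPP-RA: it parameterizes the path by its squared velocity

$$x(s) = \dot{s}(s)^2,$$

which is the quantity in question here.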
Attendee #2
Posted on
Do you interpolate a CoM path as in TOPP?
Stéphane
Posted on
No, the CoM path will be an output of the optimization (derived by forward integration of the solution). The key trick we adapted from TOPP is the change of variable from t to s, and the idea of defining solutions as functions of s rather than of t. (If you think of TOPP, an expression like ṡ(s) expands to ṡ(t(s)), which looks like a snake biting its own tail, but is actually well-defined.) Path tracking was not necessary here to formulate our problem.
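As a reminder of the TOPP convention mentioned here (standard material, nothing specific to our papers): a trajectory is written q(t) = q(s(t)) for a given geometric path q(s), so that

$$\dot{q} = q'(s)\,\dot{s}, \qquad \ddot{q} = q''(s)\,\dot{s}^2 + q'(s)\,\ddot{s},$$

and the unknown becomes the velocity profile ṡ seen as a function of s rather than of t.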
Attendee #3
Posted on
What makes these "capture problems" faster to solve than general nonlinear problems?
Stéphane
Posted on
Mainly two things: (1) the structure of the linear inequalities, which allows for tailored QR decompositions in the least-squares steps of the SQP solver, and (2) the fact that the nonlinear equality does not "disrupt" the improvements thus obtained on the "QP part" of the optimization. You will need to take a look at Section IV of the paper for more precise statements.
Adrien Escande
Posted on
There are mainly four factors behind this speedup:
- There is only one nonlinear constraint, so that using a quadratic penalty with fixed gain works well (a toy sketch in Python follows this list).
- Linear inequality constraints have a fixed and known structure that we can leverage during the least-squares steps.
- The latter make it easy to find an initial feasible point.
- The problem is well-conditioned: most standard SQP refinements and tricks are not needed, so that we can get away with a simple "straight out of the textbook" implementation. Even line search or other globalization methods may not be necessary.
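To make the first point more concrete, here is a minimal Gauss-Newton sketch in Python (a toy problem, not our actual solver): the single nonlinear equality is handled by a quadratic penalty with a fixed gain inside each least-squares step.

```python
# Toy sketch (not the actual capture-problem solver): with a single nonlinear
# equality, a fixed-gain quadratic penalty folds it into each least-squares
# step, and plain Gauss-Newton iterations converge without globalization.
import numpy as np

def solve_toy_penalized(x_target, mu=100.0, max_iter=20):
    """Minimize 0.5 * ||x - x_target||^2 subject to ||x||^2 = 1 (toy problem),
    handling the single nonlinear equality as a quadratic penalty of gain mu."""
    x = x_target.astype(float).copy()
    sqrt_mu = np.sqrt(mu)
    for _ in range(max_iter):
        # Stacked residuals: objective part, then penalized constraint part.
        r = np.concatenate([x - x_target, [sqrt_mu * (x @ x - 1.0)]])
        # Jacobian of the stacked residuals.
        J = np.vstack([np.eye(x.size), 2.0 * sqrt_mu * x])
        # Gauss-Newton step from a single least-squares solve.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:  # no line search needed on this problem
            break
    return x

x = solve_toy_penalized(np.array([2.0, 1.0, 0.5]))
print(x, x @ x)  # x @ x is close to 1 for a large enough gain mu
```

On this toy problem the iterates settle within a few steps; the gain mu trades off constraint satisfaction against the objective, as with any quadratic penalty.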