Usage
Defining an optimal control problem
Define model predictive control problems.
- class qpmpc.mpc_problem.MPCProblem(transition_state_matrix, transition_input_matrix, ineq_state_matrix, ineq_input_matrix, ineq_vector, nb_timesteps, terminal_cost_weight, stage_state_cost_weight, stage_input_cost_weight, initial_state=None, goal_state=None, target_states=None)
Linear time-variant model predictive control problem.
The discretized dynamics of a linear system are described by:
\[x_{k+1} = A x_k + B u_k\]
where \(x\) is assumed to be the first-order state of a configuration variable \(p\), i.e., it stacks both the position \(p\) and its time-derivative \(\dot{p}\). Meanwhile, the system is linearly constrained by:
\[\begin{align*} x_0 & = x_\mathrm{init} \\ \forall k, \ C_k x_k + D_k u_k & \leq e_k \end{align*}\]
The output control law minimizes a weighted combination of the following costs:
- Terminal state error: \(\|x_N - x_\mathrm{goal}\|^2\) with weight \(w_{xt}\).
- Cumulated state error: \(\sum_k \|x_k - x_\mathrm{goal}\|^2\) with weight \(w_{xc}\).
- Cumulated control costs: \(\sum_k \|u_k\|^2\) with weight \(w_{u}\).
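As a concrete illustration, here is a minimal sketch instantiating such a problem for a double integrator with bounded acceleration; the constructor arguments follow the signature above, while all numerical values are illustrative assumptions rather than prescribed by the API:

```python
import numpy as np

from qpmpc.mpc_problem import MPCProblem

T = 0.1  # discretization timestep [s] (illustrative)

# Double integrator: state x = [p, dp/dt], input u = acceleration
problem = MPCProblem(
    transition_state_matrix=np.array([[1.0, T], [0.0, 1.0]]),  # A
    transition_input_matrix=np.array([[T**2 / 2.0], [T]]),  # B
    ineq_state_matrix=None,  # no state constraint (null matrix)
    ineq_input_matrix=np.array([[1.0], [-1.0]]),  # D_k in C_k x_k + D_k u_k <= e_k
    ineq_vector=np.array([3.0, 3.0]),  # |u| <= 3 (illustrative bound)
    nb_timesteps=16,  # N: steps in the preview window
    terminal_cost_weight=1.0,  # w_{xt}
    stage_state_cost_weight=None,  # w_{xc} disabled (default)
    stage_input_cost_weight=1e-6,  # w_u
    initial_state=np.array([0.0, 0.0]),  # x_init, stacked [p, dp/dt]
    goal_state=np.array([1.0, 0.0]),  # x_goal
)
```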
- goal_state
Goal state as stacked position and velocity.
- ineq_input_matrix
Constraint matrix on control variables. When this argument is an array, the same matrix D is applied at each step k. When it is None, the null matrix is applied.
- ineq_state_matrix
Constraint matrix on state variables. When this argument is an array, the same matrix C is applied at each step k. When it is None, the null matrix is applied.
- ineq_vector
Constraint vector. When this argument is an array, the same vector e is applied at each step k.
- initial_state
Initial state as stacked position and velocity.
- input_dim
Dimension of an input vector.
- nb_timesteps
Number of discretization steps in the preview window.
- stage_input_cost_weight
Weight on cumulated control costs.
- stage_state_cost_weight
Weight on cumulated state costs, or None to disable (default).
- state_dim
Dimension of a state vector.
- terminal_cost_weight
Weight on terminal state cost, or None to disable.
- transition_input_matrix
Control linear dynamics matrix.
- transition_state_matrix
State linear dynamics matrix.
- get_ineq_input_matrix(k)
Get input inequality matrix at a given timestep.
- Parameters:
k – Index of the timestep.
- Return type:
ndarray
- Returns:
Input inequality matrix at that step.
- get_ineq_state_matrix(k)
Get state inequality matrix at a given timestep.
- Parameters:
k – Index of the timestep.
- Return type:
ndarray
- Returns:
State inequality matrix at that step.
- get_ineq_vector(k)
Get inequality vector at a given timestep.
- Parameters:
k – Index of the timestep.
- Return type:
ndarray
- Returns:
Inequality vector at that step.
- get_transition_input_matrix(k)
Get input-transition matrix at a given timestep.
- Parameters:
k – Index of the timestep.
- Return type:
ndarray
- Returns:
Input-transition matrix at that step.
- get_transition_state_matrix(k)
Get state-transition matrix at a given timestep.
- Parameters:
k – Index of the timestep.
- Return type:
ndarray
- Returns:
State-transition matrix at that step.
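All five getters follow the same pattern. Continuing the double-integrator sketch above: since single arrays were passed to the constructor, each getter returns the same matrix or vector at every timestep, as the attribute docstrings state.

```python
# Single arrays were passed to the constructor above, so every
# timestep k gets the same matrices and vector:
A_3 = problem.get_transition_state_matrix(3)  # same as transition_state_matrix
B_3 = problem.get_transition_input_matrix(3)  # same as transition_input_matrix
D_3 = problem.get_ineq_input_matrix(3)        # same as ineq_input_matrix
e_3 = problem.get_ineq_vector(3)              # same as ineq_vector
```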
- property has_stage_state_cost: bool
Check whether the problem has a stage state cost.
- property has_terminal_cost: bool
Check whether the problem has a terminal cost.
- update_goal_state(goal_state)
Set the goal state.
- Parameters:
goal_state (ndarray) – New goal state.
- Raises:
StateError – if the goal state does not have the right dimension.
- Return type:
None
- update_initial_state(initial_state)
Set the initial state.
- Parameters:
initial_state (ndarray) – New initial state.
- Raises:
StateError – if the initial state does not have the right dimension.
- Return type:
None
- update_target_states(target_states)
Set the reference state trajectory to track.
- Parameters:
target_states (ndarray) – Reference state trajectory.
- Raises:
StateError – if the target states do not have the right dimension.
- Return type:
None
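These update methods allow a problem instance to be reused between successive solves. Continuing the sketch above (the new goal value is illustrative):

```python
import numpy as np

# Move the goal, e.g. to track a moving target:
problem.update_goal_state(np.array([2.0, 0.0]))

# A full reference trajectory can be tracked instead of a single goal;
# the expected layout of target_states is not detailed here, so the
# call is only shown schematically:
# problem.update_target_states(target_states)
```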
Solving the optimal control problem
- qpmpc.solve_mpc.solve_mpc(problem, solver, sparse=False, **kwargs)
Solve a linear time-variant model predictive control problem.
- Parameters:
problem (MPCProblem) – Model predictive control problem to solve.
solver (str) – Quadratic programming solver to use, to be chosen from qpsolvers.available_solvers. Empirically the best-performing solvers are Clarabel and ProxQP: see for instance this benchmark of QP solvers for model predictive control.
sparse (bool) – Whether to use sparse or dense matrices in the output quadratic program. Enable it if the QP solver is sparse (e.g. OSQP).
kwargs – Keyword arguments forwarded to the QP solver via the solve_qp function.
- Return type:
Plan
- Returns:
Solution to the problem, if found.
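For instance, continuing the sketch above (solver availability depends on which QP solvers are installed alongside qpsolvers):

```python
from qpmpc.solve_mpc import solve_mpc

# Dense formulation with ProxQP, one of the best-performing solvers:
plan = solve_mpc(problem, solver="proxqp")

# Sparse formulation, e.g. with OSQP:
# plan = solve_mpc(problem, solver="osqp", sparse=True)
```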
Reading the solution
Process solutions to a model predictive control problem.
- class qpmpc.plan.Plan(problem, qpsol)
State and input trajectories that optimize an MPC problem.
See also the qpmpc.problem.Problem class.
- problem
Model predictive control problem that was solved.
- qpsol
Solution of the corresponding quadratic program.
- property first_input: ndarray | None
Get the first control input of the solution. In model predictive control, this is the part of the solution we are mainly interested in.
- Returns:
First input if a solution was found, None otherwise.
- property inputs: ndarray | None
Stacked input vector, or None if the plan is empty.
This is the stacked vector \(U\) structured as:
\[U = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}\]
with \(N\) the number of timesteps.
- Returns:
Stacked input vector if the plan is non-empty, None otherwise.
- property is_empty: bool
Check whether the plan is empty.
- property states: ndarray | None
Stacked vector of states.
This is the vector \(X\) structured as:
\[X = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_N \end{bmatrix}\]
with \(N\) the number of timesteps.
- Returns:
Stacked state vector if the plan is non-empty, None otherwise.
Note
The time complexity of calling this property is \(O(N)\) the first time, then \(O(1)\) as the result is memoized.
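Putting the pieces together, a receding-horizon loop reads the first input of each plan, applies it, and re-solves from the updated state. In this sketch, continuing the examples above, integrate_dynamics is a hypothetical stand-in for the real system or a simulator:

```python
# Receding-horizon sketch; integrate_dynamics is a hypothetical helper
# standing in for the real system or a simulator.
for _ in range(100):
    plan = solve_mpc(problem, solver="proxqp")
    if plan.is_empty:
        break  # no solution found at this step
    u_first = plan.first_input  # the part of the solution we apply
    next_state = integrate_dynamics(problem.initial_state, u_first)
    problem.update_initial_state(next_state)

# Full trajectories are also available when the plan is non-empty:
if not plan.is_empty:
    X = plan.states  # stacked x_0, ..., x_N
    U = plan.inputs  # stacked u_0, ..., u_{N-1}
```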