This alternative conversion maintains sparsity at the expense of increasing the
number of optimization variables from n to n+m, where
m is the number of rows of R. It will likely perform better
than the previous conversion on most sparse problems, but it may perform worse
in some cases, for instance if R has many more rows than columns.
Consider the structure of the problem at hand to decide which conversion
strategy to follow.
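Writing the conversion out (with y a new vector of m slack variables standing for the residual R x − s), the resulting QP over the stacked variable (x, y) is:

    minimize     (1/2) yᵀ W y
    subject to   R x − y = s
                 G x ≤ h,  A x = b,  lb ≤ x ≤ ub

At the optimum, the equality constraint enforces y = R x − s, so the objective coincides with the original weighted least-squares cost.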
Example in Python
For sparse problems, the solve_ls
function of qpsolvers boils down to:
import numpy as np
import scipy.sparse as spa

from qpsolvers import solve_qp


def solve_sparse_ls(R, s, G, h, A, b, lb, ub, W, solver):
    m, n = R.shape
    zeros_nn = spa.csc_matrix((n, n))
    eye_m = spa.eye(m, format="csc")
    # Objective: (1/2) y^T W y over the stacked variable (x, y)
    P = spa.block_diag([zeros_nn, W], format="csc")
    q = np.zeros(n + m)
    # Pad existing constraints with zero columns for the new variable y
    G = spa.hstack([G, spa.csc_matrix((G.shape[0], m))], format="csc")
    A = spa.hstack([A, spa.csc_matrix((A.shape[0], m))], format="csc")
    # New equality constraint: R x - y = s
    Rx_minus_y = spa.hstack([R, -eye_m], format="csc")
    A = spa.vstack([A, Rx_minus_y], format="csc")
    b = np.hstack([b, s])
    # Box constraints: y is unbounded
    lb = np.hstack([lb, np.full((m,), -np.inf)])
    ub = np.hstack([ub, np.full((m,), np.inf)])
    xy = solve_qp(P, q, G, h, A, b, lb, ub, solver=solver)
    # The least-squares solution is x, the first n components of (x, y)
    return xy[:n] if xy is not None else None
The full function in the library is more involved, as it handles combinations
of defined and undefined inputs as well as both dense and sparse matrices.
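To see how the stacked matrices come out in practice, here is a minimal sketch of the block construction alone, using scipy.sparse with a made-up 4-by-3 matrix R and identity weights (no call to a QP solver):

```python
import numpy as np
import scipy.sparse as spa

# Hypothetical small problem: R is 4x3, identity weights
R = spa.random(4, 3, density=0.5, format="csc", random_state=0)
m, n = R.shape
W = spa.eye(m, format="csc")

# Hessian of the converted QP: zero block for x, W for y
P = spa.block_diag([spa.csc_matrix((n, n)), W], format="csc")
q = np.zeros(n + m)

# Coupling constraint R x - y = s as a single sparse row block
Rx_minus_y = spa.hstack([R, -spa.eye(m, format="csc")], format="csc")

print(P.shape)            # (7, 7)
print(Rx_minus_y.shape)   # (4, 7)
```

Note how P stays sparse: its only nonzero entries come from W, regardless of the sparsity pattern of R.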