Quadratic Programming in Python

Quadratic programs are a particular class of numerical optimization problems that can be applied in a variety of situations, for instance: in statistics for curve fitting, in machine learning to compute support vector machines (SVMs), in robotics to solve inverse kinematics, etc. They are the first step beyond linear programming (LP) in convex optimization. We will now see how to solve quadratic programs in Python using a number of available solvers: CVXOPT, CVXPY, Gurobi, MOSEK, qpOASES and quadprog.

Standard form of quadratic programs

A quadratic program (QP) is written in standard form as:

\begin{equation*} \begin{array}{rl} \mathrm{minimize} & (1/2) x^T P x + q^T x \\ \mathrm{subject\ to} & G x \leq h \\ & A x = b \end{array} \end{equation*}

Here, \(x\) is the vector of optimization variables \(x_1, \ldots, x_n\). The matrix \(P\) and vector \(q\) are used to define any quadratic objective function on these variables, while the matrix-vector couples \((G, h)\) and \((A, b)\) respectively define inequality and equality constraints. Vector inequalities apply coordinate by coordinate.
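For instance, with two variables, the constraints \(x_1 + x_2 \leq 1\) and \(x_1 \geq 0\) are encoded by one row of \((G, h)\) per inequality, rewriting \(\geq\) constraints as \(\leq\) ones:

\begin{equation*} G = \left[\begin{array}{cc} 1 & 1 \\ -1 & 0 \end{array}\right], \quad h = \left[\begin{array}{c} 1 \\ 0 \end{array}\right] \end{equation*}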

This mathematical formulation means the following: a QP finds the minimum of a quadratic function over a linear set:

Figure: a QP finds the minimum of a quadratic function (dashed ellipses) over a linear set (blue polygon)

In the 2D illustration above, the level sets of the quadratic function are drawn as dashed ellipses, while the linear set is the blue polygon. Since the global optimum of the objective function lies outside of the polygon, the solution \(x^*\) of the QP is on its boundary. The set of linear constraints that are saturated at \(x^*\) is called the active set, but that's a story for another post...

Back to the standard form, you will notice that there is no constant term in the objective function. Indeed, it would have no effect on the result of the optimization, which is the location of the solution \(x^*\). For example, the quadratic expression \(\| A x - b \|^2\) of a least squares optimization is written in standard form as \(x^T (A^T A) x - 2 (A^T b)^T x\).
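Expanding the squared norm makes this correspondence explicit:

\begin{equation*} \| A x - b \|^2 = (A x - b)^T (A x - b) = x^T (A^T A) x - 2 (A^T b)^T x + b^T b \end{equation*}

Dropping the constant term \(b^T b\) and matching \((1/2) x^T P x + q^T x\) gives \(P = 2 A^T A\) and \(q = -2 A^T b\); since scaling the objective by a positive constant doesn't move its minimum, one can equivalently take \(P = A^T A\) and \(q = -A^T b\).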

The standard form also assumes, without loss of generality, that the matrix \(P\) is symmetric: any matrix \(M\) can be decomposed as the sum of its symmetric part \(M^+\) and antisymmetric part \(M^-\), and the antisymmetric part contributes nothing to the quadratic form. Note that some solvers (like CVXOPT) assume that you provide a symmetric cost matrix: they won't check this, and will return wrong results if you don't.
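To see why the antisymmetric part vanishes, note that \(x^T M^- x\) is a scalar and is therefore equal to its own transpose:

\begin{equation*} M^{\pm} = \frac{M \pm M^T}{2}, \qquad x^T M^- x = (x^T M^- x)^T = x^T (M^-)^T x = -x^T M^- x \end{equation*}

so that \(x^T M^- x = 0\) and \(x^T M x = x^T M^+ x\) for all \(x\).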

Setting up QP solvers

The two readily available QP solvers in Python are CVXOPT and quadprog (NB: the Python module, which is not the same as QuadProg++). They can be installed by:

$ sudo CVXOPT_BUILD_GLPK=1 pip install cvxopt
$ sudo pip install quadprog
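You can then check that both modules import properly:

$ python -c "import cvxopt, quadprog"

If this command exits without error, you are ready to go.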

CVXOPT uses its own matrix type, and it requires the matrix \(P\) of the objective function to be symmetric. To be on the safe side, you can wrap it as follows:

import numpy
from cvxopt import matrix, solvers

def cvxopt_solve_qp(P, q, G=None, h=None, A=None, b=None):
    P = .5 * (P + P.T)  # make sure P is symmetric
    args = [matrix(P), matrix(q)]
    # pass None placeholders for (G, h) so that (A, b) land in the right slots
    args.extend([matrix(G), matrix(h)] if G is not None else [None, None])
    if A is not None:
        args.extend([matrix(A), matrix(b)])
    sol = solvers.qp(*args)
    if 'optimal' not in sol['status']:
        return None  # e.g. 'unknown' when the maximum number of iterations is reached
    return numpy.array(sol['x']).reshape((P.shape[1],))

The quadprog module works directly on NumPy arrays, so there is no need for type conversion. Its problem representation is equivalent, but with different names and sign conventions: quadprog minimizes \((1/2) x^T G x - a^T x\) subject to \(C^T x \geq b\), where the first meq constraints are treated as equalities:

import numpy
import quadprog

def quadprog_solve_qp(P, q, G=None, h=None, A=None, b=None):
    qp_G = .5 * (P + P.T)   # make sure P is symmetric (quadprog also requires it positive definite)
    qp_a = -q               # quadprog's linear term enters with a minus sign
    if A is not None:
        # equality constraints go first and are counted by meq
        qp_C = -numpy.vstack([A, G]).T
        qp_b = -numpy.hstack([b, h])
        meq = A.shape[0]
    else:  # no equality constraint
        qp_C = -G.T         # G x <= h is passed to quadprog as (-G) x >= -h
        qp_b = -h
        meq = 0
    return quadprog.solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]

In case you want to try out other solvers, I implemented similar wrappers for those I could get my hands on (like Gurobi or MOSEK) in the qpsolvers module.

Example

For a small example, let us see how to solve the following QP:

\begin{align*} \begin{array}{rl} \mathrm{minimize} & \left\| \left[\begin{array}{ccc} 1 & 2 & 0 \\ -8 & 3 & 2 \\ 0 & 1 & 1 \end{array}\right] \left[\begin{array}{c} x_1 \\ x_2 \\ x_3\end{array}\right] + \left[\begin{array}{c} 3 \\ 2 \\ 3\end{array}\right] \right\|^2 \\ \mathrm{subject\ to} & \left[\begin{array}{ccc} 1 & 2 & 1 \\ 2 & 0 & 1 \\ -1 & 2 & -1 \end{array}\right] \left[\begin{array}{c} x_1 \\ x_2 \\ x_3\end{array}\right] \leq \left[\begin{array}{c} 3 \\ 2 \\ -2 \end{array} \right] \end{array} \end{align*}

First, we write our QP matrices in proper format. Denoting by \(M\) the \(3 \times 3\) matrix and by \(b = (3, 2, 3)\) the vector of the objective above, we take \(P = M^T M\) and \(q = M^T b\), so that \((1/2) x^T P x + q^T x\) equals \((1/2) \| M x + b \|^2\) up to a constant:

from numpy import array, dot

M = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = dot(M.T, M)  # symmetric by construction
q = dot(array([3., 2., 3.]), M).reshape((3,))  # q = b^T M = (M^T b)^T
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.]).reshape((3,))

Then, we compute the solution using either of our QP solvers:

In [1]: cvxopt_solve_qp(P, q, G, h)
Out[1]: array([-0.49025721, -1.57755278, -0.66484775])

In [2]: quadprog_solve_qp(P, q, G, h)
Out[2]: array([-0.49025721, -1.57755261, -0.66484801])

Comparing solver performances

In the following benchmark, I compared six different solvers. Three of them are numerical, which is the approach we have seen so far:

  • CVXOPT
  • qpOASES
  • quadprog

The three others are symbolic, meaning that, if you dig into their API, they allow you to construct your problem formally (with variable names) rather than using the matrix-vector representation. This is convenient for big sparse problems, but slower on small problems such as the one we are looking at here. The three symbolic solvers I tested are:

  • CVXPY, here calling ECOS as its backend solver
  • Gurobi
  • MOSEK

Note that ECOS and MOSEK are actually SOCP solvers, SOCP being a class of problems more general than QP. Here is a sample of computation times on my machine:

  • qpOASES: 10000 loops, best of 3: 31.5 µs per loop
  • quadprog: 10000 loops, best of 3: 34.1 µs per loop
  • CVXOPT: 1000 loops, best of 3: 559 µs per loop
  • Gurobi: 1000 loops, best of 3: 865 µs per loop
  • ECOS: 100 loops, best of 3: 2.81 ms per loop
  • MOSEK: 100 loops, best of 3: 7.24 ms per loop
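These figures follow the output format of IPython's %timeit magic, which is a simple way to reproduce them on your own machine with the example problem above:

In [3]: %timeit quadprog_solve_qp(P, q, G, h)
10000 loops, best of 3: 34.1 µs per loop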

For further investigation, let us generate random problems of arbitrary size as follows:

from numpy import dot, ones, random
from qpsolvers import solve_qp
from scipy.linalg import toeplitz

def solve_random_qp(n, solver):
    M, b = random.random((n, n)), random.random(n)
    P, q = dot(M.T, M), dot(b, M).reshape((n,))
    G = toeplitz([1., 0., 0.] + [0.] * (n - 3), [1., 2., 3.] + [0.] * (n - 3))
    h = ones(n)
    return solve_qp(P, q, G, h, solver=solver)

The Toeplitz matrix used to generate inequalities is an upper-triangular band matrix: its main diagonal and first two superdiagonals hold the coefficients 1, 2 and 3 respectively, all other coefficients being zero. This matrix is sparse, but represented by (dense) NumPy arrays here. Using the function above, I generated a benchmark for problem sizes ranging from 10 to 2,000, averaging computation times over 10 runs for each point.
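The exact benchmark script is in the qpsolvers repository linked below; a minimal sketch of its timing loop, assuming the lowercase solver names accepted by solve_qp, could look like this:

import timeit

# hedged sketch: average each solver over 10 random problems per size
# (unlike the actual benchmark script, this includes problem
# generation time in the measurement)
for n in [10, 100, 500, 1000, 2000]:
    for solver in ['cvxopt', 'quadprog', 'qpoases']:
        duration = timeit.timeit(
            lambda: solve_random_qp(n, solver), number=10) / 10.
        print("%4d  %-10s  %.2f ms" % (n, solver, 1000. * duration))

Here are the results: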

Figure: results of the benchmark of QP solvers in Python

The bottom line of this small comparison is that quadprog, which implements the Goldfarb-Idnani dual algorithm, simply rocks. More generally, active-set solvers (quadprog and qpOASES) perform best on these dense problems. To see the benefit of sparse solvers (like ECOS and MOSEK here), one would have to switch to a sparse matrix representation, which I didn't do in this benchmark.

One last note on this benchmark is that all performances reported here are for cold start, that is to say, problems are solved from scratch every time without a good initial guess. One reason why qpOASES is a bit slow here is that it is designed (e.g. in terms of memory allocation) for solving sequences of QP problems that are close to each other, so that the solution to one can be used as an initial guess to solve the next problem faster (this is known as warm starting). You might want to give qpOASES a closer look if your application falls in this scenario.
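To illustrate, here is a sketch of what a warm-started sequence of QPs could look like with the solve_qp function from qpsolvers, whose initvals keyword forwards an initial guess to solvers that support one. The matrices are those of the example above, and the drift of the linear term is made up for illustration:

from numpy import zeros
from qpsolvers import solve_qp

x = zeros(3)  # cold start for the first problem only
for k in range(100):
    q_k = q + 0.01 * k  # made-up drift of the linear term
    # warm start: seed each solve with the previous solution
    x = solve_qp(P, q_k, G, h, solver='qpoases', initvals=x)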

To go further

You can run this benchmark for yourself on your own computer: all scripts and solver wrapping functions are in the qpsolvers repository.
