Projecting polytopes

Consider a polytope given in halfspace representation by:

\begin{equation*} P = \{ x \in \mathbb{R}^n \ | \ A x \leq b, C x = d \} \end{equation*}

Define the affine projection of this polytope onto a lower-dimensional space by:

\begin{equation*} \pi(P) = \{ y \in \mathbb{R}^m \ | \ \exists x \in P, y = E x + f \} \end{equation*}

We are interested in the halfspace representation of \(\pi(P)\).
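
To make the sketches below concrete, here is a small illustrative instance (all names are arbitrary and chosen for this example only): the unit cube in \(\mathbb{R}^3\), with no equality constraints, projected onto its first two coordinates.

```python
import numpy as np

# Illustrative instance: P is the unit cube [0, 1]^3 (no equality constraints),
# projected onto its first two coordinates by y = E x + f.
A = np.vstack([np.eye(3), -np.eye(3)])    # x <= 1 and -x <= 0
b = np.hstack([np.ones(3), np.zeros(3)])
C, d = np.zeros((0, 3)), np.zeros(0)      # empty system C x = d
E = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.zeros(2)
# The projection pi(P) is then the unit square [0, 1]^2.
```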

Double description method

Projecting is easy in vertex representation: for a polytope \(P = \mathrm{conv}(V)\), the projection is simply \(\pi(P) = \mathrm{conv}(\{ E v + f \ | \ v \in V\})\). One simple algorithm for our problem is then:

  • Compute the vertex set \(V\) of \(P\) from its halfspace representation using the double description method
  • Project the vertices to \(\pi(V) = \{ E v + f, v \in V \}\)
  • Apply the double description method again to convert the vertex representation \(\pi(V)\) into a halfspace representation

The cdd library can be used for this purpose. One problem with this approach is that it can be numerically unstable, and becomes significantly slower as the dimension of the polytope increases.
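
As a sketch, here is how the three steps above could be chained using pycddlib, the Python bindings to cdd. The function name is made up for this example, the pycddlib 2.x Matrix/Polyhedron interface is assumed, the polytope is assumed bounded, and equality constraints \(C x = d\) are omitted for brevity.

```python
import cdd
import numpy as np

def project_polytope_dd(A, b, E, f):
    # Step 1: vertex enumeration. cdd expects H-rep rows in the form [b | -A].
    hrep = cdd.Matrix(np.hstack([b.reshape(-1, 1), -A]).tolist(), number_type="float")
    hrep.rep_type = cdd.RepType.INEQUALITY
    gen = cdd.Polyhedron(hrep).get_generators()
    # Generator rows are [t | v] with t == 1 for vertices (t == 0 for rays).
    vertices = [np.array(gen[i][1:]) for i in range(gen.row_size) if gen[i][0] == 1]

    # Step 2: project each vertex to y = E v + f.
    projected = [E.dot(v) + f for v in vertices]

    # Step 3: back to H-rep. cdd expects V-rep rows in the form [1 | y].
    vrep = cdd.Matrix([[1.0] + list(y) for y in projected], number_type="float")
    vrep.rep_type = cdd.RepType.GENERATOR
    ineq = cdd.Polyhedron(vrep).get_inequalities()
    G = np.array([list(ineq[i]) for i in range(ineq.row_size)])  # rows [b' | -A']
    return -G[:, 1:], G[:, 0]  # A', b' such that A' y <= b'
```

With the cube instance above, `project_polytope_dd(A, b, E, f)` should return a halfspace representation of the unit square.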

Implementation

A Python implementation of this method is given in the project_polyhedron() function of the geometry.py submodule in pymanoid.

Ray shooting methods

Ray shooting methods provide a better alternative when the size (number of facets) of \(\pi(P)\) is small; they are typically used when the dimension \(m\) of the projected polytope is 2 or 3. They are output-sensitive algorithms: their complexity depends on the number of facets of \(\pi(P)\), which is reasonably bounded in 2D or 3D by McMullen's Upper Bound Theorem.

Projecting to a 2D polygon

The 2D ray shooting algorithm was introduced in robotics by Bretl and Lall to compute the static-equilibrium polygon of a legged robot in multi-contact. It was used in subsequent humanoid works to compute other criteria, such as multi-contact ZMP support areas and CoM acceleration cones.

The core idea of the algorithm goes as follows: let \(\theta\) denote an angle in the 2D plane, with \(u_\theta = [\cos(\theta) \ \sin(\theta)]\) the corresponding unit vector. We solve the following linear program (LP):

\begin{equation*} \begin{array}{rl} \underset{x \in \mathbb{R}^n, y \in \mathbb{R}^m}{\text{maximize}} & u_\theta^T y \\ \text{subject to} & A x \leq b \\ & C x = d \\ & y = E x + f \end{array} \end{equation*}

This LP simultaneously computes a point \(x \in P\) and its projection \(y = \pi(x)\) via its equality and inequality constraints. With the choice of \(u_\theta\) in its objective function, the output \(y^*\) will be the point of \(\pi(P)\) furthest away in the direction of \(u_\theta\). This has two important consequences:

  • \(y^*\) is a vertex of \(\pi(P)\)
  • \(u_\theta\) is the normal vector of a supporting hyperplane of \(\pi(P)\) at \(y^*\)

A supporting hyperplane (a line, since we are in 2D) touches the polygon at a given point. Formally, it separates the plane in two: one side \(S_0\) that contains no point of \(\pi(P)\), and its complement, which contains the polygon entirely. In our present case,

\begin{equation*} S_0 = \left\{z \in \mathbb{R}^2 \ \right|\left. \ u_\theta^T (z - y^*) > 0 \right\} \end{equation*}

Supporting hyperplanes are members of the dual of a polytope. They are generated by conic combinations of the halfspace representation: in \(A x \leq b\), rows of \(A\) correspond to normal vectors of supporting hyperplanes, while the corresponding entries of \(b\) correspond to their respective affine coordinates. Hence, using the output of the LP above, we have obtained one inequality of a halfspace representation of \(\pi(P)\): \(u_\theta^T y \leq u_\theta^T y^*\). By repeatedly solving LPs for different values of the angle \(\theta\), we thus obtain a halfspace representation of an outer approximation of the projected polygon.
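
Here is a hedged sketch of a single such LP, assuming scipy.optimize.linprog and a problem instance \((A, b, C, d, E, f)\) such as the cube example above; the function name is made up. Since \(y = E x + f\), maximizing \(u_\theta^T y\) in the combined LP is equivalent to minimizing \(-(E^T u_\theta)^T x\) over \(x\) alone.

```python
import numpy as np
from scipy.optimize import linprog

def ray_shoot(theta, A, b, C, d, E, f):
    """Return y*, the point of pi(P) furthest in the direction u_theta."""
    u = np.array([np.cos(theta), np.sin(theta)])
    res = linprog(
        -E.T.dot(u),                     # minimize -(u^T E) x, i.e. maximize u^T y
        A_ub=A, b_ub=b,                  # A x <= b
        A_eq=C if C.size else None,      # C x = d (skipped if empty)
        b_eq=d if d.size else None,
        bounds=(None, None),             # x is a free variable
    )
    assert res.success, res.message
    return E.dot(res.x) + f

# The supporting halfspace found at angle theta is u_theta^T y <= u_theta^T y*.
```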

Meanwhile, \(y^*\) gives us one vertex of \(\pi(P)\). By repeatedly solving LPs with different angles \(\theta\), we obtain a subset \(\{ y_1^*, \ldots, y_k^*\}\) of the vertices of \(\pi(P)\), which gives us a vertex representation of an inner approximation of the projected polygon. Bretl and Lall showed how, as the number of LPs solved for different angles \(\theta\) increases, these two approximations converge toward \(\pi(P)\). Furthermore, the number of iterations \(k\) required to reach a desired precision can be computed explicitly, by looking at the area between the inner and outer approximations, and the algorithm is anytime, i.e., computations can be interrupted after solving any number of LPs and still return a valid solution.
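
Reusing the hypothetical ray_shoot sketch and the cube instance above, a fixed-angle variant (not Bretl and Lall's adaptive refinement, which picks each new angle from the current approximation gap) would look like:

```python
import numpy as np

k = 16
inner_vertices = []    # vertex representation of the inner approximation
outer_halfspaces = []  # pairs (u, u^T y*) giving the outer approximation's H-rep
for theta in np.linspace(0.0, 2.0 * np.pi, k, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    y_star = ray_shoot(theta, A, b, C, d, E, f)
    inner_vertices.append(y_star)
    outer_halfspaces.append((u, u.dot(y_star)))
```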

Implementation

A Python implementation of this algorithm was written by Quang-Cuong Pham in bretl.py. You can call it via project_polytope_bretl() in pymanoid.

Projecting to a 3D polytope

Bretl and Lall's algorithm has recently been extended to 3D by Audren and Kheddar, this time to compute the 3D robust static-equilibrium polyhedron.

Fourier-Motzkin elimination

If the projection is computed offline, once and for all, one may consider Fourier-Motzkin elimination (FME). Contrary to the previous two approaches, this method works directly and exclusively on the halfspace representation of the input polytope. One of its benefits is that it can be applied analytically when the signs of the linear coefficients are known in advance. This feature was used, for example, to calculate the analytical formula of single-contact wrench cones.

Pivoting rule

FME relies on a pivoting rule similar to Gaussian elimination: it "eliminates" the \(x\) coordinates from the combined polytope over \([x \ y]\):

\begin{equation*} P_{\times} = \left\{[x\ y] \ \right|\left. \ A x \leq b, C x = d, y = E x + f \right\} \end{equation*}

Its core idea goes as follows. First, use the equality constraints \(C x = d\) and \(y = E x + f\) to eliminate as many coordinates \(x_i\) from \(x\) as possible. This leaves us with a reduced polytope:

\begin{equation*} P' = \left\{[x_0 \ \cdots \ x_k \ y] \ \right|\left. \ A' [x_0 \ \cdots \ x_k \ y] \leq b'\right\} \end{equation*}

Now, consider the first coordinate \(x_0\). Each row of the inequality constraint can be written as:

\begin{equation*} A'_{i, 0} x_0 + \cdots + A'_{i, k} x_k + A'_{i,k+1} y_0 + A'_{i, k+2} y_1 \leq b'_i \end{equation*}

One of three outcomes can happen:

  • \(A'_{i, 0} < 0\): in this case, dividing by this number yields a lower bound \(x_0 \geq l_i\), with
\begin{equation*} l_i := \frac{b'_i}{A'_{i, 0}} - \sum_{j=1}^k \frac{A'_{i, j}}{A'_{i, 0}} x_j - \frac{A'_{i, k+1}}{A'_{i, 0}} y_0 - \frac{A'_{i, k+2}}{A'_{i, 0}} y_1 \end{equation*}
  • \(A'_{i, 0} > 0\): then, doing the same yields an upper bound \(x_0 \leq u_i\)
  • \(A'_{i, 0} = 0\): in this case, the inequality is independent of \(x_0\), i.e., it is already part of the projection where \(x_0\) has been eliminated.

By construction, \(x_0 \in [l^*, u^*]\) where \(l^*\) is the highest of all lower bounds obtained by the above process, and \(u^*\) is similarly the lowest of all upper bounds. Therefore, the system of linear inequalities has solutions if and only if this interval is nonempty, that is to say,

\begin{equation*} \forall i, j,\ l_i \leq u_j \end{equation*}

The key idea here is that the expressions of the \(l_i\)'s and \(u_j\)'s depend only on \(x_1, \ldots, x_k, y_0, y_1\). To eliminate \(x_0\), we therefore:

  • remove from \(A'\) and \(b'\) all rows \(i\) for which \(A'_{i, 0} \neq 0\)
  • add one new row for each pair \((i, j)\) of lower and upper bounds on \(x_0\), corresponding to \(l_i - u_j \leq 0\).

The result of this operation is a new polytope over \([x_1 \ \cdots \ x_k \ y_0 \ y_1]\), from which the elimination can be repeated until the only remaining coordinates are \(y_0\) and \(y_1\).
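
A minimal sketch of one such elimination step is given below, written over a generic system \(A z \leq b\) (the function name is made up, and no redundancy removal is performed). In practice the exact zero tests on coefficients would use a numerical tolerance.

```python
import numpy as np

def eliminate_first_coordinate(A, b):
    """Return (A'', b'') describing the projection of {z | A z <= b} with z_0 removed."""
    zero = [i for i in range(len(b)) if A[i, 0] == 0.0]   # rows independent of z_0
    lower = [i for i in range(len(b)) if A[i, 0] < 0.0]   # rows giving z_0 >= l_i
    upper = [i for i in range(len(b)) if A[i, 0] > 0.0]   # rows giving z_0 <= u_j
    rows = [A[i, 1:] for i in zero]
    rhs = [b[i] for i in zero]
    for i in lower:
        for j in upper:
            # The positive combination A[j, 0] * row_i - A[i, 0] * row_j cancels
            # the z_0 coefficient and encodes the condition l_i <= u_j.
            rows.append(A[j, 0] * A[i, 1:] - A[i, 0] * A[j, 1:])
            rhs.append(A[j, 0] * b[i] - A[i, 0] * b[j])
    return np.array(rows), np.array(rhs)
```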

As can readily be seen from the combinatorics of its update rule, Fourier-Motzkin elimination has a (doubly) exponential memory complexity. Imbert proposed a set of syntactic rules to avoid considering pairs of lower and upper bounds that yield redundant inequalities, but without lowering the worst-case complexity of the overall algorithm.

Implementation

The SymPyFME toolbox provides a Python implementation of Fourier-Motzkin elimination, using Imbert's acceleration theorems and multiprocessing for parallel computations.
