# Integrating cosines of polynomials

In this post, we want to integrate functions that look like:

\begin{align} \int_a^b Q(t) \cos(\alpha t^2 + \beta t + \gamma) {\rm d} t & & \int_a^b Q(t) \sin(\alpha t^2 + \beta t + \gamma) {\rm d} t \end{align}

where $$Q$$ can be any polynomial. We'll first reduce the problem analytically before implementing a C++ function to approximate its solution.

## Problem simplification

Let's focus on the cosine case. The sine case proceeds similarly, either using a trigonometric identity such as $$\cos(\theta) = \sin(\pi/2 - \theta)$$ or reproducing each step mutatis mutandis. For starters, by linearity of the integral we can deal w.l.o.g. with the case where $$Q$$ is a monomial. Also, if $$a \neq 0$$, the change of variable $$t' = t - a$$ together with an update of the lower-degree coefficients of the polynomial inside the cosine takes us to the same problem with $$a = 0$$. Furthermore, since $$\cos$$ is an even function, we can assume w.l.o.g. that $$\alpha > 0$$.
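Concretely, substituting $$t = t' + a$$ turns the integral over $$[a, b]$$ into one over $$[0, b - a]$$ with updated coefficients:

\begin{equation*} \int_a^b \cos(\alpha t^2 + \beta t + \gamma) {\rm d}t = \int_0^{b-a} \cos\left[\alpha t'^2 + (2\alpha a + \beta) t' + (\alpha a^2 + \beta a + \gamma)\right] {\rm d}t' \end{equation*}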

### Reduction of the outer polynomial degree

If $$Q$$ has degree $$n + 1$$ for $$n \geq 1$$, integrating by parts leads us back to two Integrals of Sines of Polynomials (ISP) of strictly lower outer degree, one with $$\mathrm{deg}(Q) = n - 1$$ and the other with $$\mathrm{deg}(Q) = n$$, so repeating the process eventually brings the degree down to at most 1. If $$Q$$ has degree 1, we write the ISP as the real part of its complex counterpart:

\begin{equation*} \Re\left(\int_0^b t e^{i(\alpha t^2 + \beta t + \gamma)}{\rm d}t\right) \end{equation*}

Calculating in the complex world allows us to rewrite the expression as:

\begin{equation*} \frac{1}{2 \alpha} \left[\sin(\alpha b^2 + \beta b + \gamma) - \sin \gamma - \beta \int_0^b \cos(\alpha t^2 + \beta t + \gamma) {\rm d}t\right] \end{equation*}

We are thus down to the case where $$Q$$ is simply a constant.
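For higher outer degrees, writing $$t^{n+1} = t^n \cdot [P'(t) - \beta]/(2\alpha)$$ with $$P(t) = \alpha t^2 + \beta t + \gamma$$ and integrating by parts makes the reduction explicit for $$n \geq 1$$:

\begin{equation*} \int_0^b t^{n+1} \cos(P(t)) {\rm d}t = \frac{b^n \sin(P(b))}{2\alpha} - \frac{n}{2\alpha}\int_0^b t^{n-1} \sin(P(t)) {\rm d}t - \frac{\beta}{2\alpha}\int_0^b t^n \cos(P(t)) {\rm d}t \end{equation*}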

### Reduction of the inner polynomial degree

Let's now take a look at the "inner" polynomial under the cosine or sine of the ISP, whose degree is 2. A change of variable allows us to eliminate its degree-one monomial:

\begin{equation*} \int_0^b \cos(\alpha t^2 + \beta t + \gamma) {\rm d}t = \frac{1}{\sqrt{\alpha}} \int_{\frac{\beta}{2\sqrt{\alpha}}}^{\sqrt{\alpha} b + \frac{\beta}{2\sqrt{\alpha}}} \cos\left[u^2 + \left(\gamma - \frac{\beta^2}{4 \alpha}\right)\right] {\rm d}u \end{equation*}

Trigonometric formulas to the rescue:

\begin{equation*} \cos\left[u^2 + \left(\gamma - \frac{\beta^2}{4 \alpha}\right)\right] = \cos(u^2) \cos\left(\gamma - \frac{\beta^2}{4 \alpha}\right) - \sin(u^2) \sin\left(\gamma - \frac{\beta^2}{4 \alpha}\right) \end{equation*}

We thus eliminate the constant term $$\gamma - \beta^2 / 4\alpha$$, at the cost of multiplying by a few constants. All that's left to compute our ISP is to integrate $$\cos(u^2)$$ (and, for the sine case, $$\sin(u^2)$$).
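Putting the last two steps together, with $$\delta = \gamma - \beta^2/(4\alpha)$$, $$u_0 = \beta/(2\sqrt{\alpha})$$ and $$u_1 = \sqrt{\alpha} b + \beta/(2\sqrt{\alpha})$$:

\begin{equation*} \int_0^b \cos(\alpha t^2 + \beta t + \gamma) {\rm d}t = \frac{1}{\sqrt{\alpha}} \left[\cos\delta \int_{u_0}^{u_1} \cos(u^2) {\rm d}u - \sin\delta \int_{u_0}^{u_1} \sin(u^2) {\rm d}u\right] \end{equation*}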

### Power series expansion

Consider the power series expansion of the expression under the integral:

\begin{equation*} \int_0^{b} \cos(u^2) {\rm d} u = \int_0^{b} \sum_{n=0}^{+\infty} (-1)^n \frac{u^{4n}}{(2n)!} {\rm d}u \end{equation*}

Since the power series converges normally (hence uniformly) on the compact interval $$[0, b]$$, we can swap the sum and integral signs in this expression. This yields:

\begin{equation*} \int_0^{b} \cos(u^2) {\rm d} u = \sum_{n=0}^{+\infty} \frac{(-1)^n}{(2n)!} \frac{b^{4n+1}}{4n+1} \end{equation*}

The partial sums and remainder of this series are easy to estimate, as the power series is an alternating one. If $$b$$ is not "too" large, we can approximate its value by summing the series up to some index $$N$$; the order-$$N$$ remainder then gives us the order of magnitude of the precision of this result.
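Concretely, once the terms are decreasing in magnitude, the alternating series test bounds the remainder by the first omitted term:

\begin{equation*} \left|\int_0^{b} \cos(u^2) {\rm d}u - \sum_{n=0}^{N} \frac{(-1)^n}{(2n)!} \frac{b^{4n+1}}{4n+1}\right| \leq \frac{b^{4N+5}}{(2N+2)! \, (4N+5)} \end{equation*}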

Yet, this method would not work as we go to larger and larger values of $$b$$: terms in this alternating series are then of increasing magnitudes, and we will run into numerical precision issues when computing their differences with floating-point arithmetic (for instance, the double type in C). One (non-trivial) solution to this problem would be to group terms by orders of magnitude, but we will leave this question open in this post.

## C++ implementation

Here is an iterative implementation of this method in C++, starting with the reduced problem:

```cpp
#include <cmath>
#include <cstdio>

constexpr double kDesiredPrecision = 1e-20;

// Approximates the integral of cos(u^2) over [0, b] via its power series.
double integrate_cos_x2(double b) {
  if (std::fabs(b) > 1) {
    std::fprintf(stderr, "Precision issue due to high bound.\n");
  }
  if (b == 0.0) {
    return 0.0;  // Avoids dividing by zero below.
  }

  const double stop_threshold = kDesiredPrecision / std::fabs(b);
  double cur_fact = 1.0;  // (-1)^i * b^(4i) / (2i)!
  double cur_term = 1.0;  // cur_fact / (4i + 1)
  double sum = cur_term;
  for (unsigned i = 1; std::fabs(cur_term) > stop_threshold; ++i) {
    cur_fact *= -(b * b * b * b) / (2.0 * i) / (2.0 * i - 1.0);
    cur_term = cur_fact / (4.0 * i + 1.0);
    sum += cur_term;
  }

  return b * sum;
}

double integrate_cos_x2(double from, double to) {
  return integrate_cos_x2(to) - integrate_cos_x2(from);
}
```


The `integrate_sin_x2()` function can be implemented similarly, using the power series of $$\sin(u^2)$$, whose general term is $$(-1)^n u^{4n+2}/(2n+1)!$$. Next, suppose we have defined a `Polynomial` class where the bracket operator `operator[]` reads individual coefficients. We can finally solve our ISPs as:

```cpp
enum SineFunction { COS, SIN };

double integrate_isp(SineFunction sine, const Polynomial& P, double from,
                     double to) {
  // cos is even and sin is odd, so we can assume P[2] > 0.
  if (P[2] < 0) {
    switch (sine) {
      case COS:
        return integrate_isp(sine, -P, from, to);
      case SIN:
      default:
        return -integrate_isp(sine, -P, from, to);
    }
  }

  const double dc_sqrt = std::sqrt(P[2]);
  const double new_from = dc_sqrt * from + P[1] / 2.0 / dc_sqrt;
  const double new_to = dc_sqrt * to + P[1] / 2.0 / dc_sqrt;
  // Constant term after the change of variable: gamma - beta^2 / (4 alpha).
  const double delta = P[0] - P[1] * P[1] / 4.0 / P[2];

  double I1, I2;
  switch (sine) {
    case COS:
      I1 = std::cos(delta) * integrate_cos_x2(new_from, new_to);
      I2 = std::sin(delta) * integrate_sin_x2(new_from, new_to);
      return (I1 - I2) / dc_sqrt;  // The change of variable contributes 1 / sqrt(alpha).
    case SIN:
    default:
      I1 = std::cos(delta) * integrate_sin_x2(new_from, new_to);
      I2 = std::sin(delta) * integrate_cos_x2(new_from, new_to);
      return (I1 + I2) / dc_sqrt;
  }
}
```


Wrapping up: with some trigonometric and power-series analysis, we found a way to compute our integrals of sines and cosines of polynomials with relatively little code. Yet, our algorithm becomes numerically unstable over large integration domains ($$b \to \infty$$), and we have left that next problem open.