High precision quadrature with Clenshaw-Curtis

An article by Bailey, Jeyabalan and Li, "A comparison of three high-precision quadrature schemes", available online here, compares Gauss-Legendre quadrature, tanh-sinh quadrature, and an error-function rule whose nodes and weights are given by the error function and its integrand respectively.

However, Nick Trefethen of Oxford has shown experimentally that Clenshaw-Curtis quadrature is generally no worse than Gaussian quadrature, with the added advantage that the nodes and weights are easier to obtain.

I'm using here the slightly sloppy convention of taking "Clenshaw-Curtis quadrature" as the generic name for any integration rule over the interval \([-1,1]\) whose nodes are given by cosine values (more particularly, the zeros of Chebyshev polynomials). However, rules very similar to Clenshaw-Curtis were described by the Hungarian mathematician Lipót Fejér in the 1930s, and some authors are careful to distinguish between the "Fejér I", "Fejér II", and "Clenshaw-Curtis" rules, all of which have slightly different nodes.

The quadrature rule

The particular quadrature rule used here may be considered an "open" rule in that, like Gauss-Legendre quadrature, it doesn't use the endpoints of the interval. An \(n\)-th order rule will have the nodes

\[ x_k = \cos\left(\frac{2k+1}{2n}\pi\right),\quad k=0,1,\ldots,n-1. \]

The idea is that we create an interpolating polynomial \(p(x)\) through the points \((x_k,f(x_k))\), and use that polynomial to obtain the integral approximation

\[ \int_{-1}^1f(x)\,dx\approx\int_{-1}^1p(x)\,dx. \]

However, such a polynomial can be written in Lagrange form as

\[ p(x) = \sum_{k=0}^{n-1}f(x_k)p_k(x) \]

with

\[ p_k(x)=\frac{\prod_{i\ne k}(x-x_i)}{\prod_{i\ne k}(x_k-x_i)} \]

where the products are taken over \(0\le i\le n-1\) excluding \(k\). This means that the integral approximation can be written as

\[ \int_{-1}^1\sum_{k=0}^{n-1}f(x_k)p_k(x)\,dx=\sum_{k=0}^{n-1}\left(\int_{-1}^1p_k(x)\,dx\right)f(x_k). \]

Thus writing

\[ w_k = \int_{-1}^1p_k(x)dx \]

we have an integration rule

\[ \int_{-1}^1f(x)\,dx\approx\sum_{k=0}^{n-1} w_kf(x_k). \]
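To make this concrete, here is a minimal sketch (my own addition, with \(n=9\) to match the example below) that computes the weights by literally building each Lagrange basis polynomial and integrating it over \([-1,1]\) with numpy's polynomial utilities:

import numpy as np
from numpy.polynomial import polynomial as P

N = 9
xs = [np.cos((2*(N-k)-1)*np.pi/(2*N)) for k in range(N)]

def lagrange_weight(k):
    # Numerator of p_k: the product of (x - x_i) over i != k,
    # held as an ascending-order coefficient array
    num = np.array([1.0])
    for i in range(N):
        if i != k:
            num = P.polymul(num, [-xs[i], 1.0])
    denom = np.prod([xs[k] - xs[i] for i in range(N) if i != k])
    # w_k is the exact integral of p_k over [-1, 1]
    anti = P.polyint(num)
    return (P.polyval(1.0, anti) - P.polyval(-1.0, anti))/denom

ws_lagrange = [lagrange_weight(k) for k in range(N)]

This costs \(O(n^2)\) arithmetic per weight and is numerically fragile for large \(n\), but it should reproduce the weights obtained by the methods below.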

Alternatively, as for a Newton-Cotes rule, we can determine the weights as the unique values for which

\[ \int_{-1}^1x^mdx = \sum w_k(x_k)^m \]

for all \(m=0,1,\ldots,n-1\). The integral on the left, call it \(I_m\), is equal to \(0\) for odd \(m\) and to \(2/(m+1)\) for even \(m\), so we have an \(n\times n\) linear system consisting of the equations

\[ w_0x_0^m+w_1x_1^m+\cdots+w_{n-1}x_{n-1}^m=I_m. \]

For example, here is how the weights could be constructed in Python:

import numpy as np

N = 9
# Nodes: the zeros of the Chebyshev polynomial T_N, in ascending order
xs = [np.cos((2*(N-k)-1)*np.pi/(2*N)) for k in range(N)]
# Row j of A holds the j-th powers of the nodes
A = np.array([[xs[i]**j for i in range(N)] for j in range(N)])
# Moments: the integral of x^j over [-1,1] is 0 for odd j, 2/(j+1) for even j
b = [(1+(-1)**k)/(k+1) for k in range(N)]
ws = np.linalg.solve(A, b)
ws[:,None]

array([[0.05273665],
       [0.17918871],
       [0.26403722],
       [0.33084518],
       [0.34638448],
       [0.33084518],
       [0.26403722],
       [0.17918871],
       [0.05273665]])

Then we can approximate an integral, say

\[ \int_{-1}^1e^{-x^2}dx \]

by

f = lambda x: np.exp(-x*x)
sum(ws * np.vectorize(f)(xs))

1.4936477751634403

and this is correct to six decimal places.
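As a quick sanity check (my addition), the exact value of this integral is \(\sqrt{\pi}\,\operatorname{erf}(1)\), which the standard library provides in double precision:

from math import erf, sqrt, pi

exact = sqrt(pi)*erf(1.0)                        # 1.4936482656...
print(abs(sum(ws*np.vectorize(f)(xs)) - exact))  # about 4.9e-07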

Use of the DCT

The above method for obtaining the weights is conceptually easy, but computationally expensive. A far neater method, described in an article by Alvise Sommariva, is to use the discrete cosine transform, in particular the DCT-III, which is available in most standard implementations as idct.

Define a vector \(m\) (called by Sommariva the weighted modified moments) of length \(N\) by

\begin{align*} m_0&=\sqrt{2}\\ m_k&=2/(1-k^2)\quad\text{if }k\ge 2\text{ is even}\\ m_k&=0\quad\text{if }k\text{ is odd.} \end{align*}

Then the weights are given by applying the DCT-III to \(m\) and scaling by \(\sqrt{2/N}\). Again in Python:

from scipy.fftpack import idct

# Weighted modified moments: m[0] = sqrt(2), m[k] = 2/(1 - k^2) for even k >= 2, else 0
m = [np.sqrt(2), 0] + [(1 + (-1)**k)/(1 - k**2) for k in range(2, N)]
ws = np.sqrt(2/N)*idct(m, norm='ortho')
ws[:,None]

array([[0.05273665],
       [0.17918871],
       [0.26403722],
       [0.33084518],
       [0.34638448],
       [0.33084518],
       [0.26403722],
       [0.17918871],
       [0.05273665]])

which is exactly the same as before.
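As a quick cross-check (my addition; the name ws_solve is mine), the linear-system weights can be recomputed and compared directly, since A and b are still in scope:

ws_solve = np.linalg.solve(A, b)  # weights from the earlier linear system
print(np.allclose(ws_solve, ws))  # True: the two constructions agree to machine precision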

Arbitrary precision

Our choice here (in Python) will be the mpmath library, in spite of some criticisms made of it. Our use of it, though, will be confined to simple arithmetic (sometimes with matrices) rather than to more complicated routines such as quadrature. Note also that the criticisms were more in the nature of comparisons; their writer is in fact the principal author of mpmath.

As a start, here's how we might compute the above nodes and weights to 30 decimal places. First the nodes:

from mpmath import mp

mp.dps = 30
# The same Chebyshev-zero nodes as before, now to 30 decimal places
xs = [mp.cos((2*(N-k)-1)*mp.pi/(2*N)) for k in range(N)]
for i in range(N):
    print(xs[i])

-0.984807753012208059366743024589
-0.866025403784438646763723170753
-0.642787609686539326322643409907
-0.342020143325668733044099614682
8.47842766036889964395870146939e-32
0.342020143325668733044099614682
0.642787609686539326322643409907
0.866025403784438646763723170753
0.984807753012208059366743024589

Next the weights. As mpmath doesn't have its own DCT routine, we can construct a DCT matrix and multiply by it:

dct = mp.matrix(N, N)
# Orthonormal DCT-III matrix: first column sqrt(1/N), remaining entries sqrt(2/N)*cos(...)
# (mp.mpf(1)/N rather than the double-precision 1/N, so no accuracy is lost)
dct[:,0] = mp.sqrt(mp.mpf(1)/N)

for i in range(N):
    for j in range(1, N):
        dct[i,j] = mp.sqrt(mp.mpf(2)/N)*mp.cos(mp.pi/2/N*j*(2*i+1))

# The weighted modified moments, as before
m = mp.matrix(N, 1)
m[0] = mp.sqrt(2)
for k in range(2, N, 2):
    m[k] = mp.mpf(2)/(1 - k**2)

ws = dct*m*mp.sqrt(mp.mpf(2)/N)
ws

matrix(
[['0.0527366499099067816401650387891'],
 ['0.179188712522045851600122685138'],
 ['0.264037222541004397180813776173'],
 ['0.330845175168136422780278417119'],
 ['0.346384479717813038086088934304'],
 ['0.330845175168136422780278417119'],
 ['0.264037222541004397180813776173'],
 ['0.179188712522045851600122685138'],
 ['0.0527366499099067816401650387892']])

Given the nodes and the weights, evaluating the integral is straightforward:

f = lambda x: mp.exp(-x*x)
fs = [f(x) for x in xs]
# Dot product of the weights with the function values at the nodes
ia = sum([a*b for a, b in zip(ws, fs)])
print(ia)

1.49364777516344031419344222023

This is no more accurate than the previous result, as we are still using only 9 nodes and weights. But suppose we increase both the precision and the number of nodes:

mp.dps = 100
N = 128
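Carrying out all the above computations at the new precision means regenerating the nodes, the moments, and the DCT matrix. Here is one way of bundling those steps into a helper (the function name, and the explicit cosine sum equivalent to the matrix product above, are my own; a sketch rather than the article's original code):

def cc_nodes_weights(N):
    # Chebyshev-zero nodes in ascending order, computed at the current mp.dps
    xs = [mp.cos((2*(N-k)-1)*mp.pi/(2*N)) for k in range(N)]
    # Expand sqrt(2/N) * (DCT-III of m) as an explicit cosine sum -- O(N^2),
    # just like the matrix product above. The weights are symmetric, so their
    # pairing with the ascending nodes is unaffected by node ordering.
    ws = []
    for k in range(N):
        s = mp.mpf(1)
        for j in range(2, N, 2):
            s += mp.mpf(2)/(1 - j**2)*mp.cos(mp.pi*j*(2*k+1)/(2*N))
        ws.append(mp.mpf(2)/N*s)
    return xs, ws

xs, ws = cc_nodes_weights(N)
f = lambda x: mp.exp(-x*x)
ia = sum(w*f(x) for w, x in zip(ws, xs))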

With these values we can check the accuracy of the approximate result against the exact value, which is \(\sqrt{\pi}\,\operatorname{erf}(1)\):

ie = mp.sqrt(mp.pi)*mp.erf(1)
mp.nprint(abs(ia - ie), 10)

2.857468478e-101

and we see that we have accuracy to 100 decimal places.

Onwards and upwards

With the current integral

\[ \int_{-1}^1e^{-x^2}dx \]

here are the absolute differences from the exact value for various precisions and values of N, starting with the two computations we have already:

dps     N     absolute difference
30      9     4.904614138e-7
100     128   2.857468478e-101
500     256   8.262799923e-298
1000    512   8.033083996e-667
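The rows can be reproduced along the following lines (a sketch using the cc_nodes_weights helper defined above; the larger cases take some time):

for dps, N in [(30, 9), (100, 128), (500, 256), (1000, 512)]:
    mp.dps = dps
    xs, ws = cc_nodes_weights(N)
    ia = sum(w*mp.exp(-x*x) for w, x in zip(ws, xs))
    ie = mp.sqrt(mp.pi)*mp.erf(1)
    mp.nprint(abs(ia - ie), 10)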

Note that the code above is in no way optimized; there is no hint of a fast DCT, for example. Thus for large values of N and for high precision there is a time cost; the last computation in the table took 44 seconds. (I have a Lenovo ThinkPad X1 Carbon 3rd Generation laptop running Arch Linux. It's several years old, but it's proved to be a terrific workhorse.)

But this shows that a Clenshaw-Curtis-type quadrature approach is quite appropriate for high-precision integration.