Exploring Tanh-Sinh quadrature


As is well known, tanh-sinh quadrature takes an integral

\[ \int_{-1}^1f(x)dx \]

and uses the substitution

\[ x = g(t) = \tanh\left(\frac{\pi}{2}\sinh t\right) \]

to transform the integral into

\[ \int_{-\infty}^{\infty}f(g(t))g'(t)dt. \]

The reason this works so well is that the derivative \(g'(t)\) dies away at a double exponential rate; that is, at the rate of

\[ e^{-e^t}. \]

In fact,

\[ g'(t)=\frac{\frac{\pi}{2}\cosh t}{\cosh^2(\frac{\pi}{2}\sinh t)} \]

and for example, \(g'(10)\approx 4.4\times 10^{-15022}\).
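This decay is easy to verify numerically. Here is a quick check in Julia, using a one-line version of the derivative (equivalent to the gd function defined later); BigFloat's wide exponent range comfortably represents numbers this small:

setprecision(256)
using Printf

# one-line version of g'(t); 256 bits of precision is plenty
# for a rough magnitude check
gd(t) = big(pi)/2 * cosh(big(t)) / cosh(big(pi)/2 * sinh(big(t)))^2

@printf "%.1e" gd(10)    # ≈ 4.4e-15022, as claimed above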

To apply this technique, David Bailey, one of its strongest proponents, recommends first choosing a small value \(h\), an integer \(N\), and a working precision such that \(|g'(Nh)f(g(Nh))|\) is less than the accuracy you want. So if you want 1000 decimal place accuracy, you need to be sure that your working precision allows you to determine that

\[ |g'(Nh)f(g(Nh))| < 10^{-1000}. \]

If you start with, say, \(h=0.5\), you can then halve this value at each step until you have approximations that agree to your precision. Bailey claims that \(h=2^{-12}\) is "sufficient to evaluate most integrals to 1000 place accuracy".

The computation then requires calculating the value of

\[ h\sum_{j=-N}^N g'(hj)f(g(hj)). \]

(I'm using exactly the same notation as in Bailey's description.)
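As a sketch, a single level of this sum is a one-liner in Julia, using the functions g and gd defined in the next section and an integrand f (the symmetric form used later halves the work):

# naive evaluation of one level: h * Σ_{j=-N}^{N} g'(hj) f(g(hj))
level(f, h, N) = h * sum(gd(j*h) * f(g(j*h)) for j in -N:N)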

The elegant thing is that the nodes or abscissae (the values at which the function is evaluated) are based on values of \(hj\), and each step includes all the values from the previous step. So at each stage we only have to compute the "new" values.

For example, if \(Nh=2\), the first positive values of \(hj\) are

\[ 0,\frac{1}{2},1,\frac{3}{2},2 \]

and the next step will have

\[ 0,\frac{1}{4},\frac{1}{2},\frac{3}{4},1,\frac{5}{4},\frac{3}{2},\frac{7}{4},2 \]

of which the new values are

\[ \frac{1}{4},\frac{3}{4},\frac{5}{4},\frac{7}{4}. \]

At the next step, the new values will be

\[ \frac{1}{8},\frac{3}{8},\frac{5}{8},\frac{7}{8},\frac{9}{8},\frac{11}{8},\frac{13}{8},\frac{15}{8} \]

and so on. In fact at each step, with \(h=2^{-k}\) we only have to perform computations at the values

\[ \frac{1}{2^k},\frac{3}{2^k},\frac{5}{2^k},\ldots,\frac{2N-1}{2^k}. \]

We can express this sequence as

\[ h+2kh, \quad k = 0,1,2,\ldots,N-1 \]

and as the value of \(h\) halves each step, the value of \(N\) doubles each step.
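The new multipliers at each level are easy to generate; for example, as exact rationals in Julia (the helper new_nodes is just for illustration):

# the new evaluation points when the step halves to h = 2^-k:
# the odd multiples of h, i.e. (2j+1)/2^k for j = 0, ..., N-1
new_nodes(k, N) = [(2j + 1)//2^k for j in 0:N-1]

new_nodes(2, 4)    # 1//4, 3//4, 5//4, 7//4
new_nodes(3, 8)    # 1//8, 3//8, ..., 15//8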

A test in Julia

I decided to see how easy this might be by attempting to evaluate

\[ \int_0^1\frac{\arctan x}{x}dx \]

This is a good test integral, as the integrand has a removable singularity at one end. Since \(\arctan x = x - x^3/3 + x^5/5 - \cdots\), dividing by \(x\) and integrating term by term (and assuming convergence), we obtain:

\[ 1-\frac{1}{3^2}+\frac{1}{5^2}-\frac{1}{7^2}+\frac{1}{9^2}-\cdots \]

and this particular sum is known as Catalan's constant and denoted \(G\). It is unknown whether \(G\) is irrational, let alone transcendental. Because the integral is equal to a known value, the results can be checked against published values including one publication providing 1,500,000 decimal places.
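As an aside, summing the series directly converges far too slowly for high-precision work, which is part of what makes quadrature attractive here. A rough check in Julia, using the built-in catalan constant:

# the error after n terms of the alternating series is of order 1/(2n)^2
using Base.MathConstants: catalan
s = sum((-1)^k / (2k + 1)^2 for k in 0:100_000)
abs(s - catalan)    # of order 1e-11 after 100,001 terms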

The substitution \(x\mapsto(x+1)/2\) transforms the integral into one over \([-1,1]\):

\[ \int_{-1}^1\frac{\arctan((x+1)/2)}{x+1}dx \]

I decided to try for 1000 decimal places, and go up to \(Nh=8\), given that \(g'(8)\approx 2.5\times 10^{-2030}\). (This is overkill, of course.) We then need enough working precision so that near the left endpoint the computed value of \(g(t)\) is never given as exactly \(-1\). For example, in Julia:

setprecision(4000)

# the substitution x = g(t)
function g(t)::BigFloat
    bt = big(t)
    bp = big(pi)
    return tanh(bp/2*sinh(bt))
end

# the derivative g'(t), used as the weight
function gd(t)::BigFloat
    bt = big(t)
    bp = big(pi)
    return bp/2*cosh(bt)/cosh(bp/2*sinh(bt))^2
end

# the transformed integrand arctan((x+1)/2)/(x+1)
function at(x)::BigFloat
    y = big(x)
    return atan((y+1)/2)/(y+1)
end

The trouble is that at this precision, the value of \(g(-8)\) is given as \(-1.0\), at which the function is undefined. If we increase the precision to 7000 bits, we find that

setprecision(7000)
using Printf
@printf "%.8e" 1+g(-8)

5.33290917e-2034

A difference this small can't be captured at only 4000 bits: that is only about 1200 decimal digits, while distinguishing \(g(-8)\) from \(-1\) needs more than 6750 bits. Now we can have a crack at the first iteration of the computation:

h = big"0.5"
N = 16
inds = [k*h for k = 1:N]
xs = [g(z) for z in inds]
ws = [gd(z) for z in inds]
ts = h*gd(0)*at(g(0))
ts += h*sum(ws[i]*at(xs[i]) for i = 1:N)
ts += h*sum(ws[i]*at(-xs[i]) for i = 1:N)
@printf "%.50e" ts

9.15969525022017573265491207994328001754668713901325e-01

and this is correct to about 6 decimal places. We are here exploiting the fact that the weight function \(g'(t)\) is even and the substitution function \(g(t)\) is odd, so we only have to compute values for positive \(t\).
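This symmetry is easy to spot-check with the functions already defined (the results should be zero, or negligibly small):

g(2) + g(-2)     # g(-t) = -g(t): g is odd
gd(2) - gd(-2)   # g'(-t) = g'(t): g' is even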

The promise of tanh-sinh quadrature is that the accuracy roughly doubles with each halving of the step size. We can test this by repeatedly updating the current value with the new ordinates, comparing each result to a downloaded value of a few thousand decimal places of the constant:

for j = 2:10
    h = h/2
    inds = [h+2*k*h for k in 0:N-1]
    xs = [g(z) for z in inds]
    ws = [gd(z) for z in inds]
    ts = ts/2 + h*sum(ws[i]*at(xs[i]) for i in 1:N)
    ts = ts + h*sum(ws[i]*at(-xs[i]) for i in 1:N)
    print(j,"  ")
    @printf "%.8e" abs(ts-catalan)
    println()
    N = N*2
end

2  6.01994061e-10
3  6.03834702e-20
4  8.07587315e-38
5  1.15722093e-74
6  9.05835440e-148
7  7.95770023e-294
8  2.44238219e-585
9  4.47198995e-1167
10  1.24233864e-1355

At this stage we have \(h=2^{-10}\) and a result that is accurate to well over our intended 1000 decimal places.

If we were instead to compare the absolute differences of successive results, we would see:

2  3.93144679e-06
3  6.01994061e-10
4  6.03834702e-20
5  8.07587315e-38
6  1.15722093e-74
7  9.05835440e-148
8  7.95770023e-294
9  2.44238219e-585
10  4.47198995e-1167

and we could infer that we have at least 1167 decimal places of accuracy. (If we were to go one more step, the difference becomes about \(1.004\times 10^{-2304}\), which is the limit of the set precision.)
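To finish, the whole procedure can be wrapped up into a single routine. The sketch below is my own consolidation of the steps above, not Bailey's code; it assumes setprecision has been called appropriately and that g and gd are as defined earlier:

# tanh-sinh quadrature of f over [-1,1]: start at h = 1/2 and repeatedly
# halve h (doubling the number of abscissae) until successive results agree
function tanhsinh(f; Nh = 8, tol = big(10.0)^-1000, maxlevels = 20)
    h = big"0.5"
    N = round(Int, Nh/h)
    ts = h*gd(0)*f(g(0))
    for k in 1:N                 # first level: all multiples of h
        t = k*h
        ts += h*gd(t)*(f(g(t)) + f(-g(t)))    # g odd, g' even
    end
    for _ in 1:maxlevels
        h /= 2
        s = zero(ts)
        for k in 0:N-1           # only the new, odd multiples of h
            t = h + 2*k*h
            s += gd(t)*(f(g(t)) + f(-g(t)))
        end
        tsnew = ts/2 + h*s       # rescale the old sum, add the new terms
        abs(tsnew - ts) < tol && return tsnew
        ts = tsnew
        N *= 2
    end
    return ts                    # hit maxlevels without converging
end

tanhsinh(at)    # ≈ 0.9159655941..., Catalan's constant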