Tanh-sinh quadrature is one of a sequence of methods known as “double exponential” methods, because the weights used die away at the rate of a double exponential function, roughly $\exp\left(-\frac{\pi}{2}e^{|t|}\right)$.

In order to integrate

$$\int_{-1}^{1} f(x)\,dx$$

we use the substitution

$$x = \psi(t) = \tanh\left(\frac{\pi}{2}\sinh t\right),$$

which transforms the integral into

$$\int_{-\infty}^{\infty} f(\psi(t))\,\psi'(t)\,dt, \qquad \psi'(t) = \frac{\pi}{2}\cdot\frac{\cosh t}{\cosh^2\left(\frac{\pi}{2}\sinh t\right)}.$$

So far this seems like we’re going in the wrong direction – ordinarily we’d want to transform an infinite integral into a nice finite one for computation.

But consider the functions $\psi(t)$ and $\psi'(t)$.

At even so small a value as $t = 3$, we have $1 - \psi(3) \approx 4.4\times 10^{-14}$ and $\psi'(3) \approx 1.4\times 10^{-12}$. Suppose we approximate the transformed integral by the trapezoidal rule as

$$\int_{-\infty}^{\infty} f(\psi(t))\,\psi'(t)\,dt \approx h\sum_{k=-N}^{N}\psi'(kh)\,f(\psi(kh)) = h\sum_{k=-N}^{N} w_k\,f(x_k)$$

for a small enough $h$ and large enough $N$. Here the values $x_k = \psi(kh)$ are the nodes or abscissae, and $w_k = \psi'(kh)$ are the weights.

We only need $N$ large enough so that the die-off we have just seen happens. For example, if $h = 1/8$ and $N = 47$, then $\psi'(t)$ is already about $10^{-240}$ for $t = Nh = 5.875$ – so that the outermost weights we are dealing with are of the order of $10^{-240}$.
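This die-off can be checked directly, even in ordinary double precision (this is my own Python snippet, not part of the original session), since a weight of order $10^{-240}$ is still representable as a double:

```python
import math

# psi'(t) = (pi/2) * cosh(t) / cosh((pi/2) * sinh(t))^2,
# evaluated at the outermost node t = N*h = 47/8 = 5.875.
t = 47 * 0.125
w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
print(w)  # on the order of 1e-240
```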

The tanh-sinh rule thus approximates an integral of a function over $[-1,1]$ by choosing a small $h$ and a reasonable $N$, and computing the above sum. The remarkable thing is how accurate this rule is for relatively large values of $h$ and small values of $N$. (This can be shown using the Euler-Maclaurin summation formula, and I’ll discuss it in a later post.)

The other very nice aspect of this rule is that if we are using real arithmetic to a high enough precision, then we can deal happily with functions with singularities at one or both endpoints, as we never actually calculate the function at those values. For $h$ and $N$ as above, the closest any node gets to $\pm 1$ is about $10^{-243}$.
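Putting the pieces together, the whole rule is only a few lines. Here is a rough double-precision Python sketch (my own translation, not the article’s PanAxiom code; since doubles round the outermost nodes to exactly $\pm 1$, the sketch skips those nodes, whose weights are at or below machine epsilon anyway):

```python
import math

def tanh_sinh(f, m):
    """Tanh-sinh approximation to the integral of f over [-1, 1]:
    the trapezoidal rule with step h = 2^-m applied to the transformed
    integrand, truncated at N = 6*2^m - 1 nodes either side of zero."""
    h = 0.5 ** m
    N = 6 * 2 ** m - 1
    total = 0.0
    for k in range(-N, N + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)                                      # node   x_k = psi(t)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # weight w_k = psi'(t)
        if abs(x) == 1.0:
            # In doubles the outermost nodes round onto the endpoints;
            # their true weights are at or below machine epsilon, so skip them.
            continue
        total += w * f(x)
    return h * total
```

For instance, `tanh_sinh(lambda x: math.sqrt(1 - x*x), 3)` reproduces $\pi/2$ to near machine precision. (For integrands that are actually singular at an endpoint, a slightly wider exclusion zone around $\pm 1$ is needed when working in fixed precision.)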

So let’s look at two examples. The first one is

$$\int_{-1}^{1}\sqrt{1-x^2}\,dx = \frac{\pi}{2}.$$

We’ll do this in PanAxiom, starting by setting a high working precision and a long output length:

`(1) -> digits(1000);outputGeneral(200);`

and entering the functions $\psi$ and $\psi'$ (the integrand is defined in the same style, as `f:Float->Float` with `f(x)==sqrt(1-x^2)`):

`(2) -> psi:Float->Float`

`(3) -> psi(x)==tanh(%pi/2*sinh(x))`

`(4) -> psid:Float->Float`

`(5) -> psid(x)==%pi/2*cosh(x)/(cosh(%pi/2*sinh(x)))^2`

Now we’ll try a few values of $m$ and the associated values of $h$ and $N$:

`(6) -> m:=3`

`(7) -> h:=0.5^m`

`(8) -> N:=6*2^m-1`

`(9) -> xs := [psi(h*t) for t in -N..N]`

`(10) -> ws := [psid(h*t) for t in -N..N]`

`(11) -> int3 := reduce(+,map((w,x)+->h*w*f(x),ws,xs))`

`1.5707963267_9489661923_1321692195_2032049740_0882640697_2503877808_704745288`
`2_7101748171_3942893866_2316742183_6796150502_5937316895_0417577408_475800854`
`4_5628142836_8673040694_1249390695_6344814583_2236941053_851601884`

`(12) -> abs(int3-%pi/2)`

`0.5554517628_7542412671_9419593390_3364085913_8006787437_7214628876_453560615`
`6_8437688746_1769296163_5714948354_3607228896_9582948438_7726267453_442704425`
`5_5987084261_8034013826_3090769499_7396765187_2670125846_199135277 E -27`

So even with this fairly large value of $h$ and only a few function calls (95 of them) we have accuracy to 26 decimal places. Clearly we need to try a larger $m$:

`(13) -> m:=5`

`(14) -> h:=0.5^m;N:=6*2^m-1`

`(15) -> xs := [psi(h*t) for t in -N..N]`

`(16) -> ws := [psid(h*t) for t in -N..N]`

`(17) -> int5 := reduce(+,map((w,x)+->h*w*f(x),ws,xs))`

`1.5707963267_9489661923_1321691639_7514420985_8469968755_2910487472_296153908`
`2_0314310449_9314017412_6710585339_9107404325_6641153323_5469223238_784499800`
`4_5287150025_7861373847_7334689949_7174954259_0751407012_954089269`

`(18) -> abs(int5-%pi/2)`

`0.1910315886_4182607445_9615455786_5963129554_8527914401_4792528292_268179437`
`3_5963818590_2038180838_1938683423_3585421354_3187584226_8658998324_329887020`
`8_3521709505_2582597229_5158482247_8277177970_8098898047_2412558066 E -127`

and we have 127 decimal place accuracy! In general, working to high enough precision, we should expect an approximate doubling of the number of correct decimal places for every halving of $h$.
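Double precision can’t show anything like 127 digits, of course, but the doubling of correct digits per halving of $h$ is visible even there, up to the point where rounding error takes over at around $10^{-16}$. A self-contained sketch of the experiment (my own Python; nodes that round onto $\pm 1$ in doubles are skipped, which is harmless since their weights are at machine-epsilon level):

```python
import math

def tanh_sinh(f, m):
    # Tanh-sinh rule on [-1, 1] with h = 2^-m and N = 6*2^m - 1.
    h = 0.5 ** m
    N = 6 * 2 ** m - 1
    total = 0.0
    for k in range(-N, N + 1):
        u = 0.5 * math.pi * math.sinh(k * h)
        x = math.tanh(u)               # node psi(k*h)
        if abs(x) == 1.0:              # rounded onto an endpoint; negligible weight
            continue
        w = 0.5 * math.pi * math.cosh(k * h) / math.cosh(u) ** 2  # weight psi'(k*h)
        total += w * f(x)
    return h * total

# Error in integrating sqrt(1 - x^2) (exact value pi/2) as h is halved:
errs = [abs(tanh_sinh(lambda x: math.sqrt(1 - x * x), m) - math.pi / 2)
        for m in (1, 2, 3)]
for m, e in zip((1, 2, 3), errs):
    print(m, e)
```

The printed errors shrink sharply with each halving of $h$ until the double-precision floor is reached at $m = 3$.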

For a second example, consider

$$\int_{-1}^{1} -\frac{1}{2}\ln\left(\ln\frac{2}{x+1}\right)dx = \gamma,$$

where $\gamma = 0.5772156649\ldots$ is the Euler-Mascheroni constant. This integrand has singularities at both endpoints $x = \pm 1$. Here goes:

`f(x)==-0.5*log(log(2/(x+1)))`

Now take a larger $m$, and compute $h$, $N$, and the two lists of nodes and weights as above. Then

`(23) -> int6:=reduce(+,map((w,x)+->h*w*f(x),ws,xs))`

`0.5772156649_0153286060_6512090082_4024310421_5933593992_3598805767_2348848677`
`_2677766467_0936947063_2917467495_1463144724_9807082480_9605040144_8654283622`
`_4173997644_9235362535_0033374293_7337737673_9427925952_5824709492`

and this can be checked as being correct to all displayed 200 decimal places.
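As a cross-check in ordinary double precision (my own Python sketch again, not the article’s session – only 13 digits or so are attainable here, and since the integrand is genuinely singular at both endpoints, nodes that land within about $10^{-15}$ of $\pm 1$ are dropped; their weights are of order $10^{-14}$ or less, so this costs only about $10^{-13}$ of accuracy):

```python
import math

def tanh_sinh(f, m):
    # Tanh-sinh rule on [-1, 1] with h = 2^-m and N = 6*2^m - 1.
    h = 0.5 ** m
    N = 6 * 2 ** m - 1
    total = 0.0
    for k in range(-N, N + 1):
        u = 0.5 * math.pi * math.sinh(k * h)
        x = math.tanh(u)
        if 1.0 - abs(x) < 1e-15:
            # In doubles these nodes land essentially on +-1, where the
            # integrand below blows up, so we skip them.
            continue
        total += 0.5 * math.pi * math.cosh(k * h) / math.cosh(u) ** 2 * f(x)
    return h * total

# The integrand of the second example; singular at both x = -1 and x = 1.
def f(x):
    return -0.5 * math.log(math.log(2.0 / (x + 1.0)))

approx = tanh_sinh(f, 4)
print(approx)  # close to gamma = 0.5772156649...
```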

Tanh-sinh quadrature is an impressive method, but it seems little used. I’m guessing that’s because it’s quite new (1974) – well, new in comparison to other methods – and because it is hard to perform a solid error analysis for it. However, David Bailey, who is one of the champions of this method – see a paper of his here – has done several decades of computations and experiments, and claims that “Overall, the tanh-sinh scheme appears to be the best for integrands of the type most often encountered in experimental math research.”

You can read the original 1974 paper of Hidetosi Takahasi and Masatake Mori here.