In the last post I looked at Aitken’s delta-squared process, and showed its applicability to summing some alternating series. But what if the series is not alternating? Let’s consider

$$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.$$

As before, we invoke our numerical software:

```
>> N = 13; n = 1:N; a = 1./(n.^2); s = cumsum(a);
>> s.'
ans =
   1.000000000000000
   1.250000000000000
   1.361111111111111
   1.423611111111111
   1.463611111111111
   1.491388888888889
   1.511797052154195
   1.527422052154195
   1.539767731166541
   1.549767731166541
   1.558032193976458
   1.564976638420903
   1.570893798184216
>> atk = aitken(s);
>> atk(end,:).'
ans =
   1.570893798184216
   1.604976638420905
   1.623249742431601
   1.633102742772414
   1.638466686494426
   1.641588484246542
   1.641583981554060
```

We are a little better off here; the final cumulative sum is in error by about 0.074, and using Aitken’s process gives us a final value which is in error by about 0.003. But this is nowhere near as close as we achieved earlier, and that is because Aitken’s process doesn’t work for a sequence whose convergence is *logarithmic*, that is, one for which

$$\lim_{n\to\infty}\frac{s_{n+1}-L}{s_n-L}=1,$$

where $L$ is the limit of the sequence.
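We can see this behaviour directly: for the partial sums of $\sum 1/n^2$, the ratios of successive errors creep up towards 1, which is exactly what defeats Aitken’s process. A quick check (a NumPy sketch of mine; the post itself uses MATLAB):

```python
import numpy as np

# Partial sums of sum 1/k^2, which converges to pi^2/6.
L = np.pi**2 / 6
n = np.arange(1, 2001)
s = np.cumsum(1.0 / n**2)

# Ratios of successive errors: logarithmic convergence means these tend to 1.
r = (s[1:] - L) / (s[:-1] - L)
print(r[0], r[-1])   # the ratios approach 1 from below
```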

### Lubkin’s W-transformation

This is named for Samuel Lubkin, who wrote extensively about series acceleration in the 1950s. In fact, his 1952 paper is available online. In the 1980s J. E. Drummond at the Australian National University took up Lubkin’s cudgels and further extended and developed his methods. Drummond noticed that Aitken’s and Lubkin’s processes were very closely linked.

First, Aitken’s process can be written as

$$A_n = \frac{\Delta\left(\dfrac{s_n}{\Delta s_n}\right)}{\Delta\left(\dfrac{1}{\Delta s_n}\right)}.$$

We can check this using any symbolic package, or working it out by hand:

$$A_n = \frac{\dfrac{s_{n+1}}{\Delta s_{n+1}}-\dfrac{s_n}{\Delta s_n}}{\dfrac{1}{\Delta s_{n+1}}-\dfrac{1}{\Delta s_n}}.$$

This last expression can be expanded to:

$$A_n = \frac{s_{n+1}\,\Delta s_n - s_n\,\Delta s_{n+1}}{\Delta s_n - \Delta s_{n+1}}.$$

After some algebraic fiddlin’, we end up with

$$A_n = s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n} = s_n - \frac{(s_{n+1}-s_n)^2}{s_{n+2}-2s_{n+1}+s_n},$$

which is the initial formula for Aitken’s process.
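The agreement between the classical formula $s_n - (\Delta s_n)^2/\Delta^2 s_n$ and the quotient-of-first-differences form $\Delta(s_n/\Delta s_n)\,/\,\Delta(1/\Delta s_n)$ is also easy to confirm numerically (a NumPy sketch of mine, not the post’s MATLAB):

```python
import numpy as np

n = np.arange(1, 21)
s = np.cumsum(1.0 / n**2)            # any test sequence will do
s0, s1, s2 = s[:-2], s[1:-1], s[2:]

# Classical Aitken delta-squared formula.
classic = s0 - (s1 - s0)**2 / (s2 - 2*s1 + s0)

# Quotient of first differences of s/ds and 1/ds.
quotient = (s1/(s2 - s1) - s0/(s1 - s0)) / (1/(s2 - s1) - 1/(s1 - s0))

print(np.max(np.abs(classic - quotient)))   # agrees to rounding error
```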

Lubkin’s process can be written as

$$W_n = \frac{\Delta^2\left(\dfrac{s_n}{\Delta s_n}\right)}{\Delta^2\left(\dfrac{1}{\Delta s_n}\right)}.$$

Given that for a sequence $s_n$ we have $\Delta^2 s_n = s_{n+2}-2s_{n+1}+s_n$, then $W_n$ can be expanded as

$$W_n = \frac{\dfrac{s_{n+2}}{\Delta s_{n+2}} - \dfrac{2s_{n+1}}{\Delta s_{n+1}} + \dfrac{s_n}{\Delta s_n}}{\dfrac{1}{\Delta s_{n+2}} - \dfrac{2}{\Delta s_{n+1}} + \dfrac{1}{\Delta s_n}}.$$

It turns out to be more numerically stable to write $W_n$ in the form

$$W_n = s_{n+1} + \frac{\Delta^2\left(\dfrac{s_n}{\Delta s_n}\right) - s_{n+1}\,\Delta^2\left(\dfrac{1}{\Delta s_n}\right)}{\Delta^2\left(\dfrac{1}{\Delta s_n}\right)},$$

of which the numerator of the fraction is equal to $-\dfrac{\Delta^2 s_{n+1}}{\Delta s_{n+2}}$. A little bit of algebra shows that the denominator is equal to

$$\frac{\Delta s_{n+2}\,\Delta^2 s_n - \Delta s_n\,\Delta^2 s_{n+1}}{\Delta s_n\,\Delta s_{n+1}\,\Delta s_{n+2}}.$$

We can thus write the W-transformation as

$$W_n = s_{n+1} - \frac{\Delta s_n\,\Delta s_{n+1}\,\Delta^2 s_{n+1}}{\Delta s_{n+2}\,\Delta^2 s_n - \Delta s_n\,\Delta^2 s_{n+1}}.$$
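As a sanity check, the rearranged form $W_n = s_{n+1} - \Delta s_n\,\Delta s_{n+1}\,\Delta^2 s_{n+1}/(\Delta s_{n+2}\,\Delta^2 s_n - \Delta s_n\,\Delta^2 s_{n+1})$ and the quotient of second differences agree to rounding error (a NumPy sketch of mine; the post itself works in MATLAB):

```python
import numpy as np

n = np.arange(1, 21)
s = np.cumsum(1.0 / n**2)
s0, s1, s2, s3 = s[:-3], s[1:-2], s[2:-1], s[3:]
a, b, c = s1 - s0, s2 - s1, s3 - s2      # first differences
d2n, d2n1 = b - a, c - b                 # second differences

# W as a quotient of second differences (the "original" form).
w_quot = (s2/c - 2*s1/b + s0/a) / (1/c - 2/b + 1/a)

# W as a correction to s_{n+1} (the rearranged form).
w_corr = s1 - a*b*d2n1 / (c*d2n - a*d2n1)

print(np.max(np.abs(w_quot - w_corr)))   # same transformation, two spellings
```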

So let’s experiment with the partial sums of $\sum_{n\ge 1} 1/n^2$:

```
>> N = 13; n = 1:N; a = 1./n.^2;
>> s = cumsum(a); s(end)
ans =
   1.570893798184216
>> abs(s(end)-pi^2/6)
ans =
   0.074040268664010
```

So far, not a particularly good approximation, as we’d expect. So we’ll try the W-transformation, in its original form as the quotient of two second differences:

```
>> s0 = s(1:N-3); s1 = s(2:N-2); s2 = s(3:N-1); s3 = s(4:N);
>> w = (s2./(s3-s2)-2*s1./(s2-s1)+s0./(s1-s0))./(1./(s3-s2)-2./(s2-s1)+1./(s1-s0));
>> w(end)
ans =
   1.644837749532052
>> abs(w(end)-pi^2/6)
ans =
   9.631731617454342e-05
```

which is a great improvement. We can do this again, simply by

```
>> s = w; N = N-3;
```

and repeating the above commands. The new values of the final result, and error, are:

```
>> w(end)
ans =
   1.644933894309522
>> abs(w(end)-pi^2/6)
ans =
   1.725387044348992e-07
```

Just as with Aitken’s process, we can whip up a little program to perform the W-transformation iteratively:

```
function out = lubkin(c)
% Applies Lubkin's W-process to a vector c, which we suppose to
% represent a sequence converging to some limit L
N = length(c);
M = N;                    % length of current vector
s = reshape(c,N,1);       % ensures we are working with column vectors
out = s;
for i = 1:floor((N-1)/3)  % each pass consumes three terms
    s0 = s(1:M-3); s1 = s(2:M-2); s2 = s(3:M-1); s3 = s(4:M);
    t = s2./(s3-s2)-2*s1./(s2-s1)+s0./(s1-s0);
    t = t./(1./(s3-s2)-2./(s2-s1)+1./(s1-s0));
    tcol = zeros(N,1);
    tcol(3*i+1:N) = t;    % pad with zeros so all columns have length N
    out = [out tcol];
    M = M-3;
    s = t;
end
```
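For readers working outside MATLAB, here is a rough NumPy port of the same idea (a sketch of mine, not the post’s code), which collects the result of each pass rather than padding with zeros:

```python
import numpy as np

def lubkin(c):
    """Iterate Lubkin's W-process on a vector of partial sums.

    Returns a list of arrays, one per pass; each pass consumes
    three terms of the current sequence.
    """
    s = np.asarray(c, dtype=float)
    passes = [s]
    while len(s) >= 4:
        s0, s1, s2, s3 = s[:-3], s[1:-2], s[2:-1], s[3:]
        t = (s2/(s3-s2) - 2*s1/(s2-s1) + s0/(s1-s0)) \
            / (1/(s3-s2) - 2/(s2-s1) + 1/(s1-s0))
        passes.append(t)
        s = t
    return passes

n = np.arange(1, 14)                     # N = 13, as in the post
passes = lubkin(np.cumsum(1.0 / n**2))
for p in passes:
    print(abs(p[-1] - np.pi**2 / 6))     # error of the last entry of each pass
```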

Note that the generalization of these processes, using

$$\frac{\Delta^k\left(\dfrac{s_n}{\Delta s_n}\right)}{\Delta^k\left(\dfrac{1}{\Delta s_n}\right)}$$

for integers $k \ge 1$, has been explored by J. E. Drummond, and you can read his 1972 paper online.
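Reading the generalization as a quotient of $k$-th differences, with $k=1$ giving Aitken’s process and $k=2$ giving Lubkin’s W, it takes only a few lines to try it out (a NumPy sketch of mine under that assumption, not code from Drummond’s paper):

```python
import numpy as np

def drummond(s, k):
    """Quotient of k-th forward differences of s/ds and 1/ds.
    k = 1 reproduces Aitken's process, k = 2 Lubkin's W.
    (My reading of the generalization -- an assumption.)"""
    ds = np.diff(s)
    top = s[:-1] / ds          # the sequence s_n / (s_{n+1} - s_n)
    bot = 1.0 / ds             # the sequence 1 / (s_{n+1} - s_n)
    for _ in range(k):
        top, bot = np.diff(top), np.diff(bot)
    return top / bot

n = np.arange(1, 14)                    # N = 13, as before
s = np.cumsum(1.0 / n**2)
for k in (1, 2):
    t = drummond(s, k)
    print(k, abs(t[-1] - np.pi**2 / 6))
```

For this particular series $k=2$ does far better than $k=1$, as the post’s experiments lead us to expect.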