Numerical integration, or quadrature (to use the old-fashioned, but charming, term) is the business of finding approximate values of definite integrals: integrals which either cannot be computed analytically, or for which the antiderivative is too complex to be of practical use. In the first category we might have
At least, Wolfram|Alpha can’t compute this, so I assume it can’t be done analytically; and in the second category we might have
for which the antiderivative is a horrific expression involving the incomplete gamma function.
In an elementary calculus course, students might be introduced to some very simple numerical techniques, such as the trapezoidal rule or Simpson's rule; but unless the course is specifically geared to numerical computation, not much beyond that.
However, there are lots of different methods for numerical integration, some of which are great fun to play with.
All these methods are based on one simple idea: sample the function values at equidistant points, find the interpolating polynomial through those points, and integrate the polynomial. Since polynomials are easy to integrate, this method is easy to apply. For example, let’s generate the fourth-order Newton-Cotes rule, which uses a quartic polynomial, and hence requires five points.
(%o4) (14 f5 + 64 f4 + 24 f3 + 64 f2 + 14 f1) h/45
In other words,
and this is known as Boole’s rule.
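The same derivation can be reproduced outside a computer algebra system. Here is a sketch in Python using exact rational arithmetic: integrate each Lagrange basis polynomial over the interval to get the weights (the function name `newton_cotes_weights` is my own, not from the post).

```python
from fractions import Fraction

def newton_cotes_weights(n):
    """Weights w_0..w_n of the closed Newton-Cotes rule on the equally
    spaced nodes x = 0, 1, ..., n (spacing h = 1), obtained by
    integrating each Lagrange basis polynomial exactly over [0, n]."""
    weights = []
    for i in range(n + 1):
        # Build l_i(x) = prod_{j != i} (x - j)/(i - j) as a list of
        # coefficients: coeffs[k] is the coefficient of x^k.
        coeffs = [Fraction(1)]
        for j in range(n + 1):
            if j == i:
                continue
            inv = Fraction(1, i - j)
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c * inv      # the x term of (x - j)
                new[k] -= c * j * inv      # the constant term
            coeffs = new
        # Integrate l_i over [0, n]: each x^k contributes n^(k+1)/(k+1).
        weights.append(sum(c * Fraction(n) ** (k + 1) / (k + 1)
                           for k, c in enumerate(coeffs)))
    return weights

# The fourth-order rule: 14/45, 64/45, 24/45, 64/45, 14/45,
# matching the Maxima output above once multiplied by h.
print(newton_cotes_weights(4))
```

Multiplying these weights by the spacing h recovers Boole's rule exactly.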
The trouble with single Newton-Cotes rules is that as the number of points gets larger, the interpolating polynomial may develop more “wiggles”, and the resulting integral approximation may get worse. For example, take the integral
for which the exact value is
However, let’s try this with a tenth-order rule:
We can see why this is so poor by plotting the original function and its interpolant together:
For this reason, better results can be obtained by chopping the integral up into small segments, and applying a low order rule (say, Simpson’s) to each segment.
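The post's example integral is not reproduced in code here, so as a stand-in the sketch below uses Runge's function $1/(1+x^2)$ on $[-5,5]$, the classic case of this misbehaviour, and compares a single tenth-order rule against composite Simpson's rule built from the very same eleven sample points (helper names are my own).

```python
from fractions import Fraction
from math import atan

def newton_cotes_weights(n):
    # Closed Newton-Cotes weights on nodes 0..n (h = 1), by exact
    # integration of the Lagrange basis polynomials over [0, n].
    weights = []
    for i in range(n + 1):
        coeffs = [Fraction(1)]
        for j in range(n + 1):
            if j == i:
                continue
            inv = Fraction(1, i - j)
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c * inv
                new[k] -= c * j * inv
            coeffs = new
        weights.append(sum(c * Fraction(n) ** (k + 1) / (k + 1)
                           for k, c in enumerate(coeffs)))
    return weights

def f(x):
    return 1 / (1 + x * x)   # Runge's function

a, b, n = -5.0, 5.0, 10
h = (b - a) / n
ys = [f(a + i * h) for i in range(n + 1)]

# A single tenth-order Newton-Cotes rule over the whole interval
tenth = h * sum(float(w) * y for w, y in zip(newton_cotes_weights(n), ys))

# Composite Simpson's rule on the same eleven points
simpson = (h / 3) * (ys[0] + ys[10]
                     + 4 * sum(ys[i] for i in range(1, 10, 2))
                     + 2 * sum(ys[i] for i in range(2, 10, 2)))

exact = 2 * atan(5.0)
print(tenth, simpson, exact)
```

On this function the composite low-order rule beats the single high-order rule comfortably: the tenth-order rule has large negative weights, and the wiggly interpolant drags its value well away from the true integral, while composite Simpson stays close.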
Newton-Cotes with adjustments
If you look up old texts, or search about on the web, you’ll find rules similar to the Newton-Cotes rules, but with much simpler coefficients.
For example, the sixth-order rule is
$$\int_{x_1}^{x_7} f(x)\,dx \approx \frac{h}{140}\left(41f_1 + 216f_2 + 27f_3 + 272f_4 + 27f_5 + 216f_6 + 41f_7\right)$$
As you can see, the coefficients are already getting cumbersome.
One way of adjusting a Newton-Cotes formula is by adding some function differences. If we denote by $\Delta f_i$ the forward differences of the function values $f_i$, these are defined as
$$\Delta f_i = f_{i+1} - f_i$$
and all higher differences can be obtained recursively for $n \ge 2$:
$$\Delta^n f_i = \Delta^{n-1} f_{i+1} - \Delta^{n-1} f_i$$
It is not hard to show that
$$\Delta^n f_i = \sum_{k=0}^{n} (-1)^k \binom{n}{k} f_{i+n-k}$$
So, for example
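Both the recursive and closed-form definitions are easy to check numerically; here is a small sketch (the helper name `forward_diff` is my own):

```python
from math import comb

def forward_diff(f, n, i):
    """n-th forward difference of the sequence f at index i: the 0th
    difference is f[i] itself, and each higher difference is a
    difference of the two differences one order down."""
    if n == 0:
        return f[i]
    return forward_diff(f, n - 1, i + 1) - forward_diff(f, n - 1, i)

f = [x ** 3 - 2 * x + 1 for x in range(12)]

# Check the closed form: Delta^n f_i = sum_k (-1)^k C(n, k) f_{i+n-k}
for n in range(6):
    for i in range(5):
        closed = sum((-1) ** k * comb(n, k) * f[i + n - k]
                     for k in range(n + 1))
        assert forward_diff(f, n, i) == closed

# For a cubic, the 4th (and all higher) differences vanish, which is
# why differences of smooth functions tend to get small.
assert all(forward_diff(f, 4, i) == 0 for i in range(5))
```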
The idea is that for most functions, differences tend to get smaller, so by adding some small multiple of a difference, we shouldn’t affect the formula too much, but we may simplify its coefficients. Let’s try this with the Newton-Cotes sixth-order rule, which for simplicity we shall integrate over the range :
This just produces the sixth-order rule seen above. But now!
(%o13) f(7) - 6 f(6) + 15 f(5) - 20 f(4) + 15 f(3) - 6 f(2) + f(1)
(%o14) (3 f(7) + 15 f(6) + 3 f(5) + 18 f(4) + 3 f(3) + 15 f(2) + 3 f(1))/10
This is Weddle’s rule; more commonly written as
Notice how much simpler the coefficients are! Another rule, Hardy’s rule, can also be obtained by an adjustment:
(%o15) (14 f(7) + 81 f(6) + 110 f(4) + 81 f(2) + 14 f(1))/50
or in integral form as
Not only is this simpler than the sixth-order Newton-Cotes rule, but two of the coefficients have vanished.
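Both adjustments can be verified with exact rational arithmetic. In the sketch below, Weddle's rule is the sixth-order rule plus $\frac{1}{140}\Delta^6 f_1$, which is the classical derivation; the multiple $-\frac{9}{700}\Delta^6 f_1$ used for Hardy's rule is one I back-solved from the coefficients above rather than took from the post, so treat it as an assumption. The helper `newton_cotes_weights` is my own name.

```python
from fractions import Fraction
from math import comb

def newton_cotes_weights(n):
    # Closed Newton-Cotes weights on nodes 0..n (h = 1), by exact
    # integration of the Lagrange basis polynomials over [0, n].
    weights = []
    for i in range(n + 1):
        coeffs = [Fraction(1)]
        for j in range(n + 1):
            if j == i:
                continue
            inv = Fraction(1, i - j)
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c * inv
                new[k] -= c * j * inv
            coeffs = new
        weights.append(sum(c * Fraction(n) ** (k + 1) / (k + 1)
                           for k, c in enumerate(coeffs)))
    return weights

nc6 = newton_cotes_weights(6)
# Coefficient of f_{1+j} in Delta^6 f_1 is (-1)^j * C(6, j):
# 1, -6, 15, -20, 15, -6, 1, as in the Maxima output above.
d6 = [Fraction((-1) ** j * comb(6, j)) for j in range(7)]

weddle = [w + d / 140 for w, d in zip(nc6, d6)]
hardy = [w - Fraction(9, 700) * d for w, d in zip(nc6, d6)]

print(weddle)  # weights 3/10, 3/2, 3/10, 9/5, 3/10, 3/2, 3/10
print(hardy)   # weights 7/25, 81/50, 0, 11/5, 0, 81/50, 7/25
```

The Weddle weights are exactly the coefficients $3, 15, 3, 18, 3, 15, 3$ over $10$, and the Hardy weights are $14, 81, 0, 110, 0, 81, 14$ over $50$, matching the two rules above.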
Averaging different rules
You and your students can invent your own rules by taking some Newton-Cotes rules and forming weighted averages of them. For example, Simpson’s rule (the second order rule) over a difference of $2h$ is
$$\frac{2h}{3}\left(f_1 + 4f_3 + f_5\right)$$
If we, for example, take
where is the fourth order Newton-Cotes rule, and is the Simpson rule over , we obtain
which certainly has nice coefficients.
Such a formula may not in fact give very accurate results, but students are always excited to discover something “for themselves”, and inventing their own integration rules (and testing them) would be a fascinating exercise.
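As one illustration (my own choice of blend, not a standard rule and not the one derived above): take the plain average of Boole's rule and the double-spacing Simpson rule on the same five points. Since each constituent rule integrates cubics exactly, any weighted average of them does too, and that is easy to test with exact arithmetic:

```python
from fractions import Fraction

def average_rule(f, a, b):
    """Average of Boole's rule and Simpson's rule with doubled spacing,
    both applied to the same five equally spaced samples of f on [a, b].
    This blend is an illustrative invention, not a named rule."""
    h = Fraction(b - a, 4)
    f1, f2, f3, f4, f5 = (f(a + i * h) for i in range(5))
    boole = 2 * h / 45 * (7 * f1 + 32 * f2 + 12 * f3 + 32 * f4 + 7 * f5)
    simpson_2h = 2 * h / 3 * (f1 + 4 * f3 + f5)
    return (boole + simpson_2h) / 2

# Both constituent rules integrate cubics exactly, so their average
# gives the exact value of the integral of x^3 over [0, 1].
print(average_rule(lambda x: x ** 3, 0, 1))  # 1/4
```

Students can then pit their invented rule against Simpson's rule or Boole's rule alone on a few test integrals and see how the accuracy compares.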