As before, we invoke our numerical software:

>> N = 13; n = 1:N; a = 1./(n.^2); s = cumsum(a);
>> s.'
ans =
   1.000000000000000
   1.250000000000000
   1.361111111111111
   1.423611111111111
   1.463611111111111
   1.491388888888889
   1.511797052154195
   1.527422052154195
   1.539767731166541
   1.549767731166541
   1.558032193976458
   1.564976638420903
   1.570893798184216
>> atk = aitken(s);
>> atk(end,:).'
ans =
   1.570893798184216
   1.604976638420905
   1.623249742431601
   1.633102742772414
   1.638466686494426
   1.641588484246542
   1.641583981554060

We are a little better off here; the final cumulative sum is in error by about 0.074, and using Aitken’s process gives us a final value which is in error by about 0.003. But this is nowhere near as close as we achieved earlier. And this is because Aitken’s process doesn’t work for a sequence whose convergence is *logarithmic*, that is, one for which

$\lim_{n\to\infty}\frac{s_{n+1}-L}{s_n-L}=1.$
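To see what logarithmic convergence looks like for this very series, here is a small Python sketch (my own illustration, not part of the original session; the limit $\pi^2/6$ is assumed known):

```python
import math

# The partial sums of sum 1/n^2 converge logarithmically: the error
# ratios (s_{n+1} - L)/(s_n - L) approach 1, rather than a constant
# mu < 1 as they would for linear convergence.
L = math.pi**2 / 6
partials = []
s = 0.0
for n in range(1, 101):
    s += 1.0 / n**2
    partials.append(s)

ratios = [(partials[n+1] - L) / (partials[n] - L) for n in range(90, 99)]
print(ratios[-1])  # very close to 1
```

Since the error after $n$ terms behaves like $1/n$, the ratio of consecutive errors is about $n/(n+1)$, which creeps up to 1.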

This is named for Samuel Lubkin, who wrote extensively about series acceleration in the 1950s. In fact, his 1952 paper is available online. In the 1980s J. E. Drummond at the Australian National University took up Lubkin’s cudgels and further extended and developed his methods. Drummond noticed that Aitken’s and Lubkin’s processes were very closely linked.

First, Aitken’s process can be written as

$t_n = \frac{\Delta(s_n/\Delta s_n)}{\Delta(1/\Delta s_n)}.$

We can check this using any symbolic package, or working it out by hand:

$\frac{\Delta(s_n/\Delta s_n)}{\Delta(1/\Delta s_n)} = \frac{\dfrac{s_{n+1}}{\Delta s_{n+1}}-\dfrac{s_n}{\Delta s_n}}{\dfrac{1}{\Delta s_{n+1}}-\dfrac{1}{\Delta s_n}}.$

This last expression can be expanded to:

$\frac{s_{n+1}\,\Delta s_n - s_n\,\Delta s_{n+1}}{\Delta s_n - \Delta s_{n+1}}.$

After some algebraic fiddlin’, we end up with

$t_n = s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n}$

which is the initial formula for Aitken’s process.

Lubkin’s process can be written as

$W_n = \frac{\Delta^2(s_n/\Delta s_n)}{\Delta^2(1/\Delta s_n)}.$

Given that for a sequence $s_n$ we have $\Delta^2 s_n = s_{n+2}-2s_{n+1}+s_n$, then $W_n$ can be expanded as

$W_n = \frac{\dfrac{s_{n+2}}{\Delta s_{n+2}} - \dfrac{2s_{n+1}}{\Delta s_{n+1}} + \dfrac{s_n}{\Delta s_n}}{\dfrac{1}{\Delta s_{n+2}} - \dfrac{2}{\Delta s_{n+1}} + \dfrac{1}{\Delta s_n}}.$

It turns out to be more numerically stable to write $W_n$ in the form

$W_n = s_{n+1} - \frac{P_n}{Q_n}$

of which the numerator $P_n$ of the fraction is equal to $\Delta s_n\,\Delta s_{n+1}\,\Delta^2 s_{n+1}$. A little bit of algebra shows that the denominator $Q_n$ is equal to $\Delta s_{n+2}\,\Delta^2 s_n - \Delta s_n\,\Delta^2 s_{n+1}$. We can thus write the W-transformation as

$W_n = s_{n+1} - \frac{\Delta s_n\,\Delta s_{n+1}\,\Delta^2 s_{n+1}}{\Delta s_{n+2}\,\Delta^2 s_n - \Delta s_n\,\Delta^2 s_{n+1}}.$

So let’s experiment with the series $\sum_{n=1}^{\infty}\dfrac{1}{n^2}$, whose sum is $\pi^2/6$:

>> N = 13; n = 1:N; a = 1./n.^2;
>> s = cumsum(a); s(end)
ans =
   1.570893798184216
>> abs(s(end)-pi^2/6)
ans =
   0.074040268664010

So far, not a particularly good approximation, as we’d expect. So we’ll try the W-transformation, in its original form as the quotient of two second differences:

>> s0 = s(1:N-3); s1 = s(2:N-2); s2 = s(3:N-1); s3 = s(4:N);
>> w = (s2./(s3-s2)-2*s1./(s2-s1)+s0./(s1-s0))./(1./(s3-s2)-2./(s2-s1)+1./(s1-s0));
>> w(end)
ans =
   1.644837749532052
>> abs(w(end)-pi^2/6)
ans =
   9.631731617454342e-05

which is a great improvement. We can do this again, simply by

>> s = w; N = N-3;

and repeating the above commands. The new values of the final result, and error, are:

>> w(end)
ans =
   1.644933894309522
>> abs(w(end)-pi^2/6)
ans =
   1.725387044348992e-07

Just as with Aitken’s process, we can whip up a little program to perform the W-transformation iteratively:

function out = lubkin(c)
% Applies Lubkin's W-process to a vector c, which we suppose to
% represent a sequence converging to some limit L
N = length(c);
M = N;                % length of current vector
s = reshape(c,N,1);   % ensures we are working with column vectors
out = s;
for i = 1:floor(N/3)
    s0 = s(1:M-3); s1 = s(2:M-2); s2 = s(3:M-1); s3 = s(4:M);
    t = s2./(s3-s2)-2*s1./(s2-s1)+s0./(s1-s0);
    t = t./(1./(s3-s2)-2./(s2-s1)+1./(s1-s0));
    tcol = zeros(N,1);
    tcol(3*i+1:N) = t;
    out = [out tcol];
    M = M-3;
    s = t;
end
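For readers following along in Python rather than Matlab, here is a sketch of a single pass of the same W-transformation (the function name `lubkin_w` is my own):

```python
import math

def lubkin_w(s):
    # One pass of Lubkin's W-transformation, written as the quotient
    # of two second differences, mirroring the Matlab code above.
    out = []
    for n in range(len(s) - 3):
        s0, s1, s2, s3 = s[n], s[n+1], s[n+2], s[n+3]
        num = s2/(s3 - s2) - 2*s1/(s2 - s1) + s0/(s1 - s0)
        den = 1/(s3 - s2) - 2/(s2 - s1) + 1/(s1 - s0)
        out.append(num / den)
    return out

# Partial sums of sum 1/n^2, whose limit is pi^2/6.
s, t = [], 0.0
for n in range(1, 14):
    t += 1.0 / n**2
    s.append(t)

w = lubkin_w(s)
print(abs(w[-1] - math.pi**2/6))   # around 1e-4, versus 0.074 for s[-1]
```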

Note that a generalization of these processes, using

$\frac{\Delta^k(s_n/\Delta s_n)}{\Delta^k(1/\Delta s_n)}$

for integers $k$, has been explored by J. E. Drummond, and you can read his 1972 paper online here.

]]>

and $0 < \mu < 1$. If $\mu = 0$ then the convergence is *super-linear*. Convergence is *quadratic* if

$\lim_{n\to\infty}\frac{|s_{n+1}-L|}{|s_n-L|^2}=\mu$

again with $\mu > 0$. See the Wikipedia page on rates of convergence for more details. It is not hard to show that if a sequence converges linearly then the number of correct significant figures grows as a linear function of $n$, while if the convergence is quadratic then the number of significant figures roughly doubles at each step.
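As a quick illustration of that doubling (a Python aside of my own, not from the original post), Newton's iteration for $\sqrt 2$ converges quadratically:

```python
import math

# Newton's iteration for sqrt(2): x -> (x + 2/x)/2 converges
# quadratically, so the error roughly squares at each step.
x = 1.0
for _ in range(5):
    x = (x + 2/x) / 2
    print(abs(x - math.sqrt(2)))
```

Each printed error is roughly the square of the previous one, so the count of correct digits doubles per step.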

This is named for the esteemed mathematician Alexander Craig Aitken, who published it in 1926. To derive it, assume that successive errors decrease by a roughly constant ratio, so that

$\frac{s_{n+1}-L}{s_n-L} = \frac{s_{n+2}-L}{s_{n+1}-L}$

and solve for $L$. This will produce

$L = \frac{s_n s_{n+2}-s_{n+1}^2}{s_{n+2}-2s_{n+1}+s_n}$

or equivalently

$L = s_n - \frac{(s_{n+1}-s_n)^2}{s_{n+2}-2s_{n+1}+s_n}.$

We can now invoke the terminology of differences, of which the forward difference can be written

$\Delta s_n = s_{n+1}-s_n.$

Then the second forward difference is

$\Delta^2 s_n = \Delta s_{n+1}-\Delta s_n = s_{n+2}-2s_{n+1}+s_n.$

Comparing this with the expression for $L$ above, we can write

$L = s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n}.$

We can use this to produce a new sequence $t_n$ defined by

$t_n = s_n - \frac{(\Delta s_n)^2}{\Delta^2 s_n}.$

If the initial sequence converges linearly, then this new sequence will converge faster. If the sequence is obtained from a fixed point process (so that $s_{n+1}=f(s_n)$) then this is called Steffensen’s method, and can be shown to have quadratic convergence. However, our interest is in series approximation.
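As a quick aside (not in the original post), here is Steffensen's method in a few lines of Python, applied to the fixed point of $x = \cos(x)$:

```python
import math

def steffensen(f, x, steps=5):
    # Aitken's delta-squared process applied to the fixed-point
    # iteration x, f(x), f(f(x)), ... -- this is Steffensen's method.
    for _ in range(steps):
        x1, x2 = f(x), f(f(x))
        denom = x2 - 2*x1 + x
        if denom == 0:          # already at machine precision
            break
        x = x - (x1 - x)**2 / denom
    return x

root = steffensen(math.cos, 1.0)
print(abs(math.cos(root) - root))  # essentially zero
```

Plain iteration of $\cos$ from the same starting point converges only linearly; the handful of Steffensen steps above reaches machine precision.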

As an example, we’ll take the first few cumulative sums of our standard sequence

$a_n = \frac{(-1)^n}{2n+1},\qquad n = 0,1,2,\ldots$

And for simplicity, we’ll use a numerical package (Matlab, GNU Octave, Scilab, etc).

>> N = 13;
>> n = 0:N-1;
>> a = (-1).^n./(2*n+1);
>> s = cumsum(a);
>> 4*s'
ans =
   4.000000000000000
   2.666666666666667
   3.466666666666667
   2.895238095238096
   3.339682539682540
   2.976046176046176
   3.283738483738484
   3.017071817071818
   3.252365934718877
   3.041839618929403
   3.232315809405594
   3.058402765927333
   3.218402765927333

As we would expect, the final result is very inaccurate. So we’ll give Aitken’s method a go, first taking out subsequences of length $N-2$ so as to be able to compute all the differences we need:

>> s0 = s(1:N-2);
>> s1 = s(2:N-1);
>> s2 = s(3:N);
>> t = s0 - (s1-s0).^2./(s2-2*s1+s0);
>> 4*t'
ans =
   3.166666666666667
   3.133333333333334
   3.145238095238096
   3.139682539682540
   3.142712842712843
   3.140881340881342
   3.142071817071818
   3.141254823607766
   3.141839618929403
   3.141406718496503
   3.141736099260667

and already the last value is surprisingly accurate: with an error of only about $1.4\times 10^{-4}$, compared to an initial error (in the first sequence of cumulative sums) of about $0.077$. And we can apply the delta-squared process to this new sequence:

>> N = N-2;
>> t0 = t(1:N-2);
>> t1 = t(2:N-1);
>> t2 = t(3:N);
>> u = t0 - (t1-t0).^2./(t2-2*t1+t0);
>> 4*u'
ans =
   3.142105263157895
   3.141450216450217
   3.141643323996266
   3.141571290201428
   3.141602841602842
   3.141587320947787
   3.141595655236941
   3.141590862710498
   3.141593774239114

and we have increased the accuracy again. If we applied this process to successively decreasing sequences, the final values would be:

3.218402765927333
3.141736099260667
3.141593774239114
3.141592673909636
3.141592654277287
3.141592653625053
3.141592653591176

and this last value is in error only by about $1.4\times 10^{-12}$.

Starting with a sequence of partial sums, we can write a simple Matlab program to compute Aitken’s process $\lfloor N/2\rfloor$ times, where $N$ is the length of the sequence:

function out = aitken(s)
% Applies Aitken's delta-squared process to a vector s, which we suppose to
% represent a sequence converging to some limit L
N = length(s);
M = N;                % length of current vector
a = reshape(s,N,1);   % ensures we are working with column vectors
out = a;
for i = 1:floor(N/2)
    a0 = a(1:M-2); a1 = a(2:M-1); a2 = a(3:M);
    b = a0 - (a1-a0).^2./(a2-2*a1+a0);
    bcol = zeros(N,1);
    bcol(2*i+1:N) = b;
    out = [out bcol];
    M = M-2;
    a = b;
end
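A plain-Python sketch of a single Aitken pass (my own port of the idea, not the Matlab routine verbatim), applied twice to the Leibniz partial sums used above:

```python
import math

def aitken_once(s):
    # One application of Aitken's delta-squared process.
    return [s[n] - (s[n+1] - s[n])**2 / (s[n+2] - 2*s[n+1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the Leibniz series for pi/4.
s, t = [], 0.0
for n in range(13):
    t += (-1)**n / (2*n + 1)
    s.append(t)

twice = aitken_once(aitken_once(s))
print(abs(4*twice[-1] - math.pi))   # about 1e-6
```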

For example, let’s look at the sequence of cumulative sums of

$a_n = \frac{(-1)^{n+1}}{n^2+2n},\qquad n = 1,2,3,\ldots$

whose sum is $1/4$:

>> N = 13; n = 1:N; a = (-1).^(n+1)./(n.^2+2*n); s = cumsum(a);
>> s.'
ans =
   0.333333333333333
   0.208333333333333
   0.275000000000000
   0.233333333333333
   0.261904761904762
   0.241071428571429
   0.256944444444444
   0.244444444444444
   0.254545454545455
   0.246212121212121
   0.253205128205128
   0.247252747252747
   0.252380952380952
>> atk = aitken(s);
>> atk(end,:).'
ans =
   0.252380952380952
   0.250007568189386
   0.250000065527560
   0.250000001161480
   0.250000000036343
   0.250000000001713
   0.250000000000078

]]>

then the series

$a_0 - a_1 + a_2 - a_3 + \cdots$

converges. However, simply adding terms one by one is, in general, very slow and inefficient. For example, take the well-known series

$1 - \frac13 + \frac15 - \frac17 + \cdots$

which converges to $\pi/4$. Here’s a table of the sum of the first $n$ terms, multiplied by 4, with correct digits in bold:

As you see, obtaining even a few correct digits requires a great many terms to be summed.

There are many different methods for obtaining an approximate sum of an infinite series; most of these are known as techniques for *accelerating convergence* or series acceleration.

One of the first documented methods to accelerate the convergence of a series was given by Euler (but of course); he noticed that an alternating series

$a_0 - a_1 + a_2 - a_3 + \cdots$

can be written as

$\frac{a_0}{2} + \frac{a_0-a_1}{2} - \frac{a_1-a_2}{2} + \frac{a_2-a_3}{2} - \cdots$

This replaces the original series with a new alternating series which should converge faster, because the terms are smaller.
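The trick can be sketched in a few lines of Python (my own illustration; the helper names `euler_step` and `alt_sum` are not from the post), applied to the Leibniz terms:

```python
def euler_step(a):
    # Halve the first term, and replace each later term by half the
    # difference of consecutive terms, as in Euler's trick above.
    return [a[0]/2] + [(a[i] - a[i-1])/2 for i in range(1, len(a))]

def alt_sum(v):
    # Alternating sum v[0] - v[1] + v[2] - ...
    return sum((-1)**i * x for i, x in enumerate(v))

a = [1.0/(2*n + 1) for n in range(11)]   # 1, 1/3, 1/5, ...
b = euler_step(a)
print(4*alt_sum(a), 4*alt_sum(b))        # the second is much closer to pi
```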

Now for some experiments. For this blog post I’m going to use the software Pari/GP which is designed primarily for computational number theory, but contains a whole lot of other goodies, in particular some routines for approximating series sums.

To obtain the examples above, for example summing terms up to $n = 10^4$:

? s = sum(i=0,10^4,(-1)^i/(2*i+1));
? \p 12
? 4.0*s
%1 = 3.14169264359

Here the first line computes the sum. Since GP works in exact symbolic mode by default, the sum is computed as a massive fraction, which we don’t want displayed; hence the semicolon at the end of the line. The second line sets the precision, and the last line displays our approximation to $\pi$.

We are going to work with the first 11 terms, and apply Euler’s trick. First, create all the terms, noting that we are dealing with the positive values; we don’t need to include the alternating signs:

? a = vector(11,n,1/(2*n-1))
%2 = [1, 1/3, 1/5, 1/7, 1/9, 1/11, 1/13, 1/15, 1/17, 1/19, 1/21]

Note that GP indexing starts at one, rather than zero. The alternating sum of these terms from above is 3.232315809405594, which does not even have one decimal place accuracy. We can create the new sequence by taking differences:

? b = a/2;
? for(i = 2, length(a), b[i] = (a[i]-a[i-1])/2)

and we can compare the values of terms in the original and new series:

? 1.0*matconcat([a;b])~
%3 =
[ 1.00000000000    0.500000000000]
[ 0.333333333333  -0.333333333333]
[ 0.200000000000  -0.0666666666667]
[ 0.142857142857  -0.0285714285714]
[ 0.111111111111  -0.0158730158730]
[0.0909090909091  -0.0101010101010]
[0.0769230769231  -0.00699300699301]
[0.0666666666667  -0.00512820512821]
[0.0588235294118  -0.00392156862745]
[0.0526315789474  -0.00309597523220]
[0.0476190476190  -0.00250626566416]

The terms in the new series are an order of magnitude smaller than in the original series. Now add them as an alternating series:

? sgns = vector(11,i,(-1)^(i-1));
? 4.0 * sum(i = 1, 11, b[i]*sgns[i])
%4 = 3.13707771417

This value is a great deal closer to $\pi$ than the first result!

This process can be repeated any number of times, depending on the number of terms of the original series we have at our disposal:

? c = b/2;
? for(i = 2, length(b), c[i] = (b[i]-b[i-1])/2)
? 4.0 * sum(i = 1, 11, c[i]*sgns[i])
%5 = 3.14209024550
? d = c/2;
? for(i = 2, length(c), d[i] = (c[i]-c[i-1])/2)
? 4.0 * sum(i = 1, 11, d[i]*sgns[i])
%6 = 3.14150053593

and already we have far greater accuracy than at the start, without computing any further terms of the series.

Another way of implementing this acceleration technique is to write the first term as

$a_0 = \frac{a_0}{2} + \frac{a_0}{2}$

and consider partial sums instead of individual terms. The $n$-th partial sum of the original series is

$s_n = a_0 - a_1 + a_2 - \cdots + (-1)^n a_n.$

Now consider the $n$-th partial sum of the series of differences:

$t_n = \frac{a_0}{2} + \frac{a_0-a_1}{2} - \frac{a_1-a_2}{2} + \cdots$

Breaking this up according to the first and second terms in each of the numerators, we obtain

$t_n = \frac{s_{n-1}+s_n}{2}.$

For the series above, we start with the terms including the sign changes:

? N = 11;
? a = vector(N,i,(-1)^(i-1)/(2*i-1));

Now it will be convenient to create a function called cumsum, which produces the cumulative sum of elements of a vector:

cumsum(x) =
{
  s = x;
  for(i = 2, length(x), s[i] = s[i-1]+x[i]);
  return(s);
}

Now we can use this to experiment with Euler’s method again:

? s = cumsum(a);
? t = concat(0, vector(N-1, i, (s[i]+s[i+1])/2));
? 4.0*matconcat([s;t])~
%7 =
[4.00000000000  0]
[2.66666666667  3.33333333333]
[3.46666666667  3.06666666667]
[2.89523809524  3.18095238095]
[3.33968253968  3.11746031746]
[2.97604617605  3.15786435786]
[3.28373848374  3.12989232989]
[3.01707181707  3.15040515041]
[3.25236593472  3.13471887590]
[3.04183961893  3.14710277682]
[3.23231580941  3.13707771417]

And again we can take the pairwise averages of all the elements of $t$, and again:

? u = concat(0,vector(N-1, i, (t[i]+t[i+1])/2));
? v = concat(0,vector(N-1, i, (u[i]+u[i+1])/2));
? \p 6
? 4.0 * matconcat([s;t;u;v])~
%8 =
[4.00000  0        0        0]
[2.66667  3.33333  1.66667  0.833333]
[3.46667  3.06667  3.20000  2.43333]
[2.89524  3.18095  3.12381  3.16190]
[3.33968  3.11746  3.14921  3.13651]
[2.97605  3.15786  3.13766  3.14343]
[3.28374  3.12989  3.14388  3.14077]
[3.01707  3.15041  3.14015  3.14201]
[3.25237  3.13472  3.14256  3.14136]
[3.04184  3.14710  3.14091  3.14174]
[3.23232  3.13708  3.14209  3.14150]

We can write a small program to do this for the initial values:

euler_trans(x) =
{
  local(s,N,p,t);
  s = x;
  N = length(s);
  p = Vec(s[N]);
  for(i = 1, N,
    t = concat(0,vector(N-1,j,(s[j]+s[j+1])/2));
    p = concat(p,t[N]);
    s = t;
  );
  return(p);
}
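Here is a Python sketch of the same program (my own port, not from the post; it reproduces the GP behaviour, including the leading-zero padding at each stage):

```python
import math

def euler_trans(s):
    # Repeatedly replace the vector of partial sums by pairwise
    # averages (padded with a leading 0, as in the GP code),
    # collecting the final entry at each stage.
    n = len(s)
    p = [s[-1]]
    for _ in range(n):
        s = [0] + [(s[i] + s[i+1]) / 2 for i in range(n - 1)]
        p.append(s[-1])
    return p

a = [(-1)**k / (2*k + 1) for k in range(11)]
s, t = [], 0.0
for x in a:
    t += x
    s.append(t)

p = euler_trans(s)
print(min(abs(4*v - math.pi) for v in p))
```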

And now use this program:

? \r euler_trans.gp
? \p 10
? s = cumsum(a*1.0);
? Mat(4*euler_trans(s))~
%9 =
[3.232315809]
[3.137077714]
[3.142090245]
[3.141500536]
[3.141618478]
[3.141582188]
[3.141598683]
[3.141587686]
[3.141598683]
[3.141581088]
[3.141633874]
[3.139152897]

Note that better accuracy is in fact obtained not at the end, but about two-thirds of the way down. This observation (and the proof that it does indeed have higher accuracy), is the contribution of Adriaan van Wijngaarden, who published it in 1965.

So one last check, for 120 terms, stopping our loop at 80:

? \p 20
? N = 120;
? a = vector(N,i,(-1)^(i-1)/(2*i-1));
? s = cumsum(a*1.0);
? p = euler_trans(s);
? forstep(i = 1, floor(N*2/3), 10, print(4*p[i]))
3.1332594649198298159
3.1415926535897932384
3.1415926535897932385
3.1415926535897932385
3.1415926535897932385
3.1415926535897932385
3.1415926535897932385
3.1415926535897932385

Our accuracy is at the limit of our set precision:

? print(abs(4*p[80]-Pi))
1.5281426560689737604 E-37

Notice that all of this was produced with just the first 120 terms of the series; at no time in the computation of the loop did we compute any further terms.

For comparison, the direct sum of these terms is a very poor approximation indeed:

? print(abs(4*s[N]-Pi))
0.0083331886699634225838

One last example, to compute

$\frac{1}{\sqrt 2} - \frac{1}{\sqrt 3} + \frac{1}{\sqrt 4} - \frac{1}{\sqrt 5} + \cdots$

Before we attempt it ourselves, we can check its value with the `sumalt` command of Pari/GP:

? altsqrt = sumalt(i=2,(-1)^i/sqrt(i))
%10 = 0.39510135657836962975273408576404450022

As above:

? N = 120;
? a = vector(N,i,(-1)^(i+1)/sqrt(i+1));
? s = cumsum(a);
? s[N]
%11 = 0.34974072346962855572717264600532619101
? abs(s[N]-altsqrt)
%12 = 0.045360633108741074025561439758718309212

So the initial sum of 120 elements is not very good, as we’d expect. Let’s see how Euler–van Wijngaarden manages:

? p = euler_trans(s);
? forstep(i = 1, floor(N*2/3), 10, print(p[i]))
0.34974072346962855572717264600532619101
0.39510135657836962968976358865551977318
0.39510135657836962975273408576388436451
0.39510135657836962975273408576404450023
0.39510135657836962975273408576404450022
0.39510135657836962975273408576404450022
0.39510135657836962975273408576404450022
0.39510135657836962975273408576404450022

And in fact we have reached accuracy to our set precision even earlier:

? forstep(i = 1, floor(N*2/3), 10, print([i,abs(p[i]-altsqrt)]))
[1, 0.045360633108741074025561439758718309212]
[11, 6.2970497108524727047861082221352448351 E-20]
[21, 1.6013571021345509597 E-31]
[31, 4.408103815583578155 E-39]
[41, 0.E-38]
[51, 1.4693679385278593850 E-39]
[61, 2.938735877055718770 E-39]
[71, 4.408103815583578155 E-39]

Note again that the power of this method is that *without computing any extra terms of the series*, we have increased the precision of the sum from one decimal place to nearly 40.

With Linux, and sticking to open-source or free typesetting software, there are three that I’ve used:

- GNU Lilypond, very mature software in active development since at least 2004, and with a large user base. It is designed to meet pretty much all notational needs, and as you can see by hunting around on its web page, can be used for scores both simple and complex. Its aim is to be software suitable for use by professionals.
- ABC notation goes back even further than Lilypond, to the early 1980s, when its initial creator, Chris Walshaw, was looking for ways of notating folk tunes easily. Since that time ABC has been extended to manage polyphonic music, and with the program abcm2ps developed by Jef Moine it can handle very complex music. A good page which pulls all the strands of ABC together, with examples, is provided by Guido Gonzato. The ABC standard is at version 2.2, and is defined here.
- MusixTeX uses the power of TeX to typeset music. The results are impressive, but the raw input is fiddly in the extreme. For this reason there are several pre-processors, of which PMX is the most full-featured; M-Tx extends it to handle lyrics.

All of these programs require the user to write out the music in ascii notation; thus the input is an ascii document which defines the music to be typeset. This provides a nice backwards compatibility: you can grab an ascii file created 10 or more years ago for any of these programs, and maybe doctor it a little to take advantage of the newer functionality.

As an experiment, I’m going to typeset the first eight bars of the third movement of the “Variation Sonata on Bonny Jean of Aberdeen” by the Scottish composer Charles McLean (fl 1732). A “variation sonata” was a local invention: a sonata in multiple movements where each movement was a variation on a popular folk tune. The variation sonata was supposedly invented by another shadowy figure in 18th century Scottish music: Alexander Munro.

First, Lilypond:

\version "2.19.53"
\header{
  title = "Variation Sonata on Bonny Jean of Aberdeen"
  composer = \markup {\column {" " \large "Charles McLean"} " "}
}
\layout{ indent = 25\mm }
global = { \language english }
allegroVln = \new Voice \relative c'' {
  \set Staff.instrumentName = #"Violin "
  \override Staff.InstrumentName.self-alignment-X = #RIGHT
  \time 4/4 \key g \major
  \partial 4 b8 a |
  g4 b8 c d4 d,8 e | g4 d'8 c b4 a8 g | c8 g'4 c,8 b g'4 b,8 |
  a4 b8 g e4 b'8 a | g4 b8 c d4 d,8 e | g4 b8 d g4 fs8 e |
  d g4 c,8 b g'4 b,8 | a g'4 fs8 g4 \bar ":.|.:"
}
allegroBass = \new Voice \relative c' {
  \set Staff.instrumentName = #"Bass "
  \override Staff.InstrumentName.self-alignment-X = #RIGHT
  \clef bass \key g \major \time 4/4
  \partial 4 r4 |
  g4 d g, b8 c | b4. a8 g4 g' | e c g' g, |
  d' b c a | b4. a8 g4 b8 c | b4 g'8 fs e4 d8 c |
  b4 a g g' | d d, g \bar ":.|.:"
}
\score {
  \new StaffGroup <<
    \new Staff << \global \allegroVln >>
    \new Staff << \global \allegroBass >>
  >>
  \header{ piece = \markup{\fontsize #2 "III. Allegro"} }
}

This is the result:

Next, ABC, using the abcm2ps software to turn the file into a page of music:

X: 1
T: Variation Sonata on Bonny Jean of Aberdeen
C: Charles McLean
L: 1/4
K: G
M: 4/4
Q: "III Allegro"
%%score [vln | Bc]
V:vln clef=treble name=Violin bracket=2 space=+5pt
V:Bc clef=bass name=Bass
[V:vln] B/G/ | AB/c/dD/E/| Gd/c/B A/G/ | c/gc/ B/gB/ | AB/G/EB/A/ | GB/c/dD/E/ | GB/d/gf/e/ | d/gc/ B/gB/ | A/gf/g :|
[V:Bc] z | G,D,G,,B,,/C,/ | B,,>A,,G,,G, | E,C,G,G,, | D,B,,C,A,, | B,,>A,,C,A,, | B,,G,/F,/E,D,/C,/ | B,,A,,G,,G, | D,D,,G,, :|

with result:

Finally, MusixTeX with the PMX pre-processor:

% Charles McLean, Variation Sonata on Bonny Jean of Aberdeen
% PREAMBLE:
% nstaves ninstr mtrnuml mtrdenl mtrnump mtrdenp
2 2 4 4 0 6
% npickup nkeys
1 1
% npages nsystems musicsize fracindent
1 2 16 .08
Bass
Violin
bt
./
% BODY:
% HEADER:
Tc Charles McLean (fl. 1732)
Tt Variation Sonata on Bonny Jean of Aberdeen
h III. Allegro
Abec4
% Bars 1 - 4
r4 g43 d g- b8 c | b4.a g4 g+ | e c g+ g- | d+ b c a | /
[ b84 a ] g4 b8 c d4 d8- e | g4 d8+ c b4 a8 g | c8 g4+ c8- b g4+ b8- | a4 b8 g e4 b8+ a /
% Bars 5 - 7
b4.a g b8 c | b4 g8+ f e4 d8 c | b4 a g g+ | /
g4 b8 c d4 d8- e | g4 b8 d g4 f8 e | d8 g4 c8- b g4+ b8- | /
% Last partial bar
m3400
d4 d- g Rr /
a8 g4+ f8 g4 Rr /

and its result:

For me, the use of a music notation system is dependent on two aspects: ease of use; and quality of output. I think all of these systems – at least, for my needs – are roughly comparable in their output. They all make sensible suggestions about layout and spacing, and as far as I can tell all three outputs look pretty good.

For ease of use, I found ABC easiest, followed by Lilypond, and last by PMX. In fact PMX does so much checking that it won’t compile and typeset your music until the file is set up correctly. As you see from the preamble, there are many factors which affect the result, and which can be set by the user. PMX works by a sort of coding system, where the length of a note is given by one digit: 0, 1, 2, 4, 8, 3, 6, 9 for breve, whole-note (or semibreve), half-note (minim), quarter note (crotchet), eighth, sixteenth, thirty-second and sixty-fourth note. PMX shares with Lilypond a relative pitch naming system, where a note is placed automatically in the same octave as the previous note (that is, within one fourth), and can be moved into a higher or lower octave by extra symbols: in PMX plus and minus (+ and -); in Lilypond apostrophe and comma (' and ,).

Lilypond’s learning curve is steeper than ABC’s, and if you read through the manual you’ll come up against “contexts” and “engravers”, which are initially confusing. Once mastered, however, you have almost infinite control over the output.

PMX was also the only system where the notes in the first partial, or pick-up, bar of the violin part weren’t automatically beamed. Also PMX is very fussy about final partial bars (such as in my example) – it requires you to formally define a partial bar by what is called a “blind meter change”.

ABC, or rather abcm2ps, is the least fussy of all the systems. For example if you make a mistake in your notation, and end up with a bar that has the wrong number of beats in it, ABC will just go ahead and try to print it out, even if it looks like rubbish. This makes debugging simple, as you can immediately see where you’ve gone wrong. On the other hand, ABC doesn’t support relative pitch notation, although this can be alleviated by defining your octave, and placing notes within it.

So which is better? For my purposes, ABC is certainly good enough and is by far and away the simplest, although Lilypond has the edge in terms of the way it is structured, and for professional control over the output. (Although, in fairness, ABC also allows fine control over the output, at least for placement of notes and other elements.) I don’t think MusixTeX/PMX is in the running: it’s too fiddly, and there are easier ways of obtaining equivalent output.

]]>

to compute the derivative of $\sin(x)$ “from first principles”. And this limit can be evaluated analytically in several ways. We could for example use the addition law for sine and write the limit as

$\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h} = \lim_{h\to 0}\left(\sin(x)\,\frac{\cos(h)-1}{h}+\cos(x)\,\frac{\sin(h)}{h}\right)$

and then compute the individual limits. This is where that fine old mathematical chestnut

$\lim_{h\to 0}\frac{\sin(h)}{h}=1$

arises.

Or we could use the sine difference formula

$\sin(A)-\sin(B) = 2\cos\left(\frac{A+B}{2}\right)\sin\left(\frac{A-B}{2}\right)$

and so write the limit as

$\lim_{h\to 0}\frac{2\cos\left(x+\frac h2\right)\sin\left(\frac h2\right)}{h} = \lim_{h\to 0}\cos\left(x+\frac h2\right)\frac{\sin(h/2)}{h/2}.$

Neither of these approaches is wrong, but as teaching tools they leave a lot to be desired. They are very abstract, and several removes from the geometric notion of the derivative as a rate of change, or of the gradient of a tangent. Beginning students, especially those with weak backgrounds, are likely to be bemused.

So here is a geometric approach. I discovered that it is in fact very well known (well, something so simple was bound to be), but in fact I’d never come across this until today. So here it is.

Start with $x$ and $x+h$ marked on a diagram of the unit circle:

Then $\sin(x+h)-\sin(x)$ is the vertical side of the small blue triangle in this next diagram:

Since $h$ is very small, the hypotenuse of this triangle will be very close to the arc length of the circle, which is also $h$, since we are working in radians. Thus the hypotenuse can be considered to be $h$.

Now let’s look at a few angles. First, the outer angles in the triangle created by the angle $h$ at the centre will be equal, since the triangle is isosceles (the two long sides are both radii of the circle):

Both these angles will be equal to

$\frac{\pi - h}{2} = \frac{\pi}{2}-\frac{h}{2}.$

Since the upper green angle in the first diagram is

$\frac{\pi}{2}-(x+h)$

the upper angle in the blue triangle will be

$\left(\frac{\pi}{2}-\frac{h}{2}\right)-\left(\frac{\pi}{2}-(x+h)\right)$

which simplifies to

$x+\frac{h}{2}.$

Just to make this quite clear, here’s the top point:

We have

and so

as we found above.

So, the blue triangle has hypotenuse $h$, vertical side $\sin(x+h)-\sin(x)$ and upper angle $x+\frac h2$:

This means that, for small $h$:

$\sin(x+h)-\sin(x) \approx h\cos\left(x+\frac h2\right)$

or alternatively that

$\frac{\sin(x+h)-\sin(x)}{h} \approx \cos\left(x+\frac h2\right).$

Taking the limit of both sides as $h$ tends to zero:

$\frac{d}{dx}\sin(x) = \cos(x).$

Voilà!
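We can sanity-check the approximation numerically (a Python aside of my own, not part of the original argument):

```python
import math

# For small h, (sin(x+h) - sin(x))/h should be extremely close to
# cos(x + h/2), and close (but less so) to cos(x) itself.
x, h = 0.7, 1e-4
lhs = (math.sin(x + h) - math.sin(x)) / h
print(abs(lhs - math.cos(x + h/2)))   # tiny (order h^2)
print(abs(lhs - math.cos(x)))         # larger, of order h
```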

Note that the base of the blue triangle, checking with the first diagram, is

$\cos(x)-\cos(x+h).$

This means that

$\cos(x)-\cos(x+h) \approx h\sin\left(x+\frac h2\right)$

or

$\frac{\cos(x+h)-\cos(x)}{h} \approx -\sin\left(x+\frac h2\right)$

from which, by taking limits of both sides as $h$ tends to zero, we obtain

$\frac{d}{dx}\cos(x) = -\sin(x).$

]]>

which has a cents value of

The *syntonic comma* may be considered as the difference in two ways of tuning 28 semitones: either as four perfect fifths, or two octaves and a major third, for which the just values are $(3/2)^4 = 81/16$ and $2^2\times\frac54 = 5$ respectively. This difference is the ratio $81/80$, which has a value in cents of

$1200\log_2\left(\frac{81}{80}\right)\approx 21.51.$

*Meantone tunings* are based on dealing with the syntonic comma; for example, by setting all fifths so that four fifths will be equal to two octaves and a major third; thus each fifth will have a value of $5^{1/4}$, or about 696.58 cents. Note that 12 of these are a bit less than 7 octaves:

$12\times 696.58 = 8358.94 < 8400.$
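Checking these figures in Python (a quick sketch of my own; `rat2cents` here is just the standard cents formula assumed from the earlier post):

```python
import math

def rat2cents(r):
    # Standard conversion of a frequency ratio to cents.
    return 1200 * math.log2(r)

fifth = rat2cents(5.0 ** 0.25)
print(round(fifth, 2))         # about 696.58
print(round(12 * fifth, 2))    # short of 7 octaves = 8400 cents
```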

This means that in order to fill up our octaves, one fifth is going to have to be very sharp: this will be the “wolf” fifth. Starting at C, we want the wolf fifth to be as far away from C as possible, which in the cycle of fifths is F#:

So our basic meantone tuning will have 11 fifths tuned to $5^{1/4}$, and the final fifth (the one starting on F#) tuned to

$8400 - 11\times 696.58 = 737.64$

cents.

This will be a very sharp fifth. To see the intervals, use the `intervals` program of the previous post.

[In ]: fifth = rat2cents(5.0**0.25)
[In ]: wolf = 8400 - 11*fifth
[In ]: mt = [fifth]*12; mt[6] = wolf
[In ]: print intervals(mt)['D']
[[ -5.38  -0.     5.38  -5.38]
 [-46.44  -0.   -35.68  -5.38]
 [ -5.38  -0.     5.38  -5.38]
 [-46.44  -0.     5.38  -5.38]
 [ -5.38  41.06   5.38  -5.38]
 [ -5.38  -0.     5.38  -5.38]
 [ -5.38  41.06   5.38  35.68]
 [ -5.38  -0.     5.38  -5.38]
 [-46.44  -0.     5.38  -5.38]
 [ -5.38  41.06   5.38  -5.38]
 [ -5.38  -0.     5.38  -5.38]
 [ -5.38  41.06   5.38  -5.38]]

We see immediately that the fifths are a bit flat; the fourths correspondingly sharp, with the exception of the wolf which is way off. Note that many of the major thirds are pure – this is a facet of meantone. We might compare it with equal temperament (ET):

[In ]: print intervals([700]*12)['D']
[[-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]
 [-15.64  13.69   1.96  -1.96]]

The fifths and fourths are closer to pure in ET, but all the thirds are considerably out.

Meantone is so called because the size of the first two tones is equal. If we list the scale:

[In ]: print intervals(mt)['S']
[0.0, 117.11, 193.16, 310.26, 386.31, 503.42, 579.47, 696.58, 813.69, 889.74, 1006.84, 1082.89, 1200.0]

we see that the first tone C–D is 193.16 cents, and the next tone, D–E, is $386.31 - 193.16 = 193.15$ cents: the same size, up to rounding. This is not the case for just intonation, where the tones are 10/9 and 9/8, noting that

$\frac{10}{9}\times\frac{9}{8} = \frac{5}{4}$

which is a major third.

Meantone tuning has one of the longest durations in history, being in use from the early 1500s into the 1700s. Some modern scholars believe that much of the music of Mozart and Haydn can be played to great effect in meantone. The meantone above is known as “quarter-comma meantone”, since every fifth (aside from the wolf) is tuned down one quarter of a syntonic comma. We can see this by noting that the meantone fifth (in cents) is about 696.58, and the syntonic comma is

$1200\log_2\left(\frac{81}{80}\right)\approx 21.51.$

Reducing a pure fifth by one quarter of this produces

$701.96 - \frac{21.51}{4} \approx 696.58.$

Alternatively, note that the distance between $3/2$ and $5^{1/4}$, in cents, is

$701.96 - 696.58 = 5.38$

which is a quarter of the syntonic comma.
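This identity is in fact exact, as a quick Python check confirms (my own sketch, with the cents conversion written out):

```python
import math

cents = lambda r: 1200 * math.log2(r)   # ratio -> cents

pure5th = cents(1.5)           # about 701.96
mean5th = cents(5.0 ** 0.25)   # about 696.58
syntonic = cents(81.0 / 80)    # about 21.51
print(pure5th - mean5th, syntonic / 4)  # the two agree exactly
```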

This quarter comma meantone was first described by the musician and theorist Pietro Aron (1480 – 1545) in his treatise *Toscanello in musica*, published in 1525. According to the software engineer and musician Graham Breed, quarter-comma meantone is optimal in the sense of minimizing the root-mean square value of the differences of the thirds with their just values. I haven’t checked this myself, yet!

Aron’s quarter comma meantone is just one of many. The “1/6 comma meantone” tunes each fifth down by one sixth of the syntonic comma:

$\frac{3}{2}\left(\frac{80}{81}\right)^{1/6}.$

If we take out the fraction on the right:

$\left(\left(\frac{3}{2}\right)^6\times\frac{80}{81}\right)^{1/6}$

and this fraction reduces down to

$\left(\frac{45}{4}\right)^{1/6}.$

This is the value for the fifth, and is approximately 1.4969, or 698.37 cents. It is closer to the true fifth than $5^{1/4}$. For a quick test:

[In ]: fifth = rat2cents(11.25**(1.0/6))
[In ]: wolf = 8400 - 11*fifth
[In ]: mt6 = [fifth]*12; mt6[6] = wolf
[In ]: print intervals(mt6)['D']
[[-10.75   7.17   3.58  -3.58]
 [-10.75  26.72   3.58  15.97]
 [-10.75   7.17   3.58  -3.58]
 [-30.31   7.17   3.58  -3.58]
 [-10.75  26.72   3.58  -3.58]
 [-10.75   7.17   3.58  -3.58]
 [-10.75  26.72   3.58  -3.58]
 [-10.75   7.17   3.58  -3.58]
 [-30.31   7.17 -15.97  -3.58]
 [-10.75   7.17   3.58  -3.58]
 [-30.31   7.17   3.58  -3.58]
 [-10.75  26.72   3.58  -3.58]]

The fourths and fifths are better; the wolf is much reduced, and although none of the thirds are pure, they are better in general than with equal temperament.
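And a check of the one-sixth-comma arithmetic above (again a small Python aside of my own):

```python
import math

# The 1/6-comma fifth is (45/4)^(1/6) = 11.25^(1/6).
sixth_comma_fifth = 11.25 ** (1.0 / 6)
print(round(sixth_comma_fifth, 4))                    # about 1.4969
print(round(1200 * math.log2(sixth_comma_fifth), 2))  # about 698.37
```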

Another meantone which was extensively discussed by early theorists was 2/7-comma meantone, where every fifth is tempered by 2/7 of a syntonic comma. This has the effect of approximating a fifth by

$\frac{3}{2}\left(\frac{80}{81}\right)^{2/7} = \left(\frac{50}{3}\right)^{1/7}.$

Then there’s 1/3 comma meantone, which tempers every fifth by one-third of a syntonic comma, resulting in fifths equal to

$\frac{3}{2}\left(\frac{80}{81}\right)^{1/3} = \left(\frac{10}{3}\right)^{1/3}.$

And of course one-fifth comma meantone, with fifths

$\frac{3}{2}\left(\frac{80}{81}\right)^{1/5} = \left(\frac{15}{2}\right)^{1/5}.$

And so on.

More recently, Charles Lucy has promoted a tuning originally developed by the horologist John Harrison. Harrison was the subject of Dava Sobel‘s entertaining biography “Longitude”. Harrison’s idea was to approximate the fifth (in cents) by

$600 + \frac{300}{\pi} \approx 695.49$

which leads to a fifth frequency ratio of approximately

$2^{\frac12+\frac{1}{4\pi}} \approx 1.4944.$

Tunings based on this value are now known as “Lucy tunings”.

]]>

But what if we narrow just *some* of the fifths? Could we obtain a system with the key range of equal temperament, and some of the precision of the home key of Pythagorean temperament?

Francesco Antonio Vallotti (1697 – 1780) was a composer, organist and theorist who developed a number of different tuning systems. One of the most popular narrows six fifths and keeps the rest pure. If we start our cycle at C, then Vallotti has the fifths

D–A, A–E, E–B, B–F#, F#–C#, C#–G#

all flattened by one sixth of a Pythagorean comma.

Here’s how we might generate the tuning of a scale in Vallotti’s temperament, in Python. First define the scale, the Pythagorean comma, and a perfect fifth, using the “rat2cents” function developed earlier:

[In ]: scale = ['C','C#','D','Eb','E','F','F#','G','G#','A','Bb','B']
[In ]: pcomma = rat2cents(1.5**12/2.0**7)
[In ]: perf5th = rat2cents(1.5)

Now we turn our scale into a cycle of fifths, and generate the sequence of Vallotti fifths:

[In ]: val5ths = numpy.array([perf5th]*12)
[In ]: val5ths[2:8] -= pcomma/6

To turn these tempered fifths back into a scale, recall that if

$f_0, f_1, \ldots, f_{11}$

are the fifths, then the scale can be obtained by adding enough fifths to get from C to the required note. Since the function

$i \mapsto 7i \pmod{12}$

turns a cycle of fifths into a scale and back again, the values of the scale can be obtained as

[In ]: valscale = [sum(val5ths[:(7*i)%12])%1200 for i in range(12)]+[1200]
[In ]: for v in valscale: print round(v,2)
0.0
94.13
203.91
294.13
400.0
498.04
596.09
701.96
792.18
901.96
996.09
1098.04
1200.0

We see that note values are quite close to equal temperament. To see if this closeness provides good thirds, fourths, and fifths, we can determine the differences between the just intonation values.

[In ]: just = [rat2cents(x) for x in [6.0/5, 5.0/4, 4.0/3, 3.0/2]]
[In ]: D = [3,4,5,7]
[In ]: intervals = numpy.zeros((12,4))
[In ]: for i in range(12):
  ...:     intervals[i,:] = [round((valscale[(i+x)%12]-valscale[i])%1200,2) for x in D]
  ...:

So here `just` is a list of the cent values of the justly tuned intervals, and `D` is the number of semitones in each interval. The array `intervals` determines the cents in each interval of $x$ semitones by the standard method of determining the value of

$(v_{(i+x)\bmod 12} - v_i) \bmod 1200$

over all values $i = 0, 1, \ldots, 11$, where the $v_i$ are the scale values. Determining the differences now is straightforward:

    [In ]: diffs = np.zeros((12,4))
    [In ]: for i in range(4):
      ...:     diffs[:,i] = [round(x,2) for x in intervals[:,i]-just[i]]
      ...:
    [In ]: print diffs
    [[-21.51  13.69   0.     0.  ]
     [ -9.78  17.6    3.91  -3.91]
     [-21.51   5.87   0.    -3.91]
     [-13.69  21.51  -0.     0.  ]
     [-13.69   5.87   3.91  -3.91]
     [-21.51  17.6   -0.    -0.  ]
     [ -9.78  13.69   3.91  -3.91]
     [-21.51   9.78   0.     0.  ]
     [ -9.78  21.51   3.91   0.  ]
     [-17.6    5.87   3.91  -3.91]
     [-17.6   21.51  -0.     0.  ]
     [ -9.78   9.78   3.91  -3.91]]

We see immediately that the fifths (last column) and fourths (third column) are very close to pure, but the thirds are mostly way off, with the minor thirds (first column) very flat, and the major thirds (second column) very sharp.

Similar to the Vallotti temperament is the second temperament of Thomas Young (1773 – 1829), which also has six consecutive fifths all tempered by one sixth of a Pythagorean comma. Young tempered the first six fifths of the cycle (C – G up to B – F♯), whereas Vallotti left the first two pure. Young’s temperament has the same disadvantage (poor thirds) as Vallotti’s.
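Young’s fifths can be generated with the same machinery as Vallotti’s. This is a self-contained sketch (the values of `pcomma` and `perf5th` are recomputed here rather than assumed from the session above):

```python
import numpy as np

pcomma = 1200*np.log2(1.5**12/2.0**7)   # Pythagorean comma, in cents
perf5th = 1200*np.log2(1.5)             # perfect fifth, in cents

# Young II: the first six fifths (C-G up to B-F#) flattened by 1/6 comma
yng = np.array([perf5th]*12)
yng[:6] -= pcomma/6

yngscale = [sum(yng[:(7*i)%12])%1200 for i in range(12)] + [1200]
```

The fifth C – G, for instance, comes out at about 698.04 cents rather than the pure 701.96.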

Before we go exploring other temperaments, it will be convenient to have a program to call on to produce all the output we need. For example:

    def intervals(fifths, rounding=True):
        """
        The parameter 'fifths' is a list or numpy vector consisting of a
        circle of fifths.  The output is a dictionary indexed by:
            'S': the corresponding scale
            'I': the minor thirds, major thirds, fourths, and fifths
            'A': all intervals (except unison and octave)
            'D': differences between I and their justly tuned values
        The parameter 'rounding' may be set to False, in which case all
        outputs are given in double precision.  The default is to produce
        outputs rounded to two decimal places.
        """
        scale = [sum(fifths[:(7*i)%12])%1200 for i in range(12)]+[1200]
        semitones = [3,4,5,7]
        just = [rat2cents(x) for x in [6.0/5,5.0/4,4.0/3,3.0/2]]
        just_ratios = [1,16.0/15,9.0/8,6.0/5,5.0/4,4.0/3,45.0/32,
                       3.0/2,8.0/5,5.0/3,16.0/9,15.0/8,2.0]   # unison ratio is 1
        alljust = [rat2cents(x) for x in just_ratios]
        intvls = np.zeros((12,4))
        for i in range(12):
            intvls[i,:] = [(scale[(i+x)%12]-scale[i])%1200 for x in semitones]
        diffs = np.zeros((12,4))
        for i in range(4):
            diffs[:,i] = intvls[:,i]-just[i]
        all_intervals = np.zeros((12,11))
        for i in range(12):
            all_intervals[i,:] = [(scale[(i+x)%12]-scale[i])%1200 for x in range(1,12)]
        if rounding:
            scale = [round(x,2) for x in scale]
            for i in range(12):
                intvls[i,:] = [round(x,2) for x in intvls[i,:]]
                all_intervals[i,:] = [round(x,2) for x in all_intervals[i,:]]
                diffs[i,:] = [round(x,2) for x in diffs[i,:]]
        return {'S':scale, 'I':intvls, 'A':all_intervals, 'D':diffs}

This means we can play with any tempered sequence of fifths we like, and check on the resulting intervals.
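As a quick sanity check of this machinery (an illustrative example, with an assumed one-line `rat2cents` inlined so the snippet stands alone): flattening every fifth by one twelfth of the comma should give exactly equal temperament.

```python
import numpy as np

def rat2cents(ratio):
    # assumed definition of the helper used in these posts
    return 1200*np.log2(ratio)

pcomma = rat2cents(1.5**12/2.0**7)
perf5th = rat2cents(1.5)

# every fifth flattened by one twelfth of the comma: equal temperament
et = np.array([perf5th - pcomma/12]*12)
scale = [sum(et[:(7*i)%12])%1200 for i in range(12)] + [1200]
print([round(x,2) for x in scale])
```

Every note lands on a multiple of 100 cents, as it should.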

Andreas Werckmeister (1645 – 1706) developed a number of different temperaments. One kept eight of the fifths pure, and tempered the other four by one quarter of a Pythagorean comma. The fifths to be tempered were C – G, G – D, D – A, and B – F♯. Given our program above, this temperament can be easily examined:

    [In ]: wkm = np.array([perf5th]*12)
    [In ]: wkm[:3] -= pcomma/4
    [In ]: wkm[5] -= pcomma/4
    [In ]: temp = intervals(wkm)
    [In ]: print temp['D']
    [[-21.51   3.91   0.    -5.87]
     [-15.64  21.51  -0.     0.  ]
     [ -9.78   9.78   5.87  -5.87]
     [-21.51  15.64  -0.     0.  ]
     [ -9.78  15.64   0.    -0.  ]
     [-21.51   3.91  -0.    -0.  ]
     [-15.64  21.51   5.87   0.  ]
     [-15.64   9.78   5.87  -5.87]
     [-15.64  21.51  -0.     0.  ]
     [ -3.91  15.64   5.87  -0.  ]
     [-21.51   9.78  -0.     0.  ]
     [-15.64  15.64   0.    -5.87]]

As with Vallotti, the minor thirds are flat and the major thirds sharp.

Another Werckmeister temperament flattens the fifths by these fractions of a Pythagorean comma: the first seven fifths (C – G up to F♯ – C♯) each by one eighth of a comma, the fifth C♯ – G♯ by one twelfth, and the fifth F – C by one eighth. So:

    [In ]: wkm2 = np.array([perf5th]*12)
    [In ]: wkm2[:7] -= pcomma/8
    [In ]: wkm2[7] -= pcomma/12
    [In ]: wkm2[11] -= pcomma/8
    [In ]: out = intervals(wkm2)
    [In ]: print out['D']
    [[-20.53   9.78   0.98  -2.93]
     [-12.71  19.55   2.93  -1.96]
     [-14.66   9.78   2.93  -2.93]
     [-16.62  17.6   -0.     0.  ]
     [-12.71  10.75   2.93  -2.93]
     [-21.51  11.73  -0.    -0.98]
     [-12.71  16.62   2.93  -2.93]
     [-17.6    9.78   2.93  -2.93]
     [-13.69  20.53   1.96   0.  ]
     [-12.71   9.78   2.93  -2.93]
     [-19.55  14.66  -0.     0.  ]
     [-12.71  13.69   2.93  -2.93]]

Again, although the fourths and fifths are all quite close to pure, the minor thirds are flat, and the major thirds sharp.

There are plenty of other temperaments which have been published and used over the past few hundred years, and many of them, such as these, sacrifice purity of thirds for purity (or at least closeness) of fourths and fifths.

Suppose we temper six of the fifths by one sixth of a Pythagorean comma. Which group of tempered fifths will provide the best thirds? We can measure “best” by the smallest sum of squares of the differences between the major thirds and the just value. And this can be done by iterating over all possible combinations of 6 ones and 6 zeros:

    [In ]: import itertools as it
    [In ]: L = []
    [In ]: for x in it.combinations(range(12),6):
      ...:     v = [0]*12
      ...:     for i in x:
      ...:         v[i] = 1
      ...:     L += [v]
      ...:
    [In ]: mn = 10000
    [In ]: z = [0]*12
    [In ]: for x in L:
      ...:     tmp = [perf5th - i*pcomma/6 for i in x]
      ...:     out = intervals(tmp)
      ...:     outd = out['D'][:,1]
      ...:     sq = sum([y**2 for y in outd])
      ...:     if sq < mn:
      ...:         z = x
      ...:         mn = sq
      ...:
    [In ]: print z
    [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]

This would seem to indicate that, in terms of least squares of distances, the temperament where alternate pairs of fifths are tempered would give good results. So let’s see:

    [In ]: ts = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
    [In ]: temp = [perf5th - i*pcomma/6 for i in ts]
    [In ]: out = intervals(temp)
    [In ]: print out['D']
    [[-17.6   13.69   0.    -3.91]
     [-13.69  13.69  -0.     0.  ]
     [-13.69  13.69   3.91   0.  ]
     [-17.6   13.69   3.91  -3.91]
     [-17.6   13.69   0.    -3.91]
     [-13.69  13.69  -0.    -0.  ]
     [-13.69  13.69   3.91   0.  ]
     [-17.6   13.69   3.91  -3.91]
     [-17.6   13.69  -0.    -3.91]
     [-13.69  13.69   0.    -0.  ]
     [-13.69  13.69   3.91   0.  ]
     [-17.6   13.69   3.91  -3.91]]

The major thirds are false by exactly the same amount as in equal temperament; all the minor thirds are flat, but in general not as badly as in the temperaments discussed above.

I have no idea what this temperament is called, but no doubt it’s been written about, discussed and examined along with all the others.

- The perfect fifth corresponds to a frequency ratio of 3/2, or 1.5.
- The octave corresponds to a frequency ratio of 2.
- It is impossible to have fully accurate fifths and octaves together on a keyboard, as 12 fifths overshoot 7 octaves by an amount known as the *Pythagorean comma*.
- Pythagorean tuning tries to have as many fifths perfect as possible: it has 11 perfect fifths and one which is flat by the comma, and this flat fifth will sound noticeably out of tune.
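The size of the comma mentioned above is easy to verify numerically:

```python
import math

# 12 pure fifths against 7 octaves, both measured in cents
fifths = 12 * 1200 * math.log(1.5, 2)
octaves = 7 * 1200
comma = fifths - octaves
print(round(comma, 2))   # the Pythagorean comma: about 23.46 cents
```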

Just Intonation is the ascribing of small and simple fractions to all intervals within a single octave. We can do this by making a whole lot of fractions using powers of 3 and 5, and adjust them by multiplying or dividing by a power of two so that the result lies between 1 and 2. This gives the following ratios:

| Interval | Semitones | Name | Fraction | Cents |
|----------|-----------|------|----------|-------|
| C – C | 0 | Unison | 1 | 0.0 |
| C – C♯ | 1 | Semitone | 16/15 | 111.73 |
| C – D | 2 | Whole tone | 9/8 | 203.91 |
| C – E♭ | 3 | Minor third | 6/5 | 315.64 |
| C – E | 4 | Major third | 5/4 | 386.31 |
| C – F | 5 | Perfect fourth | 4/3 | 498.04 |
| C – F♯ | 6 | Tritone | 45/32, 64/45 | 590.22, 609.78 |
| C – G | 7 | Perfect fifth | 3/2 | 701.96 |
| C – G♯ | 8 | Minor sixth | 8/5 | 813.69 |
| C – A | 9 | Perfect sixth, Major sixth | 5/3 | 884.36 |
| C – B♭ | 10 | Minor Seventh | 16/9, 9/5 | 996.09, 1017.60 |
| C – B | 11 | Major Seventh | 15/8 | 1088.27 |
| C – C’ | 12 | Octave | 2 | 1200 |
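The cents column of this table can be checked directly; here the ratio-to-cents conversion is written inline so the snippet stands alone:

```python
import math

# the 5-limit ratios from the table, unison through octave
ratios = [1, 16.0/15, 9.0/8, 6.0/5, 5.0/4, 4.0/3, 45.0/32,
          3.0/2, 8.0/5, 5.0/3, 16.0/9, 15.0/8, 2]
cents = [round(1200*math.log(r, 2), 2) for r in ratios]
print(cents)
```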

You can see one way of doing this on the wikipedia page above. This particular group of fractions is also known as 5-limit tuning, as no prime number greater than 5 is used. If we allow the use of 7, then the following list of fractions and cents is one of many possible (the other intervals stay as for 5-limit tuning):

| Interval | Semitones | Name | Fraction | Cents |
|----------|-----------|------|----------|-------|
| C – C♯ | 1 | Semitone | 15/14 | 119.44 |
| C – D | 2 | Whole tone | 8/7 | 231.17 |
| C – F♯ | 6 | Tritone | 7/5 | 582.51 |
| C – B♭ | 10 | Minor Seventh | 7/4 | 968.83 |
| C – B | 11 | Major Seventh | 28/15 | 1080.56 |

There’s a whole page of possible 7-limit tunings here.

Just intonation is no better in practice than Pythagorean tuning, but it provides a handy reference chart with which to compare other temperaments.

Equal temperament is the system in which almost all modern pianos are tuned, and the one in which piano tuners are trained. It is very simple to describe: each semitone is exactly 100 cents. So we can easily determine the cent values of intervals by listing all the necessary multiples of 100:

| Interval | Semitones | Name | 5-limit Just | Equal |
|----------|-----------|------|--------------|-------|
| C – C | 0 | Unison | 0.0 | 0 |
| C – C♯ | 1 | Semitone | 111.73 | 100 |
| C – D | 2 | Whole tone | 203.91 | 200 |
| C – E♭ | 3 | Minor third | 315.64 | 300 |
| C – E | 4 | Major third | 386.31 | 400 |
| C – F | 5 | Perfect fourth | 498.04 | 500 |
| C – F♯ | 6 | Tritone | 590.22 | 600 |
| C – G | 7 | Perfect fifth | 701.96 | 700 |
| C – G♯ | 8 | Minor sixth | 813.69 | 800 |
| C – A | 9 | Perfect sixth, Major sixth | 884.36 | 900 |
| C – B♭ | 10 | Minor Seventh | 996.09 | 1000 |
| C – B | 11 | Major Seventh | 1088.27 | 1100 |
| C – C’ | 12 | Octave | 1200 | 1200 |

We notice first that aside from the unison and the octaves, no other intervals are pure, although some are closer than others. The fifth is 700 cents instead of 701.955: that’s close enough for the human ear (in fact there are beats, but they are too slow to be noticed). Other intervals are quite poor: the major and minor thirds are both about 15 cents out, as are the sixths.
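We can tabulate these deviations directly, using the just values from the 5-limit table (a small illustrative calculation):

```python
# deviation of each equal tempered interval from its 5-limit just value
just = [0, 111.73, 203.91, 315.64, 386.31, 498.04, 590.22,
        701.96, 813.69, 884.36, 996.09, 1088.27, 1200]
diffs = [round(100*i - j, 2) for i, j in enumerate(just)]
print(diffs)
```

The fifth is out by only 1.96 cents, but the thirds and sixths are all between roughly 10 and 16 cents from pure.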

As before, we can see this by plotting minor thirds in both temperaments:

    [In ]: s = 44100
    [In ]: dur = 1
    [In ]: dpi = 2.0*np.pi
    [In ]: hz = 283
    [In ]: t = np.linspace(0,dur,dur*s)
    [In ]: x = np.sin(dpi*hz*t)
    [In ]: y = np.sin(1.2*dpi*hz*t)               # just minor third: 6/5
    [In ]: z = np.sin(2.0**(300.0/1200)*dpi*hz*t) # equal tempered minor third
    [In ]: pyplot.plot(t,x+y)
    [In ]: pyplot.plot(t,x+z)

So here is a justly tuned minor third:

This is a rich, multi-layered output, and the effect of the sound would be consonant: it would still sound very good.

And here is the equal tempered minor third:

As you see, it’s a jagged thing with internal beats.

Equal temperament of course has the advantage of allowing one to play equally well in any key: a difference of 7 semitones will be 700 cents no matter where you are on the keyboard. It has the disadvantages of having no pure intervals, and no differences in “colour” between keys: in equal temperament, the only difference between keys is their pitch. The frequency ratios between harmonies never change.

We can consider equal temperament as another way of dealing with the Pythagorean comma. In Pythagorean tuning we kept 11 fifths pure and flattened the last fifth by the comma. In equal temperament, all fifths are equally flattened by one twelfth of the comma. Since an equal tempered fifth is 700 cents, and a pure fifth 701.955 cents, we find that 12 × (701.955 − 700) = 23.46 cents, which is the amount of the comma.

As we know, an octave is divided into 12 semitones, and out of those semitones we can build various sequences of notes, called *scales*. To make our explanations easier, here is a diagram of one octave on a modern keyboard:

A *major scale* consists of playing notes in an octave separated by 2,2,1,2,2,2,1 semitones:

One example is playing all the white notes between two C’s an octave apart.

A *minor scale* has a different sequence of semitones: 2,1,2,2,1,2,2:

A *chromatic scale* consists of all notes: 1,1,1,1,1,1,1,1,1,1,1,1:

A *pentatonic scale* consists of five notes from the octave; such scales are particularly common in folk music; one such scale has the semitone differences 2,2,3,2,3:
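The step patterns above are easily turned into semitone numbers; here is a small illustrative helper (not from the original posts):

```python
def steps_to_scale(steps):
    # accumulate the step sizes, starting from the tonic at 0 semitones
    notes = [0]
    for s in steps:
        notes.append(notes[-1] + s)
    return notes

print(steps_to_scale([2,2,1,2,2,2,1]))   # the major scale
print(steps_to_scale([2,2,3,2,3]))       # the pentatonic scale above
```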

Intervals are described in terms of the number of notes between them, counting the first note as one. Thus for the notes of a major scale, we have, starting with the bottom note (known as the *tonic*):

| Notes | Interval | Semitones |
|-------|----------|-----------|
| C – C | Unison | 0 |
| C – D | Whole tone | 2 |
| C – E | Major third | 4 |
| C – F | Perfect fourth | 5 |
| C – G | Perfect fifth | 7 |
| C – A | Sixth | 9 |
| C – B | Seventh | 11 |
| C – C’ | Octave | 12 |

We would like a perfect fifth to have a frequency ratio of exactly 1.5, and an octave exactly 2, but these are impossible on a modern keyboard.

Suppose we start with a chromatic scale, and turn it into a cycle of fifths by jumping up seven semitones in sequence. We can obtain this in Python with

    [In ]: chr_scale = ['C','C#','D','Eb','E','F','F#','G','G#','A','Bb','B']
    [In ]: fifths = [chr_scale[(i*7)%12] for i in range(13)]
    [In ]: fifths
    [Out]: ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'Eb', 'Bb', 'F', 'C']

Pythagorean tuning requires that each such jump be a perfect fifth. Since a perfect fifth corresponds to 1200 log₂(1.5) ≈ 701.955 cents, we can obtain the number of cents for each note of the scale by addition, subtracting 1200 whenever the result is bigger than 1200:

    def mod1200(x):
        return x - (x//1200)*1200

Then:

    [In ]: pyth_fifths = [mod1200(i*1200*np.log2(1.5)) for i in range(12)]

Since 7 × 7 = 49 ≡ 1 (mod 12), the map i ↦ (7i) mod 12 is its own inverse, and we can use the same procedure that we used for obtaining the cycle of fifths:

    [In ]: pyth_scale = [pyth_fifths[(7*i)%12] for i in range(12)]+[1200]
    [In ]: [round(x,2) for x in pyth_scale]
    [Out]: [0.0, 113.69, 203.91, 317.6, 407.82, 521.51, 611.73, 701.96, 815.64, 905.87, 1019.55, 1109.78, 1200]

And we can build a major scale as follows:

    [In ]: semis = [0,2,2,1,2,2,2,1]
    [In ]: pyth_maj = [pyth_scale[i] for i in np.cumsum(semis)]
    [In ]: [round(x,2) for x in pyth_maj]
    [Out]: [0.0, 203.91, 407.82, 521.51, 701.96, 905.87, 1109.78, 1200.0]

This is what it sounds like:

The trouble with this tuning is that it’s not perfect for all possible intervals – even fifths. We can investigate all fifths with:

    [In ]: for i in range(12):
      ...:     fifth = pyth_scale[(i+7)%12] - pyth_scale[i]
      ...:     print '{0:2} {1:7}'.format(chr_scale[i], round(mod1200(fifth),2))
      ...:
    C   701.96
    C#  701.96
    D   701.96
    Eb  701.96
    E   701.96
    F   678.49
    F#  701.96
    G   701.96
    G#  701.96
    A   701.96
    Bb  701.96
    B   701.96

The fifth at F is very flat, and will sound pretty awful. We could have realized this even without listing all the fifths: we had to make a compromise somewhere, and insisting on an octave of exactly 1200 cents means that the octave is flat with respect to the rest of this tuning. Since the fifth at F is F – C’ (C at the octave), we can expect a flattened fifth from F. Alternatively, recall that 12 perfect fifths overshoot 7 octaves by an amount known as the Pythagorean comma, so that if we are to fit 12 fifths into 7 octaves, at least one of them must be flat. Pythagorean tuning gives us 11 perfect fifths, and one very flat fifth.

Such a poorly mistuned interval is called a *wolf interval*, and minimizing such mistunings while, if possible, keeping as many intervals pure as possible, is the main problem of western tuning.

Since F is a standard modulation from C (one fifth below), a flat fifth at F is highly unsatisfactory. This can be ameliorated by putting C in the middle of the circle of fifths, and tuning in perfect fifths up to, say, F sharp, and down to D flat. Then the flat fifth will be F sharp to C sharp. But because of the wolf fifth, Pythagorean tuning forces you not to stray too far away from your original key. This is one of the many problems with Pythagorean tuning.
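That centred arrangement is easy to sketch; the following is a hypothetical reconstruction (not code from the post), tuning pure fifths upward from C as far as F♯ and downward as far as D♭, then measuring the one remaining fifth:

```python
import math

p5 = 1200*math.log(1.5, 2)    # a pure fifth, in cents
cents = {}
for k in range(7):            # C, G, D, A, E, B, F#: pure fifths upward from C
    cents[(7*k) % 12] = (k*p5) % 1200
for k in range(1, 6):         # F, Bb, Eb, Ab, Db: pure fifths downward from C
    cents[(-7*k) % 12] = (-k*p5) % 1200
# the one remaining fifth, F# - C#, absorbs the whole comma
wolf = (cents[1] - cents[6]) % 1200
print(round(wolf, 2))         # about 678.49 cents: the wolf
```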

We can check out the waveform of the wolf fifth, and compare it to a perfect fifth:

    [In ]: s = 44100
    [In ]: dur = 1
    [In ]: dpi = 2.0*np.pi
    [In ]: hz = 283
    [In ]: t = np.linspace(0,dur,dur*s)
    [In ]: x = np.sin(dpi*hz*t)
    [In ]: y = np.sin(1.5*dpi*hz*t)
    [In ]: z = np.sin(2.0**(678.49/1200)*dpi*hz*t)
    [In ]: pyplot.plot(t,x+y)
    [In ]: pyplot.plot(t,x+z)

Here is a perfect fifth lasting for a second:

You can see that everything lines up nicely; it would sound sweet and pure.

The wolf fifth looks like this:

We have a shifting pattern of amplitudes; these would appear to the ear as a pulsing sound known as *acoustic beats*. Piano tuners will try to spread the beats out so that they are too slow to be audible. Audible beats will be heard as a mistuning.
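As a rough, illustrative estimate (not a calculation from the post), we can treat the wolf's upper note as a mistuned version of the pure fifth's upper note, and see how quickly the two waveforms drift apart:

```python
# upper notes of a pure fifth and the wolf fifth above a 283 Hz note
f0 = 283.0
pure = 1.5 * f0                       # 424.5 Hz
wolf = 2.0**(678.49/1200.0) * f0      # about 418.8 Hz
print(round(pure - wolf, 1))          # the patterns drift at roughly this many Hz
```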

Here is the sound of this wolf fifth:

As you know, in Western music, the smallest pitch difference commonly used is a semitone, of which 12 make an octave. We speak of an *interval* as the difference between two pitches; intervals played sequentially form a melody, and in parallel a chord. It turns out that two or more notes played simultaneously sound best when the ratio of their frequencies is a fraction with small integer values. For example, two notes with frequencies 300Hz and 450Hz would sound good together, as the ratio of their frequencies is 3/2 (or 2/3; it doesn’t matter). We can see this by plotting the sum of the two sine waves sin(2π · 300t) + sin(2π · 450t), which produces something like this:

As you see, it’s a nice, regular shape, with all internal maxima and minima lining up. If we play the notes first singly and then together:

They “meld” together well – they sound like they go together, and make a nice concordant sound.
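For instance, sampling the two sine waves at CD rate shows the combined waveform repeating every 1/150 of a second, 150 Hz being the greatest common divisor of 300 and 450:

```python
import numpy as np

rate = 44100
t = np.arange(0, 0.05, 1.0/rate)    # 50 ms of samples
concord = np.sin(2*np.pi*300*t) + np.sin(2*np.pi*450*t)

period = rate // 150                # 294 samples, or 1/150 of a second
print(np.allclose(concord[period:], concord[:-period]))   # True
```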

If instead we plotted the sum of two sine waves whose frequencies are in no simple ratio, we would have this messy graph:

As a sound this would come across as discordant:

As you hear, it sounds like two pitches with no relationship between them.

A difference in pitch between two notes is called an interval, of which the most fundamental is the octave: two notes whose frequency ratio is 2:

A single pitch can be described in terms of cycles per second, or *Hertz*; the greater the Hertz, the higher the pitch. This value is called the *frequency* of the sound. Multiply the frequency by 2 and you have the same note an octave higher. For example, modern concert pitch A is defined as 440Hz, so the A one octave higher has a frequency of 880Hz, and one octave lower 220Hz. The range of sounds perceptible to the human ear is about 20Hz to 20,000Hz; the range of a modern concert piano is 27.5Hz to 4186.01Hz.

Frequency is an exponential scale, so the ratio of two similarly named notes will be equal no matter their individual frequencies. Thus if A = 440Hz, and the E above A has frequency E = 660Hz, their ratio is 660/440 = 3/2. If you go three octaves higher, you will have A = 3520Hz and E = 5280Hz. The frequency ratio is 5280/3520 = 3/2 again.

To turn an exponential scale into an additive scale, we use *cents*, of which 100 make a semitone (hence the name “cents”), and so 1200 make an octave. This means that a single cent must correspond to a frequency ratio c satisfying c^1200 = 2, and so c = 2^(1/1200) ≈ 1.000578.

So given a ratio r between two frequencies, the number of cents n satisfies 2^(n/1200) = r, or more simply n = 1200 log₂(r). So a ratio of 3/2, for example, corresponds to 1200 log₂(1.5) ≈ 701.955 cents. The use of cents is very convenient, as to compare the frequencies between a set of notes we can simply add (or subtract) the numbers of cents.
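In code the conversion is one line; later posts in this series use a function like this hypothetical one under the name rat2cents:

```python
import math

def cents(ratio):
    # convert a frequency ratio to a cent value
    return 1200 * math.log(ratio, 2)

print(round(cents(1.5), 3))   # the perfect fifth: 701.955
print(round(cents(2.0), 1))   # the octave: 1200.0
```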

In (Western) music, an octave is equal to 12 semitones. The pitch ratio 3/2, which we heard above as a concord, sits at seven semitones. So every time we go up seven semitones, we multiply the pitch by 1.5. Since 7 and 12 are relatively prime, we would hope that 1.5^12 = 2^7; but this cannot happen, as it would imply 3^12/2^12 = 2^7, or 3^12 = 2^19, thus contradicting the fundamental theorem of arithmetic. The pitch in cents of 12 lots of 1.5 is 12 × 1200 log₂(1.5) ≈ 8423.46 cents, and 7 octaves is of course 7 × 1200 = 8400 cents.

The difference of these two pitches is 8423.46 − 8400 = 23.46 cents, and this value is called the Pythagorean comma. In general, in discussions of tuning and pitch, a *comma* is the difference in pitch between versions of a note obtained using different tunings. This means that the basic ratios of 2 (for the octave) and 1.5 (called a *perfect fifth*) are fundamentally incompatible. The business of trying to manage this incompatibility has occupied theorists and players for hundreds of years, and continues to do so. The result is that it is impossible to have a perfect scale where every ratio (of low values) is fully concordant.

We will look at some different approaches to scale building in future posts.
