What's all this, then?

Enumerating the rationals

The rational numbers are well known to be countable, and one standard method of counting them is to arrange the positive rationals in an infinite matrix $$M=(m_{ij})$$, where $$m_{ij}=i/j$$, so that you end up with something that looks like this: $\left[\begin{array}{ccccc} \frac{1}{1}&\frac{1}{2}&\frac{1}{3}&\frac{1}{4}&\dots\\[1ex] \frac{2}{1}&\frac{2}{2}&\frac{2}{3}&\frac{2}{4}&\dots\\[1ex] \frac{3}{1}&\frac{3}{2}&\frac{3}{3}&\frac{3}{4}&\dots\\[1ex] \frac{4}{1}&\frac{4}{2}&\frac{4}{3}&\frac{4}{4}&\dots\\[1ex] \vdots&\vdots&\vdots&\vdots&\ddots \end{array}\right]$ It is clear that not only does every positive rational appear somewhere in this matrix, but each value appears infinitely many times: for example, $1/2=2/4=3/6=\cdots$.
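The usual way to count this matrix is to walk its anti-diagonals, on which $i+j$ is constant. As a minimal sketch (the function names here are my own, not from the post), including the standard $\gcd$ filter that skips the repeated values:

```python
from fractions import Fraction
from math import gcd

def rationals(diagonals):
    """Walk the matrix m_ij = i/j along its anti-diagonals (i + j constant),
    yielding every positive rational; values repeat, e.g. 1/1 = 2/2 = ..."""
    for s in range(2, diagonals + 2):   # s = i + j indexes the anti-diagonal
        for i in range(1, s):
            yield Fraction(i, s - i)

def distinct_rationals(diagonals):
    """The same walk, but yield i/j only when gcd(i, j) == 1,
    so each rational value is enumerated exactly once."""
    for s in range(2, diagonals + 2):
        for i in range(1, s):
            if gcd(i, s - i) == 1:
                yield Fraction(i, s - i)

print(list(distinct_rationals(4)))
```

The first walk produces $1/1,\,1/2,\,2/1,\,1/3,\,2/2,\,3/1,\ldots$, and the filtered walk drops duplicates such as $2/2$.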

Fitting the SIR model of disease to data in Julia

A few posts ago I showed how to do this in Python. Now it’s Julia’s turn. The data is the same: the spread of influenza in a British boarding school with a population of 762. This was reported in the British Medical Journal on March 4, 1978, and you can read the original short article here. As before we use the SIR model, with equations \begin{align*} \frac{dS}{dt}&=-\frac{\beta IS}{N}\\ \frac{dI}{dt}&=\frac{\beta IS}{N}-\gamma I\\ \frac{dR}{dt}&=\gamma I \end{align*} where $N=S+I+R$ is the total (constant) population.
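The post's Julia code isn't shown here, but the model itself can be sketched with a plain forward-Euler integration in Python. The rate constants `beta` and `gamma` below are illustrative guesses, not the values fitted to the boarding-school data:

```python
def simulate_sir(beta, gamma, S0, I0, R0, days, dt=0.01):
    """Integrate dS/dt = -beta*I*S/N, dI/dt = beta*I*S/N - gamma*I,
    dR/dt = gamma*I with forward-Euler steps; returns the (S, I, R)
    trajectory sampled every dt."""
    N = S0 + I0 + R0
    S, I, R = float(S0), float(I0), float(R0)
    traj = [(S, I, R)]
    for _ in range(int(days / dt)):
        dS = -beta * I * S / N
        dI = beta * I * S / N - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        traj.append((S, I, R))
    return traj

# Boarding school: N = 762, one initial infective.
# beta = 2.0, gamma = 0.5 are assumed values for illustration only.
traj = simulate_sir(beta=2.0, gamma=0.5, S0=761, I0=1, R0=0, days=14)
peak_I = max(I for _, I, _ in traj)
```

Note that the three derivatives sum to zero, so $S+I+R$ stays at 762 throughout; that conservation is a useful sanity check on any implementation.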

The Butera-Pernici algorithm (2)

The purpose of this post will be to see if we can implement the algorithm in Julia, and thus leverage Julia’s very fast execution time. We are working with polynomials defined on nilpotent variables, which means that the degree of any generator in a polynomial term will be 0 or 1. Assume that our generators are indexed from zero: $$x_0,x_1,\ldots,x_{n-1}$$. Then any term in a polynomial will have the form $cx_{i_1}x_{i_2}\cdots x_{i_k}$ where $$\{i_1, i_2,\ldots, i_k\}\subseteq\{0,1,2,\ldots,n-1\}$$.
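Since each generator has degree 0 or 1, a term is fully described by its coefficient and the set of generators it contains, which fits in a bitmask. Here is a hypothetical Python sketch of that representation (not the post's actual Julia code): a polynomial is a dict from mask to coefficient, and nilpotency means any product whose factors share a generator vanishes.

```python
def poly_mul(p, q):
    """Multiply two polynomials over nilpotent generators.
    A term c * x_{i1} * ... * x_{ik} is stored as mask -> c, with bit i
    set when x_i divides the term. Since x_i**2 == 0, any product of
    terms with overlapping masks is dropped."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            if m1 & m2:        # shared generator => x_i**2 == 0 kills the term
                continue
            m = m1 | m2
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c != 0}

# (1 + x0) * (1 + x1): masks 0b0 = constant term, 0b1 = x0, 0b10 = x1
p = {0b0: 1, 0b1: 1}
q = {0b0: 1, 0b10: 1}
product = poly_mul(p, q)   # 1 + x0 + x1 + x0*x1
```

The bitmask test `m1 & m2` is what keeps the arithmetic cheap: squaring a generator never happens, so term degrees can only grow by disjoint union.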

The Butera-Pernici algorithm (1)

Introduction No general sub-exponential algorithm is known for computing the permanent of a square matrix. But we may very reasonably ask – might there be a faster, possibly even polynomial-time, algorithm for some specific classes of matrices? For example, a sparse matrix will have most terms of the permanent equal to zero – can this somehow be leveraged into a better algorithm? The answer seems to be a qualified “yes”.
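To see where sparsity helps, recall the definition: the permanent is the determinant's sum over permutations with all signs positive. A direct Python implementation (exponential, and purely for illustration) makes the point, since a single zero entry kills an entire term:

```python
from itertools import permutations

def permanent(M):
    """Permanent by the definition: sum over all permutations sigma of
    M[0][sigma(0)] * ... * M[n-1][sigma(n-1)], with no sign factor.
    Exponential time, but the early break shows why sparsity matters:
    a zero entry annihilates every term passing through it."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= M[i][j]
            if prod == 0:
                break          # this term is already zero; skip the rest
        total += prod
    return total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```

For a sparse matrix most of the $n!$ terms vanish, which is exactly the structure the Butera-Pernici algorithm exploits far more systematically.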

The size of the universe

As a first blog post for 2020, I’m dusting off one from my previous blog, which I’ve edited only slightly. I’ve been looking up at the sky at night recently, and thinking about the sizes of things. Now it’s all very well to say that something is, for example, a million kilometres away; but that’s just a number, and as far as real numbers go, a pretty small one (all finite numbers are “small”).

Runge's phenomenon in Geogebra

Runge’s phenomenon says roughly that a polynomial interpolating equally spaced points over an interval will wobble a lot near the ends. Runge demonstrated this by fitting polynomials through equally spaced points in the interval $$[-1,1]$$ to the function $\frac{1}{1+25x^2}$, and this function is now known as “Runge’s function”. It turns out that Geogebra can illustrate this extremely well. Equally spaced vertices Either open up your local version of Geogebra, or go to http://geogebra.
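Before reaching for Geogebra, the wobble is easy to demonstrate numerically. This sketch (my own helper names, not from the post) interpolates Runge's function at equally spaced nodes using the Lagrange form and measures the worst error on a fine grid; the error grows rapidly with the number of nodes:

```python
def runge(x):
    """Runge's function 1 / (1 + 25x^2) on [-1, 1]."""
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x,
    using the Lagrange form directly."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def max_error(n):
    """Max |f - p| over a fine grid, for n+1 equally spaced nodes on [-1, 1]."""
    xs = [-1.0 + 2.0 * k / n for k in range(n + 1)]
    ys = [runge(x) for x in xs]
    grid = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
    return max(abs(runge(x) - lagrange(xs, ys, x)) for x in grid)

print(max_error(4), max_error(10), max_error(14))
```

Adding nodes makes things worse, not better, and the largest errors occur near $\pm 1$ – exactly the wobble the Geogebra construction makes visible.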