Back in 1998, Paul Zimmermann wrote an article “Comparison of three public-domain multiprecision libraries: BigNum, Gmp and Pari”, which is available (as a gzipped PostScript file) here. I can’t find anything more recent. Since then, several other libraries have been produced. The main contenders seem to be:
- GMP, the “GNU Multi-Precision Arithmetic Library”. This is now at version 5.0.5, and is a very complete and mature package. The authors take pains to point out that it is not a number theory package: although it has a few number-theoretic routines (primality testing, computation of Jacobi and Legendre symbols), these are not the focus of the library.
- MPIR, which is a fork of GMP, intended as a drop-in replacement for it. It has been optimized for different operating systems and C compilers.
- LibTomMath, which is one of a suite of open source libraries, of which the chief developer is Tom St Denis. They are designed to be easy to use, and to provide a simple and efficient API for further development.
- PARI, which is a C library for number theory, and the engine for GP, a number theory calculator. Its current version is 2.5.1.
There are several other big integer libraries available, but most seem to be the work of a single person, or are only available for C++ (such as NTL or LiDIA). There are also some other libraries which I have not considered, such as FLINT, the “Fast Library for Number Theory”, whose focus is on polynomial computations.
The problem I’m considering is the one from my previous post: given
how long does it take to produce one million values of this sequence?
I’ve written four programs: cpowers.c, mpirpowers.c, tompowers.c and paripowers.c. The first of these just uses standard C with no trimmings and no optimization, aside from using bit shifts where possible. (These were cleverly devised by Lih-Yuan Deng, and can be found here.) And in the interests of portability, I made sure that all modular multiplications and squarings remained within 32 bits. The other three programs use the MPIR, TomMath and PARI libraries respectively, and are compiled by being linked against them.
I ran each program five times and averaged the results:
- Standard C: 1.54 seconds.
- Using the MPIR library: 0.78 seconds.
- Using the PARI library: 0.523 seconds.
- Using the TomMath library: 12.83 seconds.
I don’t know why the TomMath library is so slow. In the document describing the library and its algorithms, the author says:
While LibTomMath is certainly not the fastest library (GMP often beats LibTomMath by a factor of two) it does feature a set of optimal algorithms for tasks such as modular reduction, exponentiation, multiplication and squaring. GMP and LIP also feature such optimizations while MPI only uses baseline algorithms with no optimizations. GMP lacks a few of the additional modular reduction optimizations that LibTomMath features.
I just used this library “off the shelf”, so to speak, with no more care than I did with the MPIR library (and in my last post, the GMP library). Maybe there’s some optimization of which I’m unaware.
The PARI library, although clearly the fastest, is also the hardest to use. (This point is also made by Zimmermann in his article.) PARI uses a stack for its computations, and at the beginning of a program PARI must be initialized by giving it a chunk of memory to use. If the stack grows bigger than the assigned memory, the program simply aborts. This can be overcome in two ways: by allocating more memory at the start, or better, by periodically clearing the stack. The latter is what I have done in my program (with help from the PARI users mailing list), and it allows the program to run without eating up too much memory.
It would be nice to write some other programs using different problems, and see if these times are consistent, or if some libraries excel at particular tasks.