Tales of Karatsuba

This article is based on a brilliant essay in Quanta magazine, behind which lies a lovely story of a mathematical discovery. I take the essay a little further and describe an algorithm we posted on arxiv to increase the speed of the calculation even further (though not as fast as the fastest method there is).

You probably learned to multiply numbers in elementary school. To multiply two 4-digit numbers, you learned to take the second number’s units digit (3 in the example below) and multiply it by each digit of the first number, like so

[Figure: long multiplication of two 4-digit numbers, worked digit by digit]

You’d then progress to the tens digit and repeat the same procedure, multiplying {\it it} by each digit of the first number, remembering to shift the digits of this second row of products one place to the left, and so on. To multiply two n-digit numbers this way, you need to perform n^2 single-digit multiplications.
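The schoolbook procedure can be sketched in a few lines of Python (operating on Python integers purely for illustration):

```python
def schoolbook_multiply(x: int, y: int) -> int:
    """Long multiplication as taught in school: one single-digit product
    per pair of digits, so n^2 products for two n-digit numbers."""
    xs = [int(d) for d in str(x)][::-1]  # digits, least significant first
    ys = [int(d) for d in str(y)][::-1]
    total = 0
    for i, yd in enumerate(ys):      # each digit of the second number...
        for j, xd in enumerate(xs):  # ...times each digit of the first,
            total += xd * yd * 10 ** (i + j)  # shifted by its place value
    return total
```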

Multiplication was thought to be one of the most well-understood mathematical operations when the famous mathematician and physicist Andrey Kolmogorov gave a series of lectures on complexity theory at Moscow State University in the 1960s. One of the students listening to the famous man speak about the order of the number of operations required to perform complex arithmetic tasks was a fellow called Anatoly Karatsuba. He heard Kolmogorov describe multiplication as an {\cal O}(n^2) operation for two n-digit numbers. He went home and, over the week of lectures, managed to prove that there is a much faster way to multiply two numbers. He wrote this up for Kolmogorov, who, suitably impressed, mentioned it in other lectures. Kolmogorov then wrote it up for publication, as the contribution of Karatsuba and another student, Yuri Ofman, without their knowledge – they learned of it only when they received the printer’s proofs!

The algorithm they invented is very simple to explain for four-digit numbers. Suppose we wanted to multiply x and y, where

x=x_0 \times 10^0 + x_1 \times 10^2

y=y_0 \times 10^0 + y_1 \times 10^2

where x_0, y_0, x_1, y_1 are two-digit numbers. The product of these two numbers is

x y = x_0 y_0 \times 10^0 + (x_0 y_1 + x_1 y_0) \times 10^2 + x_1 y_1 \times 10^4

One would normally think that one needs four multiplications (and a few additions) to compute this, but note, as Anatoly Karatsuba did, that it can be rewritten as

x y = x_0 y_0 \times 10^0 + \left( (x_0 + x_1)(y_0+y_1) - x_0 y_0 - x_1 y_1 \right) \times 10^2 + x_1 y_1 \times 10^4

which therefore needs only three multiplications, at the price of a few extra additions and subtractions. Since additions of n-digit numbers cost only {\cal O}(n) while multiplications are the expensive step, this is a good trade.
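For two 4-digit numbers, one level of this trick looks as follows (a minimal Python sketch):

```python
def karatsuba_4digit(x: int, y: int) -> int:
    # Split each 4-digit number into two 2-digit halves: x = x0 + x1 * 10^2.
    x0, x1 = x % 100, x // 100
    y0, y1 = y % 100, y // 100
    low = x0 * y0                             # first multiplication
    high = x1 * y1                            # second multiplication
    mid = (x0 + x1) * (y0 + y1) - low - high  # third; equals x0*y1 + x1*y0
    return low + mid * 10**2 + high * 10**4
```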

Generalizing this “divide-and-conquer” strategy yields a systematic recursion relating the complexity of n-digit multiplication to that of \frac{n}{2}-digit multiplication, i.e.,

{\cal M}(n)=3 {\cal M}(\frac{n}{2}) + {\cal O}(n)

where the last term captures the {\cal O}(n) additions that also have to be performed. Solving this recurrence for large n yields

{\cal M}(n) \sim n^{\log_2 3} \approx n^{1.585}
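Applying the split recursively gives the full algorithm. A compact Python sketch, halving by decimal digit count and falling back to direct multiplication for single digits:

```python
def karatsuba(x: int, y: int) -> int:
    # Base case: single-digit operands are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    x0, x1 = x % p, x // p   # x = x0 + x1 * p
    y0, y1 = y % p, y // p
    low = karatsuba(x0, y0)
    high = karatsuba(x1, y1)
    mid = karatsuba(x0 + x1, y0 + y1) - low - high  # three recursive calls
    return low + mid * p + high * p * p
```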

The method is elegant and simple, but it has since been superseded by far more analytically complex Fourier-transform algorithms, such as Schönhage–Strassen at {\cal O}(n (\log_2 n) (\log_2 \log_2 n)) and, most recently, Harvey–van der Hoeven at {\cal O}(n \log_2 n).
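To give a flavour of the Fourier-transform approach, here is a toy Python sketch: the digit sequences of the two numbers are convolved via an FFT, and carries are propagated afterwards. (The serious algorithms use exact integer transforms; this floating-point version is only illustrative, and all names here are my own.)

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x: int, y: int) -> int:
    a = [int(d) for d in str(x)][::-1]  # digits, least significant first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):          # pad to a power of two
        n *= 2
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    fa, fb = fft(a), fft(b)
    conv = fft([u * v for u, v in zip(fa, fb)], invert=True)
    digits = [round(c.real / n) for c in conv]  # inverse FFT needs 1/n scaling
    result = carry = 0
    for place, d in enumerate(digits):  # propagate carries into base 10
        total = d + carry
        result += (total % 10) * 10 ** place
        carry = total // 10
    return result + carry * 10 ** n
```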

{\underline{But there is an even simpler way to make a faster algorithm}}. Note that if you {\bf precomputed} {\underline{all}} possible multiplications of n-digit numbers, you would need to compute 10^{2n} entries of a giant matrix. A binary search would then take you \log_2 10^{2n} comparisons to find the particular element you wanted, which is {\cal O}(n).

That is the trick underlying the route I suggested, along with my co-author (full disclosure: he is my brilliant brother, and a published author), in our article on the arxiv.
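To make the counting concrete, here is a toy Python sketch of the precompute-then-search idea for 2-digit factors. This is only an illustration of the complexity argument above, not the construction in our arxiv article, and all names are hypothetical:

```python
from bisect import bisect_left

DIGITS = 2  # toy size: the table already has 10^(2*2) = 10,000 entries

# Precompute every product of two 2-digit numbers, keyed by the pair (a, b)
# encoded as one integer; generating a in ascending order keeps the keys
# sorted, so binary search applies directly.
keys, products = [], []
for a in range(10 ** DIGITS):
    for b in range(10 ** DIGITS):
        keys.append(a * 10 ** DIGITS + b)
        products.append(a * b)

def lookup_multiply(a: int, b: int) -> int:
    key = a * 10 ** DIGITS + b
    i = bisect_left(keys, key)  # log2(10^(2n)) comparisons, i.e. O(n)
    return products[i]
```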