The Wikipedia page on multiplication algorithms mentions an interesting one by Donald Knuth. Basically, it combines Fourier-transform multiplication with a precomputed table of logarithmically-sized multiplications, and it runs in linear time.
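For concreteness, here's a rough Python sketch of just the Fourier-transform half of that idea: split the integers into small digits and multiply them by FFT convolution. The digit size, the floating-point FFT, and the function names are my own illustrative choices, not anything from Knuth's actual construction (which, as I understand the article, would use the precomputed table for the small multiplications instead of doing them directly).

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0j] * n
    sign = 1 if invert else -1
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y, digit_bits=8):
    """Multiply non-negative ints by FFT-convolving their base-2**digit_bits digits."""
    base = 1 << digit_bits
    def to_digits(v):
        d = []
        while v:
            d.append(v % base)
            v //= base
        return d or [0]
    a, b = to_digits(x), to_digits(y)
    size = 1
    while size < len(a) + len(b):
        size *= 2
    fa = fft([complex(d) for d in a] + [0j] * (size - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (size - len(b)))
    conv = fft([u * v for u, v in zip(fa, fb)], invert=True)
    # The inverse transform above omits the 1/size factor, so apply it here.
    coeffs = [round(c.real / size) for c in conv]
    # Carries are absorbed by shifting each coefficient into place.
    return sum(c << (i * digit_bits) for i, c in enumerate(coeffs))

assert fft_multiply(1234567, 7654321) == 1234567 * 7654321
```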
The article acts like this algorithm somehow doesn't count as a "true" multiplication algorithm. More significantly, it's considered to be an open question whether multiplication can be done in even O(n lg n) time!
What details of this algorithm disqualify it from counting as a "true" multiplication algorithm?
My guesses are:
- Precomputing the table takes more than linear time. On the other hand, it can still be done in O(n lg n) time, so that would still seem impressive (see the sketch after this list).
- Random access is somehow not allowed. But then why can other algorithms use things like hash tables and pointers?
- It somehow scales wrong as you increase a machine's word size: if you have a 256-bit machine that does 256-bit multiplications in a single instruction, then there's no point to this algorithm until you have more than 2^256 elements. On the other hand, we bother with the inverse-Ackermann factor in union-find.
- The "is there a linear time multiplication algorithm?" question is secretly in terms of some weaker machine, but this only gets hinted at.