Can someone explain how to do the Fast Fourier transform based multiplication algorithm for very large numbers more clearly? I've been searching for this a lot and I'm having a very hard time because all the explanations I find have the mathematical notation represented in figures or images, and my screen reader can't read it. Can someone explain it more clearly? And with examples if possible, preferably in Java.
This is an important technique in certain areas of computing, and it's worth explaining, but I don't recall seeing any good explanation that wasn't gummied up with math details obscuring the essential intuitive notion.

Take a number X written in base ten, or in any base B. We write it as a series of digits {xi} (underscores left off), and the value is: X = x0*B^0 + x1*B^1 + x2*B^2 + ... Similarly for some other number Y. Multiplying the way we learned in third grade, or its equivalent in whatever country, we multiply all pairwise combinations of digits {xi} and {yj} and write their products positioned according to certain rules. The value is: X*Y = x0*y0*B^0 + x1*y0*B^1 + ... The term with xi*yj has B to the power i+j. What we're really doing can be seen as a discrete convolution of the series {x} and {y}. The effort goes as N^2 for N-digit numbers.

The Fourier transform relates products to convolutions. Multiplying X times Y by convolving long series of N digits takes N^2 time, but it can be done more quickly by Fourier transforming those series {x} and {y}, as if they were time series such as voltages or stock prices, into new series, call them {f} and {g}. The transforms, if we can use the FFT, go as N*log(N). We don't need all pairwise combinations of the elements of {f} and {g}; just simple point-wise products like fi*gi. That effort goes as N.

In code, in whatever language, given X and Y in binary we may want to work with bytes (unsigned). Use base 256 instead of ten: the "digits" are just the bytes sitting in memory as given to us. We treat this as a time series of data that happen to be unsigned 8-bit integers. A discrete Fourier transform will produce "amplitudes" in some less convenient range, but whatever. We obtain our {f} and {g} series as arrays of some numeric data type with enough range not to overflow and enough accuracy to make things work. We scan along the f and g arrays, multiplying corresponding elements and writing them back into f. Delete g. Inverse Fourier transform f to obtain what ought to be an array of integers from 0 to 255, but probably off a little due to rounding/truncation errors. If we chose an amplitude data type of sufficient accuracy, we should be able to round those values and store them as bytes. These, finally, are the base-256 "digits" of the product.

One complication to deal with, however: convolutions done by multiplying in Fourier space have cyclical boundary conditions. You don't want that for multiplying high-precision numbers, so padding with zeros, or some other trick for dealing with this, must be applied.
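Since a Java example was asked for, here is a minimal sketch of the scheme above. To keep the floating-point rounding easy to reason about, it convolves base-10 digits taken from decimal strings rather than the base-256 bytes described above, but the structure is the same: zero-pad both digit arrays (to avoid the cyclic wrap-around mentioned at the end), forward-transform them into {f} and {g}, multiply point-wise, inverse-transform, round to integers, and propagate carries. The class and method names (FftMultiply, multiply, fft, toDigits) are invented for this illustration; a real implementation would choose the base and the transform precision more carefully, or use a library FFT.

import java.math.BigInteger;

public class FftMultiply {

    // Iterative in-place Cooley-Tukey FFT on parallel real/imaginary arrays.
    // 'invert' selects the inverse transform (conjugated twiddles plus a final 1/n scaling).
    static void fft(double[] re, double[] im, boolean invert) {
        int n = re.length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly passes.
        for (int len = 2; len <= n; len <<= 1) {
            double ang = 2 * Math.PI / len * (invert ? -1 : 1);
            double wRe = Math.cos(ang), wIm = Math.sin(ang);
            for (int i = 0; i < n; i += len) {
                double curRe = 1, curIm = 0;
                for (int k = 0; k < len / 2; k++) {
                    int a = i + k, b = i + k + len / 2;
                    double uRe = re[a], uIm = im[a];
                    double vRe = re[b] * curRe - im[b] * curIm;
                    double vIm = re[b] * curIm + im[b] * curRe;
                    re[a] = uRe + vRe; im[a] = uIm + vIm;
                    re[b] = uRe - vRe; im[b] = uIm - vIm;
                    double nextRe = curRe * wRe - curIm * wIm;
                    curIm = curRe * wIm + curIm * wRe;
                    curRe = nextRe;
                }
            }
        }
        if (invert) {
            for (int i = 0; i < n; i++) { re[i] /= n; im[i] /= n; }
        }
    }

    // Multiplies two non-negative decimal strings via FFT-based convolution of their digits.
    static String multiply(String x, String y) {
        int[] a = toDigits(x), b = toDigits(y);      // least-significant digit first
        int n = 1;
        while (n < a.length + b.length) n <<= 1;     // zero-pad: avoids the cyclic wrap-around
        double[] fRe = new double[n], fIm = new double[n];
        double[] gRe = new double[n], gIm = new double[n];
        for (int i = 0; i < a.length; i++) fRe[i] = a[i];
        for (int i = 0; i < b.length; i++) gRe[i] = b[i];
        fft(fRe, fIm, false);                        // {x} -> {f}
        fft(gRe, gIm, false);                        // {y} -> {g}
        for (int i = 0; i < n; i++) {                // point-wise products f[i] *= g[i]
            double pr = fRe[i] * gRe[i] - fIm[i] * gIm[i];
            double pi = fRe[i] * gIm[i] + fIm[i] * gRe[i];
            fRe[i] = pr; fIm[i] = pi;
        }
        fft(fRe, fIm, true);                         // inverse transform -> raw convolution
        // Round to integers and propagate carries to get the digits of the product.
        StringBuilder sb = new StringBuilder();
        long carry = 0;
        for (int i = 0; i < n; i++) {
            long v = Math.round(fRe[i]) + carry;
            sb.append((char) ('0' + (v % 10)));
            carry = v / 10;
        }
        while (carry > 0) { sb.append((char) ('0' + (carry % 10))); carry /= 10; }
        int end = sb.length();                       // strip leading zeros (at the reversed end)
        while (end > 1 && sb.charAt(end - 1) == '0') end--;
        return sb.delete(end, sb.length()).reverse().toString();
    }

    static int[] toDigits(String s) {
        int[] d = new int[s.length()];
        for (int i = 0; i < s.length(); i++) d[i] = s.charAt(s.length() - 1 - i) - '0';
        return d;
    }

    public static void main(String[] args) {
        String x = "123456789012345678901234567890";
        String y = "987654321098765432109876543210";
        System.out.println(multiply(x, y));
        System.out.println(new BigInteger(x).multiply(new BigInteger(y)));  // cross-check
    }
}

Running main cross-checks the result against java.math.BigInteger. For very long inputs the double-precision transform eventually loses the exactness needed for the rounding step; that is the "enough accuracy to make things work" caveat above.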