I am looking for an example where an algorithm apparently changes its complexity class due to compiler and/or processor optimization strategies.
Let's take a simple program that prints the square of a number entered on the command line.
As you can see, this is an O(n) calculation, looping over and over again. Compiling this with
Here you can see the add being done, a compare, and a jump back for the loop. Doing the compile with
One can now see that it has no loop and, furthermore, no adds. Instead there is a call to The compiler has recognized the loop and the math operation inside it and replaced it with the direct calculation. Note that this included a call to
Tail Call Optimization may reduce the space complexity. For example, without TCO, this recursive implementation of a
This doesn't even need general TCO; it only needs a very narrow special case, namely elimination of direct tail recursion. What would be very interesting, though, is a case where a compiler optimization not only changes the complexity class but actually changes the algorithm completely. The Glorious Glasgow Haskell Compiler sometimes does this, but that's not really what I am talking about; that's more like cheating. GHC has a simple pattern-matching language that allows the developer of a library to detect certain simple code patterns and replace them with different code. The GHC implementation of the Haskell standard library does contain some of those annotations, so that specific usages of specific functions which are known to be inefficient are rewritten into more efficient versions. However, these translations are written by humans, and they are written for specific cases; that's why I consider that cheating. A supercompiler may be able to change the algorithm without human input, but AFAIK no production-level supercompiler has ever been built.
int main(void) { exit(0); }
– Jörg W Mittag May 26 at 16:11