It's said that compiling the GNU tools and the Linux kernel with gcc's -O3
optimization option will produce weird and funky bugs. Is it true? Has anyone tried it, or is it just a hoax?
It's used in Gentoo, and I didn't notice anything unusual.
Note that large chunks of the toolchain (glibc in particular) flat out don't compile if you change optimization levels. On most sane distros, the build system is set up to ignore your -O preferences for these sections. Simply put, certain fundamental library and OS features depend on the code actually doing what it says, not on what would be faster in many cases. -fgcse-after-reload in particular (enabled by -O3) can cause odd issues.
-O3 uses some aggressive optimisations that are only safe if certain assumptions about register use, stack-frame handling, and function reentrancy hold. Those assumptions are not guaranteed to hold in code like the kernel, especially where inline assembly is used (as it is in some very low-level parts of the kernel and its driver modules).
While you can get away with using -O3 and other optimization knobs on most applications (and it can result in speed improvements), I would hesitate to use such tweaks on the kernel itself or on the toolchain required to build it (compiler, binutils, etc.). Think about it: is a 5% performance gain in the RAID and ext3 subsystems worth system crashes or potential data loss and/or corruption? Tweak all the knobs you want for that Quake port you're playing or the audio/video codecs you use for ripping your DVD collection to DivX files; you'll likely see an improvement. Just don't mess with the kernel unless you have time to waste and data you can bear to lose.