In 1999, inline optimization was turned off by default. The commit log indicates this was
done because GCC was running out of memory on some hosts when building the Zend executor.
In 2003, inline optimization was re-enabled by default, but a build option was added to
turn it off if one runs out of memory when building.
Computing hardware has come a long way since 2003, and I doubt that anyone is running out
of memory when building PHP now.
Interestingly, this code set a variable called `INLINE_CFLAGS` that was never used anywhere;
inline optimization was actually disabled by adding `-O0` to the compiler flags, not through `INLINE_CFLAGS`.
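The shape of the logic was roughly this (an illustrative sketch, not the real configure source; the option name, the assigned value, and the surrounding details are placeholders):

```sh
# Illustrative sketch only; names and values are placeholders, not PHP's configure.in.
if test "$enable_inline_optimization" != "yes"; then
  INLINE_CFLAGS="..."      # assigned here, but nothing ever reads it
  CFLAGS="$CFLAGS -O0"     # this is what actually disables inlining (and every other optimization)
fi
```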
Just to see how much memory GCC and Make use when building PHP, I tried building with
successively higher values of `ulimit -v` until the build succeeded. It turns out that while
most of the codebase can be built with about 400MB of memory, `ext/fileinfo/libmagic/apprentice.c`
requires 1.2GB, doubtless because it includes `ext/fileinfo/data_file.c`, which is more
than 350,000 lines long. That is with GCC 7.5.0.
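The experiment amounts to a loop like the following (a sketch of the general approach; it assumes an already configured source tree, the limit steps are arbitrary, and `ulimit -v` takes kilobytes):

```sh
# Retry the build under increasing virtual-memory limits. Objects compiled in a
# failed pass are kept, so the file that needs the most memory is the last to fail.
for limit_mb in 400 600 800 1000 1200; do
  echo "trying a ${limit_mb} MB limit"
  if ( ulimit -v $((limit_mb * 1024)) && make -j1 ) > "build-${limit_mb}.log" 2>&1; then
    echo "build completed under ${limit_mb} MB"
    break
  fi
done
```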
Most users get PHP as a binary package anyway, so the question is: are *packagers*
of PHP trying to build on machines with just 1GB of RAM? And would they want to package
a PHP interpreter built with *no optimizations*? I can't imagine either being true.