A generic build system is overkill for such a small project, and it was
adding complexity on OpenBSD, which doesn't come with GNU make by
default.
In the process we also eliminate our reliance on bash and perl, at least
for the core build script.
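
As a rough illustration of the direction (not the project's actual
script; file names and flags here are placeholders), the core of a
plain-sh build step might look like this:

  #!/bin/sh
  # Hypothetical sketch: build with no GNU make, bash, or perl.
  set -e
  CXXFLAGS=${CXXFLAGS:--O3}
  # Recompile only if the binary is missing or older than the source.
  if [ ! -e mu_bin ] || [ mu.cc -nt mu_bin ]; then
    g++ $CXXFLAGS mu.cc -o mu_bin
  fi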
|
For the last couple of days I've been implicitly thinking in terms of
how many compilation units I want to generate. Might as well make that
explicit and drop the hacky ideas for approximating it.
I tried more timing experiments like the ones in commit 3281.
Conclusion: I can't have the best of both worlds:
1. Full compilation doesn't take too much longer than with a single
compilation unit.
2. Incremental compilation is fast enough that there's negligible
benefit from dropping optimization.
We're still taking on a 10s hit in full build time.
I care more about not degrading the full compilation too much, since
that gets magnified so much on the Couch's puny server. So we'll just
have to continue using CXXFLAGS=-g when we care to save a few seconds in
incremental compilation time.
A final mystery: the build time increases by 10s with the new heuristic
even though the number of calls to the compiler (and therefore the fixed
cost) is the same. It seems that separating certain functions into
different units causes issues for the compiler. Dropping from 4 to 3
compilation units eliminated the issue.
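
For concreteness, the kind of invocation meant here (assuming the build
driver honors a CXXFLAGS override; the ./build name is made up for the
example):

  ./build                # full build; defaults to -O3
  CXXFLAGS=-g ./build    # incremental rebuild; skip optimization to
                         # shave a few seconds off recompiling a unit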
--- Appendix: Measurements
before:
  full build 4 + test: 42s
  incremental compilation with -O3: varied from 30s for mu_0.cc to 5s
    for mu_3.cc; longer times benefited from dropping -O3
after:
  full build 1 + test: 39s
  full build 2 + test: 41s
  full build 3 + test: 43s
  full build 4 + test: 52s
  full build 5 + test: 53s
  full build 6 + test: 51s
  full build 10 (9) + test: 54s
  full build 20 (16) + test: 58s
|
Before:
  layers -> tangle -> g++
All changes to (C++) layers triggered recompilation of everything,
taking 35s on my laptop, and over 4 minutes on a puny server with just
512MB of RAM.
After:
  layers -> tangle -> cleave -> g++
Now a tiny edit takes just 5s to recompile on my laptop.
My initial approach was to turn each function into a separate
compilation unit under the .build/ directory. That blew up the time for
a full/initial compilation to almost 6 minutes on my laptop. Trial and
error showed 4 compilation units to be close to the sweet spot. Full
compilation is still slightly slower (43s) but not by much.
I could speed things up further by building multiple compilation units
in parallel (the recursive invocation in 'makefile'). But that would put
more pressure on a puny server, so I'm going to avoid getting too
aggressive.
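
Concretely, the stages correspond to something like the following
(sketched as plain shell with made-up file and tool paths; the real
makefile adds dependency tracking, so unchanged stages are skipped):

  ./tangle 0*.cc > .build/mu.cc     # layers -> one big .cc file
  ./cleave .build/mu.cc .build/     # split into a few compilation units
  for unit in .build/mu_*.cc; do    # in the makefile, only units whose
    g++ -O3 -c "$unit" -o "${unit%.cc}.o"   # contents changed rebuild
  done
  g++ .build/mu_*.o -o mu_bin       # link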
--- Other considerations
I spent some time manually testing the dependency structure in the
makefile, making sure that files aren't unnecessarily written to disk,
modifying their timestamp and triggering dependent work; that changes to
layers don't unnecessarily modify the common headers or list of globals;
that changes to the cleave/ tool itself rebuild the entire project; that
the old auto-generated '_list' files plug in at the right stage in the
pipeline; that changes to common headers trigger recompilation of
everything; etc. Too bad it's not easy to write some tests for all this.
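
One standard idiom for the "files aren't unnecessarily written to disk"
part (shown here as a generic sketch, not necessarily what the makefile
does): write generated output to a temporary file and only move it over
the old one when the contents differ, so timestamps stay put and
dependent targets don't rebuild.

  generate_common_headers > header.h.tmp  # hypothetical generator command
  if cmp -s header.h.tmp header.h; then
    rm header.h.tmp             # identical: keep old file and timestamp
  else
    mv header.h.tmp header.h    # changed: dependents will rebuild
  fi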
I spent some time trying to make sure the makefile was not too opaque to
a newcomer. The targets mostly flow from top to bottom. There's a little
diagram at the top that is hopefully illuminating. When I had 700
compilation units for 700 functions I stopped printing each one of those
compilation commands, but when I backed off to just 4 compilation units
I decided to err on the side of making the build steps easy to see.