Clean up the rest of a long-standing bit of ugliness.
I'm growing more confident now that I can use layers to cleanly add any
functionality I want. All I need is hook functions. There's no need to
ever put '{' on its own line, or to add arguments to calls.
For the last couple of days I've been implicitly thinking in terms of
how many compilation units I want to generate. Might as well make that
explicit and drop the hacky ideas for approximating it.
I tried more timing experiments like the ones in commit 3281.
Conclusion: I can't have the best of both worlds:
1. Full compilation that doesn't take much longer than with a single
compilation unit.
2. Incremental compilation fast enough that there's negligible benefit
from dropping optimization.
We're still taking a 10s hit in full build time.
I care more about not degrading full compilation too much, since that
gets magnified so much on the Couch's puny server. So we'll just have to
keep using CXXFLAGS=-g when we want to save a few seconds of incremental
compilation time.
A final mystery: the build time increases by 10s with the new heuristic
even though the number of calls to the compiler (and therefore the fixed
cost) is the same. It seems that separating certain functions into
different units causes trouble for the compiler; dropping from 4 to 3
compilation units eliminated the issue.
--- Appendix: Measurements
before:
  full build (4 units) + test: 42s
  incremental compilation with -O3: varied from 30s for mu_0.cc to 5s
  for mu_3.cc; the longer times benefited from dropping -O3
after:
  full build (1 unit) + test: 39s
  full build (2 units) + test: 41s
  full build (3 units) + test: 43s
  full build (4 units) + test: 52s
  full build (5 units) + test: 53s
  full build (6 units) + test: 51s
  full build (10 (9) units) + test: 54s
  full build (20 (16) units) + test: 58s
Fix CI.
Before:
  layers -> tangle -> g++
All changes to (C++) layers triggered recompilation of everything,
taking 35s on my laptop, and over 4 minutes on a puny server with just
512MB of RAM.
After:
  layers -> tangle -> cleave -> g++
Now a tiny edit takes just 5s to recompile on my laptop.
My initial approach was to turn each function into a separate
compilation unit under the .build/ directory. That blew up the time for
a full/initial compilation to almost 6 minutes on my laptop. Trial and
error showed 4 compilation units to be close to the sweet spot. Full
compilation is still slightly slower (43s) but not by much.
I could speed things up further by building several of the compilation
units in parallel (the recursive invocation in 'makefile'). But that
would put more pressure on a puny server, so I'm going to avoid getting
too aggressive.
--- Other considerations
I spent some time manually testing the makefile's dependency structure:
making sure that files aren't unnecessarily written to disk (modifying
their timestamps and triggering dependent work); that changes to layers
don't unnecessarily modify the common headers or the list of globals;
that changes to the cleave/ tool itself rebuild the entire project; that
the old auto-generated '_list' files plug in at the right stage in the
pipeline; that changes to common headers trigger recompilation of
everything; and so on. Too bad it's not easy to write tests for all
this.
I spent some time trying to make sure the makefile isn't too opaque to a
newcomer. The targets mostly flow from top to bottom, and there's a
little diagram at the top that is hopefully illuminating. When I had 700
compilation units for 700 functions I stopped printing each of those
compilation commands, but now that I've backed off to just 4 compilation
units I've decided to err on the side of making the build steps easy to
see.
Streamline the build process a bit.
Follow convention more closely by using CXXFLAGS for C++ files.
Fix a new warning from Perl.
In experiments on my laptop it seems to compile a little faster and run
slightly slower. Both might be in the noise.
How did I not know about -ftrapv for so long?! Found while reading
Memarian et al, "Into the depths of C: Elaborating the de facto
standards".
http://www.cl.cam.ac.uk/~pes20/cerberus/pldi16.pdf
Reorganize the build system to minimize duplication while handling four
scenarios:
1. Locally running tests with `mu test`
2. Locally running tests until some layer with `build_and_test_until`
3. Running on Linux with `test_layers`
4. Running on Travis CI with multiple sharded calls to `test_layers`
One thing we drop at this point is support for OSX in test_layers. We
don't need it now that we have Travis CI working.
Make it easy to skip distracting valgrind errors when debugging more
obvious errors in early layers. Just throw a 'test' at the end of
build_and_test_until commands to not run valgrind (and make it a regular
test run).
No, the approach in commit 2001 is no good: phony targets can't exit
early when everything is already built. New approach:
$ CFLAGS=-g make && ./mu test
etc.
Let's stop hackily editing compiler flags in the makefile.
I considered modifying the 'mu' script as well, with cases like this:
1. mu test -- don't optimize
2. mu test edit.mu -- optimize
3. mu test edit.mu just-one-test -- don't optimize
4. mu edit.mu -- interactive; optimize
5. mu -- just the help message; don't optimize
But that seems too brittle for the added complexity. From now on, to
build quickly just do:
$ make dbg && mu test
etc.
Spent a while trying to understand why editing a slightly larger program
was so much slower. Then realized I'd managed to disable optimizations.
Now we can make use of all the depths from 1 to 99.
The cost of optimizing across all layers is now lower than the cost of
running them unoptimized:
test_all_layers unoptimized: 22:36.88
test_all_layers optimized: 19:33.38
Also, it turns out I haven't been building 999spaces.cc in my default
build. Now fixed.
Time to turn on optimizations, since we aren't recompiling mu all the
time anymore.
But it doesn't help much with the editor. We need to be smarter about
not rendering the whole screen.
Ever since commit 1403, mu has depended on a phony target and so was
always considered stale. This commit improves on that fix.
While I'm at it, I also explored turning on optimization. With
optimization, compile+test of the chessboard app takes 10+3s; without
optimization it takes 3+8s. So we're still better off without
optimization in a tight debug loop (now that we've stopped tracing the
big chessboard test).
I've tried to update the Readme, but there are at least a couple of issues.