Commit log
For the last couple of days I've been implicitly thinking in terms of
how many compilation units I want to generate. Might as well make that
explicit and drop the hacky ideas for approximating it.
I tried more timing experiments like the ones in commit 3281.
Conclusion: I can't have the best of both worlds:
1. Full compilation doesn't take too much longer than with a single
compilation unit.
2. Incremental compilation is fast enough that there's negligible
benefit from dropping optimization.
We're still taking a 10s hit in full build time.
I care more about not degrading the full compilation too much, since
that gets magnified so much on the Couch's puny server. So we'll just
have to continue using CXXFLAGS=-g when we care to save a few seconds in
incremental compilation time.
A final mystery: the build time increases by 10s with the new heuristic
even though the number of calls to the compiler (and therefore the fixed
cost) is the same. It seems separating certain functions into
different units gives the compiler trouble; dropping from 4 to 3
compilation units eliminated the issue.
--- Appendix: Measurements
before:
full build 4 + test: 42s
incremental compilation with -O3: varied from 30s for mu_0.cc to 5s for mu_3.cc
longer times benefited from dropping -O3
after:
full build 1 + test: 39s
full build 2 + test: 41s
full build 3 + test: 43s
full build 4 + test: 52s
full build 5 + test: 53s
full build 6 + test: 51s
full build 10 (9) + test: 54s
full build 20 (16) + test: 58s
Now that we have a new build system we shouldn't need to run unoptimized
just to save time. (Though that's not strictly true: if a change
modifies .build/mu_0.cc, which is twice as large as the later compilation
units, dropping -O3 shaves 10s off an incremental build.)
Since we don't need to run unoptimized anymore, let's just explicitly
ask for --test-only-app when we need it.
Fix CI.
Fix the CI process after recent changes. CI still won't actually *make
use* of separate compilation (nor should it).
As a side effect, 'build_until' shows a simpler (but still working!)
process for building Mu. A vast improvement over the previous hack of
dipping selectively into the Makefile.
Before:
layers -> tangle -> g++
All changes to (C++) layers triggered recompilation of everything,
taking 35s on my laptop, and over 4 minutes on a puny server with just
512MB of RAM.
After:
layers -> tangle -> cleave -> g++
Now a tiny edit takes just 5s to recompile on my laptop.
My initial approach was to turn each function into a separate
compilation unit under the .build/ directory. That blew up the time for
a full/initial compilation to almost 6 minutes on my laptop. Trial and
error showed 4 compilation units to be close to the sweet spot. Full
compilation is still slightly slower (43s) but not by much.
I could speed things up further by building several of the compilation
units in parallel (the recursive invocation in 'makefile'). But that
would put more pressure on a puny server, so I'm going to avoid getting
too aggressive.
--- Other considerations
I spent some time manually testing the dependency structure of the
makefile, making sure that files aren't unnecessarily written to disk,
modifying their timestamps and triggering dependent work; that changes to
layers don't unnecessarily modify the common headers or list of globals;
that changes to the cleave/ tool itself rebuild the entire project; that
the old auto-generated '_list' files plug in at the right stage in the
pipeline; that changes to common headers trigger recompilation of
everything; etc. Too bad it's not easy to write some tests for all this.
I spent some time trying to make sure the makefile was not too opaque to
a newcomer. The targets mostly flow from top to bottom. There's a little
diagram at the top that is hopefully illuminating. When I had 700
compilation units for 700 functions I stopped printing each of those
compilation commands, but after backing off to just 4 compilation units
I decided to err on the side of making the build steps easy to see.
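A hypothetical sketch of that pipeline as make rules; target names, paths and flags here are guesses for illustration, not the project's actual makefile:

```makefile
# Hypothetical sketch; the real makefile also handles the auto-generated
# '_list' files, header extraction, and rebuilds when cleave/ changes.
mu: .build/mu_0.o .build/mu_1.o .build/mu_2.o .build/mu_3.o
	$(CXX) $(CXXFLAGS) $^ -o $@

# cleave splits the tangled mu.cc into a few compilation units, leaving
# timestamps untouched for units whose contents didn't change.
.build/mu_0.cc: mu.cc cleave/cleave
	cleave/cleave mu.cc .build

.build/%.o: .build/%.cc
	$(CXX) $(CXXFLAGS) -c $< -o $@

# tangle stitches the numbered C++ layers into a single mu.cc
mu.cc: [0-9]*.cc
	tangle $^ > mu.cc
```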
Stop inlining functions, because inlining would complicate separate
compilation. This also simplifies the code without hurting performance.
Streamline the build process a bit.
Follow convention more closely by using CXXFLAGS for C++ files.
Always keep macro definitions in the Includes section.
Undo 3272. The trouble with creating a new section for constants is that
there's no good place to order it since constants can be initialized
using globals as well as vice versa. And I don't want to add constraints
disallowing either side.
Instead, a new plan: always declare constants in the Globals section
using 'extern const' rather than just 'const', since otherwise constants
implicitly have internal linkage (http://stackoverflow.com/questions/14894698/why-does-extern-const-int-n-not-work-as-expected).
Move global constants into their own section since we seem to be having
trouble linking in 'extern const' variables when manually cleaving mu.cc
into separate compilation units.
Disallow defining multiple globals at once.
Clean up the Globals section so that we can generate extern declarations
for all globals using this command, once we carve them out into
globals.cc:
grep ';' globals.cc |perl -pwe 's/[=(].*/;/' |perl -pwe 's/^[^\/# ]/extern $&/' > globals.h
The first perl command strips out initializers. The second prepends
'extern'. This simplistic approach requires each global definition to
fit entirely on one line.
Deconstruct the tracing layer, which had so far been an exception to our
includes-types-prototypes-globals-functions organization.
To do this we predefine a few primitive globals before the types that
use them, and we pull some method definitions out of struct definitions
at the cost of having to manually write a couple of prototypes.
Right now Mu has zero dependency knowledge. If anything changes in our
project the C++ compiler has to recompile the entire project. This is
unnecessarily slow, and also causes gcc to run out of RAM on puny
machines.
New vision: carve the tangled mu.cc into multiple files.
includes.h
types.h
globals.cc
globals.h
one .cc file for each function definition
(This is of course in addition to the already auto-generated test_list
and function_list.)
With this approach changes to functions will only require recompiling
the functions that changed. We'd need to be smart to not rewrite files
that don't change (modulo #line directives).
Any changes to includes/types/globals would still require rebuilding the
entire project. That's the (now greatly reduced) price we will continue
to pay for outsourcing dependency management to the computer.
Plan arrived at after conversation with Stephen Malina.
Ah, the reason commit 3258 broke chessboard.mu was that I forgot to
migrate the implementation of 'switch' to 'wait-for-routine-to-block'.
That caused these cascading effects when running chessboard.mu:
a) 'read-event' from real keyboard calls 'switch'
b) 'switch' waits for some other currently running routine to *complete*
rather than just block
c) deadlock unsurprisingly ensues
This was hard to debug because I kept searching for occurrences of
'wait-for-routine' that I'd missed, and didn't realize that 'switch' too
was a form of 'wait-for-routine'. No more; now it's a form of
'wait-for-routine-to-block', possibly the *only* reason to ever call
that instruction outside of tests.
Turns out chessboard.mu started deadlocking in commit 3258 even though
all its tests continue to pass. Not fixed yet; first make deadlock
easier to diagnose.
Commit 3171, which added '--trace', broke 'Save_trace'.
Fix CI.
array length = number of elements
array size = number of locations
Prefer preincrement operators wherever possible. Old versions of
compilers used to be better at optimizing them. Even if we don't care
about performance, it's useful to make unary operators look like unary
operators wherever possible, and to distinguish the 'statement form',
which doesn't care about the value of the expression, from the
postincrement, which usually increments as a side effect of some larger
computation (and so is worth avoiding except in a few common idioms, or
perhaps even there).
In the process of debugging the last couple of commits (though I no
longer remember exactly how) I noticed that 'wait-for-routine' only
waits until the target routine stops running for any reason, including
when it blocks on something. That's not the synchronization primitive we
want in production code, even if it's necessary for some scenarios like
'buffer-lines-blocks-until-newline'. So we rename the old 'wait-for-routine'
primitive to 'wait-for-routine-to-block', and create a new version of
'wait-for-routine' that callers of, say, 'start-writing' can safely use,
because it waits until a target routine actually completes (either
successfully or not).
Bugfix in filesystem creation. I'm sure there are other fake-filesystem
bugs.
High time I committed the part that works.
Replace some asserts when checking scenario screens with better error
messages.
This is inefficient; every occurrence of a recipe literal requires a
scan through the whole caller recipe.
I started out incredibly lax about running past errors (I even used to
call them 'warnings' when I started Mu), but I've been gradually seeing
the wisdom of Go and Racket in refusing to run code if it doesn't pass
basic integrity checks (such as using a literal as an address).
Go is right to have no warnings, only errors. But where Go goes wrong is
in even caring about unused variables.
Racket and other languages perform more aggressive integrity checks so
that they can optimize more aggressively, and I'm starting to realize I
don't know enough to disagree with them.
Drop support for escape characters in dilated reagents. We haven't felt
the need for it yet, we have no tests for it, and eventually, when we do,
we'll want to treat escapes the way we treat them in the rest of the
language. (commit 3233)
Use allocate() in 'assume-console'.