Before:
  layers -> tangle -> g++
All changes to (C++) layers triggered recompilation of everything,
taking 35s on my laptop, and over 4 minutes on a puny server with just
512MB of RAM.
After:
  layers -> tangle -> cleave -> g++
Now a tiny edit takes just 5s to recompile on my laptop.
My initial approach was to turn each function into a separate
compilation unit under the .build/ directory. That blew up the time for
a full/initial compilation to almost 6 minutes on my laptop. Trial and
error showed 4 compilation units to be close to the sweet spot. Full
compilation is still slightly slower (43s) but not by much.
I could speed things up further by building multiple compilation units
in parallel (the recursive invocation in 'makefile'). But that would
put more pressure on a puny server, so I'm going to avoid getting too
aggressive.
--- Other considerations
I spent some time manually testing the dependency structure of the
makefile, making sure that files aren't unnecessarily written to disk,
bumping their timestamps and triggering dependent work; that changes to
layers don't unnecessarily modify the common headers or list of globals;
that changes to the cleave/ tool itself rebuild the entire project; that
the old auto-generated '_list' files plug in at the right stage in the
pipeline; that changes to common headers trigger recompilation of
everything; etc. Too bad it's not easy to write some tests for all this.
I spent some time trying to make sure the makefile was not too opaque to
a newcomer. The targets mostly flow from top to bottom. There's a little
diagram at the top that is hopefully illuminating. When I had 700
compilation units for 700 functions I stopped printing each one of those
compilation commands, but when I backed off to just 4 compilation units
I decided to err on the side of making the build steps easy to see.
---
Stop inlining functions, since inlining would complicate separate
compilation. This also simplifies the code without impacting
performance.
---
Streamline the build process a bit.
---
Follow convention more closely by using CXXFLAGS for C++ files.
---
Always keep macro definitions in the Includes section.
---
Undo 3272. The trouble with creating a new section for constants is that
there's no good place to order it since constants can be initialized
using globals as well as vice versa. And I don't want to add constraints
disallowing either side.
Instead, a new plan: always declare constants in the Globals section
using 'extern const' rather than just 'const', since otherwise constants
implicitly have internal linkage (http://stackoverflow.com/questions/14894698/why-does-extern-const-int-n-not-work-as-expected)
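A minimal sketch of the linkage difference (variable names here are
hypothetical, not from Mu's actual Globals section):

  // globals.cc
  // At namespace scope a plain 'const' has internal linkage: every
  // compilation unit that sees the definition gets its own private
  // copy, and no other .cc file can link against it.
  const int Internal_only = 42;
  // 'extern const' gives the constant external linkage, so a matching
  // declaration in globals.h lets all compilation units share it.
  extern const int Max_depth = 9999;

  // globals.h
  extern const int Max_depth;  // declaration only; definition above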
---
Move global constants into their own section since we seem to be having
trouble linking in 'extern const' variables when manually cleaving mu.cc
into separate compilation units.
---
Disallow defining multiple globals at once.
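For example (hypothetical globals), the rule forbids the combined form
in favor of one definition per line, which keeps each global greppable
for the extern-declaration pipeline in the next entry:

  int Foo = 0, Bar = 0;  // disallowed: two globals defined at once

  int Foo = 0;           // required style: one global per definition
  int Bar = 0;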
---
Clean up the Globals section so that we can generate extern
declarations for all globals using this command, once we carve them out
into globals.cc:
grep ';' globals.cc |perl -pwe 's/[=(].*/;/' |perl -pwe 's/^[^\/# ]/extern $&/' > globals.h
The first perl command strips out initializers. The second prepends
'extern'. This simplistic approach requires each global definition to
lie all on one line.
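For instance, given this (hypothetical) definition in globals.cc, the
pipeline emits the matching declaration into globals.h:

  // globals.cc (input)
  map<string, int> Type_number = init_type_numbers();

  // globals.h (output)
  extern map<string, int> Type_number;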
---
Deconstruct the tracing layer, which had so far been an exception to
our includes-types-prototypes-globals-functions organization.
To do this we predefine a few primitive globals before the types that
use them, and we pull some method definitions out of struct definitions
at the cost of having to manually write a couple of prototypes.
---
Right now Mu has zero dependency knowledge. If anything changes in our
project, the C++ compiler has to recompile the entire project. This is
unnecessarily slow, and it also causes gcc to run out of RAM on puny
machines.
New vision: carve the tangled mu.cc into multiple files:
  includes.h
  types.h
  globals.cc
  globals.h
  one .cc file for each function definition
(This is of course in addition to the already auto-generated test_list
and function_list.)
With this approach changes to functions will only require recompiling
the functions that changed. We'd need to be smart to not rewrite files
that don't change (modulo #line directives).
Any changes to includes/types/globals would still require rebuilding the
entire project. That's the (now greatly reduced) price we will continue
to pay for outsourcing dependency management to the computer.
Plan arrived at after conversation with Stephen Malina.
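A sketch of what one carved-out compilation unit might look like (file
and function names are hypothetical):

  // .build/run_current_routine.cc -- one auto-generated file per
  // function, so editing one function recompiles only this unit
  #include "includes.h"   // shared system headers
  #include "types.h"      // all type definitions
  #include "globals.h"    // extern declarations for globals.cc
  #line 1234 "mu.cc"      // hypothetical; maps errors back to mu.cc
  void run_current_routine() {
    // ...function body tangled out of the layers...
  }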
---
Ah, the reason commit 3258 broke chessboard.mu was that I forgot to
migrate the implementation of 'switch' to 'wait-for-routine-to-block'.
That caused these cascading effects when running chessboard.mu:
a) 'read-event' from real keyboard calls 'switch'
b) 'switch' waits for some other currently running routine to *complete*
rather than just block
c) deadlock unsurprisingly ensues
This was hard to debug because I kept searching for occurrences of
'wait-for-routine' that I'd missed, and didn't realize that 'switch' too
was a form of 'wait-for-routine'. No more; now it's a form of
'wait-for-routine-to-block', possibly the *only* reason to ever call
that instruction outside of tests.
---
Turns out chessboard.mu started deadlocking in commit 3258 even though
all its tests continued to pass. Not fixed yet; first make deadlock
easier to diagnose.
---
Commit 3171 which added '--trace' broke 'Save_trace'.
---
Fix CI.
---
array length = number of elements
array size = number of locations occupied
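For instance (a sketch of the arithmetic, assuming Mu's convention that
an array's first location holds its length):

  // An array of 3 elements, each 2 locations wide:
  //   length = 3 elements
  //   size   = 1 + 3*2 = 7 locations (1 for the length, 6 for payload)
  int length = 3;
  int element_size = 2;
  int size = 1 + length*element_size;  // the extra 1 is the length slot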
---
Prefer preincrement operators wherever possible. Older compilers used
to be better at optimizing preincrement. Even if we don't care about
performance, it's useful to make unary operators look like unary
operators wherever possible, and to distinguish the 'statement form',
which doesn't care about the value of the expression, from
postincrement, which usually increments as a side-effect in some larger
computation (and so is worth avoiding except in some common idioms, or
perhaps even there).
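A quick illustration of the two forms:

  #include <iostream>
  #include <vector>

  int main() {
    std::vector<int> v = {10, 20, 30};
    // Statement form: only the side-effect matters, so preincrement.
    for (size_t i = 0; i < v.size(); ++i)
      std::cout << v[i] << '\n';
    // Postincrement buries the increment inside a larger expression:
    // 'j' is read and then bumped, which is easy to overlook.
    size_t j = 0;
    std::cout << v[j++] << '\n';  // prints 10, then j becomes 1
    std::cout << v[j] << '\n';    // prints 20
    return 0;
  }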
---
In the process of debugging the last couple of commits (though I no
longer remember exactly how) I noticed that 'wait-for-routine' only
waits until the target routine stops running for any reason, including
when it blocks on something. That's not the synchronization primitive we
want in production code, even if it's necessary for some scenarios like
'buffer-lines-blocks-until-newline'. So we rename the old 'wait-for-routine'
primitive to 'wait-for-routine-to-block', and create a new version of
'wait-for-routine' that, say, callers of 'start-writing' can safely use,
because it waits until a target routine actually completes (either
successfully or not).
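A sketch of the two wait conditions in scheduler terms (the state names
are hypothetical, not Mu's actual ones):

  enum routine_state { RUNNING, WAITING, COMPLETED };

  // 'wait-for-routine-to-block': wake as soon as the target stops
  // actively running, even if it merely blocked on something.
  bool target_stopped_running(routine_state s) {
    return s != RUNNING;
  }

  // 'wait-for-routine': wake only when the target actually completes,
  // successfully or not. This is what production code usually wants.
  bool target_completed(routine_state s) {
    return s == COMPLETED;
  }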
---
Bugfix in filesystem creation. I'm sure there are other fake-filesystem
bugs.
---
High time I committed the part that works.
---
Replace some asserts in the checks for scenario screens with better
error messages.
---
This is inefficient; every occurrence of a recipe literal requires a
scan through the whole caller recipe.
---
I started out incredibly lax about running past errors (I even used to
call them 'warnings' when I started Mu), but I've been gradually seeing
the wisdom of Go and Racket in refusing to run code if it doesn't pass
basic integrity checks (such as using a literal as an address).
Go is right to have no warnings, only errors. But where Go goes wrong is
in even caring about unused variables.
Racket and other languages perform more aggressive integrity checks so
that they can optimize more aggressively, and I'm starting to realize I
don't know enough to disagree with them.
---
Drop support for escape characters in dilated reagents. We haven't felt
the need for it yet, we have no tests for it, and eventually when we do
we want to treat escapes the way we treat them in the rest of the
language. (commit 3233)
---
Use allocate() in 'assume-console'.
---
Clean up the primitive for reading from a file. Never return the EOF
character, and stop using the null character to indicate EOF as well.
Instead, always use a second product to indicate EOF, and require
callers to use it.
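A minimal sketch of the idea at the C++ level (function name
hypothetical):

  #include <cstdio>
  #include <utility>

  // Read one character; signal end-of-file out of band in a second
  // result instead of smuggling it through a sentinel character.
  std::pair<char, bool> read_char(FILE* f) {
    int c = fgetc(f);
    if (c == EOF) return {'\0', true};  // payload unspecified at EOF
    return {static_cast<char>(c), false};
  }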
---
More checks for unsafe filesystem primitives. Most importantly, make
sure the product of any $close-file instruction is never ignored, and
that it's the same variable as the ingredient. (There's no way to
express that requirement in Mu code yet, but then Mu code should always
be safe and not require such checks.)
---
Fix some broken sandbox/ tests.
---
Thanks Sam Putman for helping think through this idea.
When you encounter a backslash, strip it out and pass through any
following run of backslashes. If we 'escaped' just the single following
character, the way C does, then the character '\' would be the same as:
'\\' escaped once
'\\\\' escaped twice
'\\\\\\\\' escaped thrice (8 backslashes)
..and so on, the number of backslashes doubling each time. Instead, our
approach is to make the character '\' the same as:
'\\' escaped once
'\\\' escaped twice
'\\\\' escaped thrice
..and so on, the number of backslashes merely increasing by one each
time.
This approach only works as long as backslashes aren't also overloaded
to create special characters. So Mu doesn't follow C's approach of
overloading backslashes both to escape quote characters and also as a
notation for unprintable characters like '\n'.
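A minimal sketch of the rule, assuming the job is just to strip one
level of escaping (not Mu's actual parser):

  #include <cassert>
  #include <string>

  // On seeing a backslash, drop it and copy the entire following run
  // of backslashes through untouched. A run of n+1 backslashes thus
  // decodes to a run of n.
  std::string strip_one_level(const std::string& s) {
    std::string out;
    for (size_t i = 0; i < s.size(); ++i) {
      if (s[i] == '\\') {
        ++i;  // drop the leading backslash
        while (i < s.size() && s[i] == '\\') out += s[i++];
        --i;  // let the loop handle the first non-backslash normally
        continue;
      }
      out += s[i];
    }
    return out;
  }

  int main() {
    assert(strip_one_level("\\\\") == "\\");      // '\\' -> '\'
    assert(strip_one_level("\\\\\\") == "\\\\");  // '\\\' -> '\\'
    assert(strip_one_level("a\\'b") == "a'b");    // escaped quote
    return 0;
  }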
---
Support pipe characters in fake files. Still super ugly, though.