| Commit message | Author | Age | Files | Lines |
| |
Clean up the rest of a long-standing bit of ugliness.
I'm growing more confident now that I can use layers to cleanly add any
functionality I want. All I need is hook functions. There's no need to ever
put '{' on its own line, or to add arguments to calls.
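A sketch of the hook-function pattern this relies on (all names here are invented for illustration; only the shape matters):

    // Early layer: define a hook with a trivial default, and call it at the one
    // place later behavior will need to differ. Call sites and signatures never
    // have to change afterwards.
    bool should_copy_ingredients() {  // hypothetical hook
      return true;                    // default behavior for early layers
    }

    void run_call() {                 // hypothetical caller
      if (should_copy_ingredients()) {
        // ... copy ingredients into the new call frame ...
      }
      // ...
    }

    // A later layer then swaps in a smarter body for should_copy_ingredients()
    // via the tangling directives, instead of threading a new argument through
    // every caller or opening a new '{' block somewhere.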
| |
Clean up one long-standing bit of ugliness.
| |
Rip out everything to fix one failing unit test (commit 3290; type
abbreviations).
This commit does several things at once that I couldn't come up with a
clean way to unpack:
A. It moves to a new representation for type trees without changing
the actual definition of the `type_tree` struct.
B. It adds unit tests for our type metadata precomputation, so that
errors there show up early and in a simpler setting rather than dying
when we try to load Mu code.
C. It fixes a bug, guarding against infinite loops when precomputing
metadata for recursive shape-shifting containers. To do this it uses a
dumb way of comparing type_trees: comparing their string
representations instead of their structure. That is likely incredibly inefficient.
Perhaps due to (C), this commit has made Mu incredibly slow. Running all
tests for the core and the edit/ app now takes 6.5 minutes rather than
3.5 minutes.
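A minimal sketch of the guard described in (C), under the assumption that type_trees can be rendered with a to_string(); the names here are illustrative, not Mu's actual code:

    #include <set>
    #include <string>
    using std::set; using std::string;

    struct type_tree;                          // stand-in; defined elsewhere
    string to_string(const type_tree* type);   // assumed: renders a type as text

    set<string> Types_in_progress;             // string forms of types being processed

    void compute_container_metadata(const type_tree* type) {
      string key = to_string(type);
      // If this type is already being processed we're inside a recursive
      // shape-shifting container; stop rather than loop forever. Comparing
      // string representations is the "dumb" (and slow) part.
      if (Types_in_progress.find(key) != Types_in_progress.end()) return;
      Types_in_progress.insert(key);
      // ... compute element sizes and offsets, recursing into element types ...
      Types_in_progress.erase(key);
    }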
== more notes and details
I've been struggling for the past week now to back out of a bad design
decision, a premature optimization from the early days: storing atoms
directly in the 'value' slot of a cons cell rather than creating a
special 'atom' cons cell and storing it on the 'left' slot. In other
words, if a cons cell looks like this:
      o
    / | \
 left val right

..then the type_tree (a b c) used to look like this (before this
commit):

   o
   | \
   a  o
      | \
      b  o
         | \
         c  null

..rather than like this 'classic' approach to s-expressions which never
mixes val and right (which is what we now have):

      o
     / \
    o   o
    |  / \
    a o   o
      |  / \
      b o   null
        |
        c
The old approach made several operations more complicated, most recently
the act of replacing a (possibly atom/leaf) sub-tree with another. That
was the final straw that got me to realize the contortions I was going
through to save a few type_tree nodes (cons cells).
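For concreteness, here's how (a b c) gets built under each scheme, using a stripped-down cons cell (a sketch; Mu's real type_tree keeps more fields, such as the type's name and ordinal, and these names are illustrative):

    #include <cstddef>
    #include <string>
    using std::string;

    // A stripped-down cons cell in the spirit of type_tree.
    struct node {
      string value;                                                 // atom payload
      node* left;
      node* right;
      node(const string& v) : value(v), left(NULL), right(NULL) {}  // leaf
      node(node* l, node* r) : left(l), right(r) {}                 // interior cell
    };

    node* abc_old_scheme() {
      // Old scheme: each cell carries an atom in 'value' *and* a 'right' pointer,
      // mixing val and right in the same node.
      node* c = new node("c");
      node* b = new node("b");  b->right = c;
      node* a = new node("a");  a->right = b;
      return a;
    }

    node* abc_new_scheme() {
      // New 'classic' scheme: interior cells only ever use left/right; every atom
      // lives in its own leaf node hanging off a 'left' pointer.
      return new node(new node("a"),
             new node(new node("b"),
             new node(new node("c"), NULL)));
    }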
Switching to the new approach was hard partly because I've been using
the old approach for so long and type_tree manipulations had pervaded
everything. Another issue I ran into was the realization that my layers
were not cleanly separated. Key parts of early layers (precomputing type
metadata) existed purely for far later ones (shape-shifting types).
Layers I got repeatedly stuck at:
1. the transform for precomputing type sizes (layer 30)
2. type-checks on merge instructions (layer 31)
3. the transform for precomputing address offsets in types (layer 36)
4. replace operations in supporting shape-shifting recipes (layer 55)
After much thrashing I finally noticed that it wasn't the entirety of
these layers that was giving me trouble, but just the type metadata
precomputation, which had bugs that weren't manifesting until 30 layers
later. Or, worse, when loading .mu files before any tests had had a
chance to run. A common failure mode was running into types at run time
that I hadn't precomputed metadata for at transform time.
Digging into these bugs got me to realize that what I had before wasn't
really very good, but a half-assed heuristic approach that did a whole
lot of extra work precomputing metadata for utterly meaningless types
like `((address number) 3)` which just happened to be part of a larger
type like `(array (address number) 3)`.
So, I redid it all. I switched the representation of types (because the
old representation made unit tests difficult to retrofit) and added unit
tests to the metadata precomputation. I also made layer 30 only do the
minimal metadata precomputation it needs for the concepts introduced
until then. In the process, I also made the precomputation more correct
than before, and added hooks in the right place so that I could augment
the logic when I introduced shape-shifting containers.
== lessons learned
There are several levels of hygiene when it comes to layers:
1. Every layer introduces precisely what it needs and in the simplest
way possible. If I was building an app until just that layer, nothing
would seem over-engineered.
2. Some layers foreshadow features in future layers. Sometimes
this is ok. For example, layer 10 foreshadows containers and arrays and
so on without actually supporting them. That is a net win because it
lets me lay out the core of Mu's data structures in one place. But
if the foreshadowing gets too complex things get nasty, not least
because it can be hard to write unit tests for features before you
provide the plumbing to visualize and manipulate them.
3. A layer introduces features that are tested only in later layers.
4. A layer introduces features with tests that are invalidated in
later layers. (This I knew from early on to be an obviously horrendous
idea.)
Summary: avoid Level 2 (foreshadowing layers) as much as possible.
Tolerate it indefinitely for small things where the code stays simple
over time, but become strict again when things start to get more
complex.
Level 3 is mostly a net loss, but sometimes it can be expedient (a real
case of the usually grossly over-applied term "technical debt"), and
it's better than the conventional baseline of no layers and no
scenarios. Just clean it up as soon as possible.
Definitely avoid Level 4 at all times.
== minor lessons
Avoid unit tests for trivial things; write scenarios in context as much as
possible. But within those margins unit tests are fine. Just introduce them
before any scenarios (commit 3297).
Reorganizing layers can be easy. Just merge layers for starters! Punt on
resplitting them in some new way until you've gotten them to work. This is the
wisdom of Refactoring: small steps.
What made it hard was not wanting to merge *everything* between layers 30
and 55. The eventual insight was realizing I just needed to move those two
full-strength transforms and nothing else.
| |
Stop inlining functions, because inlining will complicate separate
compilation. This also simplifies the code without impacting performance.
| |
Commit 3171, which added '--trace', broke 'Save_trace'.
| |
Prefer preincrement operators wherever possible. Old compilers used to be
better at optimizing them. Even if we don't care about performance, it's
useful to make unary operators look like unary operators wherever possible,
and to distinguish the 'statement form', which doesn't care about the value
of the expression, from the postincrement form, which usually increments as
a side effect in some larger computation (and so is worth avoiding except in
some common idioms, or perhaps even there).
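A small illustration of the convention (not taken from the codebase):

    #include <vector>
    using std::vector;

    int sum_all(const vector<int>& data) {
      int sum = 0;
      // statement form: the expression's value is unused, so prefer preincrement
      for (int i = 0; i < (int)data.size(); ++i)
        sum += data[i];
      return sum;
    }

    void copy_string(const char* src, char* dest) {
      int s = 0, d = 0;
      // postincrement as part of a larger computation: the old value is consumed
      while (src[s] != '\0')
        dest[d++] = src[s++];
      dest[d] = '\0';
    }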
| |
| |
I started out incredibly lax about running past errors (I even used to
call them 'warnings' when I started Mu), but I've been gradually seeing
the wisdom of Go and Racket in refusing to run code if it doesn't pass
basic integrity checks (such as using a literal as an address).
Go is right to have no warnings, only errors. But where Go goes wrong is
in even caring about unused variables.
Racket and other languages perform more aggressive integrity checks so
that they can optimize more aggressively, and I'm starting to realize I
don't know enough to disagree with them.
| |
| |
| |
| |
Never mind, just hold your nose and replace that function parameter
with a global variable.
This may not always be the solution to the problem of layers being
unable to add parameters and arguments, but it works well here, and it's
not clear the global will cause any problems.
| |
Replace an integer with a boolean across two layers of function calls.
It has long been one of the ugliest consequences of my approach with
layers that functions might need to be introduced with unnecessary
arguments simply because we have no clean way to add parameters to a
function definition after the fact -- or to add the default argument
corresponding to that parameter in calls. This problem is exacerbated by
the redundant argument having to be passed in through multiple layers of
functions. In this instance:
In layer 20 we define write_memory with an argument called
'saving_instruction_products' which isn't used yet.
In layer 36 we reveal that we use this argument in a call to
should_update_refcounts_in_write_memory() -- where it is again not used
yet.
Layer 43 finally clarifies what we're shooting for:
a) In general when we need to update some memory, we always want to
update refcounts.
b) The only exception is when we're reclaiming locals in a function
that set up its stack frame using 'local-scope' (signalling that it
wants immediate reclamation). At that point we avoid decrementing
refcounts of 'escaping' addresses that are being returned, and we also
avoid incrementing refcounts of products in the caller instruction.
The latter case is basically why we need this boolean and its dance
across 3 layers.
In summary, write_memory() needs to update refcounts except if:
we're writing products for an instruction,
the instruction is not a primitive, and
the (callee) recipe for the instruction starts with 'local-scope'.
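Put as code, the rule looks roughly like this (a sketch with stand-in types; only should_update_refcounts_in_write_memory() is a real name from the commit message):

    #include <string>
    using std::string;

    // Minimal stand-in types so the sketch compiles; Mu's real structs differ.
    struct instruction { string operation; bool is_primitive; };
    struct recipe      { bool starts_with_local_scope; };

    // Update refcounts on every write, *except* when saving the products of a
    // call to a non-primitive recipe that begins with 'local-scope' (such a
    // callee reclaims its locals immediately).
    bool should_update_refcounts_in_write_memory(bool saving_instruction_products,
                                                 const instruction& inst,
                                                 const recipe& callee) {
      if (!saving_instruction_products) return true;  // not an instruction's products
      if (inst.is_primitive) return true;             // primitives always update
      return !callee.starts_with_local_scope;
    }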
| |
| |
The edit/ app without tracing turned on takes 22s to load up a
reasonably complex file and run 12 scenarios. Turn on tracing, and it
takes 68s. Turn on tracing just for app-level stashes, and it still
takes 40s. That's too much overhead, so let's keep it turned off by
default but give students an option to enable it at the commandline.
| |
Complicated logic to not run core tests. I only want to disable core
tests if:
a) I'm changing CFLAGS on the commandline (usually to disable
optimizations, causing tests to run slower in a debug cycle)
b) I'm not printing a help message (either with just 'mu' or
'mu --help')
c) I'm loading other files besides just the core.
Under these circumstances I only want to run tests in the files
explicitly loaded at the commandline.
This is all pretty hairy, in spite of my attempts to document it in
four different places. I might end up taking it all out the first time I
need to run core tests under all these conditions.
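For concreteness, the combination boils down to something like this (a sketch; all names are invented):

    // Skip core tests only when all three conditions above hold; otherwise run them.
    bool should_run_core_tests(bool cflags_overridden,    // (a) CFLAGS changed on the commandline
                               bool printing_help,        // (b) printing a help message
                               bool loading_extra_files)  // (c) files loaded beyond the core
    {
      bool disable_core_tests = cflags_overridden && !printing_help && loading_extra_files;
      return !disable_core_tests;
    }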
| |
| |
I'd been toying with this idea for some time now given how large the
repo had been growing. The final straw was noticing that people cloning
the repo were having to wait *5 minutes*! That's not good, particularly
for a project with 'tiny' in its description. After purging .traces/,
clone time drops to 7 seconds in my tests.
Major issue: some commits refer to .traces/ but don't really change
anything there. That could get confusing :/
Minor issues:
a) I've linked inside commits on GitHub like a half-dozen times online
or over email. Those links are now liable to eventually break. (I seem
to recall GitHub keeps them around as long as they get used at least
once every 60 days, or something like that.)
b) Numbering of commits is messed up because some commits only had
changes to the .traces/ sub-directory.
| |
| |
| |
| |
Standardize quotes around reagents in error messages.
I'm sure there are still issues. For example, in the messages for
type-checking 'copy' I'm not putting quotes around reagents, because in
layer 60 I end up creating dilated reagents, and then it's a bit much to
have quotes and (two kinds of) brackets. But I'm sure I'm doing that
somewhere..
| |
More thorough redo of commit 2767 (Mar 12), which was undone in commit
2810 (Mar 24). It's been a long slog. Next step: write a bunch of Mu
code in the edit/ app in search of bugs.
| |
More consistent labeling of waypoints. Use types only when you need to
distinguish between function overloads. Otherwise just use variable
names, unless it's truly not apparent what they are (like the fact that
the result is a recipe in "End Rewrite Instruction").
| |
| |
This continues a line of thought sparked in commit 2831. I spent a while
trying to avoid calling size_of() at transform-time, but there's no
getting around the fact that translating names to addresses requires
knowing how much space they need.
This raised the question of what happens if the size of a container
changes after a recipe using it is already transformed. I could go down
the road of trying to detect such situations and redoing work, but that
massively goes against the grain of my original design, which assumed
that recipes don't get repeatedly transformed. Even though we call
transform_all() in every test, in a non-testing run we should be loading
all code and calling transform_all() just once to 'freeze-dry'
everything.
But even if we don't want to support multiple transforms it's worth
checking that they don't occur. This commit does so in just one
situation. There are likely others.
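The check could look roughly like this (a sketch; the flag and globals here are assumptions, not Mu's actual fields):

    #include <iostream>
    #include <map>
    #include <string>
    using std::map; using std::string;

    // Hypothetical: each recipe remembers whether it has already been
    // transformed, and transforming it a second time is flagged as an error.
    struct recipe { string name; bool already_transformed; };
    map<string, recipe> Recipe;

    void transform_all() {
      for (map<string, recipe>::iterator p = Recipe.begin(); p != Recipe.end(); ++p) {
        if (p->second.already_transformed) {
          std::cerr << "recipe " << p->first << " is being transformed a second time\n";
          continue;
        }
        // ... run all transforms on p->second ...
        p->second.already_transformed = true;
      }
    }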
| |
Now that we no longer have non-shared addresses, we can just always
track refcounts for all addresses.
Phew!
| |
Layers 0-29 are now a complete rudimentary platform except for pointers
and indirection.
| |
Current plan:
- get rid of get-address and index-address, and therefore any address
that is not address:shared
- rename address:shared to just 'shared'
| |
Move all bounds checks for types and recipes to one place.
| |
Show more thorough information about instructions in the trace, but keep
the original form in error messages.
| |
Several times now I've wasted time tracking down a failing test only to
eventually remember that order of definition matters in tests even
though it doesn't elsewhere -- I've been having tests implicitly start
running the first function defined in them. Now I stop doing that if a
test defines a function called 'main', and just start the test at main
instead.
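The new rule for picking a test's starting point, roughly (a sketch with invented names):

    #include <string>
    #include <vector>
    using std::string; using std::vector;

    // Start the test at 'main' if the test defines one; otherwise fall back to
    // the old behavior of starting at the first function defined in the test.
    string starting_recipe(const vector<string>& recipes_defined_in_test) {
      for (size_t i = 0; i < recipes_defined_in_test.size(); ++i)
        if (recipes_defined_in_test[i] == "main")
          return "main";
      return recipes_defined_in_test.at(0);
    }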
| |
As outlined at the end of 2797. This worked out surprisingly well. Now
the snapshotting code touches fewer layers, and it's much better
behaved, with less need for special-case logic, particularly inside
run_interactive(). 30% slower, but should hopefully not cause any more
bugs.
| |
When I started to make channels generic in 2784, I introduced an
infinite loop when running until just layer 72. This happens because
transform_all() can create new recipes while specializing, and these
were getting added to Recently_added_recipes and then deleted. I didn't
notice until now because layer 91 was clearing Recently_added_recipes
soon after.
Solution: when transform_all() is called after load_permanently(), have it
also clear Recently_added_recipes, just like load_permanently() does.
No transforms yet create new types. If they do we'll need to start
handling the other Recently_added_* variables as well.
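A sketch of the arrangement after the fix (stand-in declarations; the real functions and globals differ):

    #include <string>
    #include <vector>
    using std::string; using std::vector;

    // Sketch only; these stand in for Mu's real globals and functions.
    vector<string> Recently_added_recipes;          // recipes created since the last checkpoint
    void load_permanently(const string& filename);  // loads code and clears the list itself
    void transform_all();                           // may specialize generics, creating new recipes

    void startup(const vector<string>& files) {
      for (size_t i = 0; i < files.size(); ++i)
        load_permanently(files[i]);
      transform_all();
      // The fix: recipes created while specializing inside transform_all() are
      // also permanent, so forget them here just like load_permanently() does.
      Recently_added_recipes.clear();
    }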
I should rethink this whole approach of tracking changes to global state
while running tests, and undoing such changes. Ideally I wouldn't need
to manually track changes for each global. I should just encapsulate all
global state in an object, copy it for each test and delete the copy
when I'm done.
| |
| |
This should eradicate the issue of 2771.
| |
| |
Get rid of a local variable that was only serving to make the code for
reclaiming allocated memory unreadable.
| |
| |
I'm dropping all mention of 'recipe' terminology from the Readme. That
way I hope to avoid further bike-shedding discussions while I very
slowly decide on the right terminology with my students.
I could be smarter in my error messages and use 'recipe' when code uses
it and 'function' otherwise. But what about other words like ingredient?
It would all add complexity that I'm not yet sure is worthwhile. But I
do want separate experiences for veteran programmers reading about Mu on
GitHub and for people learning programming using Mu.
| |
| |
| |
| |
This is the easy one. The remaining ones are like phantoms popping up
and dying at random. One thing I know is that they all have to do with
tangling. Always implicated is the line in the tangle layer where
instructions are loaded and inserted into After_fragments.
| |
Stack of plans for cleaning up replace_type_ingredients() and a couple
of other things, from main problem to subproblems:
  - include type names in the type_tree rather than in the separate properties vector
  - make type_tree and string_tree real cons cells, with separate leaf nodes
  - redo the vocabulary for dumping various objects:
      - do we really need to_string and debug_string?
      - can we have a version with *all* information?
      - can we have to_string not call debug_string?
This commit nibbles at the edges of the final task, switching from
member-method syntax to global functions like almost everything else. I'm
mostly using methods just for STL in this project.
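The style in a nutshell (an illustrative toy, not the project's actual signatures):

    #include <string>
    using std::string;

    // Minimal stand-in type for illustration.
    struct string_tree { string value; };

    // Free-function style used across the project, instead of x.to_string():
    string to_string(const string_tree& x) {
      return x.value;
    }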
| |
| |
More tweaks to the trace after all my debugging.
| |
Somehow this never transferred over from the Arc version until now.
| |
I'd feared that the refcount errors in the previous commit meant there
was a bug in my ref-counting, so I temporarily used new variables rather
than reusing existing ones. But it turns out the one remaining place
memory corruption can happen is when recipes don't use default-scope and
so end up sharing memory. Don't I have a warning for this?
| |
Also start auto-abandoning addresses when their refcount returns to 0.
I'm mixing this auto-abandon support with my earlier/hackier support for
automatically abandoning default-space created by 'local-scope'. We need
to flesh out the story for automatically reclaiming memory using
C++-style destructors.
But that's a value-add. Memory corruption is far more important to avoid
than memory *leaks*.
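A sketch of the auto-abandon step (this assumes the refcount is stored at the start of each allocation, which is an assumption of the sketch; the helper names are invented):

    #include <map>
    using std::map;

    map<int, int> Memory;                 // stand-in for Mu's sparse memory
    void abandon(int address, int size);  // hypothetical: return memory to the free list
    int payload_size(int address);        // hypothetical: size of the allocation at 'address'

    // Decrement the refcount stored with an allocation; once it drops to 0 the
    // address can never be reached again, so reclaim it immediately.
    void decrement_refcount(int address) {
      int& refcount = Memory[address];    // assumes the refcount sits at the allocation's start
      --refcount;
      if (refcount == 0)
        abandon(address, payload_size(address));
    }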