More consistent definitions for jump targets and waypoints.
1. A label is a word that starts with something other than a letter,
digit, or '$'.
2. A waypoint is a label that starts with '<' and ends with '>'. There
are no restrictions on waypoints: a recipe can define any number of
them, and recipes can contain duplicate waypoints.
3. The special labels '{' and '}' can also be duplicated any number of
times in a recipe. The only constraint on them is that they have to
balance within a recipe: every '{' must be followed by a matching '}'.
4. All other labels are 'jump targets'. You can't have duplicate jump
targets in a recipe; that would make jumps ambiguous. (A sketch of this
classification follows the list.)
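A minimal sketch of these rules in C++ (the names here are hypothetical,
not taken from the Mu sources):

  #include <cctype>
  #include <string>
  using std::string;

  enum label_kind { NOT_A_LABEL, WAYPOINT, BRACE, JUMP_TARGET };

  // Classify a word according to rules 1-4 above.
  label_kind classify_label(const string& word) {
    if (word.empty()) return NOT_A_LABEL;
    char c = word.at(0);
    // rule 1: labels start with something other than a letter, digit or '$'
    if (isalnum(static_cast<unsigned char>(c)) || c == '$') return NOT_A_LABEL;
    // rule 2: waypoints are delimited by '<' and '>'; duplicates are fine
    if (word.size() >= 2 && word.at(0) == '<' && word.at(word.size()-1) == '>')
      return WAYPOINT;
    // rule 3: braces may repeat but must balance (checked separately)
    if (word == "{" || word == "}") return BRACE;
    // rule 4: everything else must be unique within its recipe
    return JUMP_TARGET;
  }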
Allow type-trees to be ordered in some consistent fashion. This could be
quite inefficient, since we often end up comparing the four sub-trees of
the two arguments in four different ways. So far it isn't much of a time
sink.
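One way such an ordering might look, as a sketch over a simplified
type_tree (the real struct carries more state):

  #include <string>

  struct type_tree {  // simplified
    std::string name;
    type_tree* left;
    type_tree* right;
  };

  // Strict weak ordering; null sorts before non-null.  Note how comparing
  // two nodes can recurse into both pairs of sub-trees twice (a<b, then
  // b<a), which is the potential inefficiency mentioned above.
  bool lt(const type_tree* a, const type_tree* b) {
    if (a == b) return false;
    if (a == nullptr) return true;
    if (b == nullptr) return false;
    if (a->name != b->name) return a->name < b->name;
    if (lt(a->left, b->left)) return true;
    if (lt(b->left, a->left)) return false;
    return lt(a->right, b->right);
  }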
Turns out the slowdown reported in 3309 was almost entirely due to
commit 3305: supporting extremely small floating point numbers.
Rip out everything to fix one failing unit test (commit 3290; type
abbreviations).
This commit does several things at once that I couldn't come up with a
clean way to unpack:
A. It moves to a new representation for type trees without changing
the actual definition of the `type_tree` struct.
B. It adds unit tests for our type metadata precomputation, so that
errors there show up early and in a simpler setting rather than dying
when we try to load Mu code.
C. It fixes a bug, guarding against infinite loops when precomputing
metadata for recursive shape-shifting containers. To do this it uses a
dumb way of comparing type_trees: comparing their string representations
(sketched below). That is likely incredibly inefficient.
Perhaps due to (C), this commit has made Mu incredibly slow. Running all
tests for the core and the edit/ app now takes 6.5 minutes rather than
3.5 minutes.
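The 'dumb' comparison from (C) might look like this sketch (the
serializer here is hypothetical):

  #include <string>
  using std::string;

  struct type_tree {  // simplified, as in the sketches elsewhere
    string name;
    type_tree* left;
    type_tree* right;
  };

  // Hypothetical s-expression-style serializer.
  string to_string(const type_tree* t) {
    if (t == nullptr) return "()";
    if (t->left == nullptr && t->right == nullptr) return t->name;  // atom
    return "(" + to_string(t->left) + " . " + to_string(t->right) + ")";
  }

  // Compare type_trees by comparing string representations: simple, and
  // good enough to break cycles, but it walks and allocates on every call.
  bool deeply_equal(const type_tree* a, const type_tree* b) {
    return to_string(a) == to_string(b);
  }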
== more notes and details
I've been struggling for the past week now to back out of a bad design
decision, a premature optimization from the early days: storing atoms
directly in the 'value' slot of a cons cell rather than creating a
special 'atom' cons cell and storing it on the 'left' slot. In other
words, if a cons cell looks like this:
      o
    / | \
 left val right
..then the type_tree (a b c) used to look like this (before this
commit):
  o
  | \
  a  o
     | \
     b  o
        | \
        c  null
..rather than like this 'classic' approach to s-expressions which never
mixes val and right (which is what we now have):
    o
   / \
  o   o
  |  / \
  a o   o
    |  / \
    b o   null
      |
      c
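In struct terms the change is roughly this sketch; per (A), the actual
`type_tree` definition is unchanged, only the invariant on it:

  #include <string>

  // The same three-slot cell serves both representations.
  struct type_tree {
    std::string name;   // meaningful only on atoms
    type_tree* left;
    type_tree* right;
  };

  // New invariant: atoms appear only as leaf cells hanging off a 'left'
  // slot; interior cells never mix a value with a 'right' pointer.
  bool is_atom(const type_tree* t) {
    return t != nullptr && t->left == nullptr && t->right == nullptr;
  }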
The old approach made several operations more complicated, most recently
the act of replacing a (possibly atom/leaf) sub-tree with another. That
was the final straw that got me to realize the contortions I was going
through to save a few type_tree nodes (cons cells).
Switching to the new approach was hard, partly because I'd been using
the old approach for so long that type_tree manipulations had pervaded
everything. Another issue I ran into was the realization that my layers
were not cleanly separated: key parts of early layers (precomputing type
metadata) existed purely for far later ones (shape-shifting types).
Layers I got repeatedly stuck at:
1. the transform for precomputing type sizes (layer 30)
2. type-checks on merge instructions (layer 31)
3. the transform for precomputing address offsets in types (layer 36)
4. replace operations in supporting shape-shifting recipes (layer 55)
After much thrashing I finally noticed that it wasn't the entirety of
these layers that was giving me trouble, but just the type metadata
precomputation, which had bugs that weren't manifesting until 30 layers
later. Or, worse, when loading .mu files before any tests had had a
chance to run. A common failure mode was running into types at run time
that I hadn't precomputed metadata for at transform time.
Digging into these bugs got me to realize that what I had before wasn't
really very good: it was a half-assed heuristic approach that did a
whole lot of extra work precomputing metadata for utterly meaningless
types like `((address number) 3)`, which just happened to be part of a
larger type like `(array (address number) 3)`.
So, I redid it all. I switched the representation of types (because the
old representation made unit tests difficult to retrofit) and added unit
tests to the metadata precomputation. I also made layer 30 only do the
minimal metadata precomputation it needs for the concepts introduced
until then. In the process, I also made the precomputation more correct
than before, and added hooks in the right place so that I could augment
the logic when I introduced shape-shifting containers.
== lessons learned
There are several levels of hygiene when it comes to layers:
1. Every layer introduces precisely what it needs, in the simplest way
possible. If I were building an app up to just that layer, nothing would
seem over-engineered.
2. Some layers foreshadow features in future layers. Sometimes this is
ok. For example, layer 10 foreshadows containers and arrays and so on
without actually supporting them. That is a net win because it lets me
lay out the core of Mu's data structures in one place. But if the
foreshadowing gets too complex, things get nasty, not least because it
can be hard to write unit tests for features before you provide the
plumbing to visualize and manipulate them.
3. A layer introduces features that are tested only in later layers.
4. A layer introduces features whose tests are invalidated in later
layers. (This I knew from early on to be an obviously horrendous idea.)
Summary: avoid Level 2 (foreshadowing layers) as much as possible.
Tolerate it indefinitely for small things where the code stays simple
over time, but become strict again when things start to get more
complex.
Level 3 is mostly a net loss, but sometimes it can be expedient (a real
case of the usually grossly over-applied term "technical debt"), and
it's better than the conventional baseline of no layers and no
scenarios. Just clean it up as soon as possible.
Definitely avoid Level 4 at all times.
== minor lessons
Avoid unit tests for trivial things; write scenarios in context as much
as possible. But within those margins unit tests are fine. Just
introduce them before any scenarios (commit 3297).
Reorganizing layers can be easy. Just merge layers for starters! Punt on
resplitting them in some new way until you've gotten them to work. This is the
wisdom of Refactoring: small steps.
What made it hard was not wanting to merge *everything* between layers
30 and 55. The eventual insight was realizing I just needed to move
those two full-strength transforms and nothing else.
Thanks Ella Couch for pointing out that Mu was lying when debugging
small numbers.
  def main [
    local-scope
    x:number <- copy 1
    {
      x <- divide x, 2
      $print x, 10/newline
      loop  # until SIGFPE
    }
  ]
Stop inlining functions because that will complicate separate
compilation. It also simplifies the code without impacting performance.
Undo 3272. The trouble with creating a new section for constants is that
there's no good place to order it, since constants can be initialized
using globals as well as vice versa, and I don't want to add constraints
disallowing either side.
Instead, a new plan: always declare constants in the Globals section
using 'extern const' rather than just 'const', since otherwise constants
implicitly have internal linkage
(http://stackoverflow.com/questions/14894698/why-does-extern-const-int-n-not-work-as-expected).
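A minimal illustration of the linkage rule (file and variable names are
hypothetical):

  // globals.cc
  // At namespace scope, a plain 'const' has internal linkage in C++, so
  // other translation units can't link against it.  'extern' fixes that.
  extern const int Max_depth = 9999;

  // other.cc
  extern const int Max_depth;  // declaration; refers to globals.cc
  int remaining(int depth) { return Max_depth - depth; }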
Move global constants into their own section since we seem to be having
trouble linking in 'extern const' variables when manually cleaving mu.cc
into separate compilation units.
array length = number of elements
array size = number of locations occupied
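A sketch of the distinction, assuming (as in Mu) that an array stores
its length in its first location; helper names are hypothetical:

  // 'locations' are Mu's unit of memory.
  int array_length(int num_elements) {
    return num_elements;                     // length counts elements
  }
  int array_size(int num_elements, int element_size) {
    return 1 + num_elements * element_size;  // +1 for the length location
  }
  // e.g. an array of 3 numbers (element size 1): length 3, size 4.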
I'd been toying with this idea for some time now, given how large the
repo had grown. The final straw was noticing that people cloning the
repo were having to wait *5 minutes*! That's not good, particularly for
a project with 'tiny' in its description. After purging .traces/, clone
time drops to 7 seconds in my tests.
Major issue: some commits refer to .traces/ but don't really change
anything there. That could get confusing :/
Minor issues:
a) I've linked to spots inside commits on GitHub a half-dozen times,
online or over email. Those links are now liable to eventually break. (I
seem to recall GitHub keeps them around as long as they get used at
least once every 60 days, or something like that.)
b) Numbering of commits is messed up, because some commits only had
changes to the .traces/ sub-directory.
More reorganization in preparation for implementing recursive abandon().
Refcounts are getting incredibly hairy. I need to juggle containers
containing other containers, and containers *pointing* to other
containers. For a while I considered getting rid of address_element_info
entirely and just going by types for every single
update_refcount. But that's definitely more work, and it's unclear that
things will be cleaner/shorter/simpler. I haven't measured the speedup,
but it seems worth optimizing every pointer copy to make sure we aren't
manipulating types at runtime.
The key insight now is a) to continue to compute information about
nested containers at load time, because that's the common case when
updating refcounts, but b) to compute information about *pointed* values
at run-time, because that's the uncommon case.
As a result, we're going to cheat in the interpreter and use type
information at runtime just for abandon(), because the corresponding
task when we get to a compiler will be radically different. It will
still be tractable, though.
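A rough sketch of that split, with hypothetical names and a hypothetical
memory layout (refcount assumed to live at the pointed-to location):

  #include <vector>
  using std::vector;

  struct type_tree;  // as sketched earlier in this log

  // Computed once at load time per container type: the offsets of every
  // address inside it, including addresses in nested containers.
  struct container_metadata {
    vector<int> address_offsets;
  };

  // Common case, run on every copy of a container: no type analysis at
  // run time, just walk the precomputed offsets.
  void increment_refcounts(const container_metadata& m, int base,
                           vector<int>& memory) {
    for (int offset : m.address_offsets) {
      int address = memory.at(base + offset);
      if (address != 0) ++memory.at(address);  // refcount at the target
    }
  }

  // Uncommon case, run only by abandon(): the *pointed* type is known
  // only at run time, so walk the type tree directly (the
  // interpreter-only cheat described above).
  void decrement_refcounts_of_pointed_value(const type_tree* pointed_type,
                                            int base, vector<int>& memory);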
It's a bit of a trade-off, because we need to store copies of container
metadata in each reagent (to support shape-shifting containers), and
metadata is not lightweight and will get heavier. But it'll become
unambiguously useful when we switch to a compiler.
This continues a line of thought sparked in commit 2831. I spent a while
trying to avoid calling size_of() at transform-time, but there's no
getting around the fact that translating names to addresses requires
knowing how much space they need.
This raised the question of what happens if the size of a container
changes after a recipe using it is already transformed. I could go down
the road of trying to detect such situations and redoing work, but that
massively goes against the grain of my original design, which assumed
that recipes don't get repeatedly transformed. Even though we call
transform_all() in every test, in a non-testing run we should be loading
all code and calling transform_all() just once to 'freeze-dry'
everything.
But even if we don't want to support multiple transforms, it's worth
checking that they don't occur. This commit does so in just one
situation; there are likely others.
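One cheap way to check for repeated transforms, sketched with a
hypothetical per-recipe flag:

  #include <cassert>
  #include <map>
  #include <string>

  struct recipe {
    std::string name;
    bool transformed = false;  // hypothetical flag
  };

  std::map<int, recipe> Recipe;  // keyed by recipe ordinal

  // 'Freeze-dry' everything exactly once; in a non-testing run this
  // assert should never fire.
  void transform_all() {
    for (auto& p : Recipe) {
      recipe& r = p.second;
      assert(!r.transformed && "recipe transformed more than once");
      // ... run the full transform pipeline on r ...
      r.transformed = true;
    }
  }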
Move all bounds checks for types and recipes to one place.
Show more thorough information about instructions in the trace, but keep
the original form in error messages.
As outlined at the end of 2797. This worked out surprisingly well. Now
the snapshotting code touches fewer layers, and it's much better
behaved, with less need for special-case logic, particularly inside
run_interactive(). It's 30% slower, but should hopefully not cause any
more bugs.
This should eradicate the issue of 2771.
To find this I spent some time trying to diagnose when it happened, but
there was no apparent pattern. I'd ended up with a small single-file .cc
and single-file .mu that reproduced one memory leak. Eventually I tried
deleting all type_tree and string_tree use from it, and lo, the leaks
vanished. I retried on all of edit/ (just loading), and the leaks
remained gone. At that point I switched tack and started looking at all
the core methods of these classes.
All my attempts at staging this change failed; I ended up with this
humongous commit that took all day and involved debugging three
monstrous bugs. Two of
the bugs had to do with forgetting to check the type name in the
implementation of shape-shifting recipes. Bug #2 in particular would
cause core tests in layer 59 to fail -- only when I loaded up edit/! It
got me to just hack directly on mu.cc until I figured out the cause
(snapshot saved in mu.cc.modified). The problem turned out to be that I
accidentally saved a type ingredient in the Type table during
specialization. Now I know that that can be very bad.
I've checked the traces for any stray type numbers (rather than names).
I also found what might be a bug from last November (labeled TODO), but
we'll verify after this commit.
Start using type names from the type tree rather than the property tree
in most places. Hopefully the only occurrences of
'properties.at(0).second' left are ones where we're managing it. Next we
can stop writing to it.
Include type names in the type tree, though we aren't using them yet.
But I realize that this won't actually streamline
replace_type_ingredients(), which needs that 'if (curr->left)
curr = curr->left' dance for an unrelated reason. So there's no
justification for entering into this big refactoring.
It's only for transient debugging.
to_string(): relatively stable fields only; used by trace()
debug_string(): all fields; for debugging
inspect(): a form that can be parsed back later
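A sketch of the convention on a toy struct (field names hypothetical):

  #include <sstream>
  #include <string>

  struct reagent {
    std::string name;
    int value;
    int cached_offset;  // volatile field, omitted from to_string()

    // Relatively stable fields only, so traces stay reproducible.
    std::string to_string() const {
      return name;
    }
    // All fields, for debugging.
    std::string debug_string() const {
      std::ostringstream out;
      out << name << "/value:" << value << "/offset:" << cached_offset;
      return out.str();
    }
    // A form that can be parsed back later.
    std::string inspect() const {
      std::ostringstream out;
      out << name << ":" << value;
      return out.str();
    }
  };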