| Commit message | Author | Age | Files | Lines |
|
This has taken me almost 6 weeks :(
|
Standardize use of type ingredients some more.
|
Implement literal constants before type abbreviations, reducing some
unnecessary tangling.
|
They uncovered one bug: in edit/003-shortcuts.mu
<scroll-down> was returning 0 for an address in one place where I
thought it was returning 0 for a boolean.
Now we've eliminated this bad interaction between tangling and punning
literals.
|
'deaddress' is a terrible name. Hopefully I'll come up with something
better.
|
I've been working on this slowly over several weeks, but it's too hard
to support 0 as the null value for addresses. I constantly have to add
exceptions for the scalar value corresponding to an address type (which
now occupies 2 locations). The final straw is the test for 'reload':
x:num <- reload text
'reload' returns an address. But there's no way to know that for
arbitrary instructions.
New plan: let's put this off for a bit and first create support for
literals. Then use 'null' instead of '0' for addresses everywhere. Then
it'll be easy to just change what 'null' means.
|
Support explicit conversions from number to character.
|
Narrow the scope of implicit type conversions. Now the only implicit
conversions among scalars are from booleans and characters to numbers.
In particular, we want to make this an error:
x:character <- new [abc]
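A rough sketch of the narrowed rule, using a hypothetical helper rather
than the actual checker code: other scalars may still flow into a number
implicitly, but nothing is implicitly converted *to* a character or
boolean anymore.
  #include <cassert>
  #include <string>

  // hypothetical helper, for illustration only: 'to' receives the value,
  // 'from' is the type of the value being stored
  bool scalars_coercible(const std::string& to, const std::string& from) {
    if (to == from) return true;
    return to == "number"  // only numbers accept other scalars implicitly
        && (from == "boolean" || from == "character");
  }

  int main() {
    assert(scalars_coercible("number", "character"));   // still fine
    assert(!scalars_coercible("character", "number"));  // now an error
    assert(!scalars_coercible("character", "address")); // like the 'new' example above
  }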
|
Fix CI.
|
Loosen type-checking slightly to accommodate type abbreviations.
|
Fix CI.
|
Thanks Ella Couch for running into these.
|
Be more disciplined about tagging 2 different concepts in the codebase:
a) Use the phrase "later layers" to highlight places where a layer
doesn't have the simplest possible self-contained implementation.
b) Use the word "hook" to point out functions that exist purely to
provide waypoints for extension by future layers.
Since both of these only make sense in the pre-tangled representation of
the codebase, I'm using '//:' and '#:' comments so that they get
stripped out of tangled output.
(Though '#:' comments still make it to tangled output at the moment.
Let's see if we use it enough to be worth supporting. Scenarios are
pretty unreadable in tangled output anyway.)
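As a concrete illustration (an invented fragment, not code from any
layer), a '//:' comment and an empty hook might look like this in a C++
layer:
  //: Rationale comments like this one only make sense before tangling; the
  //: tangler strips '//:' lines so they never reach the tangled output.

  // hook: deliberately a no-op in this layer; it exists purely so that later
  // layers have a waypoint to extend (the name here is invented)
  void after_instruction_parsed() {
  }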
|
This was a large commit, and most of it is a follow-up to commit 3309,
undoing what is probably the final ill-considered optimization I added
to s-expressions in Mu: I was always representing (a b c) as (a b . c),
etc. That is now gone.
Why did I need to take it out? The key problem was the error silently
ignored in layer 30. That was causing size_of("(type)") to silently
return garbage rather than loudly complain (assuming 'type' was a simple
type).
But to take it out I had to modify types_strictly_match (layer 21) to
actually strictly match and not just do a prefix match.
In the process of removing the prefix match, I had to make extracting
recipe types from recipe headers more robust. So far it only matched the
first element of each ingredient's type; these matched:
(recipe address:number -> address:number)
(recipe address -> address)
I didn't notice because the dotted notation optimization was actually
representing this as:
(recipe address:number -> address number)
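To make the distinction concrete, here is a hand-written sketch (not the
code in layer 21) of why a strict match must keep comparing until both
trees are exhausted, where a prefix match stops as soon as the shorter
header runs out:
  #include <cassert>
  #include <string>

  // simplified stand-in for the real type_tree, just enough to show the point
  struct type_tree {
    std::string name;
    const type_tree* left;
    const type_tree* right;
    type_tree(const std::string& n, const type_tree* l = 0, const type_tree* r = 0)
      : name(n), left(l), right(r) {}
  };

  bool strictly_match(const type_tree* to, const type_tree* from) {
    if (!to && !from) return true;   // both exhausted: match
    if (!to || !from) return false;  // one tree is longer: not a strict match
    return to->name == from->name
        && strictly_match(to->left, from->left)
        && strictly_match(to->right, from->right);
  }

  int main() {
    type_tree number("number");
    type_tree address("address");                     // plain 'address'
    type_tree address_number("address", 0, &number);  // 'address:number'
    assert(!strictly_match(&address, &address_number));  // a prefix match would say yes
    assert(strictly_match(&address_number, &address_number));
  }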
---
One final little thing in this commit: I added an alias for 'assert'
called 'assert_for_now', to indicate that I'm not sure something's
really an invariant, that it might be triggered by (invalid) user
programs, and so require more thought on error handling down the road.
But this may well be an ill-posed distinction. It may be overwhelmingly
uneconomic to continually distinguish between model invariants and error
states for input. I'm starting to grow sympathetic to Google Analytics's
recent approach of just banning assertions altogether. We'll see..
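The alias itself is trivial; the idea is something along these lines (a
sketch, not necessarily the exact definition in this commit):
  #include <cassert>

  // same behavior as assert(); the different name just marks checks that bad
  // user programs might trip, as opposed to genuine internal invariants
  #define assert_for_now assert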
|
Don't crash on bad types.
I need to be more careful in distinguishing between the two causes of
constraint violations: bad input and internal bugs. Maybe I should
create a second assert() to indicate "this shouldn't really be an
assert, but I'm too lazy to think about it right now."
|
One more place we were missing expanding type abbreviations: inside
container definitions.
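A hypothetical sketch of the missing case (all names here are invented
for illustration): abbreviation expansion has to visit the element types
of a container definition too, not just the types appearing in
instructions and recipe headers.
  #include <map>
  #include <string>
  #include <vector>

  std::map<std::string, std::string> Type_abbreviations;  // e.g. "num" -> "number"

  std::string expand_abbreviation(const std::string& type_name) {
    std::map<std::string, std::string>::const_iterator p = Type_abbreviations.find(type_name);
    return p == Type_abbreviations.end() ? type_name : p->second;
  }

  struct container_element { std::string name, type_name; };
  struct container_definition { std::string name; std::vector<container_element> elements; };

  // the previously missed case: walk each element of a container definition
  void expand_abbreviations_in_container(container_definition& c) {
    for (size_t i = 0; i < c.elements.size(); ++i)
      c.elements.at(i).type_name = expand_abbreviation(c.elements.at(i).type_name);
  }

  int main() {
    Type_abbreviations["num"] = "number";
    container_definition point;
    container_element x;
    x.name = "x";  x.type_name = "num";
    point.elements.push_back(x);
    expand_abbreviations_in_container(point);  // x's type is now "number"
  }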
|
Rip out everything to fix one failing unit test (commit 3290; type
abbreviations).
This commit does several things at once that I couldn't come up with a
clean way to unpack:
A. It moves to a new representation for type trees without changing
the actual definition of the `type_tree` struct.
B. It adds unit tests for our type metadata precomputation, so that
errors there show up early and in a simpler setting rather than dying
when we try to load Mu code.
C. It fixes a bug, guarding against infinite loops when precomputing
metadata for recursive shape-shifting containers. To do this it uses a
dumb way of comparing type_trees, comparing their string
representations instead. That is likely incredibly inefficient.
Perhaps due to C, this commit has made Mu incredibly slow. Running all
tests for the core and the edit/ app now takes 6.5 minutes rather than
3.5 minutes.
== more notes and details
I've been struggling for the past week now to back out of a bad design
decision, a premature optimization from the early days: storing atoms
directly in the 'value' slot of a cons cell rather than creating a
special 'atom' cons cell and storing it on the 'left' slot. In other
words, if a cons cell looks like this:
        o
      / | \
  left val right
..then the type_tree (a b c) used to look like this (before this
commit):
  o
  | \
  a  o
     | \
     b  o
        | \
        c  null
..rather than like this 'classic' approach to s-expressions which never
mixes val and right (which is what we now have):
     o
    / \
   o   o
   |  / \
   a o   o
     |  / \
     b o   null
       |
       c
The old approach made several operations more complicated, most recently
the act of replacing a (possibly atom/leaf) sub-tree with another. That
was the final straw that got me to realize the contortions I was going
through to save a few type_tree nodes (cons cells).
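In struct terms the change is only an invariant, which is why the
definition of type_tree itself didn't need to change. A sketch (fields
abridged and from memory, so treat them as illustrative):
  #include <string>

  // same fields before and after this commit; only the usage rules change
  struct type_tree {
    std::string name;  // now set only on atom/leaf nodes
    int value;         // now set only on atom/leaf nodes
    type_tree* left;   // atoms hang off 'left' as their own leaf nodes
    type_tree* right;  // now always another pair or null, never an atom's payload
  };

  // old invariant: (a b c) was stored as (a b . c), so a single node could
  //   carry an atom in name/value *and* a non-null right at the same time
  // new invariant: a node is either an atom (left == right == null) or a
  //   pair (name/value unused), exactly like classic s-expressions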
Switching to the new approach was hard partly because I've been using
the old approach for so long and type_tree manipulations had pervaded
everything. Another issue I ran into was the realization that my layers
were not cleanly separated. Key parts of early layers (precomputing type
metadata) existed purely for far later ones (shape-shifting types).
Layers I got repeatedly stuck at:
1. the transform for precomputing type sizes (layer 30)
2. type-checks on merge instructions (layer 31)
3. the transform for precomputing address offsets in types (layer 36)
4. replace operations in supporting shape-shifting recipes (layer 55)
After much thrashing I finally noticed that it wasn't the entirety of
these layers that was giving me trouble, but just the type metadata
precomputation, which had bugs that weren't manifesting until 30 layers
later. Or, worse, when loading .mu files before any tests had had a
chance to run. A common failure mode was running into types at run time
that I hadn't precomputed metadata for at transform time.
Digging into these bugs got me to realize that what I had before wasn't
really very good, but a half-assed heuristic approach that did a whole
lot of extra work precomputing metadata for utterly meaningless types
like `((address number) 3)` which just happened to be part of a larger
type like `(array (address number) 3)`.
So, I redid it all. I switched the representation of types (because the
old representation made unit tests difficult to retrofit) and added unit
tests to the metadata precomputation. I also made layer 30 only do the
minimal metadata precomputation it needs for the concepts introduced
until then. In the process, I also made the precomputation more correct
than before, and added hooks in the right place so that I could augment
the logic when I introduced shape-shifting containers.
== lessons learned
There are several levels of hygiene when it comes to layers:
1. Every layer introduces precisely what it needs and in the simplest
way possible. If I was building an app until just that layer, nothing
would seem over-engineered.
2. Some layers are fore-shadowing features in future layers. Sometimes
this is ok. For example, layer 10 foreshadows containers and arrays and
so on without actually supporting them. That is a net win because it
lets me lay out the core of Mu's data structures in one place. But
if the fore-shadowing gets too complex things get nasty. Not least
because it can be hard to write unit tests for features before you
provide the plumbing to visualize and manipulate them.
3. A layer is introducing features that are tested only in later layers.
4. A layer is introducing features with tests that are invalidated in
later layers. (This I knew from early on to be an obviously horrendous
idea.)
Summary: avoid Level 2 (foreshadowing layers) as much as possible.
Tolerate it indefinitely for small things where the code stays simple
over time, but become strict again when things start to get more
complex.
Level 3 is mostly a net loss, but sometimes it can be expedient (a real
case of the usually grossly over-applied term "technical debt"), and
it's better than the conventional baseline of no layers and no
scenarios. Just clean it up as soon as possible.
Definitely avoid Level 4 at any time.
== minor lessons
Avoid unit tests for trivial things; write scenarios in context as much as
possible. But within those margins unit tests are fine. Just introduce them
before any scenarios (commit 3297).
Reorganizing layers can be easy. Just merge layers for starters! Punt on
resplitting them in some new way until you've gotten them to work. This is the
wisdom of Refactoring: small steps.
What made it hard was not wanting to merge *everything* between layers 30
and 55. The eventual insight was realizing I just needed to move those two
full-strength transforms and nothing else.
|
Thanks Ella Couch; this was long overdue.
|
Always show instruction before any transforms in error messages.
This is likely going to make some errors unclear because they *need* to
show the original instruction. But if we don't have tests for those
situations, did they ever really work?
|
Never mind, always quote direct quotes from code in error messages.
Dilated reagents are the uncommon case.
|
Thanks Caleb for finding this. We'd been using sandboxes for so long that
I hadn't tried a null/0 screen/console in a while, and somewhere down the
road Mu stopped matching 0 against concrete addresses.
|
Thanks Caleb Couch for finding and reporting this.
|
Show more thorough information about instructions in the trace, but keep
the original form in error messages.
|
This should eradicate the issue of 2771.
|
I'm dropping all mention of 'recipe' terminology from the Readme. That
way I hope to avoid further bike-shedding discussions while I very
slowly decide on the right terminology with my students.
I could be smarter in my error messages and use 'recipe' when code uses
it and 'function' otherwise. But what about other words like ingredient?
It would all add complexity that I'm not yet sure is worthwhile. But I
do want separate experiences for veteran programmers reading about Mu on
github and for people learning programming using Mu.
|
I might change my mind on this, but it's worth a try after watching Ella
run up against it today. I got her to build the recipe 'odd?', but then
it failed to run because she couldn't convert a numeric remainder to a
boolean without a conditional (which I haven't taught her yet).
For now I don't change the value in the boolean, so booleans can store
arbitrary bit patterns like in C. We just say that 0 is false and
anything else is true. I *think* that doesn't break the type system..
|
Only Hide_errors when strictly necessary. In other places let test
failures directly show the unexpected error.
|
All my attempts at staging this change failed with this humongous commit
that took all day and involved debugging three monstrous bugs. Two of
the bugs had to do with forgetting to check the type name in the
implementation of shape-shifting recipes. Bug #2 in particular would
cause core tests in layer 59 to fail -- only when I loaded up edit/! It
got me to just hack directly on mu.cc until I figured out the cause
(snapshot saved in mu.cc.modified). The problem turned out to be that I
accidentally saved a type ingredient in the Type table during
specialization. Now I know that that can be very bad.
I've checked the traces for any stray type numbers (rather than names).
I also found what might be a bug from last November (labeled TODO), but
we'll verify after this commit.
|
Start using type names from the type tree rather than the property tree
in most places. Hopefully the only occurrences of
'properties.at(0).second' left are ones where we're managing it. Next we
can stop writing to it.
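A rough sketch of the two places a reagent's type has lived (field names
are from memory and abridged, so treat them as illustrative): the goal
is to read the parsed type tree and leave the raw property list as
legacy that we only write to.
  #include <string>
  #include <utility>
  #include <vector>

  struct type_tree { std::string name; type_tree* left; type_tree* right; };
  struct string_tree { std::string value; string_tree* left; string_tree* right; };

  struct reagent {
    std::string name;
    type_tree* type;  // parsed type: the new source of truth
    std::vector<std::pair<std::string, string_tree*> > properties;  // legacy home of the type
  };

  std::string type_name(const reagent& x) {
    // read from the type tree, not from x.properties.at(0).second
    return x.type ? x.type->name : "";
  }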
|
Stack of plans for cleaning up replace_type_ingredients() and a couple
of other things, from main problem to subproblems:
  include type names in the type_tree rather than in the separate properties vector
  make type_tree and string_tree real cons cells, with separate leaf nodes
  redo the vocabulary for dumping various objects:
    do we really need to_string and debug_string?
    can we have a version with *all* information?
    can we have to_string not call debug_string?
This commit nibbles at the edges of the final task, switching from
member method syntax to global functions like almost everything else. I'm
mostly using methods just for STL in this project.
|
The old approach of ad hoc boosts and penalties based on various
features was repeatedly running into exceptions and bugs. New
organization: multiple tiered scores interleaved with tie-breaks. The
moment one tier yields one or more candidates, we stop scanning further
tiers. Just break ties and return.
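A minimal sketch of that control flow (hypothetical names and data, not
the actual dispatch code): scan tiers in order, and the first tier that
yields any candidates is the only one whose ties get broken.
  #include <vector>

  struct candidate { int id; int tie_break_score; };

  // 'tiers' is ordered from most to least preferred
  int best_candidate(const std::vector<std::vector<candidate> >& tiers) {
    for (size_t t = 0; t < tiers.size(); ++t) {
      const std::vector<candidate>& tier = tiers[t];
      if (tier.empty()) continue;        // nothing at this tier; keep scanning
      const candidate* best = &tier[0];  // non-empty: stop here...
      for (size_t i = 1; i < tier.size(); ++i)  // ...and only break ties within it
        if (tier[i].tie_break_score > best->tie_break_score) best = &tier[i];
      return best->id;
    }
    return -1;  // no candidates anywhere
  }

  int main() {
    std::vector<std::vector<candidate> > tiers(3);
    candidate a = {7, 1}, b = {9, 3}, c = {2, 5};
    tiers[1].push_back(a);  tiers[1].push_back(b);  // first non-empty tier wins
    tiers[2].push_back(c);                          // never considered
    return best_candidate(tiers) == 9 ? 0 : 1;
  }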
|
I was finding it hard to wrap my head around the directionality of calls
with 'lhs' and 'rhs'. It seems to work better with 'to' and 'from'. Let's see.
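A hypothetical sketch of the difference it makes at a call site (not the
real signatures): with 'to'/'from' the direction of the copy is visible
wherever the function is called, which 'lhs'/'rhs' never made obvious.
  #include <string>

  struct reagent { std::string type; };

  // can a value of type 'from' be stored into something of type 'to'?
  bool types_coercible(const reagent& to, const reagent& from) {
    return to.type == from.type || to.type == "number";  // toy rule for the sketch
  }

  int main() {
    reagent product, ingredient;
    product.type = "number";
    ingredient.type = "character";
    // the product receives the value, so it is the 'to' side
    return types_coercible(/*to*/ product, /*from*/ ingredient) ? 0 : 1;
  }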
|
This uncovered a bug where I've been forgetting the directionality of
arguments to types_coercible().
|
Some more structure to transforms, and flattening of dependencies
between them.
|
In general you only want to specify one transform in terms of
(before/after) another if the other direction wouldn't work. Otherwise
try to make it work by just pushing transforms at the start/end of the
list.
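A hypothetical sketch of that guideline (the registry shape is
simplified and the transform names are invented): by default a new
transform is just appended, and explicit before/after constraints are
reserved for cases where appending genuinely can't produce a working
order.
  #include <vector>

  typedef void (*transform_fn)();
  std::vector<transform_fn> Transform;  // runs in order

  void expand_type_abbreviations() {}
  void check_types() {}

  void register_transforms() {
    // preferred: ordering falls out of simply appending in layer order
    Transform.push_back(expand_type_abbreviations);
    Transform.push_back(check_types);
    // avoid: splicing one transform in at a position relative to another
    // unless pushing to the start or end really can't work
  }

  int main() {
    register_transforms();
    for (size_t i = 0; i < Transform.size(); ++i) Transform[i]();
  }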
|
More cleanup. Haven't bothered to figure out why the trace for
specialize_with_literal_4 is repeatedly perturbed.
|