To do so, run:
$ ./mu --trace test <scenario name>
The trace will then be in file 'interactive'.
Thanks Ella Couch for pointing out that Mu was lying when debugging
small numbers.
def main [
  local-scope
  x:number <- copy 1
  {
    x <- divide x, 2
    $print x, 10/newline
    loop  # until SIGFPE
  }
]
Right now Mu has zero dependency knowledge. If anything changes in our
project the C++ compiler has to redo the entire project. This is
unnecessarily slow, and also causes gcc to run out of RAM on puny
machines.
New vision: carve the tangled mu.cc into multiple files.
  includes.h
  types.h
  globals.cc
  globals.h
  one .cc file for each function definition
(This is of course in addition to the already auto-generated test_list
and function_list.)
With this approach, changes to functions will only require recompiling
the functions that changed. We'd need to be smart not to rewrite files
that don't change (modulo #line directives).
Any changes to includes/types/globals would still require rebuilding the
entire project. That's the (now greatly reduced) price we will continue
to pay for outsourcing dependency management to the computer.
Plan arrived at after conversation with Stephen Malina.
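A rough sketch of what one such auto-generated per-function file might
look like (the file and function names here are hypothetical, and the
exact layout is still to be decided):

  // run_one_instruction.cc (hypothetical generated file)
  #include "includes.h"
  #include "types.h"
  #include "globals.h"

  #line 1234 "mu.cc"  // keep compiler errors pointing at the tangled original
  void run_one_instruction() {
    // ...body copied verbatim from mu.cc...
  }

Regenerating such a file only when its body actually changes is what
would keep incremental rebuilds cheap.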
Fix CI.
The mu commandline now has four parts: options, commands (of which we
only have one so far: 'test'), files/directories and ingredients to pass
to 'main'. That cleans up the hacky ordering constraint we had earlier.
I've also cleaned up the usage message.
Complicated logic to not run core tests. I only want to disable core
tests if:
a) I'm changing CFLAGS on the commandline (usually to disable
optimizations, causing tests to run slower in a debug cycle)
b) I'm not printing a help message (either with just 'mu' or
'mu --help')
c) I'm loading other files besides just the core.
Under these circumstances I only want to run tests in the files
explicitly loaded at the commandline.
This is all pretty hairy, in spite of my attempts to document it in
four different places. I might end up taking it all out the first time I
need to run core tests under all these conditions.
Drastically streamlined floating-point overflow/underflow detection.
For some reason I can't find a way to actually handle SIGFPE traps; they
have to segfault the program.
I'd included handling for SIGFPE on faith but I'm not actually able to
see it triggering. Drop it until we can at least test it manually.
In general, floating-point is horrendous: https://hal.archives-ouvertes.fr/hal-00576641v1/document.
Neither types nor tests will help deal with it.
This improves on commit 3026; it turns out you need to manually handle
the traps generated by -ftrapv.
https://gist.github.com/mastbaum/1004768
Signal handling is based on https://spin.atomicobject.com/2013/01/13/exceptions-stack-traces-c.
Various combinations of platform+compiler seem to work very differently:
  gcc everywhere seems to have extremely threadbare -ftrapv support.
  Clang + OSX generates SIGILL.
  Clang + Linux is advertised to generate SIGABRT, so I handle that out
  of an excess of caution. However, in my experience it seems to kill
  the program (sometimes segfaulting) even without any signal handlers
  installed.
In the process, I realized that all my current builds are using Clang,
so I added one little test on CI to use g++ in commit 3029.
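A minimal sketch of this kind of trap handling, assuming a POSIX
platform (illustrative, not the exact code in this commit):

  #include <csignal>
  #include <unistd.h>

  // With -ftrapv, signed overflow raises a trap instead of silently wrapping.
  // Which signal arrives varies: SIGILL on Clang+OSX, SIGABRT (advertised) on
  // Clang+Linux.
  static void trap_handler(int /*sig*/) {
    const char msg[] = "fatal: signed integer overflow\n";
    write(2, msg, sizeof(msg)-1);  // async-signal-safe, unlike fprintf
    _exit(1);
  }

  int main() {
    signal(SIGILL, trap_handler);
    signal(SIGABRT, trap_handler);
    int x = 2147483647;
    ++x;  // undefined behavior; -ftrapv turns it into a trap
    return x;
  }

Built with -ftrapv (e.g. clang++ -ftrapv trap.cc), the increment trips
the handler instead of silently wrapping around.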
How did I not know about -ftrapv for so long?! Found while reading
Memarian et al, "Into the depths of C: Elaborating the de facto
standards".
http://www.cl.cam.ac.uk/~pes20/cerberus/pldi16.pdf
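For reference, a tiny example of the kind of bug -ftrapv catches
(generic illustration, not code from this commit):

  #include <iostream>

  int main() {
    int x = 2000000000;
    int y = x + x;  // signed overflow: undefined behavior
    // Plain build: typically prints a large negative number.
    // Built with -ftrapv (e.g. c++ -ftrapv overflow.cc), the program traps
    // at the overflow instead of continuing with a garbage value.
    std::cout << y << '\n';
    return 0;
  }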
This should eradicate the issue of 2771.
This is long overdue. Let's see if it gets me using traces more during
debugging.
Though perhaps I'm being too persnickety. These are all valid ways to
debug programs:
a) print directly to screen
b) log, and then dump the log on some condition
c) temporarily print selected log statements directly to screen
d) log, and then browse the log using the zoom interface
For a) to work we need to normally keep prints empty.
For b) to work the log needs to be of some manageable size, where it's
tractable to find interesting features.
d) is the ultimate weapon, but might be slow because it's interactive.
c) seems like the ugly case. Should I be trying to avoid it altogether?
Let's try, and see if d) is usable when we want to do c). For simple
cases it's still totally acceptable to just print. If the prints get too
complex to parse, then we move to the zoom interface. Hopefully it'll be
easier because we'll spend less time getting the prints just so.
(Independent of all this, often the best way to make a log manageable so
that any of the approaches works is to distill the bad behavior down to
a test. But that leads to chicken-and-egg situations where you need to
first understand before you can distill.)
Another gotcha uncovered in the process of sorting out the previous
commit: I keep using eof() but forgetting that there are two other
states an istream can get into. Just never use eof().
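A generic illustration of the gotcha (not code from this commit):
besides eof, an istream can also be in the fail or bad state, so
guarding a read loop with eof() alone can loop forever on malformed
input. Testing the stream itself after each read covers all three
states:

  #include <iostream>
  #include <sstream>

  int main() {
    std::istringstream in("1 2 three 4");
    int x;
    // Broken: "three" sets failbit but not eofbit, so this never terminates:
    //   while (!in.eof()) { in >> x; std::cout << x << '\n'; }
    // Better: test the stream itself after each extraction.
    while (in >> x)
      std::cout << x << '\n';  // prints 1 and 2, then stops at "three"
    return 0;
  }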
Got that idea to work with a special-case for 'new'. Requires parsing
new's first ingredient, performing the replacement, and then turning it
back into a string. I didn't want to replace NEW with ALLOCATE right
here, because then it messes with my invariant that transform should
never see a naked ALLOCATE.
Layer 11 still not working, but everything else is. Let's clean up
before we diagnose the new breakage.
Yup, type ingredients were taking size 1 by default.
No, my idea was abortive. My new plan was to run no transforms for
generic recipes, and instead only run them on concrete specializations
as they're created.
The trouble with this approach is that 'new' contains a type
specification in its ingredient, which apparently needed to be
transformed into an 'allocate' before specialization.
But no, how was that working? How was 'new' computing size based on type
ingredients? It might have been wrong all along.
Commands run:
$ sed -i 's/\([^. (]*\)\.find(\([^)]*\)) != [^.]*\.end()/contains_key(\1, \2)/g' 0[^0]*cc
$ sed -i 's/\([^. (]*\)\.find(\([^)]*\)) == [^.]*\.end()/!contains_key(\1, \2)/g' 0[^0]*cc
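For context, the rewritten call sites assume a helper roughly like this
(a sketch; the actual definition in the repo may differ):

  template<typename T>
  bool contains_key(const T& map, const typename T::key_type& key) {
    return map.find(key) != map.end();
  }

  // before:  if (Recipe.find(r) != Recipe.end()) ...
  // after:   if (contains_key(Recipe, r)) ...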
I'm still seeing all sorts of failures in turning on layer 11 of edit/,
so I'm backing away and nailing down every culprit I run into. First up:
stop accidentally inserting empty objects into maps during lookups.
Commands run:
$ sed -i 's/\(Recipe_ordinal\|Recipe\|Type_ordinal\|Type\|Memory\)\[\([^]]*\)\] = \(.*\);/put(\1, \2, \3);/' 0[1-9]*
$ vi 075scenario_console.cc # manually fix up Memory[Memory[CONSOLE]]
$ sed -i 's/\(Memory\)\[\([^]]*\)\]/get_or_insert(\1, \2)/' 0[1-9]*
$ sed -i 's/\(Recipe_ordinal\|Type_ordinal\)\[\([^]]*\)\]/get(\1, \2)/' 0[1-9]*
$ sed -i 's/\(Recipe\|Type\)\[\([^]]*\)\]/get(\1, \2)/' 0[1-9]*
Now mu dies pretty quickly because of all the places where I try to
look up a missing value.
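The accessors these rewrites target would look roughly like this (a
sketch, assuming get() should die loudly on a missing key, which is the
point of moving away from operator[]):

  #include <cassert>
  #include <map>

  // Read-only lookup: a missing key is a bug, so fail fast.
  template<typename K, typename V>
  V& get(std::map<K, V>& m, const K& key) {
    typename std::map<K, V>::iterator p = m.find(key);
    assert(p != m.end());
    return p->second;
  }

  // Lookup that may create: keeps the old operator[] behavior where wanted.
  template<typename K, typename V>
  V& get_or_insert(std::map<K, V>& m, const K& key) {
    return m[key];
  }

  // Insert or overwrite.
  template<typename K, typename V>
  void put(std::map<K, V>& m, const K& key, const V& value) {
    m[key] = value;
  }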
It was reading lines like this in scenarios:
  -warn: f: error error
as:
  -warn: f
which was causing them to be silently ignored.
Also found an insane preprocessor expansion from not parenthesizing
macro arguments: SIZE(end+delim) worked even when 'end' was an
integer, but happily it never produced the wrong answer.
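The expansion gotcha in a generic form (the repo's actual SIZE macro
may be defined differently): without parentheses around the macro
argument, an expression argument gets split by operator precedence
instead of being treated as one value:

  #include <iostream>
  #include <string>

  #define SIZE(X) (int)X.size()   // missing parens around X

  int main() {
    std::string delim = "):";
    int end = 3;
    // Expands to (int)end + delim.size(), i.e. 3 + 2, so it compiles even
    // though 'end' is an int, and prints 5.
    std::cout << SIZE(end+delim) << '\n';
    // Defined as (int)(X).size() it would instead fail to compile here,
    // flagging the suspicious call site.
    return 0;
  }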
I've tried to update the Readme, but there are at least a couple of issues.