path: root/001help.cc
Commit message | Author | Date | Files | Lines
* 7033 | Kartik Agaram | 2020-10-14 | 1 | -2/+4
    Thanks Max Bernstein for pointing out this bug:
    https://git.sr.ht/~akkartik/mu-x86_64/commit/9e2ef1c90d

* 6200 - --dump is not needed for incremental traces | Kartik Agaram | 2020-04-09 | 1 | -7/+3
    This undoes commit 5764, which was ill-considered. We already had
    incremental prints to 'last_run' at that point. As long as we don't run
    out of RAM on large traces, there doesn't seem to be any need to print
    to stderr. Now '--dump' is only needed when juggling multiple traces.

* 5878 | Kartik Agaram | 2020-01-03 | 1 | -27/+0
    The current prototype doesn't really use floating point; drop the
    guardrails there.

* 5873 | Kartik Agaram | 2020-01-02 | 1 | -2/+5

* 5867 | Kartik Agaram | 2020-01-02 | 1 | -6/+4

* 5865 | Kartik Agaram | 2020-01-02 | 1 | -8/+8
    Give the bootstrap C++ program a less salient name.

* 5764 | Kartik Agaram | 2019-11-26 | 1 | -1/+6

* 5485 - promote SubX to top-level | Kartik Agaram | 2019-07-27 | 1 | -57/+87

* 4994 | Kartik Agaram | 2019-03-03 | 1 | -0/+3
    Bring back support for incrementally printing the trace to the screen
    (stderr). I previously thought I didn't need this as long as I'm always
    incrementally saving to the 'last_run' trace file. But I quickly ran
    into a use for it: when I want to see a complete trace, including
    switching into the sandbox's trace and back out again.

    So there are now two separate commandline flags:
      --trace to save the trace to file
      --dump to print the trace to screen
    The former won't handle sandbox traces. But the latter will. I'm
    deemphasizing --dump in the help message since it should be rarely used.

    One other situation where I've used stderr in the past is for just raw
    convenience. But trying to always use the trace was a foolish
    consistency. Conclusion:
      a) For simple debugging, feel free to just use cout/cerr. Delete them
         before committing.
      b) If the prints get too complex, switch to the trace and the
         browse_trace tool.
      c) If using nested sandboxes, emit to stderr, redirect to file, and
         browse_trace.
    I've gone back and forth on these questions in the past; now I'm trying
    to be a little more rigorous about capturing reasoning.

* 4413 | Kartik Agaram | 2018-07-25 | 1 | -3/+1
    Never mind, let's drop unused/vestigial altogether. Use absence of names
    to signal unused arguments.

* 4412 | Kartik Agaram | 2018-07-25 | 1 | -1/+1
    Drop names of unused arguments.

* 4252 | Kartik Agaram | 2018-06-06 | 1 | -1/+1

* 4235 - fix a build issue for Apple clang 900.0.38 | Kartik K. Agaram | 2018-04-20 | 1 | -2/+2
    The trouble with rewriting 'unused' to '__attribute__(unused)' is that
    if we happen to deliberately introduce '__attribute__(unused)' somehow,
    say in the standard headers, then it gets expanded twice to
    '__attribute__(__attribute__(unused))'. So we switch to a synonym.
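
For illustration only, here is a minimal sketch of the double-expansion
hazard and the synonym idea in plain C++. Mu's rewrite happens over the
source text at build time, and the macro name below is made up, not
necessarily the synonym the commit actually chose.

    // Hypothetical synonym; Mu's actual choice may differ.
    #define MU_UNUSED __attribute__((unused))
    // Rewriting the bare word 'unused' is fragile: if the text being rewritten
    // already contains '__attribute__((unused))' (say, from a system header),
    // the 'unused' inside it matches again and the attribute gets wrapped twice.
    // A distinct token never appears inside its own expansion, so a second
    // pass over the text leaves it alone.

    int scale(int x, int multiplier MU_UNUSED) {
      return x;  // 'multiplier' is deliberately ignored; the attribute silences -Wunused-parameter
    }

    int main() { return scale(0, 3); }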
* 4212 | Kartik K. Agaram | 2018-02-20 | 1 | -1/+1

* 4131 | Kartik K. Agaram | 2017-11-19 | 1 | -3/+3
    Bugfix: I hadn't been allowing continuations to be copied. This deepens
    our initial sin of managing the Mu call stack implicitly in the C++
    interpreter. Since the call stack was implicit, continuations had to be
    implicit as well. And since continuations aren't in Mu's memory, we have
    to replicate refcounting logic for them.

* 4129 | Kartik K. Agaram | 2017-11-19 | 1 | -2/+2
    map::operator[](k) is indeed equivalent to
      (*((this->insert(make_pair(k,mapped_type()))).first)).second
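
A small standalone check of that equivalence (mine, not from the repo):
operator[] on a missing key inserts a value-initialized entry, which is
easy to forget when you only meant to look something up.

    #include <iostream>
    #include <map>
    #include <string>
    using namespace std;

    int main() {
      map<string, int> m;
      // Looking up a missing key with operator[] *inserts* it, value-initialized.
      int& v = m["absent"];
      cout << v << ' ' << m.size() << '\n';   // prints: 0 1
      // The equivalent insert expression from the commit message:
      int& w = (*((m.insert(make_pair(string("other"), int()))).first)).second;
      cout << w << ' ' << m.size() << '\n';   // prints: 0 2
      // A read-only membership test should use find() instead:
      cout << (m.find("missing") != m.end()) << ' ' << m.size() << '\n';   // prints: 0 2
    }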
* 4128 | Kartik K. Agaram | 2017-11-19 | 1 | -0/+1

* 4089 | Kartik K. Agaram | 2017-10-22 | 1 | -0/+4
    Clean up how we reclaim local scopes.

    It used to work like this (commit 3216):
      1. Update refcounts of products after every instruction, EXCEPT:
         a) when the instruction is a non-primitive and the callee starts
            with 'local-scope' (because it's already not decremented in
            'return')
         OR:
         b) when the instruction is primitive 'next-ingredient' or
            'next-ingredient-without-typechecking', and its result is saved
            to a variable in the default space (because it's already
            incremented at the time of the call)
      2. If a function starts with 'local-scope', force it to be reclaimed
         before each return. However, since locals may be returned, *very
         carefully* don't reclaim those. (See the logic in the old
         `escaping` and `should_update_refcount` functions.)

    However, this approach had issues. We needed two separate commands for
    'local-scope' (reclaim locals on exit) and 'new-default-space'
    (programmer takes charge of reclaiming locals). The hard-coded
    reclamation duplicated refcounting logic. In addition to adding
    complexity, this implementation failed to work if a function overwrites
    default-space after setting up a local-scope (the old default-space is
    leaked). It also fails in the presence of continuations. Calling a
    continuation more than once was guaranteed to corrupt memory (commit
    3986).

    After this commit, reclaiming local scopes now works like this:
      Update refcounts of products for every PRIMITIVE instruction.
      For non-primitive instructions, all the work happens in the `return`
      instruction:
        - increment refcount of ingredients to `return` (unless -- one last
          bit of ugliness -- they aren't saved in the caller)
        - decrement the refcount of the default-space
          - use existing infrastructure for reclaiming as necessary
          - if reclaiming default-space, first decrement refcount of each
            local
            - again, use existing infrastructure for reclaiming as necessary

    This commit (finally!) completes the bulk[1] of step 2 of the plan in
    commit 3991. It was very hard until I gave up trying to tweak the
    existing implementation and just test-drove layer 43 from scratch.

    [1] There's still potential for memory corruption if we abuse
    `default-space`. I should probably try to add warnings about that at
    some point (todo in layer 45).

* 4063 | Kartik K. Agaram | 2017-10-14 | 1 | -2/+2

* 3965 - get rid of the teardown() function | Kartik K. Agaram | 2017-07-09 | 1 | -2/+2
    Instead of setup() and teardown() we'll just use a reset() function from
    now on, which will bring the machine back to a good state before each
    test or run, and also before exit (to avoid memory leaks).
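
As an illustration only (the globals and names below are hypothetical, not
Mu's actual code), the shape of that pattern is a single idempotent reset:

    #include <map>
    #include <string>
    #include <vector>
    using namespace std;

    // Hypothetical globals standing in for interpreter state.
    map<string, int> Globals;
    vector<int> Memory;

    // One reset() instead of paired setup()/teardown(): restore a known-good
    // state before each test, before each run, and once before exit.
    void reset() {
      Globals.clear();
      Memory.clear();
    }

    void run_test(void (*test)()) {
      reset();   // no matching teardown; the next reset() (or the one at exit) cleans up
      test();
    }

    void sample_test() { Memory.push_back(42); }

    int main() {
      run_test(sample_test);
      reset();   // also reset before exit, so leak checkers stay quiet
    }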
* 3897 - various updates to documentation | Kartik K. Agaram | 2017-05-29 | 1 | -0/+3

* 3707 | Kartik K. Agaram | 2016-12-12 | 1 | -1/+1
    Be more disciplined about tagging 2 different concepts in the codebase:
      a) Use the phrase "later layers" to highlight places where a layer
         doesn't have the simplest possible self-contained implementation.
      b) Use the word "hook" to point out functions that exist purely to
         provide waypoints for extension by future layers.
    Since both of these only make sense in the pre-tangled representation of
    the codebase, I'm using '//:' and '#:' comments to get them stripped out
    of tangled output. (Though '#:' comments still make it to tangled output
    at the moment. Let's see if we use them enough to be worth supporting.
    Scenarios are pretty unreadable in tangled output anyway.)

* 3636 | Kartik K. Agaram | 2016-11-06 | 1 | -1/+1

* 3630 - generate trace for a single scenario | Kartik K. Agaram | 2016-11-06 | 1 | -0/+6
    To do so, run:
      $ ./mu --trace test <scenario name>
    The trace will then be in the file 'interactive'.

* 3557 | Kartik K. Agaram | 2016-10-22 | 1 | -2/+2

* 3522 | Kartik K. Agaram | 2016-10-19 | 1 | -2/+2

* 3413 | Kartik K. Agaram | 2016-09-24 | 1 | -1/+4

* 3305 - show all available precision in numbers | Kartik K. Agaram | 2016-09-08 | 1 | -1/+2
    Thanks Ella Couch for pointing out that Mu was lying when debugging
    small numbers.
      def main [
        local-scope
        x:number <- copy 1
        {
          x <- divide x, 2
          $print x, 10/newline
          loop  # until SIGFPE
        }
      ]
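
For reference (an illustration, not the commit's diff), the usual C++ way to
show all available precision is to print with max_digits10 significant
digits instead of the default six:

    #include <iomanip>
    #include <iostream>
    #include <limits>
    using namespace std;

    int main() {
      float x = 1;
      for (int i = 0; i < 60; ++i) x /= 2;    // mirrors the Mu scenario above, without the trap
      cout << x << '\n';                       // default: 6 significant digits
      cout << setprecision(numeric_limits<float>::max_digits10)
           << x << '\n';                       // enough digits to round-trip the exact value
    }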
* 3268 - starting to support separate compilation | Kartik K. Agaram | 2016-08-28 | 1 | -1/+1
    Right now Mu has zero dependency knowledge. If anything changes in our
    project the C++ compiler has to redo the entire project. This is
    unnecessarily slow, and also causes gcc to run out of RAM on puny
    machines.

    New vision: carve the tangled mu.cc into multiple files.
      includes.h
      types.h
      globals.cc
      globals.h
      one .cc file for each function definition
    (This is of course in addition to the already auto-generated test_list
    and function_list.)

    With this approach changes to functions will only require recompiling
    the functions that changed. We'd need to be smart to not rewrite files
    that don't change (modulo #line directives). Any changes to
    includes/types/globals would still require rebuilding the entire
    project. That's the (now greatly reduced) price we will continue to pay
    for outsourcing dependency management to the computer.

    Plan arrived at after conversation with Stephen Malina.

* 3228 | Kartik K. Agaram | 2016-08-19 | 1 | -1/+1

* 3172 | Kartik K. Agaram | 2016-08-12 | 1 | -0/+4
    Fix CI.

* 3170 - multiple --options at the commandline | Kartik K. Agaram | 2016-08-12 | 1 | -25/+41
    The mu commandline now has four parts: options, commands (of which we
    only have one so far: 'test'), files/directories and ingredients to pass
    to 'main'. That cleans up the hacky ordering constraint we had earlier.
    I've also cleaned up the usage message.

* 3137 | Kartik K. Agaram | 2016-07-22 | 1 | -0/+12
    Complicated logic to not run core tests. I only want to disable core
    tests if:
      a) I'm changing CFLAGS on the commandline (usually to disable
         optimizations, causing tests to run slower in a debug cycle)
      b) I'm not printing a help message (either with just 'mu' or
         'mu --help')
      c) I'm loading other files besides just the core.
    Under these circumstances I only want to run tests in the files
    explicitly loaded at the commandline. This is all pretty hairy, in spite
    of my attempts to document it in four different places. I might end up
    taking it all out the first time I need to run core tests under all
    these conditions.
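
A rough condensation of those three conditions as a predicate; the function
and parameter names here are made up for illustration, and the real check
lives in Mu's option handling:

    #include <cassert>

    bool should_run_core_tests(bool cflags_overridden,
                               bool printing_help,
                               bool loading_extra_files) {
      // Core tests are skipped only when all three conditions from the
      // commit message hold at once.
      bool only_test_explicit_files =
          cflags_overridden && !printing_help && loading_extra_files;
      return !only_test_explicit_files;
    }

    int main() {
      assert(should_run_core_tests(false, false, false));  // plain './mu test' still runs core tests
      assert(!should_run_core_tests(true, false, true));   // custom CFLAGS + extra files: skip core tests
    }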
* 3036 | Kartik K. Agaram | 2016-06-06 | 1 | -1/+27
    Drastically streamlined floating-point overflow/underflow detection. For
    some reason I can't find a way to actually handle SIGFPE traps; they
    have to segfault the program.
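
One trap-free way to detect overflow and underflow in standard C++ is to
poll the floating-point status flags from <cfenv>; this is a sketch of that
mechanism, not necessarily what the commit implemented:

    #include <cfenv>
    #include <cstdio>
    // (Strictly, '#pragma STDC FENV_ACCESS ON' is required for defined
    // behavior when reading these flags; most compilers accept the code
    // without it.)

    int main() {
      std::feclearexcept(FE_ALL_EXCEPT);     // start from a clean slate
      volatile double big = 1e308;
      volatile double tiny = 1e-308;
      volatile double a = big * 10;          // overflows to +inf
      volatile double b = tiny / 1e10;       // underflows into the denormal range
      if (std::fetestexcept(FE_OVERFLOW))  std::puts("overflow detected");
      if (std::fetestexcept(FE_UNDERFLOW)) std::puts("underflow detected");
      (void)a; (void)b;
    }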
* 3035 | Kartik K. Agaram | 2016-06-06 | 1 | -33/+0
    I'd included handling for SIGFPE on faith but I'm not actually able to
    see it triggering. Drop it until we can at least test it manually.

    In general, floating-point is horrendous:
    https://hal.archives-ouvertes.fr/hal-00576641v1/document. Neither types
    nor tests will help deal with it.

* 3033 | Kartik K. Agaram | 2016-06-02 | 1 | -0/+1

* 3032 | Kartik K. Agaram | 2016-06-02 | 1 | -0/+2

* 3031 - better integer overflow protection | Kartik K. Agaram | 2016-06-02 | 1 | -1/+68
    This improves on commit 3026; it turns out you need to manually handle
    the traps generated by -ftrapv. https://gist.github.com/mastbaum/1004768
    Signal handling is based on
    https://spin.atomicobject.com/2013/01/13/exceptions-stack-traces-c.

    Various combinations of platform+compiler seem to work very differently:
      - gcc everywhere seems to have extremely threadbare ftrapv support
      - Clang + OSX generates SIGILL
      - Clang + Linux is advertised to generate SIGABRT, so I handle that
        out of an excess of caution. However, in my experience it seems to
        kill the program (sometimes segfaulting) even without any signal
        handlers installed.

    In the process, I realized that all my current builds are using Clang,
    so I added one little test on CI to use g++ in commit 3029.
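
A self-contained sketch of that setup, distilled from the description above
rather than copied from the repo (assumes a POSIX system):

    // Build with: clang++ -ftrapv trapv_demo.cc   (or g++ -ftrapv trapv_demo.cc)
    #include <limits.h>
    #include <signal.h>
    #include <unistd.h>

    // Handler for the signals -ftrapv is known to raise on the platforms
    // listed above. Only async-signal-safe calls (write, _exit) are used.
    extern "C" void trap_handler(int /*sig*/) {
      const char msg[] = "trap: signed integer overflow\n";
      ssize_t ignored = write(STDERR_FILENO, msg, sizeof(msg) - 1);
      (void)ignored;
      _exit(1);
    }

    int main() {
      signal(SIGILL,  trap_handler);   // Clang on OSX
      signal(SIGABRT, trap_handler);   // Clang on Linux, nominally
      signal(SIGSEGV, trap_handler);   // observed in practice, per the note above
      volatile int x = INT_MAX;
      x = x + 1;                       // -ftrapv turns this signed overflow into a trap
      return 0;
    }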
* 3030 | Kartik K. Agaram | 2016-06-02 | 1 | -3/+4

* 3027 | Kartik K. Agaram | 2016-06-02 | 1 | -5/+5

* 3026 - integer overflow protection | Kartik K. Agaram | 2016-06-02 | 1 | -2/+2
    How did I not know about -ftrapv for so long?! Found while reading
    Memarian et al., "Into the depths of C: Elaborating the de facto
    standards". http://www.cl.cam.ac.uk/~pes20/cerberus/pldi16.pdf

* 2937 | Kartik K. Agaram | 2016-05-08 | 1 | -5/+8

* 2773 - switch to 'int' | Kartik K. Agaram | 2016-03-13 | 1 | -1/+6
    This should eradicate the issue of 2771.

* 2688 | Kartik K. Agaram | 2016-02-22 | 1 | -0/+1

* 2609 - run $browse-trace on old runs | Kartik K. Agaram | 2015-11-29 | 1 | -0/+3
    This is long overdue. Let's see if it gets me using traces more during
    debugging.

    Though perhaps I'm being too persnickety. These are all valid ways to
    debug programs:
      a) print directly to screen
      b) log, and then dump the log on some condition
      c) temporarily print selected log statements directly to screen
      d) log, and then browse the log using the zoom interface
    For a) to work we need to normally keep prints empty. For b) to work the
    log needs to be of some manageable size, where it's tractable to find
    interesting features. d) is the ultimate weapon, but might be slow
    because it's interactive. c) seems like the ugly case. Should I be
    trying to avoid it altogether? Let's try, and see if d) is useable when
    we want to do c).

    For simple cases it's still totally acceptable to just print. If the
    prints get too complex to parse, then we move to the zoom interface.
    Hopefully it'll be easier because we have to spend less time getting the
    prints just so.

    (Independent of all this, often the best way to make a log manageable so
    any of the approaches works: distill the bad behavior down to a test.
    But that leads to chicken-and-egg situations where you need to first
    understand before you can distill.)

* 2454 | Kartik K. Agaram | 2015-11-17 | 1 | -0/+7
    Another gotcha uncovered in the process of sorting out the previous
    commit: I keep using eof() but forgetting that there are two other
    states an istream can get into. Just never use eof().
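
The three states in question are eofbit, failbit, and badbit; a small
demonstration (mine, not the repo's) of why testing the stream itself beats
testing eof():

    #include <iostream>
    #include <sstream>
    using namespace std;

    int main() {
      istringstream in("1 2 three");
      int x;
      // Testing the stream covers eofbit, failbit and badbit at once;
      // a loop on '!in.eof()' would spin forever once "three" fails to
      // parse, because failbit is set but eofbit never is.
      while (in >> x)
        cout << "read " << x << '\n';
      cout << "eof=" << in.eof() << " fail=" << in.fail()
           << " bad=" << in.bad() << '\n';   // prints: eof=0 fail=1 bad=0
    }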
* 2393 - redo 2391 | Kartik K. Agaram | 2015-11-07 | 1 | -0/+5
    Got that idea to work with a special case for 'new'. This requires
    parsing new's first ingredient, performing the replacement, and then
    turning it back into a string. I didn't want to replace NEW with
    ALLOCATE right here, because then it messes with my invariant that
    transform should never see a naked ALLOCATE.

    Layer 11 is still not working, but everything else is. Let's clean up
    before we diagnose the new breakage.

* 2392 - undo 2391 | Kartik K. Agaram | 2015-11-07 | 1 | -5/+0
    Yup, type ingredients were taking size 1 by default.

* 2391 | Kartik K. Agaram | 2015-11-07 | 1 | -0/+5
    No, my idea was abortive. My new plan was to run no transforms for
    generic recipes, and instead only run them on concrete specializations
    as they're created.

    The trouble with this approach is that 'new' contains a type
    specification in its ingredient, which apparently needed to be
    transformed into an allocate before specialization. But no, how was that
    working? How was 'new' computing size based on type ingredients? It
    might have been wrong all along.

* 2379 - further improvements to map operations | Kartik K. Agaram | 2015-11-06 | 1 | -3/+8
    Commands run:
      $ sed -i 's/\([^. (]*\)\.find(\([^)]*\)) != [^.]*\.end()/contains_key(\1, \2)/g' 0[^0]*cc
      $ sed -i 's/\([^. (]*\)\.find(\([^)]*\)) == [^.]*\.end()/!contains_key(\1, \2)/g' 0[^0]*cc
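
For reference, the helper those rewrites target looks roughly like this (a
sketch; Mu's actual definition of contains_key may differ in detail):

    #include <cassert>
    #include <map>
    #include <string>
    using namespace std;

    // Thin wrapper so callers never spell out the find()/end() comparison,
    // and never insert a key by accident via operator[].
    template<typename T, typename K>
    bool contains_key(const T& table, const K& key) {
      return table.find(key) != table.end();
    }

    int main() {
      map<string, int> m;
      m["x"] = 1;
      assert(contains_key(m, "x"));
      assert(!contains_key(m, "y"));
      assert(m.size() == 1);   // unlike m["y"], the check inserted nothing
    }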