Start running binaries natively in test_layers as well.
CI is still broken; I need to investigate where my SubX emulation has a
discrepancy with native x86.
|
Yet another redrawing of responsibilities between convert and its helpers.
In the process I discovered a bug in `write-stream-buffered`, which took
me on a detour to extract `browse_trace` into its own tool. It turns out
that just having long buffers is enough to need `browse_trace`: simple
operations like clearing a stream swamp a flat view of the trace.
|
Thanks to Peter van Hardenberg for causing me to run into this crash (the
first time I tried to demo sandboxes in a long time).
|
Bring back support for incrementally printing the trace to the screen (stderr).
I previously thought I didn't need this as long as I was always incrementally
saving to the 'last_run' trace file. But I quickly ran into a use for it:
when I want to see a complete trace, including switching into the sandbox's
trace and back out again.
So there are now two separate commandline flags:
  --trace to save the trace to a file
  --dump to print the trace to the screen
The former won't handle sandbox traces, but the latter will.
I'm deemphasizing --dump in the help message since it should rarely be
used.
One other situation where I've used stderr in the past is just raw
convenience. But trying to always use the trace was a foolish consistency.
Conclusion:
a) For simple debugging, feel free to just use cout/cerr. Delete the prints
before committing.
b) If the prints get too complex, switch to the trace and the `browse_trace`
tool.
c) If using nested sandboxes, emit to stderr, redirect it to a file, and
browse that file with `browse_trace`.
I've gone back and forth on these questions in the past; now I'm trying to
be a little more rigorous about capturing my reasoning.
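
To make this concrete, here's a minimal sketch of the routing idea. It is
not the prototype's actual tracing layer: the struct, the function and the
flag parsing are illustrative assumptions; only the flag names and the
'last_run' filename come from this commit.

  #include <fstream>
  #include <iostream>
  #include <string>

  // Hypothetical sketch: each trace line can go to a file (--trace) and/or
  // to the screen (--dump), independently.
  struct trace_options {
    bool save_to_file = false;    // set by --trace
    bool dump_to_screen = false;  // set by --dump
  };

  void emit_trace_line(const trace_options& opts, const std::string& line) {
    if (opts.save_to_file) {
      // write each line as it's generated and flush it, so a partial trace
      // survives a crash
      static std::ofstream out("last_run");
      out << line << '\n' << std::flush;
    }
    if (opts.dump_to_screen)
      std::cerr << line << '\n';  // per this commit, the screen path also
                                  // covers sandbox traces
  }

  int main(int argc, char* argv[]) {
    trace_options opts;
    for (int i = 1; i < argc; ++i) {
      if (std::string(argv[i]) == "--trace") opts.save_to_file = true;
      if (std::string(argv[i]) == "--dump") opts.dump_to_screen = true;
    }
    emit_trace_line(opts, "run: example trace line");
  }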
|
Fix CI after commit 4987. And track stack depths more correctly inside
sandboxes.
|
I've extracted `browse_trace` into a separate binary, independent of my Mu
prototype.
I also cleaned up my tracing layer to be a little nicer. Major improvements:
- Realized that incremental tracing really ought to be the default, and
that printing traces to the screen should be minimized.
- Finally figured out how to combine layers and call-stack frames in a
single dimension of depth. The answer: optimize for the experience of
`browse_trace`. Instructions occupy a range of depths based on their
call-stack frame, and the minor details of an instruction lie one level
deeper in each case.
Other than that, I spent some time adjusting levels everywhere to make
`browse_trace` useful.
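
A minimal sketch of the depth scheme, to make it concrete. The band width
and the function names are my assumptions for illustration, not constants
from the tool.

  #include <iostream>

  // Hypothetical sketch: each call-stack frame owns a small band of depths;
  // an instruction sits at the shallow end of its frame's band, and its
  // minor details lie one level deeper.
  const int Depths_per_frame = 2;  // assumed band width

  int instruction_depth(int call_stack_depth) {
    return call_stack_depth * Depths_per_frame;
  }

  int detail_depth(int call_stack_depth) {
    return instruction_depth(call_stack_depth) + 1;  // one level deeper
  }

  int main() {
    // instructions in frame 3 land at depth 6, their details at depth 7
    std::cout << instruction_depth(3) << ' ' << detail_depth(3) << '\n';
  }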
|
Now that our test runs are getting longer, debugging is again becoming a
bottleneck. Time to start using trace depths along with `mu browse-trace`
from the top-level.
|
Standardize name for 'end of file' sentinel. `eof` seems like an ordinary
variable, and `EOF` looks too much like a register (particularly in code
like `if (EAX == EOF)`), so we'll go with `Eof`. Consistent capitalization
for globals, and constants are globals too.
|
Considering how much trouble a merge phase would be (commit 4978), it seems
simpler to just add the extra syntax for controlling the entry point of
the generated ELF binary.
But I wouldn't have noticed this if I hadn't taken the time to write out
the commit messages of 4976 and 4978.
Even if we already had linked-list primitives built, this might still be
a good idea, considering how much code I'm saving in duplicated entrypoints.
|
Phase 1: coalesce different instances/fragments for each segment, correctly
prepending later fragments.
Phase 2: pack bitfields into bytes.
Phase 3: compute addresses for labels, compute the ELF header.
Phase 4: convert hex bytes to binary.
But ugh, phase 1 involves linked lists and I'll have to go down a rabbit
hole building up more standard library functions.
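
For orientation, the plan reads as a pipeline of passes. A sketch under the
assumption that each phase turns one textual representation into the next;
the names and string-to-string signatures are mine, not the repo's actual
interfaces, and the bodies are stubbed out.

  #include <iostream>
  #include <string>

  // Hypothetical stubs for the four phases; each would consume the textual
  // output of the previous one.
  std::string coalesce_segments(const std::string& in) { return in; }  // phase 1
  std::string pack_bitfields(const std::string& in)    { return in; }  // phase 2
  std::string compute_addresses(const std::string& in) { return in; }  // phase 3: labels + ELF header
  std::string hex_to_binary(const std::string& in)     { return in; }  // phase 4

  std::string translate(const std::string& src) {
    return hex_to_binary(compute_addresses(pack_bitfields(coalesce_segments(src))));
  }

  int main() {
    std::cout << translate("# a .subx program would go here\n");
  }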
|
I've been allowing operands in any order just because it simplifies the
implementation. I don't actually rely on this flexibility; all the .subx
programs in this repo consistently use a single ordering.
Why is a hard-coded canonical order hard to implement? The order that seems
most logical to me is complicated by the "reg" bits in the ModR/M byte:
- In instructions that interpret these bits as an `/r32` operand, they need
to be deemphasized, because they refer to a different argument of the
instruction than the `/mod`, `/rm32`, `/base`, `/index` and `/scale` operands,
which capture the bulk of instruction-decoding complexity and so should be
emphasized. `/r32` can also be unused, which strengthens the case for
deemphasizing it.
- In instructions that interpret the "reg" bits as a `/subop` operand, they
should be colocated with the opcode because they perform the same function:
specifying the *operation* the instruction performs.
In both cases, the bits in the `reg` bitfield are conceptually unrelated
to the other bitfields in the same byte. But they sometimes want to be
close to the opcode bytes on the left, and at other times need to be
deemphasized rightward. Supporting both placements in a fixed order seems
complicated and stateful, particularly since all operands are optional in
general. On the other hand, just pulling the operands you need to create
each byte, regardless of where in the instruction they occur, is nicely
stateless.
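
A sketch of the stateless approach, assuming the value/tag notation of .subx
operands (like `3/mod` or `0/subop/add`); the helper name, its signature and
the sample line are illustrative, not the converter's actual code.

  #include <iostream>
  #include <sstream>
  #include <string>

  // Hypothetical helper: find the operand carrying a given metadata tag
  // anywhere in the line, so that emitting each byte can pull exactly the
  // operands it needs, in any order. Returns "" if the tag is absent.
  std::string find_operand(const std::string& line, const std::string& tag) {
    std::istringstream words(line);
    std::string word;
    while (words >> word) {
      std::string::size_type slash = word.find('/');
      if (slash == std::string::npos) continue;         // no metadata on this word
      std::istringstream tags(word.substr(slash + 1));  // a word can carry several tags
      std::string t;
      while (std::getline(tags, t, '/'))
        if (t == tag) return word.substr(0, slash);     // value before the first '/'
    }
    return "";
  }

  int main() {
    std::string line = "81 0/subop/add 3/mod/direct 3/rm32/EBX 1/imm32";
    std::cout << find_operand(line, "subop") << '\n';  // prints "0"
    std::cout << find_operand(line, "rm32") << '\n';   // prints "3"
    std::cout << find_operand(line, "r32") << '\n';    // prints "" (unused here)
  }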
|
So far I've been assuming that I'd just statelessly convert each line in
a .subx file. But over the past week or so of constant interruptions I
gradually realized that code and data need different parsers.
|
Fix CI.
|
Support immediate operands in the data segment in all the ways we support
them in the code segment.
|
Some of these scripts are no longer useful; drop them.
For the rest, provide useful usage messages. Also be a little more principled
about where we introduce CFLAGS, and where we expect it to come in from the
commandline.
I'm choosing not to call gen/run/dgen/drun from test_layers because that
makes test_layers harder for newcomers to read. The scripts aren't the
first thing people should see; they're just useful once you're up and
running, hacking on SubX.
|
Standardize how we show register allocation decisions.
|
Build the C++ version optimized by default when building/running all apps.
We have enough apps now that the cost of optimized builds is worthwhile.
|
I think I don't need to special-case packing for different segments. That
should massively cut down on the number of tests.
|
It's always seemed ugly to explain the rules for segment names. Let's just
always require a fixed name for the code and data segments.
|
(excluding tests)
|
Starting to build up Phase 2 (apps/pack) out of recently designed primitives.
|
Cleaner way to compare streams in tests.