| Commit message | Author | Age | Files | Lines |
|
As with parenthesize, I'm copying tests over from https://github.com/akkartik/wart
Unlike with parenthesize, though, I can't just transliterate the code itself.
Wart was operating on an intermediate AST representation. Here I'm all
the way down to cells. That seemed like a good idea when I embarked, but
now I'm not so sure. Operating with the right AST data structure allowed
me to more easily iterate over the elements of a list. The natural recursion
for cells is not a good fit.
This patch and the next couple are an interesting case study in what makes
Unix so effective. Yes, you have to play computer, and yes it gets verbose
and ugly. But just diff and patch go surprisingly far in helping build a
picture of the state space in my brain.
Then again, there's a steep gradient of skills here. There are people who
can visualize state spaces using diff and patch far better than me, and
people who can't do it as well as me. Nature, nurture, having different
priorities, whatever the reason. Giving some people just the right crutch
excludes others.
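To make the cells-vs-AST point above concrete, here is a toy sketch in C (not Mu; all names are made up): an AST node that owns a flat list of children lets you loop over elements or just read a count, while bare cons cells force every traversal into car/cdr recursion.

  #include <stddef.h>

  typedef struct Cell Cell;
  struct Cell {        /* s-expression as raw pairs */
    Cell* car;
    Cell* cdr;         /* NULL terminates a proper list */
  };

  size_t list_length_cells(const Cell* c) {   /* recursion mirrors the representation */
    if (c == NULL) return 0;
    return 1 + list_length_cells(c->cdr);
  }

  typedef struct AstNode AstNode;
  struct AstNode {     /* AST-style node: children already gathered into an array */
    AstNode** children;
    size_t    nchildren;
  };

  size_t list_length_ast(const AstNode* n) {  /* a field read, or a plain loop over children */
    return n->nchildren;
  }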
|
This is going better than expected; just 3 failing tests among the new
ones.
|
I'm temporarily disabling the pending state. I'm also providing a clearer
error message when we encounter the bug.
|
It turns out there's another problem, and it predates the ability to create
new definitions:
ctrl-s triggers a call to `evaluate`, which inserts a new definition
into globals. That definition has a null gap buffer.
All this happens long before the new code in this commit, resulting in a
null gap buffer by the time we get to word-at-cursor.
That in turn happens because we perform a raw `evaluate`, which doesn't
update the gap buffer like `run` does (using `maybe-stash-gap-buffer-to-global`).
And arguably `evaluate` shouldn't mess with the gap buffer. Gap buffers
are a UI concern.
The hardest part of this scenario: it's unclear how to guarantee
that every definition has a gap buffer when two definitions may share
one (closures sharing a lexical environment).
New plan:
- improve the logic for detecting definitions. Looking at the outermost
layer isn't enough. And a single expression can create multiple definitions.
- extract a helper to attach a single gap buffer to multiple definitions (sketched below).
- have the UI detect conflicts in gap buffers and prompt the user for
a decision if a different gap buffer already exists for a definition.
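A rough C sketch of the helper and the conflict check from the last two bullets (illustrative only; Definition, GapBuffer and attach_gap_buffer are my names, not Mu's): several definitions can point at the same gap buffer, and attaching refuses to silently replace a different buffer that is already there.

  #include <stdbool.h>
  #include <stddef.h>

  typedef struct GapBuffer GapBuffer;   /* opaque here: the editor's text buffer */

  typedef struct {
    const char* name;
    GapBuffer*  buf;    /* null until some gap buffer has been attached */
  } Definition;

  /* Attach one gap buffer to every definition created by an expression.
     Returns false if some definition already has a *different* buffer;
     the UI should then prompt the user instead of clobbering it. */
  bool attach_gap_buffer(Definition** defs, size_t ndefs, GapBuffer* buf) {
    for (size_t i = 0; i < ndefs; ++i)
      if (defs[i]->buf != NULL && defs[i]->buf != buf)
        return false;                   /* conflict: let the caller ask the user */
    for (size_t i = 0; i < ndefs; ++i)
      defs[i]->buf = buf;               /* all definitions share the one buffer */
    return true;
  }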
|
I wrote a comment about how some code was not covered by tests, and then
promptly forgot what it was for. This is why we need tests.
Now the hack is gone.
|
I had a nice clean definition for word-at-cursor, but it's wrong and I'm
going to have to mangle it.
|
This protects us from reading null arrays, but not null structs.
It also doesn't protect us from writes to address 0 itself.
It is also incredibly unsafe. According to https://wiki.osdev.org/Memory_Map_(x86),
address 0 contains the real-mode IVT. Am I sure it'll never ever get used
after I switch to protected mode? I really need a page table, or something
minimal that protects the first 4KB of physical memory.
I wonder what other languages/OSs do to protect against really large struct
definitions.
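For reference, the "minimal page table" idea might look roughly like this in generic 32-bit x86 C. This is not Mu/SubX code; it assumes protected mode with flat segments, a 32-bit build, and a page-fault handler already installed. It identity-maps the first 4MB with 4KB pages but leaves page 0 non-present, so any access to [0, 4096) faults instead of silently touching the old real-mode IVT area.

  #include <stdint.h>

  #define PAGE_PRESENT  0x1
  #define PAGE_WRITABLE 0x2

  static uint32_t page_directory[1024]   __attribute__((aligned(4096)));
  static uint32_t first_page_table[1024] __attribute__((aligned(4096)));

  void enable_null_guard(void) {
    for (uint32_t i = 0; i < 1024; ++i)   /* identity-map the first 4MB */
      first_page_table[i] = (i * 4096) | PAGE_PRESENT | PAGE_WRITABLE;
    first_page_table[0] = 0;              /* page 0 not present: null derefs now fault */

    page_directory[0] = (uint32_t)(uintptr_t)first_page_table | PAGE_PRESENT | PAGE_WRITABLE;
    /* remaining directory entries stay 0 (not present) in this minimal sketch */

    asm volatile("mov %0, %%cr3" : : "r"(page_directory));   /* install the directory */
    uint32_t cr0;
    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= 0x80000000u;                                      /* set CR0.PG */
    asm volatile("mov %0, %%cr0" : : "r"(cr0));
  }

Only the first 4MB is mapped here; a real setup would also cover the rest of physical memory and install a page-fault handler that does something useful.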
|
Disabled in commit 1354161a3, and then I forgot about them for a while.
|
Among other things, we turned off the trace to significantly speed up the
debug cycle.
State as of https://merveilles.town/@akkartik/106079258606146213
Ohhh, as I save the commit I notice a big problem: I've been editing the
disk image directly because writes to the Mu disk lose indentation. But
I've been forgetting that the state in the Mu disk needs to be pre-evaluated.
So function bindings need extra parens for the environment. The `pixel`
calls in the previous commit message are the first statement in the body,
and they aren't actually considered part of the body right now. No wonder
they don't run.
There are lots of other problems, but this will clarify a lot.
|
Mu can now compute (factorial 5)
|
See shell/README.md for (extremely klunky) instructions.
|
Both LBA and CHS coordinates are now working for the primary disk on the
primary bus.
Failure modes I ran into:
- ATA port numbers are 16-bit values. Using instructions that take 8-bit
immediates will yield strange results. (I had to debug this twice because I missed
poll-ata-primary-bus-primary-drive-regular-status-word the first time
around.)
Mu's toolchain has been found wanting here. bootstrap has good
errors but doesn't support the instructions I need in boot.subx. The
self-hosted phases support the instructions but provide no error-checking.
Might be worth starting to add error-checking as I encounter the need.
In this case, that's a vote for validating metadata sizes even if we don't
validate that instructions are passed the right metadata sizes.
- Can't poll readiness first thing. Maybe we need to always select the
drive first.
- Reading 8-bit values from a 16-bit port (data port 0x1f0) returns garbage.
Reading 32-bit values, however, works totally fine; go figure. (Maybe
it won't work on real hardware?) There's a sketch of the read loop after this list.
https://forum.osdev.org/viewtopic.php?t=36415
- Passing in a 0 segment will never return data.
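For comparison, here is roughly what that read path looks like in C. This is a generic osdev-style sketch, not the SubX in boot.subx; inb/outb/inl are the usual wrappers around the x86 IN/OUT instructions, and the port number has to travel in DX because 0x1f0-0x1f7 don't fit in an 8-bit immediate.

  #include <stdint.h>

  static inline void outb(uint16_t port, uint8_t val) {
    asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
  }
  static inline uint8_t inb(uint16_t port) {
    uint8_t v;  asm volatile("inb %1, %0" : "=a"(v) : "Nd"(port));  return v;
  }
  static inline uint32_t inl(uint16_t port) {
    uint32_t v; asm volatile("inl %1, %0" : "=a"(v) : "Nd"(port));  return v;
  }

  /* Read one 512-byte sector from the primary bus, primary drive, 28-bit LBA. */
  void ata_read_sector(uint32_t lba, uint32_t* buf /* 128 dwords */) {
    outb(0x1f6, 0xe0 | ((lba >> 24) & 0x0f));   /* select drive 0 + LBA mode first */
    outb(0x1f2, 1);                             /* sector count */
    outb(0x1f3, lba & 0xff);                    /* LBA low */
    outb(0x1f4, (lba >> 8) & 0xff);             /* LBA mid */
    outb(0x1f5, (lba >> 16) & 0xff);            /* LBA high */
    outb(0x1f7, 0x20);                          /* READ SECTORS command */
    while (inb(0x1f7) & 0x80) {}                /* wait for BSY to clear */
    while (!(inb(0x1f7) & 0x08)) {}             /* wait for DRQ */
    for (int i = 0; i < 128; ++i)               /* 512 bytes as 32-bit reads; */
      buf[i] = inl(0x1f0);                      /* 8-bit reads of 0x1f0 return garbage */
  }

No timeouts or error checks; the point is just the order of operations (select the drive, program count and LBA, issue the command, poll status, then pull the data 32 bits at a time).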
|
Baremetal is now the default build target and therefore has its sources
at the top-level. Baremetal programs build using the phase-2 Mu toolchain
that requires a Linux kernel. This phase-2 codebase, which used to be at
the top-level, is now under the linux/ directory. Finally, the phase-2 toolchain,
while self-hosting, has a way to bootstrap from a C implementation, which
is now stored in linux/bootstrap. The bootstrap C implementation uses some
literate programming tools that are now in linux/bootstrap/tools.
So the whole thing has gotten inverted. Each directory should build one
artifact and include the main sources (along with standard library). Tools
used for building it are relegated to sub-directories, even though those
tools are often useful in their own right, and have had lots of interesting
programs written using them.
A couple of things have gotten dropped in this process:
- I had old ways to run on just a Linux kernel, or with a Soso kernel.
No more.
- I had some old tooling for running a single test at the cursor. I haven't
used that lately. Maybe I'll bring it back one day.
The reorg isn't done yet. Still to do:
- redo documentation everywhere. All the README files, all other markdown,
particularly vocabulary.md.
- clean up how-to-run comments at the start of programs everywhere
- rethink what to do with the html/ directory. Do we even want to keep
supporting it?
In spite of these shortcomings, all the scripts at the top-level, in linux/,
and in linux/bootstrap are working. The names of the scripts also feel reasonable.
This is a good milestone to take stock at.
|
CI will fail from this commit onward. Currently working:
$ bootstrap translate init.linux 0[4-7]*.subx 080zero-out.subx -o a.elf && ./a.elf test
$ bootstrap run a.elf test
$ chmod +x a.elf; ./a.elf test
Plan: migrate functions that used to return handles to pass in a new arg
of type (addr handle). That's a bit of a weird type. There should be few
of these functions. (Open question: do we even want to expose this type
in the Mu language?)
Functions that just need to read from the heap without modifying the handle
will receive `(addr T)` or `(handle T)` types as arguments.
As I sanitize each new file, I need to update signatures for any new functions
and add them to a list. I also need to update calls to any functions on
the list.
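In C terms the migration looks like the following (purely illustrative; Handle and new_string are stand-ins, not Mu's real definitions): a function that used to return a fresh allocation now takes the address of a handle and fills it in, while read-only consumers keep taking plain addresses.

  #include <stdlib.h>
  #include <string.h>

  typedef struct {
    size_t alloc_id;    /* Mu handles pair an allocation id with the address */
    char*  payload;
  } Handle;

  /* Old shape: the allocating function returned its result.
       Handle new_string(const char* s);
     New shape: the caller passes `(addr handle ...)` and the callee fills it in. */
  void new_string(Handle* out, const char* s) {
    size_t n = strlen(s);
    out->payload = malloc(n + 1);
    memcpy(out->payload, s, n + 1);
    out->alloc_id = 0;  /* placeholder; Mu tracks real allocation ids */
  }

  /* Read-only consumers keep receiving a plain address (Mu: `(addr T)`). */
  size_t string_length(const char* s) { return strlen(s); }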
|
At the SubX level we have to put up with null-terminated kernel strings
for commandline args. But so far we haven't done much with them. Rather
than try to support them we'll just convert them transparently to standard
length-prefixed strings.
In the process I realized that it's not quite right to treat the combination
of argc and argv as an array of kernel strings. Argc counts the number
of elements, whereas the length of an array is usually denominated in bytes.
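The conversion itself is small; here is its shape in C (illustrative names, the real code is SubX): copy the null-terminated kernel string into a length-prefixed string once, up front, so nothing downstream ever scans for the terminating null.

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  typedef struct {
    uint32_t size;     /* number of bytes that follow */
    char     data[];   /* not null-terminated */
  } LpString;

  LpString* kernel_string_to_string(const char* ks) {
    uint32_t n = (uint32_t)strlen(ks);             /* the only scan for '\0' */
    LpString* result = malloc(sizeof(LpString) + n);
    result->size = n;
    memcpy(result->data, ks, n);
    return result;
  }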
|