_by Stephen Malina and Kartik Agaram_

Mu explores ways to turn arbitrary manual tests into reproducible automated
tests. Hoped-for benefits:

1. Projects release with confidence, without requiring manual QA or causing
   regressions for their users.

1. Open source projects become easier for outsiders to comprehend, since they
   can more confidently try out changes with the knowledge that they'll get
   rapid feedback if they break something. Projects also become more
   *rewrite-friendly* for insiders: it's easier to leave your project's
   historical accidents and other baggage behind if you can be confident of
   not causing regressions.

1. It becomes easier to teach programming by emphasizing tests far earlier
   than we do today.

The hypothesis is that designing the entire system to be testable from day 1
and from the ground up would radically impact the culture of an ecosystem in
a way that no bolted-on tool or service at higher levels can replicate. It
would make it easier to write programs that can be
[easily understood by newcomers](http://akkartik.name/about). It would
reassure authors that an app is free of regressions if all automated tests
pass. It would make the stack easy to rewrite and simplify by dropping
features, without fear that a subset of targeted apps might break. As a
result, people might fork projects more easily, and also exchange code
between disparate forks more easily (copy the tests over, then try copying
code over and making tests pass, rewriting and polishing where necessary).
The community would have in effect a diversified portfolio of forks, a
“wavefront” of possible combinations of features and alternative
implementations of features, instead of the single trunk with monotonically
growing complexity that we get today. Application writers who wrote thorough
tests for their apps (something they just can’t do today) would be able to
bounce around between forks more easily, without getting locked in to a
single one as currently happens.

In this quest, Mu is currently experimenting with the following mechanisms:

1. New, testable interfaces for the operating system. Manual tests are
   currently hard to automate because a file you rely on might be deleted,
   the network might go down, etc. To make manual tests reproducible it
   suffices to improve the 15 or so OS syscalls through which a computer
   talks to the outside world, allowing programs to transparently write to a
   fake screen, read from a fake disk/network, etc. In Mu, printing to the
   screen explicitly takes a screen object, so the same code can run against
   the real screen or against a fake screen inside tests; a test can then
   check the expected state of the screen when it ends. For example, a test
   for a little text-mode chessboard program renders the board to a fake
   screen and checks the screen's contents, delimiting the edge of the
   'screen' with dots (see the first sketch after this list). We've built up
   similarly *dependency-injected* interfaces to the keyboard, mouse, disk
   and network.

1. Support for testing side-effects like performance, deadlock-freedom,
   race-freeness, memory usage, etc. Mu's *white-box tests* can check not
   just the results of a function call but also side-effects of *how* it
   ran, by inspecting a trace of events the program emits as it executes
   (see the second sketch after this list).
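Mu expresses such tests in its own language; as a minimal sketch of the same
dependency-injection idea, here is the pattern in C (the `screen` struct and
the `print_at` and `print_board_top_row` helpers are hypothetical
illustrations, not Mu's actual API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// A screen is just a grid of characters. Production code wraps the real
// terminal behind this interface; tests construct a fake one in memory.
typedef struct {
  int width, height;
  char cells[12][30];
} screen;

// Every drawing helper takes the screen to draw on as an explicit argument.
void print_at(screen *scr, int row, int col, const char *s) {
  for (int i = 0; s[i] != '\0' && col + i < scr->width; ++i)
    scr->cells[row][col + i] = s[i];
}

// Render the back rank of a chessboard onto whichever screen we're given.
void print_board_top_row(screen *scr) {
  print_at(scr, 0, 0, "r n b q k b n r");
}

int main(void) {
  // The test runs entirely against a fake screen; no terminal required,
  // and the final screen state can be checked like any other value.
  screen fake = {.width = 30, .height = 12};
  memset(fake.cells, ' ', sizeof fake.cells);
  print_board_top_row(&fake);
  assert(strncmp(fake.cells[0], "r n b q k b n r", 15) == 0);
  puts("chessboard screen test passed");
  return 0;
}
```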
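Likewise for white-box tests, here is a sketch in C of checking a trace: a
string-comparison function should skip scanning individual characters when
the lengths already differ, and the test asserts on the trace as well as the
result (the `log_event` and `log_contains` helpers are hypothetical,
standing in for Mu's trace machinery):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Code under test appends notable events to a global trace as it runs.
#define MAX_EVENTS 16
static const char *trace[MAX_EVENTS];
static int trace_len = 0;

static void log_event(const char *event) {
  if (trace_len < MAX_EVENTS) trace[trace_len++] = event;
}

static bool log_contains(const char *event) {
  for (int i = 0; i < trace_len; ++i)
    if (strcmp(trace[i], event) == 0) return true;
  return false;
}

// String comparison that short-circuits on length, logging the path taken.
static bool string_equal(const char *a, size_t alen,
                         const char *b, size_t blen) {
  if (alen != blen) {
    log_event("lengths differ; skipping character scan");
    return false;
  }
  log_event("scanning characters");
  return memcmp(a, b, alen) == 0;
}

int main(void) {
  // White-box test: check the result *and* that no character scan happened.
  assert(!string_equal("abc", 3, "abcd", 4));
  assert(log_contains("lengths differ; skipping character scan"));
  assert(!log_contains("scanning characters"));
  return 0;
}
```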
Translating a Mu program into an ELF binary on Linux is a short shell script:

```sh
#!/bin/sh
# Translate a given Mu program into an ELF binary on Linux.
set -e

# Compile the given .mu files along with the standard library ([0-9]*.mu)
# into a single SubX file.
cat "$@" [0-9]*.mu |apps/mu > a.subx

# Translate the generated SubX, along with the SubX layers and entry point,
# into an ELF binary.
./translate_subx init.linux [0-9]*.subx mu-init.subx a.subx
```
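If the script above is saved as `translate` (its name in the Mu repository),
usage looks something like the following. The program name here is
hypothetical, and the output filename `a.elf` is an assumption based on the
Mu repository's conventions:

```sh
$ ./translate hello.mu   # hello.mu is a hypothetical program
$ ./a.elf                # run the resulting binary
```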