Mu explores ways to turn arbitrary manual tests into reproducible automated
tests. Hoped-for benefits:

1. Projects release with confidence without requiring manual QA or causing
   regressions for their users.

1. Open source projects become easier for outsiders to comprehend, since they
   can more confidently try out changes with the knowledge that they'll get
   rapid feedback if they break something. Projects also become more
   *rewrite-friendly* for insiders: it's easier to leave your project's
   historical accidents and other baggage behind if you can be confident of
   not causing regressions.

1. It becomes easier to teach programming by emphasizing tests far earlier
   than we do today.

The hypothesis is that designing the entire system to be testable from day 1
and from the ground up would radically impact the culture of an eco-system in
a way that no bolted-on tool or service at higher levels can replicate. It
would make it easier to write programs that can be
[easily understood by newcomers](http://akkartik.name/about). It would
reassure authors that an app is free from regression if all automated tests
pass. It would make the stack easy to rewrite and simplify by dropping
features, without fear that a subset of targeted apps might break. As a
result people might fork projects more easily, and also exchange code between
disparate forks more easily (copy the tests over, then try copying code over
and making tests pass, rewriting and polishing where necessary). The
community would have in effect a diversified portfolio of forks, a
“wavefront” of possible combinations of features and alternative
implementations of features instead of the single trunk with monotonically
growing complexity that we get today. Application writers who wrote thorough
tests for their apps (something they just can’t do today) would be able to
bounce around between forks more easily without getting locked in to a single
one as currently happens.

In this quest, Mu is currently experimenting with the following mechanisms:

1. New, testable interfaces for the operating system. Currently manual tests
   are hard to automate because a file you rely on might be deleted, the
   network might go down, etc. To make manual tests reproducible it suffices
   to improve the 15 or so OS syscalls through which a computer talks to the
   outside world. We have to allow programs to transparently write to a fake
   screen, read from a fake disk/network, etc. In Mu, printing to screen
   explicitly takes a screen object, so it can be called on the real screen,
   or on a fake screen inside tests, so that we can then check the expected
   state of the screen at the end of a test. Here's a test for a little
   text-mode chessboard program in Mu (delimiting the edge of the 'screen'
   with dots):

   *(screenshot: a screen test)*

   We've built up similarly *dependency-injected* interfaces to the keyboard,
   mouse, disk and network.

1. Support for testing side-effects like performance, deadlock-freedom,
   race-freeness, memory usage, etc. Mu's *white-box tests* can check not
   just the results of a function call, but also the presence or absence of
   specific events in the log of its progress. For example, here's a test
   that our string-comparison function doesn't scan individual characters
   unless it has to:

   *(screenshot: white-box test)*

   Another example: if a sort function logs each swap, a performance test can
   check that the number of swaps doesn't quadruple when the size of the
   input doubles. Besides expanding the scope of tests, this ability also
   allows more radical refactoring without needing to modify tests.
   All Mu's tests call a top-level function rather than individual
   sub-systems directly. As a result the way the sub-systems are invoked can
   be radically changed (interface changes, making synchronous functions
   asynchronous, etc.). As long as the new versions emit the same
   implementation-independent events in the logs, the tests will continue to
   pass. ([More information.](http://akkartik.name/post/tracing-tests)) A
   rough sketch of such a log-checking test appears below, after this list.

1. Organizing code and tests in layers of functionality, so that outsiders
   can build simple and successively more complex versions of a project,
   gradually enabling more peripheral features. Think of it as a cleaned-up
   `git log` for the project.
   ([More information.](http://akkartik.name/post/wart-layers))

These mechanisms exist in the context of a low-level statement-oriented
language (like Basic, or Assembly). The language is as powerful as C for
low-level pointer operations and manual memory management, but much safer,
paying some run-time overhead to validate pointers. It also provides a number
of features usually associated with higher-level languages: strong type
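
The scenarios later in this section show the first mechanism in action: they
set up a fake screen with `assume-screen`, print to it, and check the result
with `screen-should-contain`. For the second mechanism, here is a rough
sketch of what a log-checking white-box test might look like in the same
scenario style. The type `text`, the recipe name `string-equal`, the
`trace-should-not-contain` check and the wording of the log line are
assumptions for illustration, not necessarily what Mu's real test contains:

    scenario string-equal-skips-characters-on-length-mismatch [
      local-scope
      # two strings of different lengths
      x:text <- new [abc]
      y:text <- new [abcd]
      run [
        string-equal x, y
      ]
      # the lengths differ, so the per-character comparison
      # should never show up in the log
      trace-should-not-contain [
        string-equal: comparing characters
      ]
    ]

A test like this keeps passing across refactorings of the comparison recipe,
as long as the new version still avoids logging a per-character comparison.
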
# To check our support for screens in scenarios, rewrite tests from print.mu

scenario print-character-at-top-left-2 [
  local-scope
  assume-screen 3/width, 2/height
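  # print a single character to the fake 3x2 screen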
  run [
    a:char <- copy 97/a
    screen <- print screen, a
  ]
  screen-should-contain [
    .a  .
    .   .
  ]
]

scenario clear-line-erases-printed-characters-2 [
  local-scope
  assume-screen 5/width, 3/height
  # print a character
  a:char <- copy 97/a
  screen <- print screen, a
  # move cursor to start of line
  screen <- move-cursor screen, 0/row, 0/column
  run [
    screen <- clear-line screen
  ]
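  # the 'a' printed during setup has been erased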
  screen-should-contain [
    .     .
    .     .
    .     .
  ]
]

scenario scroll-screen [
  local-scope
  assume-screen 3/width, 2/height
  run [
    a:char <- copy 97/a
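    # start at the bottom-right cell, so printing two characters forces a scroll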
    move-cursor screen, 1/row, 2/column
    screen <- print screen, a
    screen <- print screen, a
  ]
  screen-should-contain