title: Breaking the Rules: Refining Prototypes Into Products
author: Darren Bane
copyright: 2020-21 Darren Bane, CC BY-SA

# Abstract

Recommendations are made for a process to refine prototypes into production-quality code.

# Introduction

The conventional wisdom is that prototypes should be discarded once the lessons have been learned, and the final product written again from scratch. In the spirit of [1] I argue that improvements in development tools have invalidated this.

# Previous Work

There is a long history of recommending prototyping as a way to construct systems. I would personally recommend [2] and [3]. The Smalltalk community probably pioneered this development process. However, I argue that Lisp's combination of imperative and object-oriented programming is an easier sell to industry, whereas pure OO as in Smalltalk (or logic programming as in Prolog) is, perhaps undeservedly, more niche.

A closely related area is that of "specification animation": quickly writing an implementation of some subset of a formal specification in, for example, Z or VDM. Prolog is a common choice for this. However, as stated in the introduction, I differ in arguing that it is possible to *refine* a prototype into a product.

# Prototyping

The first step is to construct a prototype, or in modern terminology a "Minimum Viable Product". These recommendations follow on from [2] and [4]. The following is probably the most work that makes sense without earning money.

## Design Decisions

The programming language chosen is ISLisp. Reasons for this decision include:

* Contrary to many other languages, Lisp is fairly paradigm-agnostic: imperative, object-oriented and (limited) functional programming are all fairly natural.
* The imperative and object-oriented paradigms are commonly taught, used in industry, and have a small "impedance mismatch" with current hardware.
* The possible migration path of running under [core-lisp](https://github.com/ruricolist/core-lisp) and using the Quicklisp libraries.
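As a small illustration of the paradigm-agnostic point, one ISLisp source file can mix all three styles. This is only a sketch; the names (`<counter>`, `step-counter`, etc.) are illustrative and not from any library:

```lisp
;; Object-oriented: a simple counter class with a generic function.
(defclass <counter> () ((count :accessor counter-count :initform 0)))
(defgeneric step-counter (c))
(defmethod step-counter ((c <counter>))
  (setf (counter-count c) (+ (counter-count c) 1)))

;; Imperative: a `for` loop summing the integers 0..9.
(defun sum-to-ten ()
  (for ((i 0 (+ i 1))
        (acc 0 (+ acc i)))
       ((= i 10) acc)))

;; Functional: a higher-order function applied with a lambda.
(defun squares (lst)
  (mapcar (lambda (x) (* x x)) lst))
```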
Detailed implementations, libraries, etc. are as follows:

* The Easy-ISLisp interpreter/compiler.
* Avoid multi-threading at this stage; event-driven code should do the job.
* Not sure if this is relevant for a prototype, but simple multi-user support could be done with IRCv3, including the bots (nickserv, chanserv), and tilde.chat.

For a very mathematical domain, APL might be a better choice. However, if more than the mathematical features are required it could be difficult to satisfy expectations.

### Dependencies

Counterintuitively, I chose ISLisp partly *because* it imposes limits in the prototyping phase. Standard UNIX libraries like curses, catgets, xdr and dbm can still be used from compiled code via the FFI.

## Coding standards

Even though this is a prototype, attention should be paid to basic craftsmanship:

* Divide the system conceptually into packages. This can start from just "section headers" with lots of semicolons.
* Write comments for at least each public function, class and package. There are guidelines in the Elisp manual, but for now one sentence will suffice.
* Dynamically check the types of parameters in public interfaces (see below).
* Indent all the source code using Emacs.
* Write some minimal documentation: at least an overview like in [README driven development](https://tom.preston-werner.com/2010/08/23/readme-driven-development.html) and man (actually, [mdoc](https://manpages.bsd.lv/toc.html)) pages [7].
* Certain parts of a system justify greater detail, up to a *complete* specification: (newly-designed) network protocols and complex persistent data models. For new protocols, use XDR (with or without RPC), generated from rpcgen .x files. Data models should be documented as commented SQL DDL.

### Run-time type-checking

As stated above, `the` should be used for simple run-time type-checking of public functions.
For example:

```lisp
(defun f (x)
  (the <integer> x)
  (the <integer> (+ x 1)))
```

`assure` might be better according to the standard, but for now only `the` is used, since the eisl compiler uses it for inference.

# Refinement to Production-Quality

Software at the level of the previous section is quite usable. It should be confirmed that further improvement is, in fact, required. If so, I argue that there is a repeatable procedure for improving a (reasonably well-written) prototype into a releasable product.

It may be useful to distinguish two levels of "production-quality". The first limits itself to widely portable dependencies, but should still be quite capable. The second could use anything (including the Web protocol stack), obviously at a maintenance cost.

Ensure that the surrounding infrastructure is in place:

* Configuration management. The prototype should already have been checked into git.
* Build. Split sections into different files and write a simple Makefile. In the absence of a standard module system, the Elisp public/private naming convention can be copied.
* Test. Write *library/test.lsp* test cases. Extend the simple run-time type-checking to contracts where possible.
* Track. Start using a defect tracking system.

Then, the following code and documentation improvements should be made:

* Document the system more exhaustively.
* Re-implement more interfaces from the OpenLisp manual using UNIX libraries.
* Port any of the `trivial-*` libraries from Quicklisp.
* Maybe go multi-process to take advantage of all cores.

Since we have a working prototype, it may make sense to write the documentation (and contracts, and tests) "bottom-up":

1. Contracts
2. Test cases
3. Module interface specs
4. Module guide, uses hierarchy
5. Task hierarchy
6. System requirements

## Documentation Details

Depend only on GFM, in the same spirit as the software. The use of tools like nw2md and Pandoc should be minimised. PlantUML *should* be used where it can replace ad-hoc text.
Documents should be stored under git in a *doc* subdirectory of the project. It is recommended to keep the separation between library and UI code, e.g. for using a GUI toolkit like Tk. The following can be added as sections to the README:

* Uses hierarchy (but at a module level of granularity)
* Task hierarchy

A proper software requirements spec should then be written, filling in any blanks that the man pages leave. The specification of input and output variables is best left at the level of tables and Basic English, as below.

### Library

This was the subject of [4]. The output artifacts are a module guide and a set of module interface specs. However, some of this documentation is better placed in the source code:

* The summary of functions is taken care of by having the public functions and classes commented.
* The formal requirements for function behaviour can be written as tables with [Basic English](https://en.wikipedia.org/wiki/Basic_English).
* Although full design-by-contract may be out of reach, a poor man's version can be had by having public functions follow a pattern. This can also cover some of the formal requirements:

```lisp
(defun f (x)
  (the <integer> x)
  (assert (precondition x))
  (let ((res (+ x 1)))
    (assert (postcondition res))
    (the <integer> res)))
```

I'm not aware of any static analysis tool.

## Dependencies

For productisation you may want to add more features. OpenLisp has idiomatic interfaces for several more UNIX features in its manual, which could be re-implemented. Also, Quicklisp (and, as a second choice, non-Quicklisp) `trivial-*` libraries should be easy enough to port. Dependencies should be limited to these two sources initially. Tk can implement a GUI to replace the prototype's command-line or terminal-based UI, if that makes sense. The order of preference is:

1. Any UNIX interface documented in the OpenLisp manual.
2. A port of any of the `trivial-*` libraries from the Awesome-CL list.
3. A port of any other `trivial-*` libraries available in Quicklisp.
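Returning to the *library/test.lsp* file mentioned under infrastructure: no test framework is assumed, so a minimal sketch can be plain assertions against the public functions (here the `f` from the contract pattern; `test-f` is a hypothetical name):

```lisp
;; Minimal test cases: call public functions and assert on the results.
;; Each failing assert stops the run loudly, which is enough at this stage.
(defun test-f ()
  (assert (= (f 1) 2))
  (assert (= (f 0) 1))
  (format (standard-output) "all tests passed~%"))
```

Keeping the tests as ordinary functions in one file means they can be run from the eisl REPL with no extra tooling.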
The complexity of a Web UI should be avoided in favour of simpler protocols like IRC, Gemini and maybe XMPP.

## Testing

Unit tests grow in parallel with the module interface specs. Property-based testing would be nice here, but there doesn't seem to be a readily-available library. System tests grow in parallel with the requirements spec. It's OK for system tests to use the same interfaces as the GUI code. All tests should be automated, except possibly for the UI/view layer. These scripts could be generated from a literate test plan, one of the places where it makes sense to use nw2md. As much of the testing work as possible should be pushed "back" in the V model to contracts on the functions, following the pattern above.

# Conclusion

A method for developing software from an incomplete understanding of the requirements is given. It is hoped that this is more effective than much of what is currently used.

# References

[1] Kent Beck, Extreme Programming Explained (1999).

[2] David Robertson and Jaume Agustí, Software Blueprints: Lightweight Uses of Logic in Conceptual Modelling, Addison Wesley Longman (1999).

[3] Kent Pitman, Accelerating Hindsight: Lisp as a Vehicle for Rapid Prototyping (1994).

[4] Darren Bane, Design and Documentation of the Kernel of a Set of Tools for Working With Tabular Mathematical Expressions, University of Limerick, Ireland (19 Jul 2008).

[5] Darren Bane, An ISLisp-like subset of ANSI Common Lisp, Ireland (21 Aug 2020).

[7] Kristaps Dzonsons, Practical UNIX Manuals. Available: https://manpages.bsd.lv/toc.html [accessed 9 Oct 2020].