title: Breaking the Rules: Refining Prototypes Into Products
author: Darren Bane
copyright: 2020 Darren Bane, CC BY-SA

# Abstract

Recommendations for a process to refine prototypes into production-quality code are made.

\.R1
accumulate
\.R2

*TODO*: Q: re-cast much of this document as Architecture Decision Records? A: N

# Introduction

The conventional wisdom is that prototypes should be discarded once the lessons have been learned, and the final product written again from scratch. In the spirit of \.[ beck 1999 \.] I argue that improvements in development tools have invalidated this.

*TODO*: case study

# Literature Review

There is a long history of recommending prototyping as a way to construct systems. I would personally recommend \.[ robertson agust\(i 1999 \.] and \.[ pitman 1994 \.].

*NB*: I am almost certainly re-inventing a Smalltalk wheel. However, I argue that Lisp's combination of imperative and OO styles is an easier sell to industry, whereas pure OO as in Smalltalk (or logic programming as in Prolog) is still niche.

A closely related area is "specification animation": quickly writing an implementation of some subset of a formal specification in, for example, Z or VDM. Prolog is a common choice for this, but I choose Lisp instead. However, as stated in the introduction, I differ in arguing that it is possible to *refine* a prototype into a product.

# Prototyping

The first step is to construct a prototype, or in modern terminology a "Minimum Viable Product". These recommendations follow on from \.[ robertson agust\(i 1999 \.] and \.[ bane 2008 \.].

Reasons for choosing Common Lisp include:

* Procedural and object-oriented programming is commonly taught.
* The existence of Quicklisp. Popularity is not really a reason for choosing Common Lisp over ISLisp, but slotting into Quicklisp *is*.
* Although the official ANSI standard is moribund, quasi-standard libraries are recommended on the [awesome list](https://github.com/CodyReichert/awesome-cl), or [portability layers](https://github.com/CodyReichert/awesome-cl#portability-layers).
* Contrary to a lot of other languages, it is fairly paradigm-agnostic.

At the same time, I want a clean subset of CL, so cleave as close to ISLisp as practical \.[ bane 2020 \.].

It was decided to use the imperative/object-oriented paradigm, partly for familiarity in industry and partly for a reduced "impedance mismatch" with current hardware.

The following technology is recommended:

* The SBCL compiler.
* ltk for the view layer. For simple multi-user support, use the IRCv3 bots (nickserv, chanserv) and IRCCloud or similar.

The following is probably the most work that makes sense without earning money.

## Coding standards

Even though this is a prototype, attention should be paid to basic craftsmanship.

* Divide the system into packages, using the subset of CL that is supported by OpenLisp.
* Write one-sentence docstrings for at least each public function and class.
* Use `declare` to check the types of parameters in public interfaces.
* Indent all the source code using Emacs.
* Some minimal documentation, at least an overview README file \.[ preston-werner 2010 \.] and man (actually, mdoc) pages \.[ dzonsons \.].
* Certain parts of a system justify greater detail for a *complete* specification. These are (newly-designed) network protocols and complex persistent data models. If there is no standard protocol, I recommend using JSON-RPC as a base and following the documentation style of LSP. The data models should be documented as commented SQL DDL.

### Run-time type-checking

As stated above, `declare` should be used for simple run-time type-checking of public functions. For example:

```lisp
(defun f (x)
  (declare (fixnum x))
  (the fixnum (+ x 1)))
```

## Rejected alternatives

You could use an integrated environment.
This could be built with Emacs and JSON-RPC (ELisp and ISLisp are not identical, but they are very similar). However, I reject this because jsonrpc is not "blessed" and I don't want to maintain my own.

Full DbC would be another nice-to-have, but I settle for a pattern using `declare`.

# Refinement to Production-Quality

Software at the level of the previous section is already quite usable. It should be confirmed that further improvement is, in fact, required. If so, I argue that there is a repeatable procedure for improving the quality of a (reasonably well-written) prototype to a releasable product.

First, ensure that the surrounding infrastructure is in place:

* Configuration management. The prototype should already have been checked into git.
* Build. Write an ASDF system description, and install it as a local Quicklisp package.
* Test. Write FiveAM test cases. Extend the simple run-time type-checking to contracts where possible.
* Track. Start using a defect-tracking system.

Then, the following code and documentation improvements should be made:

* Document the system more exhaustively.
* Use more of Quicklisp, e.g. the trivial-\* libraries.
* Port to platform.sh?

Since we have a working prototype, it may make sense to write the documentation (and contracts, and tests) "bottom-up":

1. Contracts, static analysis
2. Test cases
3. Module interface specs
4. Module guide, uses hierarchy
5. Task hierarchy
6. System requirements

## Documentation Details

Depend only on GFM, in the same spirit as the software. The use of tools like PP and Pandoc should be minimised. PlantUML *should* be used where it can replace ad-hoc text. Documents should be stored under git in a "doc" subdirectory of the project.

I think it is a good idea to keep the separation between library and UI code when using ltk.
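That separation can be sketched as two packages. This is a minimal illustration, not a prescribed layout: the package and function names are hypothetical, and the UI package's dependency on ltk is omitted so the example stays self-contained.

```lisp
;; Hypothetical library/UI split. The "core" package holds pure logic
;; and knows nothing about the view layer.
(defpackage :myapp/core
  (:use :common-lisp)
  (:export #:next-value))

(in-package :myapp/core)

(defun next-value (x)
  "Return the successor of the fixnum X."
  (declare (fixnum x))
  (the fixnum (+ x 1)))

;; The view layer uses the core, never the reverse, so the core stays
;; loadable and testable without a display. In a real system this
;; package would also :use (or depend on) ltk.
(defpackage :myapp/ui
  (:use :common-lisp :myapp/core))
```

With this split, unit and system tests can exercise `myapp/core` directly without driving the GUI.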
The following can be added as sections to the README:

* Uses hierarchy (but at a module level of granularity)
* Task hierarchy

And a proper software requirements spec should be written, filling in any blanks that the man pages leave. The specification of input and output variables is best left at the level of tables and Basic English again.

### Library

This was the subject of \.[ bane 2008 \.]. The output artifacts are a module guide and a set of module interface specs. However, some of this documentation is better placed in the source code:

* The summary of functions should be taken care of by having the public functions and classes commented.
* The formal requirements for function behaviour can be expressed as tables with Basic English \.[ basic english \.].
* Although full design-by-contract may be out of reach, a poor man's version can be had by having public functions follow a pattern. This can also cover some of the formal requirements.

```lisp
(defun f (x)
  (declare (fixnum x))
  (assert (precondition x))
  (let ((res (+ x 1)))
    (assert (postcondition res))
    (the fixnum res)))
```

`lisp-critic` can be used to perform static analysis of the codebase.

### UI

ltk is great for local GUIs. However, a product may require HTMX and the platform.sh stack. Note that I prefer HTMX and ReST (following Fielding) to single-page applications (outside the very specific case of drawing on a canvas using ParenScript).

## Dependencies

For the prototyping phase, you should *really* limit yourself to the ISLisp subset. For productisation you may want to make more of an effort, but I would still recommend limiting yourself to the following, in order of preference. The language/library split isn't as clear in CL as in some other languages, but use your judgement.

* For "language" functionality, "[Portability layers](http://portability.cl/)" from that list.
* For "library" functionality, any "stars" from the [Awesome-CL](https://github.com/CodyReichert/awesome-cl) list.
* Any of the `trivial-` libraries from that list.
These may be *forked* and maintained locally.
* Any other `trivial-` libraries available in Quicklisp. These, too, may be forked and maintained locally.
* Other libraries available in Quicklisp.

## Testing

Unit (FiveAM) tests grow in parallel with the module interface specs. Property-based testing would be nice here, but there doesn't seem to be a readily-available library. System tests grow in parallel with the requirements spec. It's OK for system tests to use the same interfaces as the ltk code. All tests should be automated, except possibly for the UI/view layer.

Q: Could these scripts be generated from a literate test plan? A: Yes, probably one of the few places to use "PP".

As much of the testing work as possible should be pushed "back" in the V model to contracts for the functions, following the pattern above.

# Conclusion

A method for developing software from an incomplete understanding of the requirements is given. It is hoped that this is more effective than most of what is currently used.

\.[
$LIST$
\.]