Diffstat (limited to 'bash/talk-to-computer/corpus/programming')
-rw-r--r--  bash/talk-to-computer/corpus/programming/combinators.md                  | 192
-rw-r--r--  bash/talk-to-computer/corpus/programming/command_line_data_processing.md | 200
-rw-r--r--  bash/talk-to-computer/corpus/programming/functional_programming.md       | 234
-rw-r--r--  bash/talk-to-computer/corpus/programming/lil_guide.md                    | 277
4 files changed, 903 insertions, 0 deletions
diff --git a/bash/talk-to-computer/corpus/programming/combinators.md b/bash/talk-to-computer/corpus/programming/combinators.md
new file mode 100644
index 0000000..8e2cfb0
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/combinators.md
@@ -0,0 +1,192 @@
+# Combinators - The Ultimate Reusable Functions
+
+## Introduction
+
+In the context of functional programming and computer science, a **combinator** is a higher-order function that uses only function application and other combinators to define a result. Crucially, a combinator contains **no free variables**. This means it is a completely self-contained function that only refers to its own arguments.
+
+Combinators are fundamental concepts from **combinatory logic** and **lambda calculus**. While they have deep theoretical importance, their practical application in software development is to create highly reusable, abstract, and composable code, often leading to a **point-free** or **tacit** programming style. They are the essential glue for building complex logic by piecing together simpler functions.
+
+## Core Concepts
+
+### No Free Variables
+
+The defining characteristic of a combinator is that it has no **free variables**. A free variable is a variable referenced in a function that is not one of its formal arguments or defined within the function's local scope. This self-contained nature makes combinators perfectly portable and predictable.
+
+```javascript
+const y = 10;
+
+// This function is NOT a combinator because it uses a free variable `y`.
+// Its behavior depends on an external context.
+const addY = (x) => x + y;
+
+// This function IS a combinator. It has no free variables.
+// Its behavior only depends on its arguments.
+const add = (x) => (z) => x + z;
+```
+
+### Function Composition and Transformation
+
+Combinators are designed to manipulate and combine other functions. They are the building blocks for creating new functions from existing ones without needing to specify the data that the functions will eventually operate on. The entire logic is expressed as a transformation of functions themselves.
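
As a minimal sketch of this idea (the helper name `compose2` is illustrative, not from any particular library), two small functions can be combined into a third without ever naming the data they will process:

```javascript
// compose2 builds a new function from two existing ones;
// no data is mentioned until the resulting function is finally called.
const compose2 = (f, g) => (x) => f(g(x));

const trim = (s) => s.trim();
const shout = (s) => s.toUpperCase();

// A new function, defined purely as a transformation of functions.
const shoutTrimmed = compose2(shout, trim);

console.log(shoutTrimmed("  hello  ")); // "HELLO"
```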
+
+## Key Principles
+
+  - **Point-Free Style (Tacit Programming)**: This is the primary programming style associated with combinators. You define functions as a pipeline or composition of other functions without explicitly mentioning the arguments (the "points"). This can lead to more abstract and declarative code.
+
+    ```javascript
+    // Not point-free: the argument `users` is explicitly mentioned.
+    const getActiveUserNames = (users) => users.filter(user => user.active).map(user => user.name);
+
+    // Point-free style: built by composing functions.
+    // `compose`, `filter`, `map`, and `prop` are all combinators or higher-order functions.
+    const getActiveUserNamesPointFree = compose(map(prop('name')), filter(propEq('active', true)));
+    ```
+
+  - **Abstraction**: Combinators abstract common patterns of execution and control flow. For example, the act of applying one function's result to another is abstracted away by the `compose` combinator.
+
+## Implementation/Usage
+
+Many famous combinators have single-letter names from combinatory logic. Understanding them helps in recognizing fundamental patterns.
+
+### Basic Example
+
+The simplest combinators are the **I-combinator (Identity)** and the **K-combinator (Constant)**.
+
+```javascript
+/**
+ * I-combinator (Identity)
+ * Takes a value and returns it.
+ * I x = x
+ */
+const I = (x) => x;
+
+/**
+ * K-combinator (Constant or Kestrel)
+ * Takes two arguments and returns the first. Creates constant functions.
+ * K x y = x
+ */
+const K = (x) => (y) => x;
+
+// Usage:
+const value = I("hello"); // "hello"
+const always42 = K(42);
+const result = always42("some other value"); // 42
+```
+
+### Advanced Example
+
+More complex combinators handle function composition, like the **B-combinator (Bluebird)**.
+
+```javascript
+/**
+ * B-combinator (Bluebird / Function Composition)
+ * Composes two functions.
+ * B f g x = f (g x)
+ */
+const B = (f) => (g) => (x) => f(g(x));
+
+// In practice, this is often implemented as `compose`.
+const compose = (f, g) => (x) => f(g(x));
+
+// Usage:
+const double = (n) => n * 2;
+const increment = (n) => n + 1;
+
+// Create a new function that increments then doubles.
+const incrementThenDouble = compose(double, increment);
+
+incrementThenDouble(5); // Returns 12, because (5 + 1) * 2
+```
+
+Another useful combinator is the **T-combinator (Thrush)**, which applies a value to a function.
+
+```javascript
+/**
+ * T-combinator (Thrush)
+ * Takes a value and a function, and applies the function to the value.
+ * T x f = f x
+ */
+const T = (x) => (f) => f(x);
+
+// This is the basis for the `pipe` or "thread-first" operator.
+T(5)(increment); // 6
+```
+
+## Common Patterns
+
+### Pattern 1: Function Composition (`compose` / `pipe`)
+
+This is the most common and practical application of combinators. `compose` (based on the B-combinator) applies functions from right to left, while `pipe` applies them from left to right. They are used to build data-processing pipelines in a point-free style.
+
+```javascript
+// Ramda-style compose, handles multiple functions
+const compose = (...fns) => (initialVal) => fns.reduceRight((val, fn) => fn(val), initialVal);
+const pipe = (...fns) => (initialVal) => fns.reduce((val, fn) => fn(val), initialVal);
+```
+
+### Pattern 2: Parser Combinators
+
+A parser combinator is a higher-order function that takes several parsers as input and returns a new parser as its output. This is an advanced technique for building complex parsers by combining simple, specialized parsers for different parts of a grammar. It's a powerful real-world application of combinator logic.
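
A toy sketch of the idea, assuming a convention where a parser maps an input string to either `{ value, rest }` on success or `null` on failure (real libraries use richer result types):

```javascript
// A parser is a function: input string -> { value, rest } or null on failure.
const char = (c) => (input) =>
  input[0] === c ? { value: c, rest: input.slice(1) } : null;

// `seq` is a combinator: it takes two parsers and returns a new parser
// that runs them in sequence, threading the remaining input through.
const seq = (p1, p2) => (input) => {
  const r1 = p1(input);
  if (r1 === null) return null;
  const r2 = p2(r1.rest);
  if (r2 === null) return null;
  return { value: [r1.value, r2.value], rest: r2.rest };
};

const parseAB = seq(char("a"), char("b"));
console.log(parseAB("abc")); // { value: ["a", "b"], rest: "c" }
console.log(parseAB("xbc")); // null
```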
+
+## Best Practices
+
+  - **Prioritize Readability**: While point-free style can be elegant, it can also become cryptic. If a composition is too long or complex, break it down and give intermediate functions meaningful names.
+  - **Know Your Library**: If you are using a functional programming library like Ramda or fp-ts, invest time in learning the combinators it provides. They are the building blocks for effective use of the library.
+  - **Use Currying**: Combinators are most powerful in a language that supports currying, as it allows for partial application, creating specialized functions from general ones.
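
A small illustration of the currying point above: partial application turns a general curried function into a specialized one.

```javascript
// A curried function of two arguments.
const multiply = (a) => (b) => a * b;

// Partial application: fixing the first argument yields a new,
// specialized function from the general one.
const triple = multiply(3);

console.log(triple(7)); // 21
```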
+
+## Common Pitfalls
+
+  - **"Pointless" Code**: Overuse of point-free style can lead to code that is very difficult to read and debug. The goal is clarity through abstraction, not just character count reduction.
+  - **Debugging Complexity**: Debugging a long chain of composed functions is challenging because there are no named intermediate values to inspect. You often have to break the chain apart to find the source of a bug.
+
+## Performance Considerations
+
+  - **Function Call Overhead**: In theory, a deeply nested composition of combinators can introduce a small overhead from the additional function calls.
+  - **Negligible in Practice**: In most real-world applications, this overhead is negligible and completely optimized away by modern JavaScript engines and language compilers. Code clarity and correctness are far more important concerns.
+
+## Integration Points
+
+  - **Functional Programming Libraries**: Libraries like **Ramda**, **Lodash/fp**, and the **Haskell Prelude** are essentially collections of combinators and other higher-order functions.
+  - **Lambda Calculus**: Combinatory logic, the formal study of combinators, is computationally equivalent to lambda calculus. The famous **SKI combinator calculus** (using only S, K, and I combinators) can be used to express any computable algorithm.
+  - **Parser Combinator Libraries**: Libraries like `parsec` in Haskell apply these principles to build robust parsers; `fast-check` in JavaScript applies the same combinator style to property-based testing rather than parsing.
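
The SKI claim above can be sketched directly in a few lines; for instance, `S K K` behaves exactly like the identity combinator:

```javascript
// The three SKI combinators.
const S = (f) => (g) => (x) => f(x)(g(x));
const K = (x) => (y) => x;
const I = (x) => x;

// S K K x = K x (K x) = x, so S K K is extensionally equal to I.
console.log(S(K)(K)(42)); // 42
console.log(I(42));       // 42
```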
+
+## Troubleshooting
+
+### Problem 1: A Composed Function Behaves Incorrectly
+
+**Symptoms:** The final output of a point-free pipeline is `undefined`, `NaN`, or simply the wrong value.
+**Solution:** Temporarily "re-point" the function to debug. Break the composition and insert `console.log` statements (or a `tap` utility function) to inspect the data as it flows from one function to the next.
+
+```javascript
+// A "tap" combinator is useful for debugging.
+const tap = (fn) => (x) => {
+  fn(x);
+  return x;
+};
+
+// Insert it into a pipeline to inspect intermediate values.
+const problematicPipe = pipe(
+  increment,
+  tap(console.log), // See the value after incrementing
+  double
+);
+```
+
+## Examples in Context
+
+  - **Configuration Objects**: Using the K-combinator (constant function) to provide default configuration values.
+  - **Data Validation**: Building a validator by composing smaller validation rule functions, where each function takes data and returns either a success or failure indicator.
+  - **Web Development**: A point-free pipeline in a frontend application that takes a raw API response, filters out inactive items, extracts a specific field, and formats it for display.
+
+## References
+
+  - [To Mock a Mockingbird by Raymond Smullyan](https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird) - An accessible and famous book that teaches combinatory logic through recreational puzzles.
+  - [Wikipedia: Combinatory Logic](https://en.wikipedia.org/wiki/Combinatory_logic)
+  - [Ramda Documentation](https://ramdajs.com/docs/)
+
+## Related Topics
+
+  - Point-Free Style
+  - Lambda Calculus
+  - Functional Programming
+  - Currying
+  - Higher-Order Functions
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/command_line_data_processing.md b/bash/talk-to-computer/corpus/programming/command_line_data_processing.md
new file mode 100644
index 0000000..c5ce5f5
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/command_line_data_processing.md
@@ -0,0 +1,200 @@
+# Local Data Processing With Unix Tools - Shell-based data wrangling
+
+## Introduction
+
+Leveraging standard Unix command-line tools for data processing is a powerful, efficient, and universally available method for handling text-based data. This guide focuses on the **Unix philosophy** of building complex data processing **pipelines** by composing small, single-purpose utilities. This approach is invaluable for ad-hoc data exploration, log analysis, and pre-processing tasks directly within the shell, often outperforming more complex scripts or dedicated software for common data wrangling operations.
+
+Key applications include analyzing web server logs, filtering and transforming CSV/TSV files, and batch-processing any line-oriented text data.
+
+## Core Concepts
+
+### Streams and Redirection
+
+At the core of Unix inter-process communication are three standard streams:
+
+1.  `stdin` (standard input): The stream of data going into a program.
+2.  `stdout` (standard output): The primary stream of data coming out of a program.
+3.  `stderr` (standard error): A secondary output stream for error messages and diagnostics.
+
+**Redirection** controls these streams. The pipe `|` operator is the most important, as it connects one command's `stdout` to the next command's `stdin`, forming a pipeline.
+
+```bash
+# Redirect stdout to a file (overwrite)
+command > output.txt
+
+# Redirect stdout to a file (append)
+command >> output.txt
+
+# Redirect a file to stdin
+command < input.txt
+
+# Redirect stderr to a file
+command 2> error.log
+
+# Redirect stderr to stdout
+command 2>&1
+```
+
+### The Core Toolkit
+
+A small set of highly-specialized tools forms the foundation of most data pipelines.
+
+  - **`grep`**: Filters lines that match a regular expression.
+  - **`awk`**: A powerful pattern-scanning and processing language. It excels at columnar data, allowing you to manipulate fields within each line.
+  - **`sed`**: A "stream editor" for performing text transformations on an input stream (e.g., search and replace).
+  - **`sort`**: Sorts lines of text files.
+  - **`uniq`**: Reports or omits repeated lines. Often used with `-c` to count occurrences.
+  - **`cut`**: Removes sections from each line of files (e.g., select specific columns).
+  - **`tr`**: Translates or deletes characters.
+  - **`xargs`**: Builds and executes command lines from standard input. It bridges the gap between commands that produce lists of files and commands that operate on them.
+
+## Key Principles
+
+The effectiveness of this approach stems from the **Unix Philosophy**:
+
+1.  **Do one thing and do it well**: Each tool is specialized for a single task (e.g., `grep` only filters, `sort` only sorts).
+2.  **Write programs that work together**: The universal text stream interface (`stdin`/`stdout`) allows for near-infinite combinations of tools.
+3.  **Handle text streams**: Text is a universal interface, making the tools broadly applicable to a vast range of data formats.
+
+## Implementation/Usage
+
+Let's assume we have a web server access log file, `access.log`, with the following format:
+`IP_ADDRESS - - [TIMESTAMP] "METHOD /path HTTP/1.1" STATUS_CODE RESPONSE_SIZE`
+
+Example line:
+`192.168.1.10 - - [20/Aug/2025:15:30:00 -0400] "GET /home HTTP/1.1" 200 5120`
+
+### Basic Example
+
+**Goal**: Find the top 5 IP addresses that accessed the server.
+
+```bash
+# This pipeline extracts, groups, counts, and sorts the IP addresses.
+cat access.log | \
+  awk '{print $1}' | \
+  sort | \
+  uniq -c | \
+  sort -nr | \
+  head -n 5
+```
+
+**Breakdown:**
+
+1.  `cat access.log`: Reads the file and sends its content to `stdout`.
+2.  `awk '{print $1}'`: For each line, print the first field (the IP address).
+3.  `sort`: Sorts the IPs alphabetically, which is necessary for `uniq` to group them.
+4.  `uniq -c`: Collapses adjacent identical lines into one and prepends the count.
+5.  `sort -nr`: Sorts the result numerically (`-n`) and in reverse (`-r`) order to get the highest counts first.
+6.  `head -n 5`: Takes the first 5 lines of the sorted output.
+
+### Advanced Example
+
+**Goal**: Calculate the total bytes served for all successful (`2xx` status code) `POST` requests.
+
+```bash
+# This pipeline filters for specific requests and sums a column.
+grep '"POST ' access.log | \
+  grep ' 2[0-9][0-9] ' | \
+  awk '{total += $10} END {print total}'
+```
+
+**Breakdown:**
+
+1.  `grep '"POST ' access.log`: Filters the log for lines containing `"POST ` (the trailing space prevents matching other request methods).
+2.  `grep ' 2[0-9][0-9] '`: Filters the remaining lines for a 2xx status code. The spaces ensure we match the status code field specifically.
+3.  `awk '{total += $10} END {print total}'`: For each line that passes the filters, `awk` adds the value of the 10th field (response size) to a running `total`. The `END` block executes after all lines are processed, printing the final sum.
+
+## Common Patterns
+
+### Pattern 1: Filter-Map-Reduce
+
+This is a functional programming pattern that maps directly to Unix pipelines.
+
+  - **Filter**: Select a subset of data (`grep`, `head`, `tail`, `awk '/pattern/'`).
+  - **Map**: Transform each line of data (`awk '{...}'`, `sed 's/.../.../'`, `cut`).
+  - **Reduce**: Aggregate data into a summary result (`sort | uniq -c`, `wc -l`, `awk '{sum+=$1} END {print sum}'`).
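
A self-contained sketch of the pattern, using inline sample data instead of a real log file (the three-field format here is illustrative):

```shell
# Filter-Map-Reduce on inline sample data: METHOD PATH STATUS.
printf '%s\n' 'GET /home 200' 'POST /login 500' 'GET /about 200' |
  grep ' 200$' |       # Filter: keep only successful requests
  awk '{print $2}' |   # Map: extract the path field
  wc -l                # Reduce: count the matching lines (2 here)
```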
+
+### Pattern 2: Shuffling (Sort-Based Grouping)
+
+This is the command-line equivalent of a `GROUP BY` operation in SQL. The pattern is to extract a key, sort by that key to group related records together, and then process each group.
+
+```bash
+# Example: Find the most frequent user agent for each IP address.
+# The key here is the IP address ($1). This assumes a combined-log-style
+# format where field 12 holds (part of) the user agent string.
+awk '{print $1, $12}' access.log | \
+  sort | \
+  uniq -c | \
+  sort -k2,2 -k1,1nr | \
+  awk 'BEGIN{last=""} {if ($2 != last) {print} last=$2}'
+```
+
+This advanced pipeline sorts by IP, then by count, and finally uses `awk` to pick the first (highest count) entry for each unique IP.
+
+## Best Practices
+
+  - **Develop Incrementally**: Build pipelines one command at a time. After adding a `|` and a new command, run it to see if the intermediate output is what you expect.
+  - **Filter Early**: Place `grep` or other filtering commands as early as possible in the pipeline. This reduces the amount of data that subsequent, potentially more expensive commands like `sort` have to process.
+  - **Use `set -o pipefail`**: In shell scripts, this option causes a pipeline to return a failure status if *any* command in the pipeline fails, not just the last one.
+  - **Prefer `awk` for Columns**: For tasks involving multiple columns, `awk` is generally more powerful, readable, and performant than a complex chain of `cut`, `paste`, and shell loops.
+  - **Beware of Locales**: The `sort` command's behavior is affected by the `LC_ALL` environment variable. For byte-wise sorting, use `LC_ALL=C sort`.
+
+## Common Pitfalls
+
+  - **Forgetting to Sort Before `uniq`**: `uniq` only operates on adjacent lines. If the data is not sorted, it will not produce correct counts.
+  - **Greedy Regular Expressions**: In a `grep` pattern, `.` matches any single character and `.*` matches greedily, so a loose pattern can match far more than intended. Be as specific as possible with your regex.
+  - **Shell Globbing vs. `grep` Regex**: The wildcards used by the shell (`*`, `?`) are different from those used in regular expressions (`.*`, `.`).
+  - **Word Splitting on Unquoted Variables**: When used in scripts, variables containing spaces can be split into multiple arguments if not quoted (`"my var"` vs `my var`).
+
+## Performance Considerations
+
+  - **I/O is King**: These tools are often I/O-bound. Reading from and writing to disk is the slowest part. Use pipelines to avoid creating intermediate files.
+  - **`awk` vs. `sed` vs. `grep`**: For simple filtering, `grep` is fastest. For simple substitutions, `sed` is fastest. For any field-based logic, `awk` is the right tool and is very fast, since the whole transformation runs in a single process.
+  - **GNU Parallel**: For tasks that can be broken into independent chunks (e.g., processing thousands of files), `GNU parallel` can be used to execute pipelines in parallel, dramatically speeding up the work on multi-core systems.
+
+## Integration Points
+
+  - **Shell Scripting**: These tools are the fundamental building blocks for automation and data processing scripts in `bash`, `zsh`, etc.
+  - **Data Ingestion Pipelines**: Unix tools are often used as the first step (the "T" in an ELT process) to clean, filter, and normalize raw log files before they are loaded into a database or data warehouse.
+  - **Other Languages**: Languages like Python (`subprocess`) and Go (`os/exec`) can invoke these command-line tools to leverage their performance and functionality without having to re-implement them.
+
+## Troubleshooting
+
+### Problem 1: Pipeline hangs or is extremely slow
+
+**Symptoms:** The command prompt doesn't return, and there's no output.
+**Solution:** This is often caused by a command like `sort` or another tool that needs to read all of its input before producing any output. It may be processing a massive amount of data.
+
+1.  Test your pipeline on a small subset of the data first using `head -n 1000`.
+2.  Use a tool like `pv` (pipe viewer) in the middle of your pipeline (`... | pv | ...`) to monitor the flow of data and see where it's getting stuck.
+
+### Problem 2: `xargs` fails on filenames with spaces
+
+**Symptoms:** An `xargs` command fails with "file not found" errors for files with spaces or special characters in their names.
+**Solution:** Use the "null-delimited" mode of `find` and `xargs`, which is designed to handle all possible characters in filenames safely.
+
+```bash
+# Wrong way, will fail on "file name with spaces.txt"
+find . -name "*.txt" | xargs rm
+
+# Correct, safe way
+find . -name "*.txt" -print0 | xargs -0 rm
+```
+
+## Examples in Context
+
+  - **DevOps/SRE**: Quickly grepping through gigabytes of Kubernetes logs to find error messages related to a specific request ID.
+  - **Bioinformatics**: Processing massive FASTA/FASTQ text files to filter, reformat, or extract sequence data.
+  - **Security Analysis**: Analyzing `auth.log` files to find failed login attempts, group them by IP, and identify brute-force attacks.
+
+## References
+
+  - [The GNU Coreutils Manual](https://www.gnu.org/software/coreutils/manual/coreutils.html)
+  - [The AWK Programming Language (Book by Aho, Kernighan, Weinberger)](https://archive.org/details/pdfy-MgN0H1joIoDVoIC7)
+  - [Greg's Wiki - Bash Pitfalls](https://mywiki.wooledge.org/BashPitfalls)
+
+## Related Topics
+
+  - Shell Scripting
+  - Regular Expressions (Regex)
+  - AWK Programming
+  - Data Wrangling
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/functional_programming.md b/bash/talk-to-computer/corpus/programming/functional_programming.md
new file mode 100644
index 0000000..2572442
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/functional_programming.md
@@ -0,0 +1,234 @@
+# Functional Programming - A paradigm for declarative, predictable code
+
+## Introduction
+
+**Functional Programming (FP)** is a programming paradigm where software is built by composing **pure functions**, avoiding shared state, mutable data, and side-effects. It treats computation as the evaluation of mathematical functions. Instead of describing *how* to achieve a result (imperative programming), you describe *what* the result is (declarative programming).
+
+This paradigm has gained significant traction because it helps manage the complexity of modern applications, especially those involving concurrency and complex state management. Programs written in a functional style are often easier to reason about, test, and debug.
+
+## Core Concepts
+
+### Pure Functions
+
+A function is **pure** if it adheres to two rules:
+
+1.  **The same input always returns the same output.** The function's return value depends solely on its input arguments.
+2.  **It produces no side effects.** A side effect is any interaction with the "outside world" from within the function. This includes modifying a global variable, changing an argument, logging to the console, or making a network request.
+
+```javascript
+// Pure function: predictable and testable
+const add = (a, b) => a + b;
+add(2, 3); // Always returns 5
+
+// Impure function: has a side effect (console.log)
+let count = 0;
+const incrementWithLog = () => {
+  count++; // And mutates external state
+  console.log(`The count is ${count}`);
+  return count;
+};
+```
+
+### Immutability
+
+**Immutability** means that data, once created, cannot be changed. If you need to modify a data structure (like an object or array), you create a new one with the updated values instead of altering the original. This prevents bugs caused by different parts of your application unexpectedly changing the same piece of data.
+
+```javascript
+// Bad: Mutating an object
+const user = { name: "Alice", age: 30 };
+const celebrateBirthdayMutable = (person) => {
+  person.age++; // This modifies the original user object
+  return person;
+};
+
+// Good: Returning a new object
+const celebrateBirthdayImmutable = (person) => {
+  return { ...person, age: person.age + 1 }; // Creates a new object
+};
+
+const newUser = celebrateBirthdayImmutable(user);
+// user is still { name: "Alice", age: 30 }
+// newUser is { name: "Alice", age: 31 }
+```
+
+### First-Class and Higher-Order Functions
+
+In FP, functions are **first-class citizens**. This means they can be treated like any other value:
+
+  * Assigned to variables
+  * Stored in data structures
+  * Passed as arguments to other functions
+  * Returned as values from other functions
+
+A function that either takes another function as an argument or returns a function is called a **Higher-Order Function**. Common examples are `map`, `filter`, and `reduce`.
+
+```javascript
+const numbers = [1, 2, 3, 4];
+const isEven = (n) => n % 2 === 0;
+const double = (n) => n * 2;
+
+// `filter` and `map` are Higher-Order Functions
+const evenDoubled = numbers.filter(isEven).map(double); // [4, 8]
+```
+
+## Key Principles
+
+  - **Declarative Style**: Focus on *what* the program should accomplish, not *how* it should accomplish it. An SQL query is a great example of a declarative style.
+  - **No Side Effects**: Isolate side effects from the core logic of your application. This makes your code more predictable.
+  - **Function Composition**: Build complex functionality by combining small, reusable functions.
+  - **Referential Transparency**: An expression can be replaced with its value without changing the behavior of the program. This is a natural outcome of using pure functions and immutable data.
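
Referential transparency can be demonstrated in a few lines: replacing a pure call with its value leaves the program's behavior unchanged.

```javascript
// A pure function: square(4) can always be replaced by 16.
const square = (n) => n * n;

const a = square(4) + square(4); // 32
const b = 16 + 16;               // Also 32: substituting values for calls
                                 // changes nothing observable.
console.log(a === b); // true
```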
+
+## Implementation/Usage
+
+The core idea is to create data transformation pipelines. You start with initial data and pass it through a series of functions to produce the final result.
+
+### Basic Example
+
+```javascript
+// A simple pipeline for processing a list of users
+const users = [
+  { name: "Alice", active: true, score: 90 },
+  { name: "Bob", active: false, score: 80 },
+  { name: "Charlie", active: true, score: 95 },
+];
+
+/**
+ * @param {object[]} users
+ * @returns {string[]}
+ */
+const getHighScoringActiveUserNames = (users) => {
+  return users
+    .filter((user) => user.active)
+    .filter((user) => user.score > 85)
+    .map((user) => user.name.toUpperCase());
+};
+
+console.log(getHighScoringActiveUserNames(users)); // ["ALICE", "CHARLIE"]
+```
+
+### Advanced Example
+
+A common advanced pattern is to use a reducer function to manage application state, a core concept in The Elm Architecture and libraries like Redux.
+
+```javascript
+// The state of our simple counter application
+const initialState = { count: 0 };
+
+// A pure function that describes how state changes in response to an action
+const counterReducer = (state, action) => {
+  switch (action.type) {
+    case 'INCREMENT':
+      return { ...state, count: state.count + 1 };
+    case 'DECREMENT':
+      return { ...state, count: state.count - 1 };
+    case 'RESET':
+      return { ...state, count: 0 };
+    default:
+      return state;
+  }
+};
+
+// Simulate dispatching actions
+let state = initialState;
+state = counterReducer(state, { type: 'INCREMENT' }); // { count: 1 }
+state = counterReducer(state, { type: 'INCREMENT' }); // { count: 2 }
+state = counterReducer(state, { type: 'DECREMENT' }); // { count: 1 }
+
+console.log(state); // { count: 1 }
+```
+
+## Common Patterns
+
+### Pattern 1: Functor
+
+A **Functor** is a design pattern for a data structure that can be "mapped over." It's a container that holds a value and has a `map` method for applying a function to that value without changing the container's structure. The most common example is the `Array`.
+
+```javascript
+// Array is a Functor because it has a .map() method
+const numbers = [1, 2, 3];
+const addOne = (n) => n + 1;
+const result = numbers.map(addOne); // [2, 3, 4]
+```
+
+### Pattern 2: Monad
+
+A **Monad** is a pattern for sequencing computations. Think of it as a "safer" functor that knows how to handle nested contexts or operations that can fail (like Promises or the `Maybe` type). `Promise` is a good practical example; its `.then()` method (or `flatMap`) lets you chain asynchronous operations together seamlessly.
+
+```javascript
+// Promise is a Monad, allowing chaining of async operations
+const fetchUser = (id) => Promise.resolve({ id, name: "Alice" });
+const fetchUserPosts = (user) => Promise.resolve([ { userId: user.id, title: "Post 1" } ]);
+
+fetchUser(1)
+  .then(fetchUserPosts) // .then acts like flatMap here
+  .then(posts => console.log(posts))
+  .catch(err => console.error(err));
+```
+
+## Best Practices
+
+  - **Keep Functions Small**: Each function should do one thing well.
+  - **Use Function Composition**: Use utilities like `pipe` or `compose` to build complex logic from simple building blocks.
+  - **Embrace Immutability**: Use `const` by default. Avoid reassigning variables. When updating objects or arrays, create new ones.
+  - **Isolate Impurity**: Side effects are necessary. Keep them at the boundaries of your application (e.g., in the function that handles an API call) and keep your core business logic pure.
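
A minimal sketch of the "isolate impurity" point, with a hypothetical `checkout` boundary function (the names are illustrative, not from any framework):

```javascript
// Pure core: business logic with no side effects, trivially testable.
const applyDiscount = (price, percent) => price * (1 - percent / 100);

// Impure boundary: the side effect (logging here, but it could be an
// API call) lives at the edge, wrapping the pure core.
const checkout = (price, percent) => {
  const total = applyDiscount(price, percent); // pure call
  console.log(`Charging ${total}`);            // side effect, isolated
  return total;
};

checkout(200, 10); // logs "Charging 180" and returns 180
```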
+
+## Common Pitfalls
+
+  - **Accidental Mutation**: JavaScript objects and arrays are passed by reference, making it easy to mutate them accidentally. Be vigilant, especially with nested data.
+  - **Over-Abstraction**: Don't use complex FP concepts like monad transformers if a simple function will do. Prioritize readability.
+  - **Performance Misconceptions**: While creating many short-lived objects can have a performance cost, modern JavaScript engines are highly optimized for this pattern. Don't prematurely optimize; measure first.
+
+## Performance Considerations
+
+  - **Object/Array Creation**: In performance-critical code (e.g., animations, large data processing), the overhead of creating new objects/arrays in a tight loop can be significant.
+  - **Structural Sharing**: Libraries like `Immer` and `Immutable.js` use a technique called structural sharing. When you "change" an immutable data structure, only the parts that changed are created anew; the rest of the structure points to the same old data, saving memory and CPU time.
+  - **Recursion**: Deep recursion can lead to stack overflow errors. While some languages support **Tail Call Optimization (TCO)** to prevent this, JavaScript engines have limited support. Prefer iteration for very large data sets.
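
To make the recursion point concrete, compare a recursive and an iterative sum; the exact depth at which the recursive version overflows is engine-dependent:

```javascript
// Recursive sum: elegant, but each call consumes a stack frame.
const sumRec = (n) => (n === 0 ? 0 : n + sumRec(n - 1));

// Iterative sum: constant stack space, safe for large n.
const sumIter = (n) => {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
};

console.log(sumRec(100));     // 5050
console.log(sumIter(100000)); // 5000050000; sumRec would likely overflow here
```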
+
+## Integration Points
+
+  - **UI Frameworks**: FP concepts are central to modern UI libraries. **React** encourages pure components and uses immutable state patterns with Hooks (`useState`, `useReducer`).
+  - **State Management**: Libraries like **Redux** and **Zustand** are built entirely on FP principles, particularly the use of pure reducer functions.
+  - **Data Processing**: FP is excellent for data transformation pipelines. It's often used in backend services for processing streams of data.
+  - **Utility Libraries**: Libraries like **Lodash/fp** and **Ramda** provide a rich toolkit of pre-built, curried, and pure functions for everyday tasks.
+
+## Troubleshooting
+
+### Problem 1: Debugging composed function pipelines
+
+**Symptoms:** A chain of `.map().filter().reduce()` produces an incorrect result, and it's hard to see where it went wrong.
+**Solution:** Break the chain apart. Log the intermediate result after each step to inspect the data as it flows through the pipeline.
+
+```javascript
+const result = users
+  .filter((user) => user.active)
+  // console.log('After active filter:', resultFromActiveFilter)
+  .filter((user) => user.score > 85)
+  // console.log('After score filter:', resultFromScoreFilter)
+  .map((user) => user.name.toUpperCase());
+```
+
+### Problem 2: State changes unexpectedly
+
+**Symptoms:** A piece of state (e.g., in a React component or Redux store) changes when it shouldn't have, leading to bugs or infinite re-renders.
+**Solution:** This is almost always due to accidental mutation. Audit your code to ensure you are not modifying state directly. Use the spread syntax (`...`) for objects and arrays (`[...arr, newItem]`) to create copies. Libraries like `Immer` can make this process safer and more concise.
+
+## Examples in Context
+
+  - **Frontend Web Development**: The **Elm Architecture** (Model, Update, View) is a purely functional pattern for building web apps. It has heavily influenced libraries like Redux.
+  - **Data Analysis**: Running a series of transformations on a large dataset to filter, shape, and aggregate it for a report.
+  - **Concurrency**: Handling multiple events or requests simultaneously without running into race conditions, because data is immutable and shared state is avoided.
+
+## References
+
+  - [MDN Web Docs: Functional Programming](https://developer.mozilla.org/en-US/docs/Glossary/Functional_programming)
+  - [Professor Frisby's Mostly Adequate Guide to Functional Programming](https://mostly-adequate.gitbook.io/mostly-adequate-guide/)
+  - [Ramda Documentation](https://ramdajs.com/docs/)
+
+## Related Topics
+
+  - Immutability
+  - Functional Reactive Programming (FRP)
+  - The Elm Architecture
+  - Algebraic Data Types
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/lil_guide.md b/bash/talk-to-computer/corpus/programming/lil_guide.md
new file mode 100644
index 0000000..72df8df
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/lil_guide.md
@@ -0,0 +1,277 @@
+# Multi-paradigm Programming with Lil - A Guide to Lil's Diverse Styles
+
+## Introduction
+
+Lil is a richly multi-paradigm scripting language designed for the Decker creative environment. It seamlessly blends concepts from **imperative**, **functional**, **declarative**, and **vector-oriented** programming languages. This flexibility allows developers to choose the most effective and ergonomic approach for a given task, whether it's managing application state, manipulating complex data structures, or performing efficient bulk computations. Understanding these paradigms is key to writing elegant, efficient, and idiomatic Lil code.
+
+## Core Concepts
+
+Lil's power comes from the way it integrates four distinct programming styles.
+
+### Imperative Programming
+
+This is the traditional, statement-by-statement style of programming. It involves creating variables, assigning values to them, and using loops and conditionals to control the flow of execution.
+
+  - **Assignment:** The colon (`:`) is used for assignment.
+  - **Control Flow:** Lil provides `if`/`elseif`/`else` for conditionals and `while` and `each` for loops.
+  - **State Management:** State is typically managed by assigning and re-assigning values to variables, often stored in the properties of Decker widgets between event handlers.
+
+```lil
+# Imperative approach to summing a list
+total: 0
+numbers: [10, 20, 30]
+each n in numbers do
+  total: total + n
+end
+# total is now 60
+```
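+
+The `while` loop mentioned above follows the same shape; a small sketch (values are illustrative):
+
+```lil
+# Imperative while loop: halve n until it reaches 1 or less
+n: 100
+steps: 0
+while n > 1 do
+  n: n / 2
+  steps: steps + 1
+end
+```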
+
+### Functional Programming
+
+The functional style emphasizes pure functions, immutability, and the composition of functions without side-effects.
+
+  - **Immutability:** All core data structures (lists, dictionaries, tables) have copy-on-write semantics. Modifying one does not alter the original value but instead returns a new, amended value.
+  - **First-Class Functions:** Functions are values that can be defined with `on`, assigned to variables, and passed as arguments to other functions.
+  - **Expressions over Statements:** Every statement in Lil is an expression that returns a value. An `if` block returns the value of its executed branch, and an `each` loop returns a new collection containing the result of each iteration.
+
+```lil
+# Functional approach using a higher-order function
+on twice f x do
+  f[f[x]]
+end
+
+on double x do
+  x * 2
+end
+
+result: twice[double 10] # result is 40
+```
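+
+Because every statement is an expression, the chosen branch of an `if` can be assigned directly; a small sketch:
+
+```lil
+score: 95
+# the executed branch becomes the value of the whole expression
+grade: if score > 90 then "A" else "B" end
+# grade is "A"
+```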
+
+### Declarative (Query-based) Programming
+
+For data manipulation, Lil provides a powerful declarative query engine that resembles SQL. Instead of describing *how* to loop through and filter data, you declare *what* data you want.
+
+  - **Queries:** Use `select`, `update`, and `extract` to query tables (and other collection types).
+  - **Clauses:** Filter, group, and sort data with `where`, `by`, and `orderby` clauses.
+  - **Readability:** Queries often result in more concise and readable code for data transformation tasks compared to imperative loops.
+
+```lil
+# Declarative query to find developers
+people: insert name age job with
+ "Alice"  25 "Development"
+ "Sam"    28 "Sales"
+ "Thomas" 40 "Development"
+end
+
+devs: select name from people where job="Development"
+# devs is now a table with the names "Alice" and "Thomas"
+```
+
+### Vector-Oriented Programming
+
+Influenced by languages like APL and K, this paradigm focuses on applying operations to entire arrays or lists (vectors) at once, a concept known as **conforming**.
+
+  - **Conforming Operators:** Standard arithmetic operators (`+`, `-`, `*`, `/`) work element-wise on lists.
+  - **Efficiency:** Vector operations are significantly more performant than writing equivalent imperative loops.
+  - **The `@` Operator:** The "apply" operator (`@`) can be used to apply a function to each element of a list or to select multiple elements from a list by index.
+
+```lil
+# Vector-oriented approach to add 10 to each number
+numbers: [10, 20, 30]
+result: numbers + 10 # result is [20, 30, 40]
+```
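+
+Per the description of `@` above, the same operator can also gather several elements by index; a small sketch:
+
+```lil
+letters: ["a", "b", "c", "d"]
+picked: letters @ [0, 2]
+# picked holds the elements at indices 0 and 2: ["a", "c"]
+```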
+
+-----
+
+## Key Principles
+
+  - **Right-to-Left Evaluation:** Expressions are evaluated from right to left unless overridden by parentheses `()`. This is a fundamental rule that affects how all expressions are composed.
+  - **Copy-on-Write Immutability:** Lists, Dictionaries, and Tables are immutable. Operations like `update` or indexed assignments on an expression `(foo)[1]:44` return a new value, leaving the original unchanged. Direct assignment `foo[1]:44` is required to modify the variable `foo` itself.
+  - **Data-Centric Design:** The language provides powerful, built-in tools for data manipulation, especially through its query engine and vector operations.
+  - **Lexical Scoping:** Variables are resolved based on their location in the code's structure. Functions "close over" variables from their containing scope, enabling patterns like counters and encapsulated state.
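+
+The copy-on-write rule above, shown with the two indexed-assignment forms it describes:
+
+```lil
+foo: [10, 20, 30]
+bar: (foo)[1]: 44   # amends a copy; bar gets the new value
+# foo is unchanged: [10, 20, 30]
+foo[1]: 44          # direct assignment modifies foo itself
+# foo is now [10, 44, 30]
+```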
+
+-----
+
+## Implementation/Usage
+
+The true power of Lil emerges when you mix these paradigms to solve problems cleanly and efficiently.
+
+### Basic Example
+
+Here, we combine an imperative loop with a vector-oriented operation to process a list of lists.
+
+```lil
+# Calculate the magnitude of several 2D vectors
+vectors: [[3,4], [5,12], [8,15]]
+magnitudes: []
+
+# Imperative loop over the list of vectors
+each v in vectors do
+  # mag is a vector-oriented unary operator
+  magnitudes: magnitudes & [mag v]
+end
+
+# magnitudes is now [5, 13, 17]
+```
+
+### Advanced Example
+
+This example defines a functional-style utility function (`avg`) and uses it within a declarative query to summarize data, an approach common in data analysis.
+
+```lil
+# Functional helper function
+on avg x do
+  (sum x) / count x
+end
+
+# A table of sales data
+sales: insert product category price with
+ "Apple"  "Fruit"  0.5
+ "Banana" "Fruit"  0.4
+ "Bread"  "Grain"  2.5
+ "Rice"   "Grain"  3.0
+end
+
+# Declarative query that uses the functional helper
+avgPriceByCategory: select category:first category avg_price:avg[price] by category from sales
+
+# avgPriceByCategory is now:
+# +----------+-----------+
+# | category | avg_price |
+# +----------+-----------+
+# | "Fruit"  | 0.45      |
+# | "Grain"  | 2.75      |
+# +----------+-----------+
+```
+
+-----
+
+## Common Patterns
+
+### Pattern 1: Query over Loop
+
+Instead of manually iterating with `each` to filter or transform a collection, use a declarative `select` or `extract` query. This is more concise, often faster, and less error-prone.
+
+```lil
+# Instead of this imperative loop...
+high_scores: []
+scores: [88, 95, 72, 100, 91]
+each s in scores do
+  if s > 90 then
+    high_scores: high_scores & [s]
+  end
+end
+
+# ...use a declarative query.
+high_scores: extract value where value > 90 from scores
+# high_scores is now [95, 100, 91]
+```
+
+### Pattern 2: Function Application with `@`
+
+For simple element-wise transformations on a list, using the `@` operator with a function is cleaner than writing an `each` loop.
+
+```lil
+# Instead of this...
+names: ["alice", "bob", "charlie"]
+capitalized: []
+# stand-in "capitalize": rejoins first char and rest (real case conversion omitted for brevity)
+on capitalize s do first s & (1 drop s) end
+each n in names do
+  capitalized: capitalized & [capitalize n]
+end
+
+# ...use the more functional and concise @ operator.
+on capitalize s do first s & (1 drop s) end
+capitalized: capitalize @ names
+# capitalized now holds capitalize applied to each name
+```
+
+-----
+
+## Best Practices
+
+  - **Embrace Queries:** For any non-trivial data filtering, grouping, or transformation, reach for the query engine first.
+  - **Use Vector Operations:** When performing arithmetic or logical operations on lists, use conforming operators (`+`, `<`, `=`) instead of loops for better performance and clarity.
+  - **Distinguish Equality:** Use the conforming equals `=` within query expressions. Use the non-conforming match `~` in `if` or `while` conditions to avoid accidentally getting a list result.
+  - **Encapsulate with Functions:** Use functions to create reusable components and manage scope, especially for complex logic within Decker event handlers.
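+
+The equality distinction above in a small sketch:
+
+```lil
+a: [1, 2, 3]
+b: [1, 9, 3]
+a = b   # conforming: element-wise result [1, 0, 1]
+a ~ b   # non-conforming match: a single 0, since the lists differ
+```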
+
+-----
+
+## Common Pitfalls
+
+  - **Right-to-Left Confusion:** Forgetting that `3*2+5` evaluates to `21`, not `11`. Use parentheses `(3*2)+5` to enforce the desired order of operations.
+  - **Expecting Mutation:** Believing that `update ... from my_table` changes `my_table`. It returns a *new* table. You must reassign it: `my_table: update ... from my_table`.
+  - **Comma as Argument Separator:** Writing `myfunc[arg1, arg2]`. This creates a list of two items and passes it as a single argument. The correct syntax is `myfunc[arg1 arg2]`.
+  - **Using `=` in `if`:** Writing `if some_list = some_value` can produce a list of `0`s and `1`s. An empty list `()` is falsey, but a list like `[0,0]` is truthy. Use `~` for a single boolean result in control flow.
+
+-----
+
+## Performance Considerations
+
+Vector-oriented algorithms are significantly faster and more memory-efficient than their imperative, element-by-element counterparts. The Lil interpreter is optimized for these bulk operations. For example, replacing values in a list using a calculated mask is preferable to an `each` loop with a conditional inside.
+
+```lil
+# Slow, iterative approach
+x: [1, 10, 2, 20, 3, 30]
+result: each v in x do
+  if v < 5 then 99 else v end
+end
+
+# Fast, vector-oriented approach
+mask: x < 5                 # results in [1,0,1,0,1,0]
+result: (99 * mask) + (x * !mask)
+```
+
+-----
+
+## Integration Points
+
+The primary integration point for Lil is **Decker**. Lil scripts are attached to Decker widgets, cards, and the deck itself to respond to user events (`on click`, `on keydown`, etc.). All paradigms are useful within Decker:
+
+  - **Imperative:** To sequence actions, like showing a dialog and then navigating to another card.
+  - **Declarative:** To query data stored in a `grid` widget or to find specific cards in the deck, e.g., `extract value where value..widgets.visited.value from deck.cards`.
+  - **Functional/Vector:** To process data before displaying it, without needing slow loops.
+
+-----
+
+## Troubleshooting
+
+### Problem 1: An `if` statement behaves unpredictably with list comparisons.
+
+  - **Symptoms:** An `if` block either never runs or always runs when comparing a value against a list.
+  - **Solution:** You are likely using the conforming equals operator (`=`), which returns a list of boolean results. In a conditional, you almost always want the non-conforming match operator (`~`), which returns a single `1` or `0`.
+
+### Problem 2: A recursive function crashes with a stack overflow on large inputs.
+
+  - **Symptoms:** The script terminates unexpectedly when a recursive function is called with a large number or deep data structure.
+  - **Solution:** Lil supports **tail-call elimination**. Ensure your recursive call is the very last operation performed in the function. If it's part of a larger expression (e.g., `1 + my_func[...]`), it is not in a tail position. Rewrite the function to accumulate its result in an argument.
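+
+A sketch of such a rewrite (function names are illustrative):
+
+```lil
+# NOT a tail call: the + runs after the recursive call returns
+on sum_to n do
+  if n < 1 then 0 else n + sum_to[n - 1] end
+end
+
+# Tail call: the recursive call is the last operation performed;
+# the running total is accumulated in the second argument
+on sum_acc n acc do
+  if n < 1 then acc else sum_acc[(n - 1) (acc + n)] end
+end
+```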
+
+-----
+
+## Examples in Context
+
+**Use Case: A Simple To-Do List in Decker**
+
+Imagine a Decker card with a `grid` widget named "tasks" (with columns "desc" and "done") and a `field` widget named "summary".
+
+```lil
+# In the script of the "tasks" grid, to update when it's changed:
+on change do
+  # Use a DECLARATIVE query to get the done/total counts.
+  # The query source is "me.value", the table in the grid.
+  stats: extract done:sum done total:count done from me.value
+
+  # Use IMPERATIVE assignment to update the summary field.
+  summary.text: "%d of %d tasks complete." format stats.done,stats.total
+end
+```
+
+This tiny script uses a declarative query to read the state and an imperative command to update the UI, demonstrating a practical mix of paradigms.