Global

Members

(constant) callStackTracker

Tracks function calls to help identify infinite recursion and deep call stacks that cause stack overflow errors, which is essential for debugging the interpreter's recursive evaluation of AST nodes. The tracker maintains a stack of function calls with timestamps and context information, counts calls to identify hot paths, and detects potential infinite recursion by monitoring stack depth. It is particularly important for the combinator-based architecture, where function calls are the primary execution mechanism and nested expressions can produce deep call stacks: it reveals when combinator translation creates unexpectedly deep call chains, enabling optimization of function composition and application patterns. The detailed statistics it reports help developers understand the execution characteristics of their code and spot performance bottlenecks in combinator evaluation.
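A minimal sketch of what such a tracker can look like. All names, the depth limit, and the statistics shape below are assumptions for illustration, not the actual implementation:

```javascript
// Hypothetical call-stack tracker: records a stack of calls with
// timestamps/context, counts calls per function name, and throws
// when depth exceeds a limit (assumed limit of 1000).
const callStackTracker = {
  stack: [],
  callCounts: new Map(),
  maxDepth: 1000,
  push(name, context = "") {
    this.stack.push({ name, context, time: Date.now() });
    this.callCounts.set(name, (this.callCounts.get(name) || 0) + 1);
    if (this.stack.length > this.maxDepth) {
      throw new Error(`Potential infinite recursion: depth ${this.stack.length} in ${name}`);
    }
  },
  pop() {
    return this.stack.pop();
  },
  stats() {
    return { depth: this.stack.length, counts: Object.fromEntries(this.callCounts) };
  },
};
```

The interpreter would call `push` on entry to each recursive evaluation and `pop` on exit, then read `stats()` after execution to report hot paths.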

Methods

debugError(message, [error])

Logs debug error messages to the console when the DEBUG environment variable is set, providing verbose error output during development while remaining silent in production. This gating makes it easy to trace execution and diagnose issues without cluttering normal output. The function is particularly useful for debugging parsing and evaluation errors, providing detailed context about where and why errors occur in the language execution pipeline.
Parameters:
Name Type Attributes Default Description
message string Debug error message to log
error Error <optional> null Error object to log

debugLog(message, [data])

Logs debug messages to the console when the DEBUG environment variable is set, providing verbose output during development while remaining silent in production. This gating makes it easy to trace execution and diagnose issues without cluttering normal output. The function is essential for debugging the combinator-based architecture, allowing developers to trace how operators are translated to function calls and how the interpreter executes those calls through the standard library. It is designed to be lightweight and safe to call frequently, making it suitable for tracing execution flow through nested expressions and function applications.
Parameters:
Name Type Attributes Default Description
message string Debug message to log
data * <optional> null Data to log with the message
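The gating described for debugLog and debugError can be sketched as follows. This is a minimal illustration assuming `process.env.DEBUG` is the only switch; the real module may format output differently:

```javascript
// DEBUG-gated logging: verbose in development, silent in production.
function debugLog(message, data = null) {
  if (process.env.DEBUG) {
    if (data !== null) {
      console.log(`[DEBUG] ${message}`, data);
    } else {
      console.log(`[DEBUG] ${message}`);
    }
  }
}

function debugError(message, error = null) {
  if (process.env.DEBUG) {
    if (error !== null) {
      console.error(`[DEBUG ERROR] ${message}`, error);
    } else {
      console.error(`[DEBUG ERROR] ${message}`);
    }
  }
}

// Silent unless DEBUG is set in the environment:
debugLog("tokenizing input");
```

Because the check is a single truthy test on an environment variable, these calls are cheap enough to leave in hot evaluation paths.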

(async) executeFile(filePath) → {Promise.<*>}

Main entry point for file execution. Orchestrates the complete language pipeline:

1. Reads the source file using cross-platform I/O utilities
2. Tokenizes the source code using the lexer
3. Parses the tokens into an AST using the combinator-based parser
4. Interprets the AST using the combinator-based interpreter

The function provides error handling and debug output at each stage for transparency and troubleshooting, and manages the call stack tracker to provide execution statistics and detect potential issues. It supports both synchronous and asynchronous execution, with proper error handling and process exit codes. It also enforces the .txt file extension requirement and reports call stack statistics alongside errors to help developers understand execution behavior and diagnose issues.
Parameters:
Name Type Description
filePath string Path to the file to execute
Throws:
For file reading, parsing, or execution errors
Type
Error
Returns:
Type:
Promise.<*>
The result of executing the file
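The four pipeline stages can be sketched as a single orchestration function. This assumes `readFile`, `lexer`, `parser`, and `interpreter` (documented elsewhere on this page) are in scope; the error message text is illustrative:

```javascript
// Sketch of the executeFile pipeline: read, tokenize, parse, interpret.
async function executeFile(filePath) {
  if (!filePath.endsWith(".txt")) {
    throw new Error("Only .txt files are supported"); // extension enforcement
  }
  const source = await readFile(filePath); // 1. read source (cross-platform)
  const tokens = lexer(source);            // 2. lexical analysis
  const ast = parser(tokens);              // 3. combinator-based parsing
  return interpreter(ast);                 // 4. evaluation
}
```

The real function additionally wires in debug output and call stack statistics at each stage.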

initializeStandardLibrary(scope)

Injects higher-order functions and combinator functions into the interpreter's global scope. These functions provide functional programming utilities and implement the combinator foundation that reduces parsing ambiguity by translating all operations to function calls. The standard library includes:

- Higher-order functions (map, compose, pipe, apply, filter, reduce, fold, curry)
- Arithmetic combinators (add, subtract, multiply, divide, modulo, power, negate)
- Comparison combinators (equals, notEquals, lessThan, greaterThan, lessEqual, greaterEqual)
- Logical combinators (logicalAnd, logicalOr, logicalXor, logicalNot)
- Enhanced combinators (identity, constant, flip, on, both, either)

This approach lets user code access these functions as if they were built-in, without special syntax or reserved keywords, and allows the parser to translate all operators into function calls, eliminating ambiguity while preserving syntax. Because the language is dynamically typed and does not enforce arity or types at parse time, each function checks argument types at runtime and reports clear error messages that explain what went wrong and how to fix it. Every function is designed to support partial application, enabling currying patterns and function composition while maintaining simplicity and consistency across all operations.
Parameters:
Name Type Description
scope Object The global scope object to inject functions into
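A minimal sketch of the injection pattern, showing a few combinators with partial application. Only the combinator names come from the list above; the implementations are assumptions:

```javascript
// Inject curried combinators into a plain scope object.
function initializeStandardLibrary(scope) {
  // Arithmetic combinators support partial application:
  scope.add = (x, y) => (y === undefined ? (z) => x + z : x + y);
  scope.multiply = (x, y) => (y === undefined ? (z) => x * z : x * y);
  // Enhanced combinators:
  scope.identity = (x) => x;
  scope.constant = (x) => () => x;
  scope.flip = (f) => (a, b) => f(b, a);
  // Higher-order utilities:
  scope.compose = (f, g) => (x) => f(g(x));
}

const scope = {};
initializeStandardLibrary(scope);
scope.add(2, 3);   // full application
scope.add(2)(3);   // partial application, then completion
```

Injecting into a scope object rather than defining globals means user code sees the combinators through normal variable lookup, with no reserved keywords.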

interpreter(ast, [environment], [initialState]) → {*}

Evaluates an AST by walking each node and performing the corresponding operations, managing scope, handling function calls, and supporting both synchronous and asynchronous operations. The interpreter implements a combinator-based architecture in which all operations are executed through function calls to standard library combinators: the parser translates every operator (+, -, *, /, etc.) into a FunctionCall node that references a combinator function, reducing parsing ambiguity while preserving intuitive syntax and ensuring consistent semantics across all operations.

Key architectural features:

- Combinator foundation: all operations are function calls to standard library combinators
- Scope management: prototypal inheritance for variable lookup and function definitions
- Forward declaration: recursive functions are supported through placeholder creation
- Error handling: comprehensive error detection and reporting with call stack tracking
- Debug support: optional debug mode for development and troubleshooting
- IO operations: input/output through an environment interface

The interpreter still processes legacy operator expressions (PlusExpression, MinusExpression, etc.) for backward compatibility, but the parser now generates FunctionCall nodes for all operators, which are handled by the standard library combinator functions. All operations therefore follow the same execution model and can be extended by adding new combinator functions to the standard library.

The interpreter uses a global scope for variable storage and function definitions. Each function call creates a new scope (via prototypal inheritance) to implement lexical scoping, and immutability is enforced by preventing reassignment in the global scope. Evaluation is split across three functions: evalNode (global), localEvalNodeWithScope (for function bodies), and localEvalNode (for internal recursion); this separation allows correct scope handling and easier debugging.

Recursive functions use a forward declaration pattern: a placeholder is created in the global scope before evaluation, allowing a function body to reference itself during evaluation. IO operations such as input and output can return Promises, enabling non-blocking execution when interacting with external systems or user input.
Parameters:
Name Type Attributes Default Description
ast ASTNode Abstract Syntax Tree to evaluate
environment Environment <optional> null External environment for IO operations
initialState Object <optional> {} Initial state for the interpreter
Throws:
For evaluation errors like division by zero, undefined variables, etc.
Type
Error
Returns:
Type:
*
The result of evaluating the AST, or a Promise for async operations
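The prototypal scope chain and the forward declaration pattern described above can be sketched in a few lines. The helper name `callWithScope` and the `fact` example are illustrative assumptions:

```javascript
// Lexical scoping via the prototype chain: a call's local scope is a
// child object of its parent scope, so lookups fall through to outer scopes.
const globalScope = { add: (x, y) => x + y };

function callWithScope(params, args, parent) {
  const local = Object.create(parent); // child scope inherits from parent
  params.forEach((p, i) => { local[p] = args[i]; });
  return local;
}

// Forward declaration pattern for recursion: a placeholder entry exists
// before the body is evaluated, so the body can reference itself by name.
globalScope.fact = undefined; // placeholder created first
globalScope.fact = (n) => (n <= 1 ? 1 : n * globalScope.fact(n - 1));

const local = callWithScope(["x"], [10], globalScope);
local.x;         // found in the local scope
local.add(1, 2); // found via the prototype chain
```

Because local bindings live on the child object, they never leak into the global scope, which is how reassignment in the global scope can be prevented independently.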

lexer(input) → {Array.<Token>}

The lexer performs lexical analysis by converting source code into a stream of tokens, each representing a meaningful unit of the language syntax such as an identifier, literal, operator, or keyword. It scans character by character with lookahead for multi-character tokens, and maintains line and column information for accurate error reporting and debugging.

Key features:

- Handles whitespace and comments (single-line and multi-line)
- Recognizes all language constructs, including operators, keywords, and literals
- Supports string literals with escape sequences
- Provides detailed position information for error reporting
- Cross-platform compatibility (Node.js, Bun, browser)
- Supports function composition with the 'via' keyword
- Handles function references with the '@' operator

As the first step in the language pipeline, the lexer must correctly identify every token the parser will later translate into function calls: operators that become combinator calls, function references that enable higher-order programming, and the keywords that support the functional paradigm. It uses a state machine approach in which each character type triggers a different scanning strategy, keeping token types cleanly separated while enabling efficient tokenization. Error messages include line and column information, so users can quickly locate and fix syntax errors in malformed input.
Parameters:
Name Type Description
input string The source code to tokenize
Throws:
For unexpected characters or malformed tokens
Type
Error
Returns:
Type:
Array.<Token>
Array of token objects with type, value, line, and column
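The character-by-character scanning with line/column bookkeeping can be illustrated with a toy scanner. This is not the real lexer; it handles only numbers and '+' to show the mechanics:

```javascript
// Toy lexer: one pass over the input, tracking line and column so that
// every token (and every error) carries a precise source position.
function toyLexer(input) {
  const tokens = [];
  let pos = 0, line = 1, column = 1;
  while (pos < input.length) {
    const ch = input[pos];
    if (ch === "\n") { line++; column = 1; pos++; continue; }
    if (ch === " ") { pos++; column++; continue; }
    if (ch >= "0" && ch <= "9") {
      const start = pos, startCol = column;
      while (pos < input.length && input[pos] >= "0" && input[pos] <= "9") {
        pos++; column++;
      }
      tokens.push({ type: "NUMBER", value: Number(input.slice(start, pos)), line, column: startCol });
      continue;
    }
    if (ch === "+") {
      tokens.push({ type: "PLUS", line, column });
      pos++; column++;
      continue;
    }
    throw new Error(`Unexpected character '${ch}' at line ${line}, column ${column}`);
  }
  return tokens;
}

toyLexer("12 + 3"); // NUMBER(12), PLUS, NUMBER(3), each with line/column
```

The real lexer follows the same shape but with many more branches (strings with escapes, comments, multi-character operators, keywords, '@' references, 'via').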

(async) main()

Processes command line arguments and executes the specified file. The language is designed for file execution only (no REPL), so the CLI enforces this usage and provides helpful error messages for incorrect invocation. The function validates that exactly one file path is provided and that the file has the .txt extension, and exits with appropriate error codes for different failure scenarios.
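The argument validation can be sketched as below. The helper `validateArgs`, the messages, and the exit code are assumptions; only the rules (exactly one path, .txt extension) come from the description:

```javascript
// Hypothetical validation helper extracted for testability.
function validateArgs(args) {
  if (args.length !== 1) {
    return { ok: false, error: "Usage: <runtime> script.txt" };
  }
  if (!args[0].endsWith(".txt")) {
    return { ok: false, error: "Error: script files must use the .txt extension" };
  }
  return { ok: true, filePath: args[0] };
}

// Sketch of main(): validate argv, then hand off to executeFile
// (assumed to be in scope).
async function main() {
  const check = validateArgs(process.argv.slice(2));
  if (!check.ok) {
    console.error(check.error);
    process.exit(1);
  }
  return executeFile(check.filePath);
}
```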

parser(tokens) → {ASTNode}

The parser implements a combinator-based architecture where all operators are translated into calls to standard library combinators, reducing parsing ambiguity while preserving the original syntax. It uses recursive descent with proper operator precedence handling: each operator expression (e.g., x + y) is translated into a FunctionCall node (e.g., add(x, y)) that the interpreter executes via the corresponding combinator function.

Key architectural decisions:

- All operators become FunctionCall nodes, eliminating ambiguity
- Operator precedence is handled through recursive parsing functions
- Function calls are detected by looking for identifiers followed by expressions
- When expressions and case patterns are parsed with special handling
- Table literals and table access are parsed as structured data
- Function composition uses the 'via' keyword with right-associative precedence
- Function application uses juxtaposition with left-associative precedence

The parser maintains a current token index and advances through the token stream, building the AST bottom-up from primary expressions to logical expressions. Each parsing function handles a specific precedence level, so precedence is enforced correctly while different language constructs remain cleanly separated. Representing every operation as a function call reduces special-case operator handling in the interpreter and makes the execution model extensible: new operations only require new combinator functions. Error messages include context about what was expected and what was found, so users can quickly identify and fix parsing errors in their code.
Parameters:
Name Type Description
tokens Array.<Token> Array of tokens from the lexer
Throws:
For parsing errors like unexpected tokens or missing delimiters
Type
Error
Returns:
Type:
ASTNode
Abstract Syntax Tree with program body
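The operator-to-FunctionCall translation with precedence can be shown in a toy recursive descent parser. The token shape and combinator names (add, multiply) follow this page; everything else is an illustrative assumption:

```javascript
// Toy parser over a token array: "1 + 2 * 3" becomes
// add(1, multiply(2, 3)) because parseTerm binds tighter than parseExpr.
function toyParser(tokens) {
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function parsePrimary() {
    const t = next();
    if (t.type === "NUMBER") return { type: "NumberLiteral", value: t.value };
    if (t.type === "IDENTIFIER") return { type: "Identifier", name: t.name };
    throw new Error(`Unexpected token ${t.type}`);
  }
  function parseTerm() { // higher precedence level: multiplication
    let left = parsePrimary();
    while (peek() && peek().type === "MULTIPLY") {
      next();
      left = { type: "FunctionCall", name: "multiply", args: [left, parsePrimary()] };
    }
    return left;
  }
  function parseExpr() { // lower precedence level: addition
    let left = parseTerm();
    while (peek() && peek().type === "PLUS") {
      next();
      left = { type: "FunctionCall", name: "add", args: [left, parseTerm()] };
    }
    return left;
  }
  return parseExpr();
}
```

Each precedence level is its own function and delegates to the next tighter level, which is exactly how the full parser layers primary expressions up to logical expressions.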

(async) readFile(filePath) → {Promise.<string>}

Handles file reading across platforms (Node.js, Bun, browser) with appropriate fallbacks for each environment. This function underpins the language's file execution model, where scripts are loaded from .txt files. It prioritizes ES modules compatibility by using dynamic import, falling back to require for older Node.js versions; browser environments are not supported for file I/O operations. This cross-platform approach lets the language run in various JavaScript environments with consistent behavior, supporting the development workflow where tests and examples are stored as .txt files.
Parameters:
Name Type Description
filePath string Path to the file to read
Throws:
For file reading errors
Type
Error
Returns:
Type:
Promise.<string>
File contents as a string
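The dynamic-import-first strategy can be sketched as follows. This is an assumed shape, not the actual implementation; the error message text is illustrative:

```javascript
// Cross-platform read: dynamic import of node:fs/promises first,
// require() as a fallback, explicit error in browsers.
async function readFile(filePath) {
  if (typeof window !== "undefined") {
    throw new Error("File I/O is not supported in browser environments");
  }
  try {
    const fs = await import("node:fs/promises"); // ES modules path
    return await fs.readFile(filePath, "utf8");
  } catch (err) {
    // Fallback for environments without dynamic import support:
    const fs = require("fs");
    return fs.readFileSync(filePath, "utf8");
  }
}
```

Returning a Promise in all cases keeps the call site (`await readFile(...)` in executeFile) uniform regardless of which branch ran.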

run(scriptContent, [initialState], [environment]) → {*}

Parses and executes a script using the combinator-based language, orchestrating the entire pipeline from source code to final result:

1. Tokenize the source code using the lexer
2. Parse the tokens into an AST using the parser
3. Evaluate the AST using the interpreter
4. Return the final result

This is the primary interface for executing scripts. It supports both synchronous and asynchronous execution: when the script contains IO operations that return Promises, the function returns a Promise that resolves to the final result, enabling non-blocking execution for interactive programs. Errors from any stage of the pipeline (lexing, parsing, or evaluation) are caught and re-thrown with appropriate context, so users get meaningful messages that help them locate and fix issues in their code. The function is stateless: each call creates a fresh interpreter instance, so scripts don't interfere with each other and multiple scripts can safely run concurrently.
Parameters:
Name Type Attributes Default Description
scriptContent string The script content to execute
initialState Object <optional> {} Initial state for the interpreter
environment Environment <optional> null Environment for IO operations
Throws:
For parsing or evaluation errors
Type
Error
Returns:
Type:
*
The result of executing the script
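The orchestration and error re-wrapping can be sketched end to end. Only the stage order and error-context behavior come from the description; `lexer`, `parser`, and `interpreter` are assumed to be in scope, and the message format is illustrative:

```javascript
// run(): tokenize, parse, evaluate, re-throwing any stage error
// with added context.
function run(scriptContent, initialState = {}, environment = null) {
  try {
    const tokens = lexer(scriptContent);                  // 1. tokenize
    const ast = parser(tokens);                           // 2. parse
    return interpreter(ast, environment, initialState);   // 3. evaluate
  } catch (err) {
    throw new Error(`run failed: ${err.message}`);        // add context
  }
}
```

Because nothing persists between calls, two invocations of `run` never share interpreter state.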

Type Definitions

ASTNode

AST node types for the language
Properties:
Name Type Attributes Description
type string The node type identifier
value * <optional>
Node value (for literals)
name string <optional>
Identifier name (for identifiers)
body Array.<ASTNode> <optional>
Program or function body
args Array.<ASTNode> <optional>
Function call arguments
params Array.<string> <optional>
Function parameters
parameters Array.<string> <optional>
Function parameters (alternative)
left ASTNode <optional>
Left operand (for binary expressions)
right ASTNode <optional>
Right operand (for binary expressions)
operand ASTNode <optional>
Operand (for unary expressions)
table ASTNode <optional>
Table expression (for table access)
key ASTNode <optional>
Key expression (for table access)
entries Array.<Object> <optional>
Table entries (for table literals)
cases Array.<ASTNode> <optional>
When expression cases
pattern Array.<ASTNode> <optional>
Pattern matching patterns
result Array.<ASTNode> <optional>
Pattern matching results
value ASTNode <optional>
When expression value
Type:
  • Object
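As a concrete example of this node shape, here is a hypothetical AST for the source `x + y` after the parser's operator translation (the exact `type` strings are assumptions consistent with the property table above):

```javascript
// "x + y" is represented as a call to the add combinator,
// not as a dedicated binary-expression node.
const astForXPlusY = {
  type: "FunctionCall",
  name: "add",
  args: [
    { type: "Identifier", name: "x" },
    { type: "Identifier", name: "y" },
  ],
};
```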

Environment

Environment interface for external system integration
Properties:
Name Type Description
getCurrentState function Returns the current state from external system
emitValue function Sends a value to the external system
Type:
  • Object
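A minimal Environment implementation satisfying this interface might look like the following. The factory name and the `emitted` inspection array are assumptions; only `getCurrentState` and `emitValue` come from the property table:

```javascript
// State lives in a closure; emitted values are collected so a host
// application can observe what the script produced.
function makeEnvironment(initialState = {}) {
  let state = initialState;
  const emitted = [];
  return {
    getCurrentState: () => state,                       // required by interface
    emitValue: (v) => { emitted.push(v); return v; },   // required by interface
    emitted,                                            // extra, for inspection
  };
}

const env = makeEnvironment({ count: 0 });
env.emitValue("hello");
```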

Token

Token object structure
Properties:
Name Type Attributes Description
type string The token type from TokenType enum
value * <optional>
The token's value (for literals and identifiers)
name string <optional>
Function name (for FUNCTION_REF tokens)
line number Line number where token appears (1-indexed)
column number Column number where token appears (1-indexed)
Type:
  • Object
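Two example token objects matching this structure (the specific values are illustrative):

```javascript
// A literal token carries value; a FUNCTION_REF token carries name.
// Both carry 1-indexed line and column positions.
const numberToken = { type: "NUMBER", value: 42, line: 1, column: 5 };
const refToken = { type: "FUNCTION_REF", name: "add", line: 2, column: 1 };
```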

TokenType

Defines all token types used by the lexer and parser; each represents a distinct syntactic element in the language. The token types fall into categories:

- Literals: NUMBER, STRING, TRUE, FALSE
- Operators: PLUS, MINUS, MULTIPLY, DIVIDE, MODULO, POWER, etc.
- Keywords: WHEN, IS, THEN, FUNCTION, etc.
- Punctuation: LEFT_PAREN, RIGHT_PAREN, SEMICOLON, COMMA, etc.
- Special: IO_IN, IO_OUT, IO_ASSERT, IO_LISTEN, IO_EMIT, FUNCTION_REF, FUNCTION_ARG

This enumeration provides a centralized definition of all possible token types, ensuring consistency between lexer and parser. The token types are designed to support the combinator-based architecture in which all operations are translated to function calls.
Properties:
Name Type Description
NUMBER string Numeric literals (integers and floats)
PLUS string Addition operator (+)
MINUS string Subtraction operator (-)
MULTIPLY string Multiplication operator (*)
DIVIDE string Division operator (/)
IDENTIFIER string Variable names and function names
ASSIGNMENT string Assignment operator (:)
ARROW string Function arrow (->)
CASE string Case keyword
OF string Of keyword
WHEN string When keyword for pattern matching
IS string Is keyword for pattern matching
THEN string Then keyword for pattern matching
WILDCARD string Wildcard pattern (_)
FUNCTION string Function keyword
LEFT_PAREN string Left parenthesis (()
RIGHT_PAREN string Right parenthesis ())
LEFT_BRACE string Left brace ({)
RIGHT_BRACE string Right brace (})
LEFT_BRACKET string Left bracket ([)
RIGHT_BRACKET string Right bracket (])
SEMICOLON string Semicolon (;)
COMMA string Comma (,)
DOT string Dot (.)
STRING string String literals
TRUE string Boolean true literal
FALSE string Boolean false literal
AND string Logical AND operator
OR string Logical OR operator
XOR string Logical XOR operator
NOT string Logical NOT operator
EQUALS string Equality operator (==)
LESS_THAN string Less than operator (<)
GREATER_THAN string Greater than operator (>)
LESS_EQUAL string Less than or equal operator (<=)
GREATER_EQUAL string Greater than or equal operator (>=)
NOT_EQUAL string Not equal operator (!=)
MODULO string Modulo operator (%)
POWER string Power operator (^)
IO_IN string Input operation (..in)
IO_OUT string Output operation (..out)
IO_ASSERT string Assertion operation (..assert)
IO_LISTEN string Listen operation (..listen)
IO_EMIT string Emit operation (..emit)
FUNCTION_REF string Function reference (@function)
FUNCTION_ARG string Function argument (@(expression))
COMPOSE string Function composition (via)
Type:
  • Object
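Such an enumeration is commonly written as a frozen string map in JavaScript. A sketch with a subset of the members listed above (the frozen-object pattern is an assumption; the real module may differ):

```javascript
// Centralized token-type enum; freezing prevents accidental mutation,
// so lexer and parser always agree on the same set of type strings.
const TokenType = Object.freeze({
  NUMBER: "NUMBER",
  STRING: "STRING",
  PLUS: "PLUS",
  MINUS: "MINUS",
  IDENTIFIER: "IDENTIFIER",
  ARROW: "ARROW",
  WHEN: "WHEN",
  IS: "IS",
  COMPOSE: "COMPOSE",
  FUNCTION_REF: "FUNCTION_REF",
  // ...remaining members follow the same name-equals-value pattern
});
```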