path: root/bash/talk-to-computer
Diffstat (limited to 'bash/talk-to-computer')
-rw-r--r--  bash/talk-to-computer/README.md                                        876
-rwxr-xr-x  bash/talk-to-computer/classifier.sh                                    281
-rwxr-xr-x  bash/talk-to-computer/common.sh                                        234
-rwxr-xr-x  bash/talk-to-computer/computer                                         477
-rwxr-xr-x  bash/talk-to-computer/config.sh                                        126
-rwxr-xr-x  bash/talk-to-computer/consensus                                        388
-rw-r--r--  bash/talk-to-computer/corpus/.file_processors                            3
-rw-r--r--  bash/talk-to-computer/corpus/.topic_keywords                             6
-rw-r--r--  bash/talk-to-computer/corpus/README.md                                 236
-rw-r--r--  bash/talk-to-computer/corpus/corpus_registry.txt                         9
-rw-r--r--  bash/talk-to-computer/corpus/corpus_registry.txt.backup                  9
-rw-r--r--  bash/talk-to-computer/corpus/programming/combinators.md                192
-rw-r--r--  bash/talk-to-computer/corpus/programming/command_line_data_processing.md  200
-rw-r--r--  bash/talk-to-computer/corpus/programming/functional_programming.md     234
-rw-r--r--  bash/talk-to-computer/corpus/programming/lil_guide.md                  277
-rw-r--r--  bash/talk-to-computer/corpus/science/physics_basics.txt                 94
-rw-r--r--  bash/talk-to-computer/corpus/topic_template.md                          39
-rwxr-xr-x  bash/talk-to-computer/corpus_manager.sh                                303
-rw-r--r--  bash/talk-to-computer/corpus_prompt_template.md                        125
-rwxr-xr-x  bash/talk-to-computer/critique                                         171
-rwxr-xr-x  bash/talk-to-computer/exploration                                      304
-rwxr-xr-x  bash/talk-to-computer/lil_tester.sh                                    288
-rwxr-xr-x  bash/talk-to-computer/logging.sh                                       247
-rwxr-xr-x  bash/talk-to-computer/metrics                                           18
-rwxr-xr-x  bash/talk-to-computer/model_selector.sh                                380
-rwxr-xr-x  bash/talk-to-computer/peer-review                                      275
-rwxr-xr-x  bash/talk-to-computer/puzzle                                           442
-rwxr-xr-x  bash/talk-to-computer/quality_guard.sh                                 366
-rw-r--r--  bash/talk-to-computer/rag_config.sh                                    118
-rw-r--r--  bash/talk-to-computer/rag_integration.sh                               336
-rwxr-xr-x  bash/talk-to-computer/rag_search.sh                                    187
-rwxr-xr-x  bash/talk-to-computer/socratic                                         229
-rwxr-xr-x  bash/talk-to-computer/synthesis                                        248
-rwxr-xr-x  bash/talk-to-computer/test_framework.sh                                434
-rwxr-xr-x  bash/talk-to-computer/test_model_selector.sh                            50
-rwxr-xr-x  bash/talk-to-computer/test_quality_guard.sh                             70
36 files changed, 8272 insertions, 0 deletions
diff --git a/bash/talk-to-computer/README.md b/bash/talk-to-computer/README.md
new file mode 100644
index 0000000..493a54c
--- /dev/null
+++ b/bash/talk-to-computer/README.md
@@ -0,0 +1,876 @@
+# AI Thinking Mechanisms
+
+A Bash-based system that routes user prompts through different AI thinking mechanisms. It uses pattern matching and basic analysis to select from multiple approaches: direct response, Socratic questioning, exploration, critique, consensus, synthesis, peer review, and puzzle solving.
+
+## Architecture Overview
+
+The system processes prompts through several stages:
+
+```
+User Prompt → Computer (Dispatcher) → Classifier → RAG System → Mechanism → LLM Models → Quality Guard → Response
+     ↓              ↓                      ↓              ↓              ↓              ↓              ↓
+  Validation    Pattern Analysis      Selection      Corpus Search  Processing     Model Calls    Quality Check
+     ↓              ↓                      ↓              ↓              ↓              ↓              ↓
+  Sanitization  Keyword Matching      Routing        Context Augment Execution      Fallbacks      Error Handling
+```
+
+### Core Components
+
+- **`computer`** - Main dispatcher script with manual mechanism selection
+- **`classifier.sh`** - Advanced prompt classification with Lil-specific routing
+- **`logging.sh`** - Logging and validation utilities
+- **`quality_guard.sh`** - System-wide response quality monitoring
+- **`corpus/`** - RAG knowledge corpus directory structure
+- **`corpus_manager.sh`** - Corpus management and auto-discovery
+- **`rag_search.sh`** - Efficient corpus searching with Unix tools
+- **Thinking Mechanisms** - Specialized AI interaction patterns
+- **Dynamic Model Selection** - Intelligent model routing based on task type
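The components above can be pictured as a small dispatch loop. The sketch below is illustrative only: `classify` compresses the real multi-layer classifier into a few glob patterns, and `run_mechanism` is a stub standing in for the mechanism scripts.

```bash
#!/usr/bin/env bash
# Minimal dispatch sketch. `classify` stands in for classifier.sh;
# `run_mechanism` stands in for ./puzzle, ./socratic, etc.

classify() {
    local prompt="$1"
    case "$prompt" in
        *[Ll]il*|*algorithm*|*implement*) echo "puzzle" ;;
        *improve*|*refine*|*enhance*)     echo "critique" ;;
        *why*|*analyze*)                  echo "socratic" ;;
        *)                                echo "direct" ;;
    esac
}

run_mechanism() {
    echo "[$1] handling: $2"   # the real dispatcher invokes the mechanism script
}

prompt="How can I implement a sorting algorithm?"
run_mechanism "$(classify "$prompt")" "$prompt"
```

The real `computer` script layers validation, logging, RAG lookup, and quality checks around this core.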
+
+## Getting Started
+
+### Prerequisites
+
+- **Bash 4.0+** (for advanced features)
+- **Ollama** installed and running
+- **jq** (optional, for enhanced JSON processing)
+- **bc** (optional, for precise timing calculations)
+
+### Installation
+
+1. Clone or download the scripts to your desired directory
+2. Ensure all scripts have execute permissions:
+   ```bash
+   chmod +x computer exploration consensus socratic critique peer-review synthesis puzzle metrics
+   chmod +x logging.sh classifier.sh quality_guard.sh corpus_manager.sh rag_search.sh model_selector.sh
+   ```
+3. Verify Ollama is running and accessible:
+   ```bash
+   ollama list
+   ```
+
+### Basic Usage
+
+```bash
+# Use the intelligent dispatcher (recommended)
+./computer "Your question or prompt here"
+
+# Force direct response (bypass thinking mechanisms)
+./computer -d "Simple question"
+
+# Include file context
+./computer -f input.txt "Analyze this file"
+
+# Specify number of rounds
+./computer "Complex question" 3
+
+# Manual mechanism selection
+./computer -m puzzle "How can I implement a sorting algorithm?"
+./computer -m socratic "Analyze this deeply"
+
+# Get help
+./computer --help              # Show all options and examples
+./computer --mechanisms        # List available thinking mechanisms
+```
+
+## Quality Guard System
+
+### Basic Response Quality Monitoring
+
+The Quality Guard provides simple monitoring and error handling for AI responses.
+
+#### **What It Does**
+
+- Monitors basic response characteristics
+- Detects obvious quality issues
+- Provides fallback responses when needed
+- Attempts to regenerate responses up to 2 times
+
+#### **How It Works**
+
+1. **Basic Checks** - Simple pattern matching for obvious issues
+2. **Scoring** - Basic heuristics for response quality
+3. **Retry Logic** - Up to 2 attempts to get better responses
+4. **Fallbacks** - Generic helpful responses when retries fail
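A minimal sketch of that retry flow, using the threshold names from the configuration section below; `generate` is an explicit stub standing in for the `ollama run` call inside quality_guard.sh.

```bash
# Sketch of the retry flow. Threshold names mirror the configuration
# section; `generate` is a stub for the model call.
MIN_RESPONSE_LENGTH=30          # minimum words
MAX_CORRECTION_ATTEMPTS=2

generate() { echo "stub response"; }   # replace with: ollama run "$MODEL" "$1"

passes_basic_checks() {
    local words=$(( $(wc -w <<< "$1") ))
    [ "$words" -ge "$MIN_RESPONSE_LENGTH" ]
}

guarded_generate() {
    local prompt="$1" attempt=0 response
    while [ "$attempt" -le "$MAX_CORRECTION_ATTEMPTS" ]; do
        response=$(generate "$prompt")
        if passes_basic_checks "$response"; then
            echo "$response"
            return 0
        fi
        attempt=$((attempt + 1))
    done
    # All attempts failed: emit a generic fallback instead of a bad answer
    echo "Fallback: unable to produce a reliable answer; try rephrasing the prompt."
}
```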
+
+#### **Limitations**
+
+- Uses simple pattern matching, not advanced analysis
+- May not catch subtle quality issues
+- Fallback responses are generic
+- Not a substitute for careful prompt engineering
+
+#### **Configuration Options**
+
+```bash
+# Basic threshold adjustments
+export MIN_RESPONSE_LENGTH=30        # Minimum words required
+export MAX_REPETITION_RATIO=0.4      # Maximum repetition allowed
+export MAX_NONSENSE_SCORE=0.6        # Maximum nonsense score
+export DEGRADATION_THRESHOLD=0.65    # Quality threshold for correction
+export MAX_CORRECTION_ATTEMPTS=2     # Number of correction attempts
+export FALLBACK_ENABLED=true         # Enable fallback responses
+```
+
+## RAG (Retrieval-Augmented Generation) System
+
+### Knowledge Corpus Architecture
+
+The RAG system provides intelligent knowledge augmentation by searching a structured corpus of documentation and returning relevant context to enhance AI responses.
+
+#### **Key Features**
+
+- **Extensible Corpus Structure** - Easy to add new topics and content
+- **Efficient Search** - Uses grep/sed/awk for sub-second lookups
+- **Auto-Discovery** - Automatically finds and indexes new content
+- **Topic-Based Routing** - Matches queries to relevant knowledge areas
+- **Context Injection** - Provides relevant information to AI models
+
+#### **Corpus Organization**
+
+```
+corpus/
+├── README.md               # Usage guide and templates
+├── corpus_registry.txt     # Auto-generated registry of topics
+├── topic_template.md       # Template for new topic files
+├── .topic_keywords         # Topic keyword mappings
+├── .file_processors        # File type handlers
+│
+├── programming/            # Programming topics
+│   ├── combinators.md
+│   ├── command_line_data_processing.md
+│   ├── functional_programming.md
+│   └── lil_guide.md
+│
+├── science/                # Scientific topics
+│   └── physics_basics.txt
+│
+└── [your_topics]/          # Add your own topics here
+```
+
+#### **Corpus Manager Usage**
+
+```bash
+# Update corpus registry after adding files
+./corpus_manager.sh update
+
+# List all available topics
+./corpus_manager.sh list
+
+# Check if topic exists
+./corpus_manager.sh exists programming
+
+# List files in a topic
+./corpus_manager.sh files science
+
+# Create template for new topic
+./corpus_manager.sh template "machine-learning"
+
+# Get corpus statistics
+./corpus_manager.sh count programming
+```
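Auto-discovery can be approximated with `find` alone. This sketch assumes a simple `topic:file-count` registry line format, which may differ from what corpus_manager.sh actually writes.

```bash
# Registry auto-discovery sketch. Assumes a `topic:file-count` line
# format; the real corpus_manager.sh may record more metadata.
build_registry() {
    local corpus_dir="$1" dir topic count
    for dir in "$corpus_dir"/*/; do
        [ -d "$dir" ] || continue
        topic=$(basename "$dir")
        # Count only the supported file types (.md, .txt, .html)
        count=$(( $(find "$dir" -type f \( -name '*.md' -o -name '*.txt' -o -name '*.html' \) | wc -l) ))
        printf '%s:%s\n' "$topic" "$count"
    done
}
```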
+
+#### **RAG Search Usage**
+
+```bash
+# Search entire corpus
+./rag_search.sh search "quantum physics"
+
+# Search specific topic
+./rag_search.sh search "lil programming" programming
+
+# Get context around matches
+./rag_search.sh context "variables" programming
+
+# Extract relevant sections
+./rag_search.sh extract "functions" programming
+
+# Show corpus statistics
+./rag_search.sh stats
+```
+
+#### **Adding New Content**
+
+1. **Create topic directory**:
+   ```bash
+   mkdir -p corpus/newtopic
+   ```
+
+2. **Add content files** (use .md, .txt, or .html):
+   ```bash
+   vim corpus/newtopic/guide.md
+   vim corpus/newtopic/examples.txt
+   ```
+
+3. **Update registry**:
+   ```bash
+   ./corpus_manager.sh update
+   ```
+
+4. **Test search**:
+   ```bash
+   ./rag_search.sh search "keyword" newtopic
+   ```
+
+#### **File Format Guidelines**
+
+- **Markdown (.md)** - Recommended for structured content
+- **Plain text (.txt)** - Simple notes and documentation
+- **HTML (.html)** - Rich content with formatting
+- **Descriptive names** - Use clear, descriptive filenames
+- **Consistent headers** - Use standard Markdown headers (# ## ###)
+- **Cross-references** - Link related topics when helpful
+
+#### **Search Behavior**
+
+- **Case-insensitive** matching across all text files
+- **Multi-word queries** supported
+- **Partial matching** within words
+- **Context extraction** with configurable line limits
+- **Topic filtering** for focused searches
+- **Relevance ranking** based on match proximity
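These behaviors map directly onto standard grep flags. A minimal sketch follows; the real rag_search.sh adds registry caching and relevance ranking.

```bash
# Search sketch: -r recurse, -i case-insensitive, -l list matching files,
# -C show context lines around each match.
search_corpus() {
    local query="$1" dir="$2"
    grep -ril -- "$query" "$dir" 2>/dev/null
}

extract_context() {
    local query="$1" file="$2" lines="${3:-3}"   # default mirrors SEARCH_CONTEXT_LINES=3
    grep -i -C "$lines" -- "$query" "$file"
}
```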
+
+#### **Integration with AI**
+
+The RAG system integrates seamlessly with thinking mechanisms:
+
+- **Automatic RAG detection** - Knows when to search corpus
+- **Topic classification** - Routes queries to relevant knowledge
+- **Context injection** - Provides relevant information to enhance responses
+- **Fallback handling** - Graceful degradation when no corpus available
+
+#### **Performance**
+
+- **Sub-second lookups** using cached registry
+- **Efficient Unix tools** (grep/sed/awk) for processing
+- **Memory efficient** with file-based storage
+- **Scalable architecture** supporting thousands of files
+- **Minimal latency** for AI response enhancement
+
+#### **Configuration**
+
+```bash
+# RAG system settings (in rag_config.sh)
+export CORPUS_DIR="corpus"                    # Corpus root directory
+export CORPUS_REGISTRY="corpus_registry.txt"  # Topic registry
+export MAX_SEARCH_RESULTS=5                  # Max results to return
+export MIN_CONTENT_LENGTH=50                 # Min content length
+export SEARCH_CONTEXT_LINES=3                # Context lines around matches
+```
+
+## Prompt Classification
+
+### Basic Pattern Matching
+
+The system uses keyword and pattern matching to route prompts to different mechanisms:
+
+#### **Pattern Matching Rules**
+- **Question type detection**: what/when/where → DIRECT, why/how → SOCRATIC
+- **Action-oriented patterns**: improve → CRITIQUE, compare → EXPLORATION
+- **Puzzle & coding patterns**: algorithm/implement → PUZZLE, challenge/problem → PUZZLE
+- **Lil-specific routing**: "using lil"/"in lil" → PUZZLE (highest priority)
+- **Context-aware scoring**: strategy/planning → EXPLORATION, analysis → SOCRATIC
+- **Enhanced scoring system** with multi-layer analysis
+
+#### **Basic Analysis**
+- **Word count analysis**: Short prompts → DIRECT, longer → complex mechanisms
+- **Keyword presence**: Simple keyword matching for routing decisions
+- **Basic confidence scoring**: Simple scoring mechanism
+
+#### **Limitations**
+- Relies on keyword matching, not deep understanding
+- May misclassify prompts without obvious keywords
+- Confidence scores are basic heuristics, not accurate measures
+- Not a substitute for manual routing when precision matters
+
+### Decision Making
+
+- **Basic confidence scoring** from pattern matching
+- **Keyword-based routing** with fallback to DIRECT
+- **Simple word count analysis** for complexity estimation
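Condensed to its essentials, the scoring logic looks like this sketch: each keyword family adds to a mechanism's score, the highest score wins, and DIRECT is the fallback. Only two keyword families are shown here; classifier.sh covers many more.

```bash
# Condensed sketch of the additive scoring in classifier.sh.
classify_by_pattern() {
    local prompt="${1,,}"          # lowercase for case-insensitive matching (Bash 4+)
    local best="DIRECT" best_score=0 score

    score=0
    [[ "$prompt" =~ (algorithm|implement|code) ]] && score=$((score + 3))
    if [ "$score" -gt "$best_score" ]; then best_score=$score; best="PUZZLE"; fi

    score=0
    [[ "$prompt" =~ (improve|enhance|refine) ]] && score=$((score + 3))
    if [ "$score" -gt "$best_score" ]; then best_score=$score; best="CRITIQUE"; fi

    echo "$best"
}
```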
+
+### Classification Examples
+
+```bash
+# Strategic Planning
+Input: "What are the different approaches to solve climate change?"
+Output: EXPLORATION:1.00 (matches "different approaches" pattern)
+
+# Improvement Request
+Input: "How can we improve our development workflow?"
+Output: CRITIQUE:1.00 (matches "improve" keyword)
+
+# Complex Analysis
+Input: "Why do you think this approach might fail and what are the underlying assumptions?"
+Output: SOCRATIC:0.85 (matches "why" and complexity indicators)
+
+# Simple Question
+Input: "What is 2+2?"
+Output: DIRECT:0.8 (simple, short question)
+
+# Algorithm Challenge
+Input: "How can I implement a binary search algorithm?"
+Output: PUZZLE:1.00 (matches "algorithm" and "implement" keywords)
+```
+
+## Thinking Mechanisms
+
+### 1. **Exploration** - Multiple Path Analysis
+**Purpose**: Generate multiple solution approaches and compare them
+
+```bash
+./exploration -p 4 "How can we improve our development process?"
+```
+
+**Process**:
+- **Phase 1**: Generate multiple solution paths
+- **Phase 2**: Basic analysis of each path
+- **Phase 3**: Simple comparison and recommendations
+
+**Notes**: Uses multiple LLM calls to generate different approaches
+
+### 2. **Consensus** - Multiple Model Responses
+**Purpose**: Get responses from multiple models and compare them
+
+```bash
+./consensus "What's the best approach to this problem?"
+```
+
+**Process**:
+- **Phase 1**: Get responses from multiple models
+- **Phase 2**: Basic comparison
+- **Phase 3**: Simple voting mechanism
+- **Phase 4**: Combine responses
+
+**Notes**: Limited by available models and simple comparison logic
+
+### 3. **Socratic** - Question-Based Analysis
+**Purpose**: Use AI-generated questions to analyze prompts
+
+```bash
+./socratic "Explain the implications of this decision"
+```
+
+**Process**:
+- **Phase 1**: Generate initial response
+- **Phase 2**: Generate follow-up questions
+- **Phase 3**: Get responses to questions
+- **Phase 4**: Combine into final output
+
+**Notes**: Creates a back-and-forth conversation between AI models
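The four phases can be sketched as a short pipeline. `llm` here is an explicit stub; in the real script each call is an `ollama run` invocation.

```bash
# Socratic pipeline sketch with a stubbed model call.
llm() { echo "stub answer to: $1"; }   # stand-in for: ollama run "$MODEL" "$1"

socratic_round() {
    local prompt="$1" initial questions followups
    initial=$(llm "$prompt")                                   # Phase 1: initial response
    questions=$(llm "List follow-up questions for: $initial")  # Phase 2: generate questions
    followups=$(llm "Answer these questions: $questions")      # Phase 3: answer them
    printf '%s\n---\n%s\n' "$initial" "$followups"             # Phase 4: combine
}
```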
+
+### 4. **Critique** - Improvement Analysis
+**Purpose**: Get improvement suggestions for code or text
+
+```bash
+./critique -f code.py "How can we improve this code?"
+```
+
+**Process**:
+- **Phase 1**: Initial assessment
+- **Phase 2**: Generate critique
+- **Phase 3**: Suggest improvements
+- **Phase 4**: Provide guidance
+
+**Notes**: Basic improvement suggestions based on AI analysis
+
+### 5. **Peer Review** - Multiple AI Reviewers
+**Purpose**: Get feedback from multiple AI perspectives
+
+```bash
+./peer-review "Review this proposal"
+```
+
+**Process**:
+- **Phase 1**: Generate multiple reviews
+- **Phase 2**: Basic consolidation
+- **Phase 3**: Combine feedback
+
+**Notes**: Simple multiple AI review approach
+
+### 6. **Synthesis** - Combine Approaches
+**Purpose**: Combine multiple approaches into one
+
+```bash
+./synthesis "How can we combine these different approaches?"
+```
+
+**Process**:
+- **Phase 1**: Identify approaches
+- **Phase 2**: Basic analysis
+- **Phase 3**: Simple combination
+
+**Notes**: Basic approach combination mechanism
+
+### 7. **Puzzle** - Coding Problem Solving
+**Purpose**: Help with coding problems and algorithms
+
+```bash
+./puzzle "How can I implement a sorting algorithm?"
+./puzzle -l python "What's the best way to solve this data structure problem?"
+```
+
+**Process**:
+- **Phase 1**: Basic problem analysis
+- **Phase 2**: Solution approach
+- **Phase 3**: Code examples
+- **Phase 4**: Basic validation
+- **Phase 5**: Optional Lil code testing if available
+
+**Features**:
+- **Enhanced Lil language knowledge** with comprehensive documentation
+- **Intelligent Lil routing** - automatically triggered by Lil-related keywords
+- **RAG integration** - searches Lil corpus for relevant context
+- **Code testing** with secure Lil script execution
+- **Multi-language support** with Lil as primary focus
+
+**Notes**: Includes extensive Lil documentation and testing capabilities
+
+## Computer Script Features
+
+### Intelligent Routing with Manual Override
+
+The main `computer` script provides both automatic routing and manual mechanism selection:
+
+#### **Automatic Routing**
+```bash
+# Automatically detects and routes based on content
+./computer "Using Lil, how can I implement a sorting algorithm?"
+# → PUZZLE (Lil-specific routing)
+
+./computer "How can we improve our development process?"
+# → CRITIQUE ("improve" keyword)
+
+./computer "What is 2+2?"
+# → DIRECT (simple question)
+```
+
+#### **Manual Selection**
+```bash
+# Force specific mechanism
+./computer -m puzzle "Complex algorithm question"
+./computer -m socratic "Deep analysis needed"
+./computer -m exploration "Compare multiple approaches"
+./computer -m consensus "Get multiple perspectives"
+./computer -m critique "Review and improve"
+./computer -m synthesis "Combine different ideas"
+./computer -m peer-review "Get feedback"
+./computer -m direct "Simple factual question"
+```
+
+#### **Help System**
+```bash
+# Comprehensive help
+./computer --help
+
+# List all available mechanisms
+./computer --mechanisms
+```
+
+### Advanced Options
+
+```bash
+# File integration
+./computer -f document.txt "Analyze this content"
+
+# Multi-round processing
+./computer "Complex topic" 3
+
+# Force direct response (bypass mechanisms)
+./computer -d "Simple question"
+```
+
+### Routing Intelligence
+
+The computer script uses multi-layer classification:
+- **Pattern Analysis**: Keyword and pattern matching
+- **Semantic Analysis**: LLM-based content understanding
+- **Complexity Assessment**: Word count and structure analysis
+- **Lil-Specific Routing**: Automatic PUZZLE for Lil-related queries
+- **Confidence Scoring**: Ensures high-confidence routing decisions
+
+## Configuration
+
+### Model Selection
+
+#### Dynamic Model Selection (NEW)
+
+The system now includes intelligent model selection based on task type and model capabilities:
+
+```bash
+# Enable dynamic selection
+source model_selector.sh
+selected_model=$(select_model_for_task "How can I implement a sorting algorithm?" "puzzle" "")
+echo "Selected: $selected_model"
+```
+
+**Features:**
+- **Task-aware selection**: Matches models to coding, reasoning, or creative tasks
+- **Capability scoring**: Rates models by performance in different areas (0.0-1.0)
+- **Real-time discovery**: Automatically finds available models via Ollama
+- **Performance weighting**: Considers speed, size, and capability scores
+- **Fallback handling**: Graceful degradation when preferred models unavailable
+
+**Model Capabilities Database:**
+- `llama3:8b-instruct-q4_K_M`: Excellent reasoning (0.9), good coding (0.8)
+- `phi3:3.8b-mini-4k-instruct-q4_K_M`: Fast (0.9 speed), good reasoning (0.8)
+- `gemma3n:e2b`: Balanced performer (0.8 across all categories)
+- `deepseek-r1:1.5b`: Excellent reasoning (0.9), fast (0.95 speed)
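A sketch of capability-weighted selection using integer scores (×10, so plain shell arithmetic suffices without `bc`). Only the reasoning scores and llama3's coding score come from the table above; the remaining coding values are placeholders, and the real model_selector.sh also weighs speed and size.

```bash
# Capability-weighted selection sketch over a hardcoded score table.
select_by_capability() {
    local task="$1" best="" best_score=0 model coding reasoning score
    # model|coding_x10|reasoning_x10
    local table='llama3:8b-instruct-q4_K_M|8|9
phi3:3.8b-mini-4k-instruct-q4_K_M|7|8
deepseek-r1:1.5b|7|9'
    while IFS='|' read -r model coding reasoning; do
        case "$task" in
            coding)    score=$coding ;;
            reasoning) score=$reasoning ;;
            *)         score=0 ;;
        esac
        if [ "$score" -gt "$best_score" ]; then
            best_score=$score
            best=$model
        fi
    done <<< "$table"
    echo "$best"
}
```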
+
+#### Static Model Selection (Legacy)
+
+For simple setups, you can still use static model configuration:
+
+```bash
+# Models for different mechanisms
+EXPLORATION_MODEL="llama3:8b-instruct-q4_K_M"
+ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Models for consensus mechanism
+MODELS=(
+    "llama3:8b-instruct-q4_K_M"
+    "phi3:3.8b-mini-4k-instruct-q4_K_M"
+    "deepseek-r1:1.5b"
+    "gemma3n:e2b"
+    "dolphin3:latest"
+)
+```
+
+### Model Management
+
+The system includes both basic and advanced model management:
+
+- **Availability checking**: Verifies models are available before use
+- **Fallback mechanisms**: Automatic fallback to alternative models
+- **Error handling**: Graceful handling of model unavailability
+- **Performance tracking**: Optional model performance history
+
+## Logging & Metrics
+
+### Session Logging
+
+All sessions are logged with comprehensive metadata:
+- **Timing information** (start/end/duration)
+- **Input validation** results
+- **Classification decisions** with confidence scores
+- **Model selection** decisions
+- **Full conversation** transcripts
+- **Error handling** details
+- **Quality monitoring** results and correction attempts
+
+### Performance Metrics
+
+```bash
+# View performance summary
+get_metrics_summary
+
+# Metrics stored in JSON format
+~/tmp/ai_thinking/performance_metrics.json
+```
+
+### Error Logging
+
+```bash
+# Error and warning log location (both share one file)
+~/tmp/ai_thinking/errors.log
+
+# Classification logs
+~/tmp/ai_thinking/classification.log
+
+# Quality monitoring logs (integrated into session files)
+```
+
+## Security & Validation
+
+### Input Sanitization
+
+- **Prompt length** validation (max 10,000 characters)
+- **Special character** sanitization with warnings
+- **File path** validation and security checks
+- **Parameter** bounds checking (rounds: 1-5, paths: 1-10)
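The bounds above can be enforced with a few lines of shell. This is a sketch using the limits stated in this README; the real validation lives in logging.sh and also sanitizes and logs.

```bash
# Bounds-check sketch; limits match those stated in this README.
MAX_PROMPT_LENGTH=10000

validate_prompt() {
    [ -n "$1" ] || { echo "error: empty prompt" >&2; return 1; }
    [ "${#1}" -le "$MAX_PROMPT_LENGTH" ] || {
        echo "error: prompt exceeds $MAX_PROMPT_LENGTH characters" >&2; return 1
    }
}

validate_rounds() {
    [[ "$1" =~ ^[1-5]$ ]]   # rounds must be a single digit 1-5
}
```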
+
+### Error Handling
+
+- **Graceful degradation** when models unavailable
+- **Comprehensive error** logging and reporting
+- **User-friendly** error messages with actionable guidance
+- **Fallback mechanisms** for critical failures
+- **Input validation** with clear error reporting
+- **Quality degradation** protection with automatic correction
+
+## Advanced Features
+
+### File Integration
+
+```bash
+# Include file contents in prompts
+./computer -f document.txt "Analyze this document"
+```
+
+File validation and security:
+- Path existence checking
+- Read permission validation
+- Content sanitization
+- Graceful error handling
+
+### Multi-Round Processing
+
+```bash
+# Specify processing rounds (1-5)
+./computer "Complex question" 3
+```
+
+Each round builds on previous insights:
+- Round 1: Initial analysis
+- Round 2: Deep exploration
+- Round 3: Synthesis and conclusions
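Round chaining can be sketched as a loop that feeds each round's output into the next round's prompt; `llm` is a stub for the per-round model call.

```bash
# Round-chaining sketch: each round's prompt carries the previous output.
llm() { echo "insight($1)"; }   # stand-in for the real model call

run_rounds() {
    local prompt="$1" rounds="$2" context="" i out
    for ((i = 1; i <= rounds; i++)); do
        if [ -n "$context" ]; then
            out=$(llm "round $i: $prompt | prior: $context")
        else
            out=$(llm "round $i: $prompt")
        fi
        context="$out"
    done
    echo "$context"   # the final round's output
}
```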
+
+### Intelligent Routing
+
+The `computer` script automatically routes prompts based on:
+- **Advanced classification** with confidence scoring
+- **Multi-layer analysis** (pattern + semantic + complexity)
+- **Context-aware** mechanism selection
+- **Optimal mechanism** selection with fallbacks
+
+### Quality Monitoring Integration
+
+All thinking mechanisms now include:
+- **Automatic quality assessment** of every LLM response
+- **Degradation detection** with pattern recognition
+- **Automatic correction** attempts for poor quality outputs
+- **Intelligent fallbacks** when correction fails
+- **Mechanism-specific** quality relevance checking
+
+## Testing & Validation
+
+### Validation Functions
+
+```bash
+# Prompt validation
+validate_prompt "Your prompt here"
+
+# File validation
+validate_file_path "/path/to/file"
+
+# Model validation
+validate_model "model_name" "fallback_model"
+
+# Classification testing
+classify_prompt "Test prompt" false  # Pattern-only mode
+classify_prompt "Test prompt" true   # Full semantic mode
+
+# Quality monitoring testing
+assess_quality "response" "context" "mechanism"
+detect_degradation_patterns "response"
+guard_output_quality "response" "context" "mechanism" "model"
+```
+
+### Error Testing
+
+```bash
+# Test with invalid inputs
+./computer ""                    # Empty prompt
+./computer -f nonexistent.txt   # Missing file
+./computer -x                    # Invalid option
+./computer "test" 10            # Invalid rounds
+
+# Test quality monitoring
+./test_quality_guard.sh         # Quality guard system test
+```
+
+## Performance Considerations
+
+### Optimization Features
+
+- **Model availability** checking before execution
+- **Efficient file** handling and validation
+- **Minimal overhead** for simple queries
+- **Classification caching** for repeated patterns
+- **Parallel processing** where applicable
+- **Quality monitoring** with minimal performance impact
+
+### Resource Management
+
+- **Temporary file** cleanup
+- **Session management** with unique IDs
+- **Memory-efficient** processing
+- **Graceful timeout** handling
+- **Classification result** caching
+- **Quality assessment** optimization
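Temporary-file cleanup typically hangs off an EXIT trap, so the per-session scratch directory is removed on any exit path. A sketch:

```bash
# Cleanup sketch: a per-session scratch directory removed on any exit.
session_id="$$-$(date +%s)"
workdir=$(mktemp -d "${TMPDIR:-/tmp}/ai_thinking.${session_id}.XXXXXX")
trap 'rm -rf "$workdir"' EXIT

echo "scratch" > "$workdir/notes.txt"   # session-scoped scratch space
```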
+
+## Troubleshooting
+
+### Common Issues
+
+1. **"Ollama not found"**
+   - Ensure Ollama is installed and in PATH
+   - Check if Ollama service is running
+
+2. **"Model not available"**
+   - Verify model names with `ollama list`
+   - Check model download status
+   - System will automatically fall back to available models
+
+3. **"Permission denied"**
+   - Ensure scripts have execute permissions
+   - Check file ownership and permissions
+
+4. **"File not found"**
+   - Verify file path is correct
+   - Check file exists and is readable
+   - Ensure absolute or correct relative paths
+
+5. **"Low classification confidence"**
+   - Check error logs for classification details
+   - Consider using -d flag for direct responses
+   - Review prompt clarity and specificity
+
+6. **"Quality below threshold"**
+   - System automatically attempts correction
+   - Check quality monitoring logs for details
+   - Fallback responses ensure helpful output
+   - Consider rephrasing complex prompts
+
+### Debug Mode
+
+```bash
+# Enable verbose logging
+export AI_THINKING_DEBUG=1
+
+# Check logs
+tail -f ~/tmp/ai_thinking/errors.log
+
+# Test classification directly
+source classifier.sh
+classify_prompt "Your test prompt" true
+
+# Test quality monitoring
+source quality_guard.sh
+assess_quality "test response" "test context" "puzzle"
+```
+
+## Future Improvements
+
+### Possible Enhancements
+
+- Additional thinking mechanisms
+- More model integration options
+- Improved caching
+- Better error handling
+- Enhanced testing
+- Performance optimizations
+
+### Extensibility
+
+The basic modular structure allows for:
+- Adding new mechanisms
+- Integrating additional models
+- Modifying validation rules
+- Extending logging
+- Updating classification patterns
+
+## Examples
+
+### Strategic Planning
+
+```bash
+./exploration -p 5 "What are our options for scaling this application?"
+```
+
+### Code Review
+
+```bash
+./critique -f main.py "How can we improve this code's performance and maintainability?"
+```
+
+### Decision Making
+
+```bash
+./consensus "Should we migrate to a new database system?"
+```
+
+### Problem Analysis
+
+```bash
+./socratic "What are the root causes of our deployment failures?"
+```
+
+### Algorithm & Coding Challenges
+
+```bash
+# The system automatically routes to puzzle mechanism
+./computer "How can I implement a binary search algorithm?"
+./puzzle "What's the most efficient way to sort this data structure?"
+```
+
+### Complex Classification
+
+```bash
+# The system automatically detects this needs exploration
+./computer "Compare different approaches to implementing microservices"
+```
+
+### Quality Monitoring in Action
+
+```bash
+# All mechanisms now include quality protection
+./computer "Complex question requiring deep analysis" 3
+# Quality monitoring automatically protects against degradation
+# Fallback responses ensure helpful output even with poor LLM performance
+```
+
+### RAG-Enhanced Responses
+
+```bash
+# Lil-specific questions automatically use RAG
+./computer "Using Lil, how can I implement a recursive function?"
+# → PUZZLE with Lil knowledge corpus context
+
+# Manual corpus search
+./rag_search.sh search "function definition" programming
+
+# Corpus management
+./corpus_manager.sh template "data-structures"
+./corpus_manager.sh update
+```
+
+### Manual Mechanism Selection
+
+```bash
+# Force specific thinking style
+./computer -m socratic "What are the fundamental assumptions here?"
+./computer -m exploration "What are our strategic options?"
+./computer -m puzzle "How can I optimize this algorithm?"
+
+# Get help with available options
+./computer --mechanisms
+```
+
+## Contributing
+
+### Development Guidelines
+
+- **Maintain modularity** - Each mechanism should be self-contained
+- **Follow error handling** patterns established in logging.sh
+- **Add comprehensive** documentation for new features
+- **Include validation** for all inputs and parameters
+- **Test thoroughly** with various input types and edge cases
+- **Update classification** patterns when adding new mechanisms
+- **Integrate quality monitoring** for all new LLM interactions
+- **Follow quality guard** patterns for consistent protection
+- **Consider RAG integration** for domain-specific knowledge
+- **Add corpus documentation** when extending knowledge areas
+- **Update classification patterns** for new routing logic
+
+### Code Style
+
+- Consistent naming conventions
+- Clear comments explaining complex logic
+- Error handling for all external calls
+- Modular functions with single responsibilities
+- Validation functions for all inputs
+- Comprehensive logging for debugging
+- Quality monitoring integration for all AI responses
\ No newline at end of file
diff --git a/bash/talk-to-computer/classifier.sh b/bash/talk-to-computer/classifier.sh
new file mode 100755
index 0000000..38f4869
--- /dev/null
+++ b/bash/talk-to-computer/classifier.sh
@@ -0,0 +1,281 @@
+#!/bin/bash
+
+# Advanced Prompt Classification System
+# Multi-layer approach combining semantic analysis, pattern matching, and confidence scoring
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "${SCRIPT_DIR}/logging.sh"
+
+# --- Classification Configuration ---
+CLASSIFIER_MODEL="gemma3n:e2b"  # Lightweight model for classification
+CONFIDENCE_THRESHOLD=0.7
+
+# --- Semantic Classification ---
+classify_semantic() {
+    local prompt="$1"
+    
+    local classification_prompt="You are a prompt classifier. Analyze this prompt and determine which AI thinking mechanism would be most appropriate.
+
+PROMPT: \"$prompt\"
+
+AVAILABLE MECHANISMS:
+- DIRECT: Simple questions, factual queries, straightforward requests
+- CONSENSUS: Multiple perspectives needed, voting, agreement/disagreement
+- SYNTHESIS: Combining approaches, integration, unification
+- EXPLORATION: Comparing alternatives, strategic planning, option analysis  
+- SOCRATIC: Deep analysis, questioning assumptions, thorough investigation
+- CRITIQUE: Improvement suggestions, refinement, enhancement
+- PEER_REVIEW: Collaborative feedback, review processes, advice
+
+Respond with ONLY the mechanism name and confidence (0.0-1.0):
+Format: MECHANISM_NAME:CONFIDENCE
+
+Example: EXPLORATION:0.85"
+
+    local result=$(ollama run "$CLASSIFIER_MODEL" "$classification_prompt" 2>/dev/null)
+    echo "$result"
+}
+
+# --- Intent Pattern Analysis ---
+analyze_intent_patterns() {
+    local prompt="$1"
+    prompt="${prompt,,}"  # Lowercase once so the patterns below match regardless of case (Bash 4+)
+    
+    # Initialize scores using individual variables (more portable)
+    local direct_score=0
+    local consensus_score=0
+    local synthesis_score=0
+    local exploration_score=0
+    local socratic_score=0
+    local critique_score=0
+    local peer_review_score=0
+    local puzzle_score=0
+    
+    # Question type patterns
+    if [[ "$prompt" =~ ^(what|when|where|who|which|how\ much|how\ many) ]]; then
+        direct_score=$((direct_score + 3))
+    fi
+    
+    if [[ "$prompt" =~ ^(why|how|explain) ]]; then
+        socratic_score=$((socratic_score + 2))
+    fi
+    
+    # Action-oriented patterns
+    if [[ "$prompt" =~ (compare|contrast|evaluate|assess) ]]; then
+        exploration_score=$((exploration_score + 3))
+    fi
+    
+    if [[ "$prompt" =~ (improve|enhance|fix|refine|optimize|better) ]]; then
+        critique_score=$((critique_score + 3))
+    fi
+    
+    if [[ "$prompt" =~ (combine|merge|integrate|synthesize|unify) ]]; then
+        synthesis_score=$((synthesis_score + 3))
+    fi
+    
+    if [[ "$prompt" =~ (review|feedback|opinion|thoughts|suggest) ]]; then
+        peer_review_score=$((peer_review_score + 2))
+    fi
+    
+    if [[ "$prompt" =~ (consensus|vote|agree|disagree|multiple.*view) ]]; then
+        consensus_score=$((consensus_score + 3))
+    fi
+    
+    # Context patterns
+    if [[ "$prompt" =~ (strategy|strategic|plan|approach|option|alternative) ]]; then
+        exploration_score=$((exploration_score + 2))
+    fi
+    
+    if [[ "$prompt" =~ (analyze|analysis|examine|investigate|deep|thorough) ]]; then
+        socratic_score=$((socratic_score + 2))
+    fi
+    
+    # Puzzle and coding patterns
+    if [[ "$prompt" =~ (puzzle|solve|algorithm|code|programming|implement|sort|search|optimize|data.*structure) ]]; then
+        puzzle_score=$((puzzle_score + 3))
+    fi
+
+    if [[ "$prompt" =~ (challenge|problem|question|task|assignment|exercise) ]]; then
+        puzzle_score=$((puzzle_score + 2))
+    fi
+
+    # Lil-specific patterns - highest priority for puzzle mechanism
+    if [[ "$prompt" =~ (lil|LIL|using lil|in lil|with lil|lil programming|lil language|lil script) ]]; then
+        puzzle_score=$((puzzle_score + 5))  # Higher score than other patterns
+    fi
+    
+    # Find highest scoring mechanism
+    local max_score=0
+    local best_mechanism="DIRECT"
+    
+    if [ "$direct_score" -gt "$max_score" ]; then
+        max_score="$direct_score"
+        best_mechanism="DIRECT"
+    fi
+    if [ "$consensus_score" -gt "$max_score" ]; then
+        max_score="$consensus_score"
+        best_mechanism="CONSENSUS"
+    fi
+    if [ "$synthesis_score" -gt "$max_score" ]; then
+        max_score="$synthesis_score"
+        best_mechanism="SYNTHESIS"
+    fi
+    if [ "$exploration_score" -gt "$max_score" ]; then
+        max_score="$exploration_score"
+        best_mechanism="EXPLORATION"
+    fi
+    if [ "$socratic_score" -gt "$max_score" ]; then
+        max_score="$socratic_score"
+        best_mechanism="SOCRATIC"
+    fi
+    if [ "$critique_score" -gt "$max_score" ]; then
+        max_score="$critique_score"
+        best_mechanism="CRITIQUE"
+    fi
+    if [ "$peer_review_score" -gt "$max_score" ]; then
+        max_score="$peer_review_score"
+        best_mechanism="PEER_REVIEW"
+    fi
+    if [ "$puzzle_score" -gt "$max_score" ]; then
+        max_score="$puzzle_score"
+        best_mechanism="PUZZLE"
+    fi
+    
+    # Calculate confidence based on score distribution
+    local total_score=$((direct_score + consensus_score + synthesis_score + exploration_score + socratic_score + critique_score + peer_review_score + puzzle_score))
+    
+    local confidence="0.0"
+    if [ "$total_score" -gt 0 ]; then
+        confidence=$(echo "scale=2; $max_score / $total_score" | bc -l 2>/dev/null || echo "0.5")
+    fi
+    
+    echo "$best_mechanism:$confidence"
+}
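+
+# Example (illustrative): "compare these two approaches" hits the
+# compare/contrast pattern (+3) and the strategy/approach pattern (+2),
+# and no other pattern, so the function prints "EXPLORATION:1.00":
+#   analyze_intent_patterns "compare these two approaches"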
+
+# --- Complexity Analysis ---
+analyze_complexity() {
+    local prompt="$1"
+    local word_count=$(echo "$prompt" | wc -w)
+    local sentence_count=$(echo "$prompt" | grep -o '[.!?]' | wc -l)  # count sentence terminators
+    local question_count=$(echo "$prompt" | grep -o '?' | wc -l)
+    
+    # Simple heuristics for complexity
+    local complexity_score=0
+    
+    # Word count factor
+    if [ "$word_count" -gt 50 ]; then
+        complexity_score=$((complexity_score + 3))
+    elif [ "$word_count" -gt 20 ]; then
+        complexity_score=$((complexity_score + 2))
+    elif [ "$word_count" -le 5 ]; then
+        complexity_score=$((complexity_score - 2))
+    fi
+    
+    # Multiple questions suggest complexity
+    if [ "$question_count" -gt 1 ]; then
+        complexity_score=$((complexity_score + 2))
+    fi
+    
+    # Multiple sentences suggest complexity
+    if [ "$sentence_count" -gt 3 ]; then
+        complexity_score=$((complexity_score + 1))
+    fi
+    
+    echo "$complexity_score"
+}
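+
+# Example (illustrative): analyze_complexity "What is recursion?" yields -2
+# (three words, one question, one sentence), which classify_prompt later
+# treats as a signal to route short prompts to DIRECT.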
+
+# --- Confidence Weighted Classification ---
+classify_prompt() {
+    local prompt="$1"
+    local use_semantic="${2:-true}"
+    
+    echo "=== Advanced Prompt Classification ===" >&2
+    echo "Analyzing: \"$prompt\"" >&2
+    echo >&2
+    
+    # Get pattern-based classification
+    local pattern_result=$(analyze_intent_patterns "$prompt")
+    local pattern_mechanism=$(echo "$pattern_result" | cut -d':' -f1)
+    local pattern_confidence=$(echo "$pattern_result" | cut -d':' -f2)
+    
+    echo "Pattern Analysis: $pattern_mechanism (confidence: $pattern_confidence)" >&2
+    
+    # Get complexity score
+    local complexity=$(analyze_complexity "$prompt")
+    echo "Complexity Score: $complexity" >&2
+    
+    # Apply complexity adjustments
+    if [ "$complexity" -lt 0 ] && [ "$pattern_mechanism" != "DIRECT" ]; then
+        echo "Low complexity detected - suggesting DIRECT" >&2
+        pattern_mechanism="DIRECT"
+        pattern_confidence="0.8"
+    elif [ "$complexity" -gt 4 ]; then
+        echo "High complexity detected - boosting complex mechanisms" >&2
+        case "$pattern_mechanism" in
+            "DIRECT")
+                pattern_mechanism="SOCRATIC"
+                pattern_confidence="0.7"
+                ;;
+        esac
+    fi
+    
+    local final_mechanism="$pattern_mechanism"
+    local final_confidence="$pattern_confidence"
+    
+    # Use semantic classification if available and requested
+    if [ "$use_semantic" = "true" ] && command -v ollama >/dev/null 2>&1; then
+        echo "Running semantic analysis..." >&2
+        local semantic_result=$(classify_semantic "$prompt")
+        
+        # Clean up the result
+        semantic_result=$(echo "$semantic_result" | tr -d ' ' | head -n1)
+        
+        if [[ "$semantic_result" =~ ^[A-Z_]+:[0-9.]+$ ]]; then
+            local semantic_mechanism=$(echo "$semantic_result" | cut -d':' -f1)
+            local semantic_confidence=$(echo "$semantic_result" | cut -d':' -f2)
+            
+            echo "Semantic Analysis: $semantic_mechanism (confidence: $semantic_confidence)" >&2
+            
+            # Weighted combination of pattern and semantic results
+            local pattern_weight=$(echo "$pattern_confidence * 0.6" | bc -l 2>/dev/null || echo "0.3")
+            local semantic_weight=$(echo "$semantic_confidence * 0.4" | bc -l 2>/dev/null || echo "0.2")
+            
+            # If both agree, boost confidence
+            if [ "$pattern_mechanism" = "$semantic_mechanism" ]; then
+                final_confidence=$(echo "$pattern_confidence + 0.2" | bc -l 2>/dev/null || echo "0.8")
+                if (( $(echo "$final_confidence > 1.0" | bc -l 2>/dev/null || echo "0") )); then
+                    final_confidence="1.0"
+                fi
+                echo "Pattern and semantic agree - boosting confidence to $final_confidence" >&2
+            # If semantic has higher confidence, use it
+            elif (( $(echo "$semantic_confidence > $pattern_confidence + 0.1" | bc -l 2>/dev/null || echo "0") )); then
+                final_mechanism="$semantic_mechanism"
+                final_confidence="$semantic_confidence"
+                echo "Using semantic result due to higher confidence" >&2
+            fi
+        else
+            log_warning "Semantic classification failed or returned invalid format: $semantic_result"
+        fi
+    fi
+    
+    # Final confidence check
+    if (( $(echo "$final_confidence < $CONFIDENCE_THRESHOLD" | bc -l 2>/dev/null || echo "0") )); then
+        echo "Low confidence ($final_confidence < $CONFIDENCE_THRESHOLD) - defaulting to DIRECT" >&2
+        final_mechanism="DIRECT"
+        final_confidence="0.5"
+    fi
+    
+    echo >&2
+    echo "=== Final Classification ===" >&2
+    echo "Mechanism: $final_mechanism" >&2
+    echo "Confidence: $final_confidence" >&2
+    echo "============================" >&2
+    
+    echo "$final_mechanism:$final_confidence"
+}
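+
+# Example (illustrative): a caller can split the MECHANISM:CONFIDENCE pair
+# with parameter expansion; the second argument disables semantic analysis:
+#   result=$(classify_prompt "why does my sort fail on large inputs?" false)
+#   mechanism=${result%%:*}
+#   confidence=${result##*:}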
+
+# --- Export Functions ---
+export -f classify_prompt
+export -f analyze_intent_patterns
+export -f analyze_complexity
+export -f classify_semantic
diff --git a/bash/talk-to-computer/common.sh b/bash/talk-to-computer/common.sh
new file mode 100755
index 0000000..4f11ffe
--- /dev/null
+++ b/bash/talk-to-computer/common.sh
@@ -0,0 +1,234 @@
+#!/bin/bash
+
+# Common functionality shared across all AI thinking mechanisms
+# This file contains utilities and initialization code used by multiple scripts
+
+# --- Script Directory Setup ---
+# Get the directory where this script is located
+get_script_dir() {
+    echo "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+}
+
+# --- Initialization Functions ---
+
+# Initialize a thinking mechanism script with common dependencies
+init_thinking_mechanism() {
+    local script_path="$1"
+
+    # Set up script directory
+    SCRIPT_DIR="$(cd "$(dirname "${script_path}")" && pwd)"
+
+    # Source configuration
+    source "${SCRIPT_DIR}/config.sh"
+
+    # Source logging system
+    source "${SCRIPT_DIR}/logging.sh"
+
+    # Source quality guard for output quality protection
+    source "${SCRIPT_DIR}/quality_guard.sh"
+
+    # Get mechanism name automatically
+    MECHANISM_NAME=$(get_mechanism_name "$script_path")
+
+    # Set up resource management
+    setup_cleanup_trap
+
+    export SCRIPT_DIR MECHANISM_NAME
+}
+
+# Initialize the main dispatcher with common dependencies
+init_dispatcher() {
+    local script_path="$1"
+
+    # Set up script directory
+    SCRIPT_DIR="$(cd "$(dirname "${script_path}")" && pwd)"
+
+    # Source logging system (dispatcher sources this later)
+    # source "${SCRIPT_DIR}/logging.sh"
+
+    export SCRIPT_DIR
+}
+
+# --- Model Validation Functions ---
+
+# Validate and set a model with fallback
+validate_and_set_model() {
+    local model_var="$1"
+    local model_name="$2"
+    local fallback_model="$3"
+
+    local validated_model
+    validated_model=$(validate_model "$model_name" "$fallback_model")
+
+    if [ $? -ne 0 ]; then
+        log_error "No valid model available for $model_var"
+        return 1
+    fi
+
+    eval "$model_var=\"$validated_model\""
+    echo "Set $model_var to: $validated_model"
+}
+
+# --- Argument Processing Functions ---
+
+# Common file path validation
+validate_file_arg() {
+    local file_path="$1"
+
+    if [ -n "$file_path" ]; then
+        validate_file_path "$file_path"
+        if [ $? -ne 0 ]; then
+            return 1
+        fi
+    fi
+
+    echo "$file_path"
+}
+
+# --- Cleanup Functions ---
+
+# Global array to track resources for cleanup
+declare -a CLEANUP_RESOURCES=()
+
+# Register a resource for cleanup
+register_cleanup_resource() {
+    local resource="$1"
+    CLEANUP_RESOURCES+=("$resource")
+}
+
+# Clean up temporary resources
+cleanup_resources() {
+    local exit_code=$?
+
+    # Clean up registered resources
+    for resource in "${CLEANUP_RESOURCES[@]}"; do
+        if [ -d "$resource" ]; then
+            rm -rf "$resource" 2>/dev/null || true
+        elif [ -f "$resource" ]; then
+            rm -f "$resource" 2>/dev/null || true
+        fi
+    done
+
+    # Clean up any additional temp directories
+    if [ -n "$TEMP_DIR" ] && [ -d "$TEMP_DIR" ]; then
+        rm -rf "$TEMP_DIR" 2>/dev/null || true
+    fi
+
+    exit $exit_code
+}
+
+# Set up trap for cleanup on script exit
+setup_cleanup_trap() {
+    trap cleanup_resources EXIT INT TERM
+}
+
+# Create a temporary directory with automatic cleanup
+create_managed_temp_dir() {
+    local prefix="${1:-ai_thinking}"
+    local temp_dir
+
+    temp_dir=$(mktemp -d -t "${prefix}_XXXXXX")
+    register_cleanup_resource "$temp_dir"
+
+    echo "$temp_dir"
+}
+
+# Create a temporary file with automatic cleanup
+create_managed_temp_file() {
+    local prefix="${1:-ai_thinking}"
+    local suffix="${2:-tmp}"
+    local temp_file
+
+    temp_file=$(mktemp -t "${prefix}_XXXXXX.${suffix}")
+    register_cleanup_resource "$temp_file"
+
+    echo "$temp_file"
+}
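+
+# Example (illustrative): a mechanism script can request scratch space that
+# cleanup_resources removes automatically when the script exits:
+#   setup_cleanup_trap
+#   work_dir=$(create_managed_temp_dir "consensus")
+#   draft=$(create_managed_temp_file "consensus" "md")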
+
+# --- Standardized Error Handling ---
+
+# Standardized error codes
+ERROR_INVALID_ARGUMENT=1
+ERROR_FILE_NOT_FOUND=2
+ERROR_MODEL_UNAVAILABLE=3
+ERROR_VALIDATION_FAILED=4
+ERROR_PROCESSING_FAILED=5
+ERROR_RESOURCE_ERROR=6
+
+# Standardized error handling function
+handle_error() {
+    local error_code="$1"
+    local error_message="$2"
+    local script_name="${3:-$(basename "${BASH_SOURCE[1]}")}"
+    local line_number="${4:-${BASH_LINENO[0]}}"
+
+    # Log the error
+    log_error "[$script_name:$line_number] $error_message"
+
+    # Print user-friendly error message
+    echo "Error: $error_message" >&2
+
+    # Exit with appropriate code
+    exit "$error_code"
+}
+
+# Validation error handler
+handle_validation_error() {
+    local error_message="$1"
+    local script_name="${2:-$(basename "${BASH_SOURCE[1]}")}"
+
+    handle_error "$ERROR_VALIDATION_FAILED" "$error_message" "$script_name" "${BASH_LINENO[0]}"
+}
+
+# Model error handler
+handle_model_error() {
+    local model_name="$1"
+    local script_name="${2:-$(basename "${BASH_SOURCE[1]}")}"
+
+    handle_error "$ERROR_MODEL_UNAVAILABLE" "Model '$model_name' is not available" "$script_name" "${BASH_LINENO[0]}"
+}
+
+# File error handler
+handle_file_error() {
+    local file_path="$1"
+    local operation="$2"
+    local script_name="${3:-$(basename "${BASH_SOURCE[1]}")}"
+
+    handle_error "$ERROR_FILE_NOT_FOUND" "Cannot $operation file: $file_path" "$script_name" "${BASH_LINENO[0]}"
+}
+
+# Processing error handler
+handle_processing_error() {
+    local operation="$1"
+    local details="$2"
+    local script_name="${3:-$(basename "${BASH_SOURCE[1]}")}"
+
+    handle_error "$ERROR_PROCESSING_FAILED" "Failed to $operation: $details" "$script_name" "${BASH_LINENO[0]}"
+}
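+
+# Example (illustrative): typical use of the handlers inside a mechanism
+# script, assuming $input_file and $MODEL are set by the caller:
+#   [ -r "$input_file" ] || handle_file_error "$input_file" "read"
+#   response=$(ollama run "$MODEL" "$prompt") || handle_model_error "$MODEL"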
+
+# --- Utility Functions ---
+
+# Check if a command exists
+command_exists() {
+    command -v "$1" >/dev/null 2>&1
+}
+
+# Create a temporary directory (caller is responsible for cleanup; see
+# create_managed_temp_dir for the auto-cleanup variant)
+create_temp_dir() {
+    local prefix="${1:-ai_thinking}"
+    local temp_dir
+
+    temp_dir=$(mktemp -d -t "${prefix}_XXXXXX")
+    echo "$temp_dir"
+}
+
+# --- Common Constants ---
+
+# Default values
+DEFAULT_ROUNDS=2
+DEFAULT_MODEL="gemma3n:e2b"
+
+# File paths
+LOG_DIR=~/tmp/ai_thinking
+
+export DEFAULT_ROUNDS DEFAULT_MODEL LOG_DIR
diff --git a/bash/talk-to-computer/computer b/bash/talk-to-computer/computer
new file mode 100755
index 0000000..77fffcd
--- /dev/null
+++ b/bash/talk-to-computer/computer
@@ -0,0 +1,477 @@
+#!/bin/bash
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Computer Dispatch System
+# This script intelligently routes prompts to the most appropriate thinking mechanism
+# or directly to Ollama based on complexity, question type, and user intent.
+#
+# APPLICATION LOGIC:
+# The computer dispatch system implements an intelligent routing mechanism that
+# analyzes user prompts and determines the optimal response strategy. The system
+# operates through three distinct phases designed to maximize response quality:
+#
+# PHASE 1 - PROMPT ANALYSIS:
+#   - Analyzes prompt complexity, length, and question type
+#   - Identifies user intent and specific keywords
+#   - Determines if direct Ollama response is appropriate
+#   - Classifies prompts into response categories
+#
+# PHASE 2 - MECHANISM SELECTION:
+#   - Routes to appropriate thinking mechanism based on classification
+#   - Uses decision tree with keywords for clear cases
+#   - Considers prompt complexity and user intent
+#   - Falls back to direct Ollama for simple cases
+#
+# PHASE 3 - RESPONSE EXECUTION:
+#   - Executes the selected mechanism or direct Ollama call
+#   - Maintains transparency about the routing decision
+#   - Provides consistent output format regardless of mechanism
+#   - Logs the decision process for analysis
+#
+# DISPATCH MODELING:
+# The system applies intelligent routing principles to AI response generation:
+#   - Prompt classification helps match complexity to appropriate mechanism
+#   - Keyword analysis identifies specific user needs and intent
+#   - Decision tree provides consistent, predictable routing logic
+#   - Direct Ollama routing handles simple cases efficiently
+#   - Transparency shows users how their prompt was processed
+#   - The system may improve response quality by using specialized mechanisms
+#
+# The dispatch process emphasizes efficiency and appropriateness,
+# ensuring users get the best possible response for their specific needs.
+# The system balances speed with depth based on prompt characteristics.
+
+# --- Model Configuration ---
+DEFAULT_MODEL="gemma3n:e2b"
+
+# --- Defaults ---
+DEFAULT_ROUNDS=2
+
+# Help function
+show_computer_help() {
+    echo -e "\n\tComputer"
+    echo -e "\tThis script intelligently routes prompts to the most appropriate thinking mechanism"
+    echo -e "\tor directly to Ollama based on complexity, question type, and user intent."
+    echo -e "\n\tUsage: $0 [options] \"<your prompt>\" [number_of_rounds]"
+    echo -e "\n\tOptions:"
+    echo -e "\t  -f <file_path>    Append the contents of the file to the prompt"
+    echo -e "\t  -d               Force direct Ollama response (bypass thinking mechanisms)"
+    echo -e "\t  -m <mechanism>   Manually select thinking mechanism:"
+    echo -e "\t                    direct, socratic, exploration, consensus, critique,"
+    echo -e "\t                    synthesis, peer-review, puzzle"
+    echo -e "\t  -h, --help       Show this help message"
+    echo -e "\n\tExamples:"
+    echo -e "\t  $0 \"What is 2+2?\"                                    # Auto-routing"
+    echo -e "\t  $0 -f document.txt \"Analyze this\" 3                 # With file, 3 rounds"
+    echo -e "\t  $0 -d \"Simple question\"                             # Direct response only"
+    echo -e "\t  $0 -m puzzle \"Using Lil, how can I...\"             # Force puzzle mechanism"
+    echo -e "\n\tIf number_of_rounds is not provided, defaults to $DEFAULT_ROUNDS rounds."
+    echo -e "\n"
+}
+
+# Available mechanisms
+show_mechanisms() {
+    echo -e "\n\tAvailable Thinking Mechanisms:"
+    echo -e "\t  direct        - Simple questions, direct answers"
+    echo -e "\t  socratic      - Deep questioning and analysis"
+    echo -e "\t  exploration   - Multiple solution paths and comparison"
+    echo -e "\t  consensus     - Multiple model agreement"
+    echo -e "\t  critique      - Improvement suggestions and refinement"
+    echo -e "\t  synthesis     - Combining and integrating approaches"
+    echo -e "\t  peer-review   - Collaborative feedback and review"
+    echo -e "\t  puzzle        - Coding problems and Lil programming"
+    echo -e "\n"
+}
+
+# --- Argument Validation ---
+# (must come after the help functions are defined so they can be called)
+if [ "$#" -lt 1 ]; then
+    show_computer_help
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+FORCE_DIRECT=false
+MANUAL_MECHANISM=""
+while getopts "f:dm:h-:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    d)
+      FORCE_DIRECT=true
+      ;;
+    m)
+      MANUAL_MECHANISM="$OPTARG"
+      ;;
+    h)
+      show_computer_help
+      exit 0
+      ;;
+    -)
+      case "${OPTARG}" in
+        help)
+          show_computer_help
+          exit 0
+          ;;
+        mechanisms)
+          show_mechanisms
+          exit 0
+          ;;
+        *)
+          echo "Invalid option: --${OPTARG}" >&2
+          exit 1
+          ;;
+      esac
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      show_computer_help
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+ROUNDS="${2:-$DEFAULT_ROUNDS}"
+
+# Store original prompt for validation after sourcing
+ORIGINAL_PROMPT="$PROMPT"
+ORIGINAL_FILE_PATH="$FILE_PATH"
+ORIGINAL_ROUNDS="$ROUNDS"
+
+# Source the logging system using absolute path
+source "${SCRIPT_DIR}/logging.sh"
+
+# Ensure validation functions are available
+if ! command -v validate_prompt >/dev/null 2>&1; then
+    echo "Error: Validation functions not loaded properly" >&2
+    exit 1
+fi
+
+# Validate and set default model with fallback
+DEFAULT_MODEL=$(validate_model "$DEFAULT_MODEL" "llama3:8b-instruct-q4_K_M")
+if [ $? -ne 0 ]; then
+    log_error "No valid default model available"
+    exit 1
+fi
+
+# Validate prompt
+PROMPT=$(validate_prompt "$ORIGINAL_PROMPT")
+if [ $? -ne 0 ]; then
+    exit 1
+fi
+
+# Validate file path if provided
+if [ -n "$ORIGINAL_FILE_PATH" ]; then
+    if ! validate_file_path "$ORIGINAL_FILE_PATH"; then
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$ORIGINAL_FILE_PATH")
+    PROMPT="${PROMPT}"$'\n'"[FILE CONTENTS]"$'\n'"${FILE_CONTENTS}"$'\n'"[END FILE]"
+fi
+
+# Validate rounds
+if ! [[ "$ORIGINAL_ROUNDS" =~ ^[1-9][0-9]*$ ]] || [ "$ORIGINAL_ROUNDS" -gt 5 ]; then
+    log_error "Invalid number of rounds: $ORIGINAL_ROUNDS (must be 1-5)"
+    exit 1
+fi
+
+# --- File Initialization ---
+# Create a temporary directory if it doesn't exist
+mkdir -p ~/tmp
+# Create a unique file for this session based on the timestamp
+SESSION_FILE=~/tmp/computer_$(date +%Y%m%d_%H%M%S).txt
+
+# Initialize timing
+SESSION_ID=$(generate_session_id)
+start_timer "$SESSION_ID" "computer"
+
+echo "Computer Dispatch Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "FORCE DIRECT: ${FORCE_DIRECT}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Advanced Prompt Analysis Function ---
+analyze_prompt() {
+    local prompt="$1"
+    local use_advanced="${2:-true}"
+    
+    # Check for direct Ollama requests (explicit user intent)
+    if [[ "$prompt" =~ (direct|simple|quick|fast|straight) ]]; then
+        echo "DIRECT:1.0"
+        return
+    fi
+    
+    # Use advanced classification if available
+    if [ "$use_advanced" = "true" ] && [ -f "${SCRIPT_DIR}/classifier.sh" ]; then
+        source "${SCRIPT_DIR}/classifier.sh"
+        local result=$(classify_prompt "$prompt" true)
+        if [[ "$result" =~ ^[A-Z_]+:[0-9.]+$ ]]; then
+            echo "$result"
+            return
+        else
+            log_warning "Advanced classifier failed, falling back to simple classification"
+        fi
+    fi
+    
+    # Fallback to simple classification
+    local analysis=""
+    local confidence="0.6"
+    
+    # Check prompt length (simple heuristic for complexity)
+    local word_count=$(echo "$prompt" | wc -w)
+    
+    # Very short prompts (likely simple questions)
+    if [ "$word_count" -le 5 ]; then
+        echo "DIRECT:0.8"
+        return
+    fi
+    
+    # Keyword-based classification with priority order
+    if [[ "$prompt" =~ (consensus|agree|disagree|vote|multiple.*perspectives|multiple.*opinions) ]]; then
+        analysis="CONSENSUS"
+    elif [[ "$prompt" =~ (synthesize|combine|integrate|unify|merge|consolidate) ]]; then
+        analysis="SYNTHESIS"
+    elif [[ "$prompt" =~ (explore.*paths|explore.*alternatives|compare.*strategies|compare.*approaches|what.*options) ]]; then
+        analysis="EXPLORATION"
+    elif [[ "$prompt" =~ (improve|refine|edit|revise|better|enhance|polish|fix|optimize) ]]; then
+        analysis="CRITIQUE"
+    elif [[ "$prompt" =~ (review|feedback|peer.*review|collaborate|suggest|advice) ]]; then
+        analysis="PEER_REVIEW"
+    elif [[ "$prompt" =~ (analyze|examine|investigate|deep.*dive|thorough.*analysis|comprehensive) ]]; then
+        analysis="SOCRATIC"
+    elif [[ "$prompt" =~ (explore|alternatives|options|compare|strategies|approaches) ]]; then
+        analysis="EXPLORATION"
+        confidence="0.5"  # Lower confidence due to ambiguous keywords
+    else
+        # Default to direct for unclear cases
+        analysis="DIRECT"
+        confidence="0.4"
+    fi
+    
+    echo "$analysis:$confidence"
+}
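+
+# Example (illustrative): when the advanced classifier is unavailable,
+# "synthesize these three designs into one" matches the synthesize/combine
+# keyword group in the fallback path, so this prints "SYNTHESIS:0.6":
+#   analyze_prompt "synthesize these three designs into one" false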
+
+# --- Mechanism Selection ---
+echo "Analyzing prompt and selecting mechanism..."
+echo "PROMPT ANALYSIS:" >> "${SESSION_FILE}"
+
+if [ "$FORCE_DIRECT" = true ]; then
+    MECHANISM="DIRECT"
+    CONFIDENCE="1.0"
+    REASON="User requested direct response with -d flag"
+else
+    # Check for manual mechanism selection
+    if [ -n "$MANUAL_MECHANISM" ]; then
+        # Validate manual mechanism selection
+        case "$MANUAL_MECHANISM" in
+            direct|DIRECT)
+                MECHANISM="DIRECT"
+                CONFIDENCE="1.0"
+                REASON="User manually selected direct mechanism"
+                ;;
+            socratic|SOCRATIC)
+                MECHANISM="SOCRATIC"
+                CONFIDENCE="1.0"
+                REASON="User manually selected socratic mechanism"
+                ;;
+            exploration|EXPLORATION)
+                MECHANISM="EXPLORATION"
+                CONFIDENCE="1.0"
+                REASON="User manually selected exploration mechanism"
+                ;;
+            consensus|CONSENSUS)
+                MECHANISM="CONSENSUS"
+                CONFIDENCE="1.0"
+                REASON="User manually selected consensus mechanism"
+                ;;
+            critique|CRITIQUE)
+                MECHANISM="CRITIQUE"
+                CONFIDENCE="1.0"
+                REASON="User manually selected critique mechanism"
+                ;;
+            synthesis|SYNTHESIS)
+                MECHANISM="SYNTHESIS"
+                CONFIDENCE="1.0"
+                REASON="User manually selected synthesis mechanism"
+                ;;
+            peer-review|peer_review|PEER_REVIEW|PEER-REVIEW)
+                MECHANISM="PEER_REVIEW"
+                CONFIDENCE="1.0"
+                REASON="User manually selected peer-review mechanism"
+                ;;
+            puzzle|PUZZLE)
+                MECHANISM="PUZZLE"
+                CONFIDENCE="1.0"
+                REASON="User manually selected puzzle mechanism"
+                ;;
+            *)
+                echo "Error: Invalid mechanism '$MANUAL_MECHANISM'" >&2
+                echo "Use --mechanisms to see available options." >&2
+                exit 1
+                ;;
+        esac
+    else
+        ANALYSIS_RESULT=$(analyze_prompt "$PROMPT")
+        MECHANISM=$(echo "$ANALYSIS_RESULT" | cut -d':' -f1)
+        CONFIDENCE=$(echo "$ANALYSIS_RESULT" | cut -d':' -f2)
+
+        # Validate confidence score
+        if [[ ! "$CONFIDENCE" =~ ^[0-9.]+$ ]]; then
+            CONFIDENCE="0.5"
+            log_warning "Invalid confidence score, defaulting to 0.5"
+        fi
+    fi
+    
+    case "$MECHANISM" in
+        "DIRECT")
+            REASON="Simple prompt or direct request (confidence: $CONFIDENCE)"
+            ;;
+        "CONSENSUS")
+            REASON="Multiple perspectives or consensus needed (confidence: $CONFIDENCE)"
+            ;;
+        "SYNTHESIS")
+            REASON="Integration of multiple approaches needed (confidence: $CONFIDENCE)"
+            ;;
+        "EXPLORATION")
+            REASON="Systematic exploration of alternatives needed (confidence: $CONFIDENCE)"
+            ;;
+        "SOCRATIC")
+            REASON="Deep analysis or exploration required (confidence: $CONFIDENCE)"
+            ;;
+        "CRITIQUE")
+            REASON="Improvement or refinement requested (confidence: $CONFIDENCE)"
+            ;;
+        "PEER_REVIEW")
+            REASON="Collaborative review or feedback needed (confidence: $CONFIDENCE)"
+            ;;
+        "PUZZLE")
+            REASON="Puzzle solving or coding challenge (confidence: $CONFIDENCE)"
+            ;;
+        *)
+            REASON="Default fallback (confidence: $CONFIDENCE)"
+            MECHANISM="DIRECT"
+            ;;
+    esac
+    
+    # Low confidence warning
+    if (( $(echo "$CONFIDENCE < 0.6" | bc -l 2>/dev/null || echo "0") )); then
+        log_warning "Low classification confidence ($CONFIDENCE) for prompt: $PROMPT"
+        echo "Note: Classification confidence is low ($CONFIDENCE). Consider using -d for direct response." >&2
+    fi
+fi
+
+echo "Selected mechanism: ${MECHANISM}" >> "${SESSION_FILE}"
+echo "Reason: ${REASON}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+echo "Selected mechanism: ${MECHANISM}"
+echo "Reason: ${REASON}"
+echo "---------------------------------"
+
+# --- Response Execution ---
+echo "Executing selected mechanism..."
+echo "RESPONSE EXECUTION:" >> "${SESSION_FILE}"
+
+case "$MECHANISM" in
+    "DIRECT")
+        echo "Using direct Ollama response..."
+        echo "DIRECT OLLAMA RESPONSE:" >> "${SESSION_FILE}"
+        
+        DIRECT_PROMPT="You are an expert assistant. You always flag if you don't know something. Please provide a clear, helpful response to the following prompt: ${PROMPT}"
+        
+        RESPONSE=$(ollama run "${DEFAULT_MODEL}" "${DIRECT_PROMPT}")
+        
+        echo "${RESPONSE}" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+        
+        echo "---------------------------------"
+        echo "Direct response:"
+        echo "---------------------------------"
+        echo "${RESPONSE}"
+        ;;
+        
+    "CONSENSUS")
+        echo "Delegating to consensus mechanism..."
+        echo "DELEGATING TO CONSENSUS:" >> "${SESSION_FILE}"
+        
+        # Execute consensus script and display output directly
+        "${SCRIPT_DIR}/consensus" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "SOCRATIC")
+        echo "Delegating to Socratic mechanism..."
+        echo "DELEGATING TO SOCRATIC:" >> "${SESSION_FILE}"
+        
+        # Execute Socratic script and display output directly
+        "${SCRIPT_DIR}/socratic" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "CRITIQUE")
+        echo "Delegating to critique mechanism..."
+        echo "DELEGATING TO CRITIQUE:" >> "${SESSION_FILE}"
+        
+        # Execute critique script and display output directly
+        "${SCRIPT_DIR}/critique" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "PEER_REVIEW")
+        echo "Delegating to peer-review mechanism..."
+        echo "DELEGATING TO PEER_REVIEW:" >> "${SESSION_FILE}"
+        
+        # Execute peer-review script and display output directly
+        "${SCRIPT_DIR}/peer-review" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "SYNTHESIS")
+        echo "Delegating to synthesis mechanism..."
+        echo "DELEGATING TO SYNTHESIS:" >> "${SESSION_FILE}"
+        
+        # Execute synthesis script and display output directly
+        "${SCRIPT_DIR}/synthesis" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "EXPLORATION")
+        echo "Delegating to exploration mechanism..."
+        echo "DELEGATING TO EXPLORATION:" >> "${SESSION_FILE}"
+        
+        # Execute exploration script and display output directly
+        "${SCRIPT_DIR}/exploration" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+        
+    "PUZZLE")
+        echo "Delegating to puzzle mechanism..."
+        echo "DELEGATING TO PUZZLE:" >> "${SESSION_FILE}"
+        
+        # Execute puzzle script and display output directly
+        "${SCRIPT_DIR}/puzzle" "${PROMPT}" "${ROUNDS}" 2>&1 | tee -a "${SESSION_FILE}"
+        ;;
+esac
+
+# --- Final Summary ---
+echo "" >> "${SESSION_FILE}"
+echo "DISPATCH SUMMARY:" >> "${SESSION_FILE}"
+echo "================" >> "${SESSION_FILE}"
+echo "Original Prompt: ${PROMPT}" >> "${SESSION_FILE}"
+echo "Selected Mechanism: ${MECHANISM}" >> "${SESSION_FILE}"
+echo "Reason: ${REASON}" >> "${SESSION_FILE}"
+echo "Rounds: ${ROUNDS}" >> "${SESSION_FILE}"
+
+# End timing
+duration=$(end_timer "$SESSION_ID" "computer")
+
+echo ""
+echo "Execution time: ${duration} seconds"
+echo "Full dispatch log: ${SESSION_FILE}"
diff --git a/bash/talk-to-computer/config.sh b/bash/talk-to-computer/config.sh
new file mode 100755
index 0000000..ec612cc
--- /dev/null
+++ b/bash/talk-to-computer/config.sh
@@ -0,0 +1,126 @@
+#!/bin/bash
+
+# Centralized Configuration File
+# This file contains all model configurations, defaults, and system settings
+# for the AI thinking mechanisms system.
+
+# --- Default Models ---
+
+# Main dispatcher model
+DEFAULT_MODEL="gemma3n:e2b"
+
+# Classification model
+CLASSIFIER_MODEL="gemma3n:e2b"
+
+# --- Thinking Mechanism Models ---
+
+# Exploration mechanism
+EXPLORATION_MODEL="llama3:8b-instruct-q4_K_M"
+ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Consensus mechanism
+CONSENSUS_MODELS=(
+    "llama3:8b-instruct-q4_K_M"
+    "phi3:3.8b-mini-4k-instruct-q4_K_M"
+    "deepseek-r1:1.5b"
+    "gemma3n:e2b"
+    "dolphin3:latest"
+)
+CONSENSUS_JUDGE_MODEL="gemma3n:e2b"
+
+# Socratic mechanism
+SOCRATIC_RESPONSE_MODEL="llama3:8b-instruct-q4_K_M"
+SOCRATIC_QUESTION_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Critique mechanism
+CRITIQUE_MODEL="llama3:8b-instruct-q4_K_M"
+
+# Synthesis mechanism
+SYNTHESIS_MODEL="llama3:8b-instruct-q4_K_M"
+
+# Peer Review mechanism
+PEER_REVIEW_MODEL="llama3:8b-instruct-q4_K_M"
+
+# Puzzle mechanism
+PUZZLE_MODEL="llama3:8b-instruct-q4_K_M"
+PUZZLE_ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# --- System Settings ---
+
+# Default values
+DEFAULT_ROUNDS=2
+DEFAULT_LANGUAGE="lil"
+
+# Quality Guard settings
+MIN_RESPONSE_LENGTH=30
+MAX_REPETITION_RATIO=0.4
+MAX_NONSENSE_SCORE=0.6
+DEGRADATION_THRESHOLD=0.65
+MAX_CORRECTION_ATTEMPTS=2
+FALLBACK_ENABLED=true
+
+# Logging settings
+LOG_DIR=~/tmp/ai_thinking
+SESSION_LOG="${LOG_DIR}/session_$(date +%Y%m%d_%H%M%S).json"
+ERROR_LOG="${LOG_DIR}/errors.log"
+METRICS_FILE="${LOG_DIR}/performance_metrics.json"
+CLASSIFICATION_LOG="${LOG_DIR}/classification.log"
+
+# Security settings
+MAX_PROMPT_LENGTH=10000
+
+# --- Model Fallbacks ---
+
+# Fallback model for any model that fails validation
+FALLBACK_MODEL="gemma3n:e2b"
+
+# --- Environment Variable Support ---
+
+# Allow overriding models via environment variables
+if [ -n "$AI_DEFAULT_MODEL" ]; then
+    DEFAULT_MODEL="$AI_DEFAULT_MODEL"
+fi
+
+if [ -n "$AI_CLASSIFIER_MODEL" ]; then
+    CLASSIFIER_MODEL="$AI_CLASSIFIER_MODEL"
+fi
+
+if [ -n "$AI_EXPLORATION_MODEL" ]; then
+    EXPLORATION_MODEL="$AI_EXPLORATION_MODEL"
+fi
+
+if [ -n "$AI_ANALYSIS_MODEL" ]; then
+    ANALYSIS_MODEL="$AI_ANALYSIS_MODEL"
+fi
+
+if [ -n "$AI_PUZZLE_MODEL" ]; then
+    PUZZLE_MODEL="$AI_PUZZLE_MODEL"
+fi
+
+# --- Utility Functions ---
+
+# Get a model with fallback support
+get_model_with_fallback() {
+    local primary_model="$1"
+    local fallback_model="$2"
+
+    if [ -n "$primary_model" ]; then
+        echo "$primary_model"
+    else
+        echo "$fallback_model"
+    fi
+}
+
+# Validate if a model is available (exact match against ollama's model list,
+# so "llama3" does not falsely match "llama3:8b-instruct-q4_K_M")
+is_model_available() {
+    local model="$1"
+    ollama list 2>/dev/null | awk 'NR > 1 { print $1 }' | grep -Fqx "$model"
+}
+
+# Export all configuration variables
+# Note: bash cannot export arrays to child processes, so CONSENSUS_MODELS is
+# only visible to scripts that source this file directly.
+export DEFAULT_MODEL CLASSIFIER_MODEL EXPLORATION_MODEL ANALYSIS_MODEL
+export CONSENSUS_JUDGE_MODEL SOCRATIC_RESPONSE_MODEL SOCRATIC_QUESTION_MODEL
+export CRITIQUE_MODEL SYNTHESIS_MODEL PEER_REVIEW_MODEL PUZZLE_MODEL PUZZLE_ANALYSIS_MODEL
+export DEFAULT_ROUNDS DEFAULT_LANGUAGE MIN_RESPONSE_LENGTH MAX_REPETITION_RATIO
+export MAX_NONSENSE_SCORE DEGRADATION_THRESHOLD MAX_CORRECTION_ATTEMPTS FALLBACK_ENABLED
+export LOG_DIR SESSION_LOG ERROR_LOG METRICS_FILE CLASSIFICATION_LOG MAX_PROMPT_LENGTH FALLBACK_MODEL
diff --git a/bash/talk-to-computer/consensus b/bash/talk-to-computer/consensus
new file mode 100755
index 0000000..4089dfa
--- /dev/null
+++ b/bash/talk-to-computer/consensus
@@ -0,0 +1,388 @@
+#!/bin/bash
+
+# Consensus System
+# This script uses multiple LLM models to achieve consensus on a response through voting.
+#
+# APPLICATION LOGIC:
+# The consensus process uses a multi-round voting system where multiple AI models
+# attempt to reach agreement on a response. The system operates through four phases
+# designed to reduce bias and improve reliability:
+#
+# PHASE 1 - RESPONSE GENERATION:
+#   - Models independently generate responses to avoid identical outputs
+#   - Self-assessment of confidence provides internal quality indicators
+#   - Different model architectures may produce varied perspectives
+#   - Robust extraction handles formatting inconsistencies
+#
+# PHASE 2 - CONFIDENCE VALIDATION:
+#   - A randomly selected judge model provides external quality assessment
+#   - Random selection helps prevent bias toward any particular model
+#   - External validation may catch overconfident self-assessments
+#   - Quality control through independent review
+#
+# PHASE 3 - CROSS-MODEL VOTING:
+#   - Each model reviews all candidate responses, creating a peer-review system
+#   - Self-voting is permitted; an unparseable vote defaults to a self-vote
+#   - Collective evaluation uses different model perspectives
+#   - Voting process distributes decision-making across models
+#
+# PHASE 4 - CONSENSUS DETERMINATION:
+#   - >50% threshold requires majority agreement rather than plurality
+#   - Fallback mechanisms provide output even when consensus isn't reached
+#   - Transparent vote counting shows decision process
+#   - Caveats indicate when consensus wasn't reached
+#
+# CONSENSUS MODELING:
+# The system applies voting principles to AI model collaboration:
+#   - Random judge selection helps reduce systematic bias
+#   - Collective decision-making may reduce individual model errors
+#   - Peer review provides multiple evaluation perspectives
+#   - Transparency shows how decisions were made
+#   - Iterative rounds may improve response quality
+#   - Error handling addresses model inconsistencies
+#
+# The consensus threshold (>50%) requires majority agreement,
+# while random judge selection helps prevent single-model dominance.
+# The system emphasizes transparency and reliability in the decision process.
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Source the logging system
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# --- Model Configuration ---
+MODELS=(
+    "llama3:8b-instruct-q4_K_M"
+    "phi3:3.8b-mini-4k-instruct-q4_K_M"
+    "deepseek-r1:1.5b"
+    "gemma3n:e2b"
+    "dolphin3:latest"
+)
+
+# Randomly select judge model from available models
+JUDGE_MODEL="${MODELS[$((RANDOM % ${#MODELS[@]}))]}"
+
+# --- Defaults ---
+DEFAULT_ROUNDS=2
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tConsensus"
+    echo -e "\tThis script uses multiple LLM models to achieve consensus through voting."
+    echo -e "\n\tUsage: $0 [-f <file_path>] \"<your prompt>\" [number_of_rounds]"
+    echo -e "\n\tExample: $0 -f ./input.txt \"Please summarize this text file\" 2"
+    echo -e "\n\tIf number_of_rounds is not provided, the program will default to $DEFAULT_ROUNDS rounds."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+while getopts "f:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ROUNDS=$DEFAULT_ROUNDS
+else
+    ROUNDS=$2
+fi
+
+# If file path is provided, append its contents to the prompt
+if [ -n "$FILE_PATH" ]; then
+    if [ ! -f "$FILE_PATH" ]; then
+        echo "File not found: $FILE_PATH" >&2
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT="$PROMPT\n[FILE CONTENTS]\n$FILE_CONTENTS\n[END FILE]"
+fi
+
+# --- File Initialization ---
+# Create a temporary directory if it doesn't exist
+mkdir -p ~/tmp
+# Create a unique file for this session based on the timestamp
+SESSION_FILE=~/tmp/consensus_$(date +%Y%m%d_%H%M%S).txt
+
+echo "Consensus Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+echo "Judge model selected: ${JUDGE_MODEL}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "JUDGE MODEL: ${JUDGE_MODEL}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+echo "Processing consensus with ${#MODELS[@]} models over ${ROUNDS} rounds..."
+
+# --- Consensus Rounds ---
+for round in $(seq 1 "${ROUNDS}"); do
+    echo "Starting consensus round ${round} of ${ROUNDS}..."
+    echo "ROUND ${round}:" >> "${SESSION_FILE}"
+    echo "================" >> "${SESSION_FILE}"
+    
+    # --- Step 1: Each model generates a response with confidence ---
+    echo "Step 1: Generating responses with confidence scores..."
+    echo "STEP 1 - MODEL RESPONSES:" >> "${SESSION_FILE}"
+    
+    declare -a responses
+    declare -a confidences
+    declare -a model_names
+    
+    for i in "${!MODELS[@]}"; do
+        model="${MODELS[$i]}"
+        echo "  Generating response from ${model}..."
+        
+        # Prompt for response with confidence
+        RESPONSE_PROMPT="You are an expert assistant. Please respond to the following prompt and provide your confidence level (strictly 'low', 'medium', or 'high') at the end of your response.
+
+PROMPT: ${PROMPT}
+
+IMPORTANT: Format your response exactly as follows:
+[RESPONSE]
+Your detailed response here...
+[CONFIDENCE]
+low
+
+OR
+
+[RESPONSE]
+Your detailed response here...
+[CONFIDENCE]
+medium
+
+OR
+
+[RESPONSE]
+Your detailed response here...
+[CONFIDENCE]
+high
+
+Make sure to include both [RESPONSE] and [CONFIDENCE] tags exactly as shown."
+
+        response_output=$(ollama run "${model}" "${RESPONSE_PROMPT}")
+        response_output=$(guard_output_quality "$response_output" "$PROMPT" "$MECHANISM_NAME" "$model")
+        
+        # Extract response and confidence (drop the tag lines and a trailing
+        # blank line, but keep every content line of the response)
+        response_text=$(echo "${response_output}" | sed -n '/\[RESPONSE\]/,/\[CONFIDENCE\]/p' | sed '1d;$d' | sed '${/^[[:space:]]*$/d}')
+        
+        # If response extraction failed, use the full output (excluding confidence line)
+        if [ -z "$response_text" ]; then
+            response_text=$(echo "${response_output}" | sed '/\[CONFIDENCE\]/,$d' | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
+        fi
+        
+        confidence=$(echo "${response_output}" | grep -A1 "\[CONFIDENCE\]" | tail -n1 | tr '[:upper:]' '[:lower:]' | xargs)
+        
+        # If confidence extraction failed, try alternative methods
+        if [ -z "$confidence" ]; then
+            confidence=$(echo "${response_output}" | grep -i "confidence" | tail -n1 | grep -o -i "\(low\|medium\|high\)" | head -n1)
+        fi
+        
+        # Validate confidence level
+        if [[ ! "$confidence" =~ ^(low|medium|high)$ ]]; then
+            confidence="medium"  # Default if invalid
+        fi
+        
+        # Store results
+        responses[$i]="${response_text}"
+        confidences[$i]="${confidence}"
+        model_names[$i]="${model}"
+        
+        # Debug: Check if response was extracted properly
+        if [ -z "${response_text}" ]; then
+            echo "  WARNING: Empty response extracted from ${model}" >&2
+        fi
+        
+        # Log to session file
+        echo "MODEL ${i+1} (${model}):" >> "${SESSION_FILE}"
+        echo "Response: ${response_text}" >> "${SESSION_FILE}"
+        echo "Confidence: ${confidence}" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+    done
+    
+    # --- Step 2: Judge validates confidence scores ---
+    echo "Step 2: Validating confidence scores..."
+    echo "STEP 2 - CONFIDENCE VALIDATION:" >> "${SESSION_FILE}"
+    
+    declare -a validated_confidences
+    
+    for i in "${!MODELS[@]}"; do
+        model="${MODELS[$i]}"
+        response="${responses[$i]}"
+        confidence="${confidences[$i]}"
+        
+        JUDGE_PROMPT="You are a judge evaluating confidence scores. Review this response and its claimed confidence level, then provide your own confidence assessment.
+
+RESPONSE: ${response}
+CLAIMED CONFIDENCE: ${confidence}
+
+Based on the quality, completeness, and accuracy of this response, what is your confidence level? Respond with only: low, medium, or high"
+
+        judge_output=$(ollama run "${JUDGE_MODEL}" "${JUDGE_PROMPT}")
+        judge_output=$(guard_output_quality "$judge_output" "$PROMPT" "$MECHANISM_NAME" "$JUDGE_MODEL")
+        judge_confidence=$(echo "${judge_output}" | tr '[:upper:]' '[:lower:]' | grep -o -i "\(low\|medium\|high\)" | head -n1)
+        
+        # Validate judge confidence
+        if [[ ! "$judge_confidence" =~ ^(low|medium|high)$ ]]; then
+            judge_confidence="medium"  # Default if invalid
+        fi
+        
+        validated_confidences[$i]="${judge_confidence}"
+        
+        echo "MODEL ${i+1} (${model}):" >> "${SESSION_FILE}"
+        echo "  Claimed confidence: ${confidence}" >> "${SESSION_FILE}"
+        echo "  Validated confidence: ${judge_confidence}" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+    done
+    
+    # --- Step 3: Models vote on best response ---
+    echo "Step 3: Models voting on best response..."
+    echo "STEP 3 - VOTING:" >> "${SESSION_FILE}"
+    
+    # Create voting prompt with all responses
+    voting_prompt="You are a voter in a consensus system. Below are responses from different models to the same prompt. Please vote for the BEST response by providing the model number (1-${#MODELS[@]}).
+
+ORIGINAL PROMPT: ${PROMPT}
+
+RESPONSES:"
+    
+    for i in "${!MODELS[@]}"; do
+        voting_prompt="${voting_prompt}
+
+MODEL $((i + 1)) (${model_names[$i]}):
+${responses[$i]}
+Validated Confidence: ${validated_confidences[$i]}"
+    done
+    
+    voting_prompt="${voting_prompt}
+
+Please vote by responding with only the model number (1-${#MODELS[@]}) that you think provided the best response."
+
+    declare -a votes
+    declare -a vote_counts
+    
+    # Initialize vote counts
+    for i in "${!MODELS[@]}"; do
+        vote_counts[$i]=0
+    done
+    
+    # Each model votes
+    for i in "${!MODELS[@]}"; do
+        model="${MODELS[$i]}"
+        echo "  Getting vote from ${model}..."
+        
+        vote_output=$(ollama run "${model}" "${voting_prompt}")
+        vote_output=$(guard_output_quality "$vote_output" "$PROMPT" "$MECHANISM_NAME" "$model")
+        vote=$(echo "${vote_output}" | grep -o '[0-9]\+' | head -1)
+        
+        # Validate vote
+        if [[ "$vote" =~ ^[0-9]+$ ]] && [ "$vote" -ge 1 ] && [ "$vote" -le "${#MODELS[@]}" ]; then
+            votes[$i]=$((vote - 1))  # Convert to 0-based index
+            vote_counts[$((vote - 1))]=$((${vote_counts[$((vote - 1))]} + 1))
+        else
+            votes[$i]=$i  # Default to voting for self if invalid
+            vote_counts[$i]=$((${vote_counts[$i]} + 1))
+        fi
+        
+        echo "MODEL ${i+1} (${model}) voted for MODEL $((votes[$i] + 1))" >> "${SESSION_FILE}"
+    done
+    
+    # --- Step 4: Determine consensus ---
+    echo "Step 4: Determining consensus..."
+    echo "STEP 4 - CONSENSUS DETERMINATION:" >> "${SESSION_FILE}"
+    
+    # Find the response with the most votes
+    max_votes=0
+    winning_model=-1
+    
+    for i in "${!MODELS[@]}"; do
+        if [ "${vote_counts[$i]}" -gt "$max_votes" ]; then
+            max_votes="${vote_counts[$i]}"
+            winning_model=$i
+        fi
+    done
+    
+    # Check if we have consensus (more than 50% of votes)
+    total_votes=${#MODELS[@]}
+    consensus_threshold=$((total_votes / 2 + 1))
+    
+    if [ "$max_votes" -ge "$consensus_threshold" ]; then
+        consensus_reached=true
+        consensus_message="CONSENSUS REACHED: Model $((winning_model + 1)) (${model_names[$winning_model]}) won with ${max_votes}/${total_votes} votes"
+    else
+        consensus_reached=false
+        consensus_message="NO CONSENSUS: Model $((winning_model + 1)) (${model_names[$winning_model]}) had highest votes (${max_votes}/${total_votes}) but consensus threshold is ${consensus_threshold}"
+    fi
+    
+    echo "Vote counts:" >> "${SESSION_FILE}"
+    for i in "${!MODELS[@]}"; do
+        echo "  Model $((i + 1)) (${model_names[$i]}): ${vote_counts[$i]} votes" >> "${SESSION_FILE}"
+    done
+    echo "" >> "${SESSION_FILE}"
+    echo "${consensus_message}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+    
+    # Store the winning response for next round or final output
+    if [ "$winning_model" -ge 0 ]; then
+        CURRENT_RESPONSE="${responses[$winning_model]}"
+        CURRENT_CONFIDENCE="${validated_confidences[$winning_model]}"
+        CURRENT_MODEL="${model_names[$winning_model]}"
+        
+        # Fallback: if winning response is empty, use the first non-empty response
+        if [ -z "$CURRENT_RESPONSE" ]; then
+            for i in "${!responses[@]}"; do
+                if [ -n "${responses[$i]}" ]; then
+                    CURRENT_RESPONSE="${responses[$i]}"
+                    CURRENT_CONFIDENCE="${validated_confidences[$i]}"
+                    CURRENT_MODEL="${model_names[$i]}"
+                    echo "  Using fallback response from ${CURRENT_MODEL}" >&2
+                    break
+                fi
+            done
+        fi
+    fi
+    
+    echo "Round ${round} complete: ${consensus_message}"
+    echo "" >> "${SESSION_FILE}"
+done
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Consensus process complete."
+echo "Final result:"
+echo "---------------------------------"
+
+# Print final summary
+echo "CONSENSUS SUMMARY:" >> "${SESSION_FILE}"
+echo "==================" >> "${SESSION_FILE}"
+echo "Final Answer: ${CURRENT_RESPONSE}" >> "${SESSION_FILE}"
+echo "Model: ${CURRENT_MODEL}" >> "${SESSION_FILE}"
+echo "Confidence: ${CURRENT_CONFIDENCE}" >> "${SESSION_FILE}"
+echo "Consensus Status: ${consensus_message}" >> "${SESSION_FILE}"
+
+echo "Final Answer:"
+echo "${CURRENT_RESPONSE}"
+echo ""
+echo "Model: ${CURRENT_MODEL}"
+echo "Confidence: ${CURRENT_CONFIDENCE}"
+echo "Consensus Status: ${consensus_message}"
+echo ""
+echo "Full session log: ${SESSION_FILE}" 
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/.file_processors b/bash/talk-to-computer/corpus/.file_processors
new file mode 100644
index 0000000..0c00161
--- /dev/null
+++ b/bash/talk-to-computer/corpus/.file_processors
@@ -0,0 +1,3 @@
+txt|cat
+md|cat
+html|cat
diff --git a/bash/talk-to-computer/corpus/.topic_keywords b/bash/talk-to-computer/corpus/.topic_keywords
new file mode 100644
index 0000000..486c24e
--- /dev/null
+++ b/bash/talk-to-computer/corpus/.topic_keywords
@@ -0,0 +1,6 @@
+programming|bash shell scripting code algorithm programming software development
+lil|decker lil language terse programming scripting deck
+science|physics chemistry biology research scientific experiment
+physics|quantum relativity mechanics thermodynamics energy force
+literature|book author writing novel poem analysis criticism
+general|knowledge fact information general misc miscellaneous
diff --git a/bash/talk-to-computer/corpus/README.md b/bash/talk-to-computer/corpus/README.md
new file mode 100644
index 0000000..d87af43
--- /dev/null
+++ b/bash/talk-to-computer/corpus/README.md
@@ -0,0 +1,236 @@
+# RAG Knowledge Corpus
+
+This directory contains the knowledge corpus for the RAG (Retrieval-Augmented Generation) system. The corpus is organized as a structured knowledge base that can be searched and used to augment AI responses with relevant context.
+
+## 📁 Directory Structure
+
+```
+corpus/
+├── README.md              # This file
+├── corpus_registry.txt    # Auto-generated registry of available topics
+├── corpus_manager.sh      # Management script (in parent directory)
+├── topic_template.md      # Template for new topics
+├── .topic_keywords        # Topic keyword mappings
+├── .file_processors       # File processing configurations
+│
+├── programming/           # Programming topics
+│   ├── lil/              # Lil programming language
+│   │   └── guide.md
+│   └── algorithms.txt
+│
+├── science/              # Scientific topics
+│   ├── physics.txt
+│   └── biology.md
+│
+├── literature/           # Literary topics
+├── general/              # General knowledge
+└── examples/             # Example content
+```
+
+## 🔧 Management Tools
+
+### Corpus Manager (`./corpus_manager.sh`)
+
+The corpus manager provides utilities for managing the knowledge base:
+
+```bash
+# Update the corpus registry (run after adding new files)
+./corpus_manager.sh update
+
+# List all available topics
+./corpus_manager.sh list
+
+# Check if a topic exists
+./corpus_manager.sh exists programming
+
+# List files in a specific topic
+./corpus_manager.sh files programming
+
+# Create template files for a new topic
+./corpus_manager.sh template newtopic
+
+# Get corpus statistics
+./corpus_manager.sh count science
+```
+
+### RAG Search (`./rag_search.sh`)
+
+Search the corpus using efficient Unix tools:
+
+```bash
+# Search entire corpus
+./rag_search.sh search "quantum physics"
+
+# Search specific topic
+./rag_search.sh search "lil programming" programming
+
+# Get context around matches
+./rag_search.sh context "variables" programming
+
+# Extract relevant sections
+./rag_search.sh extract "functions" programming
+
+# Show corpus statistics
+./rag_search.sh stats
+```
+
+## 📝 File Format Guidelines
+
+### Supported Formats
+- **`.txt`** - Plain text files
+- **`.md`** - Markdown files (recommended)
+- **`.html`** - HTML files
+
+### Content Organization
+1. **Use clear, descriptive headers** (`#`, `##`, `###`)
+2. **Include examples and code blocks** where relevant
+3. **Add cross-references** between related topics
+4. **Use consistent formatting** and terminology
+5. **Include practical applications** and use cases
+
+### Markdown Template
+```markdown
+# Topic Name - Comprehensive Guide
+
+## Introduction
+[Brief overview of the topic]
+
+## Core Concepts
+### [Subtopic 1]
+[Explanation and details]
+
+### [Subtopic 2]
+[Explanation and details]
+
+## Examples
+[Code examples, diagrams, practical applications]
+
+## Best Practices
+[Recommended approaches and common pitfalls]
+
+## References
+[Links to additional resources]
+```
+
+## ➕ Adding New Content
+
+### Step 1: Create Topic Directory
+```bash
+# Create a new topic directory
+mkdir -p corpus/newtopic
+
+# Or use the template command
+./corpus_manager.sh template newtopic
+```
+
+### Step 2: Add Content Files
+```bash
+# Create content files in your preferred format
+vim corpus/newtopic/guide.md
+vim corpus/newtopic/examples.txt
+vim corpus/newtopic/reference.html
+```
+
+### Step 3: Update Registry
+```bash
+# Update the corpus registry to include new files
+./corpus_manager.sh update
+
+# Verify the topic is recognized
+./corpus_manager.sh exists newtopic
+./corpus_manager.sh files newtopic
+```
+
+### Step 4: Test Search
+```bash
+# Test that content is searchable
+./rag_search.sh search "keyword" newtopic
+./rag_search.sh context "concept" newtopic
+```
+
+## 🔍 Search Behavior
+
+### Keyword Matching
+- **Case-insensitive** search across all text files
+- **Multi-word queries** supported
+- **Partial matches** found within words
+- **Context extraction** shows surrounding lines
+
+### Topic Filtering
+- **General search**: Searches entire corpus
+- **Topic-specific**: Limited to specific directories
+- **Hierarchical**: Supports subtopics (e.g., `science/physics`)
+
+### Performance
+- **Sub-second lookups** using Unix tools
+- **Efficient grep/sed/awk** processing
+- **Cached registry** for fast topic discovery
+- **Minimal memory usage**
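
The keyword matching described above maps onto a single `grep` invocation. The following is a minimal sketch of the kind of call `rag_search.sh` can wrap (the exact flags are an assumption, and the throwaway directory stands in for a real corpus):

```shell
# Case-insensitive context search over a (temporary) corpus directory.
# -r recurse, -i ignore case, -n line numbers, -C 1 one line of context.
tmp=$(mktemp -d)
printf '%s\n' \
    "Newtonian mechanics covers forces." \
    "Quantum mechanics covers probabilities." \
    "Relativity covers spacetime." > "$tmp/physics.txt"

grep -r -i -n -C 1 "quantum" "$tmp"

rm -rf "$tmp"
```

Because `grep` does the heavy lifting, adding new topics or file formats does not change the search path itself.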
+
+## 🔧 Advanced Configuration
+
+### Custom Topic Keywords
+Edit `corpus/.topic_keywords` to add custom topic detection:
+```
+newtopic|keyword1 keyword2 keyword3
+```
+
+### File Processors
+Edit `corpus/.file_processors` to add support for new file types:
+```
+custom|processing_command
+```
+
+### Registry Customization
+The `corpus_registry.txt` file is auto-generated; manual edits are overwritten the next time `./corpus_manager.sh update` runs. Entries use the format:
+```
+topic|path/to/files|keywords|description
+```
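
When debugging, an entry can also be resolved by hand. This is a hypothetical sketch (the `topic_path` helper is illustrative, not part of the toolset) run against a temporary copy of the registry format:

```shell
# Resolve a topic's PATH field from a TOPIC|PATH|KEYWORDS|DESCRIPTION registry.
registry=$(mktemp)
cat > "$registry" <<'EOF'
# Corpus Registry - comment lines are skipped
science|corpus/science|physics,chemistry|Science topics and resources
programming|corpus/programming|bash,lil|Programming topics and resources
EOF

topic_path() {
    # Split on '|', skip comments, print the path for the matching topic.
    awk -F'|' -v t="$1" '!/^#/ && $1 == t { print $2 }' "$registry"
}

topic_path science    # prints: corpus/science

rm -f "$registry"
```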
+
+## 🎯 Integration with AI Systems
+
+The corpus is designed to integrate with AI thinking mechanisms:
+
+### Automatic RAG Detection
+- **Query analysis** determines when corpus search is needed
+- **Topic classification** matches queries to appropriate corpus sections
+- **Confidence scoring** determines RAG vs direct response
+
+### Context Injection
+- **Relevant sections** extracted and formatted
+- **Context length** managed to stay within token limits
+- **Multiple sources** combined for comprehensive answers
+
+### Fallback Strategy
+- **Graceful degradation** when no relevant corpus found
+- **Direct LLM response** when corpus search yields no results
+- **Error handling** for missing or corrupted files
+
+## 📊 Current Corpus Statistics
+
+*Run `./rag_search.sh stats` to see current corpus statistics.*
+
+## 🚀 Best Practices
+
+1. **Keep files focused** - One topic per file when possible
+2. **Use descriptive names** - File names should indicate content
+3. **Regular updates** - Run `update` after adding new files
+4. **Test searches** - Verify content is discoverable
+5. **Cross-reference** - Link related topics when appropriate
+6. **Version control** - Track changes to corpus files
+
+## 🔄 Maintenance
+
+### Regular Tasks
+- Run `./corpus_manager.sh update` after adding files
+- Test search functionality with new content
+- Review and update outdated information
+- Archive unused or deprecated topics
+
+### Performance Monitoring
+- Monitor search response times
+- Check registry file size and complexity
+- Validate file integrity periodically
+- Clean up temporary search files
+
+This corpus system provides a scalable, efficient foundation for knowledge-augmented AI responses while maintaining the flexibility to grow and adapt to new requirements.
diff --git a/bash/talk-to-computer/corpus/corpus_registry.txt b/bash/talk-to-computer/corpus/corpus_registry.txt
new file mode 100644
index 0000000..2c1bae3
--- /dev/null
+++ b/bash/talk-to-computer/corpus/corpus_registry.txt
@@ -0,0 +1,9 @@
+# Corpus Registry - Auto-generated by corpus_manager.sh
+# Format: TOPIC|PATH|KEYWORDS|DESCRIPTION
+# This file is automatically maintained - do not edit manually
+
+examples|corpus/examples|examples|Examples topics and resources
+general|corpus/general|general|General topics and resources
+literature|corpus/literature|books,authors,literature,writing,analysis|Literature topics and resources
+programming|corpus/programming|bash,shell,scripting,programming,lil,algorithm,code,software,development|Programming topics and resources
+science|corpus/science|physics,chemistry,biology,science,research,scientific|Science topics and resources
diff --git a/bash/talk-to-computer/corpus/corpus_registry.txt.backup b/bash/talk-to-computer/corpus/corpus_registry.txt.backup
new file mode 100644
index 0000000..2c1bae3
--- /dev/null
+++ b/bash/talk-to-computer/corpus/corpus_registry.txt.backup
@@ -0,0 +1,9 @@
+# Corpus Registry - Auto-generated by corpus_manager.sh
+# Format: TOPIC|PATH|KEYWORDS|DESCRIPTION
+# This file is automatically maintained - do not edit manually
+
+examples|corpus/examples|examples|Examples topics and resources
+general|corpus/general|general|General topics and resources
+literature|corpus/literature|books,authors,literature,writing,analysis|Literature topics and resources
+programming|corpus/programming|bash,shell,scripting,programming,lil,algorithm,code,software,development|Programming topics and resources
+science|corpus/science|physics,chemistry,biology,science,research,scientific|Science topics and resources
diff --git a/bash/talk-to-computer/corpus/programming/combinators.md b/bash/talk-to-computer/corpus/programming/combinators.md
new file mode 100644
index 0000000..8e2cfb0
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/combinators.md
@@ -0,0 +1,192 @@
+# Combinators - The Ultimate Reusable Functions
+
+## Introduction
+
+In the context of functional programming and computer science, a **combinator** is a higher-order function that uses only function application and other combinators to define a result. Crucially, a combinator contains **no free variables**. This means it is a completely self-contained function that only refers to its own arguments.
+
+Combinators are fundamental concepts from **combinatory logic** and **lambda calculus**. While they have deep theoretical importance, their practical application in software development is to create highly reusable, abstract, and composable code, often leading to a **point-free** or **tacit** programming style. They are the essential glue for building complex logic by piecing together simpler functions.
+
+## Core Concepts
+
+### No Free Variables
+
+The defining characteristic of a combinator is that it has no **free variables**. A free variable is a variable referenced in a function that is not one of its formal arguments or defined within the function's local scope. This self-contained nature makes combinators perfectly portable and predictable.
+
+```javascript
+const y = 10;
+
+// This function is NOT a combinator because it uses a free variable `y`.
+// Its behavior depends on an external context.
+const addY = (x) => x + y;
+
+// This function IS a combinator. It has no free variables.
+// Its behavior only depends on its arguments.
+const add = (x) => (z) => x + z;
+```
+
+### Function Composition and Transformation
+
+Combinators are designed to manipulate and combine other functions. They are the building blocks for creating new functions from existing ones without needing to specify the data that the functions will eventually operate on. The entire logic is expressed as a transformation of functions themselves.
+
+## Key Principles
+
+  - **Point-Free Style (Tacit Programming)**: This is the primary programming style associated with combinators. You define functions as a pipeline or composition of other functions without explicitly mentioning the arguments (the "points"). This can lead to more abstract and declarative code.
+
+    ```javascript
+    // Not point-free: the argument `users` is explicitly mentioned.
+    const getActiveUserNames = (users) => users.filter(user => user.active).map(user => user.name);
+
+    // Point-free style: built by composing functions.
+    // `compose`, `filter`, `map`, and `prop` are all combinators or higher-order functions.
+    const getActiveUserNamesPointFree = compose(map(prop('name')), filter(propEq('active', true)));
+    ```
+
+  - **Abstraction**: Combinators abstract common patterns of execution and control flow. For example, the act of applying one function's result to another is abstracted away by the `compose` combinator.
+
+## Implementation/Usage
+
+Many famous combinators have single-letter names from combinatory logic. Understanding them helps in recognizing fundamental patterns.
+
+### Basic Example
+
+The simplest combinators are the **I-combinator (Identity)** and the **K-combinator (Constant)**.
+
+```javascript
+/**
+ * I-combinator (Identity)
+ * Takes a value and returns it.
+ * I x = x
+ */
+const I = (x) => x;
+
+/**
+ * K-combinator (Constant or Kestrel)
+ * Takes two arguments and returns the first. Creates constant functions.
+ * K x y = x
+ */
+const K = (x) => (y) => x;
+
+// Usage:
+const value = I("hello"); // "hello"
+const always42 = K(42);
+const result = always42("some other value"); // 42
+```
+
+### Advanced Example
+
+More complex combinators handle function composition, like the **B-combinator (Bluebird)**.
+
+```javascript
+/**
+ * B-combinator (Bluebird / Function Composition)
+ * Composes two functions.
+ * B f g x = f (g x)
+ */
+const B = (f) => (g) => (x) => f(g(x));
+
+// In practice, this is often implemented as `compose`.
+const compose = (f, g) => (x) => f(g(x));
+
+// Usage:
+const double = (n) => n * 2;
+const increment = (n) => n + 1;
+
+// Create a new function that increments then doubles.
+const incrementThenDouble = compose(double, increment);
+
+incrementThenDouble(5); // Returns 12, because (5 + 1) * 2
+```
+
+Another useful combinator is the **T-combinator (Thrush)**, which applies a value to a function.
+
+```javascript
+/**
+ * T-combinator (Thrush)
+ * Takes a value and a function, and applies the function to the value.
+ * T x f = f x
+ */
+const T = (x) => (f) => f(x);
+
+// This is the basis for the `pipe` or "thread-first" operator.
+T(5)(increment); // 6
+```
+
+## Common Patterns
+
+### Pattern 1: Function Composition (`compose` / `pipe`)
+
+This is the most common and practical application of combinators. `compose` (based on the B-combinator) applies functions from right to left, while `pipe` applies them from left to right. They are used to build data-processing pipelines in a point-free style.
+
+```javascript
+// Ramda-style compose, handles multiple functions
+const compose = (...fns) => (initialVal) => fns.reduceRight((val, fn) => fn(val), initialVal);
+const pipe = (...fns) => (initialVal) => fns.reduce((val, fn) => fn(val), initialVal);
+```
+
+### Pattern 2: Parser Combinators
+
+A parser combinator is a higher-order function that takes several parsers as input and returns a new parser as its output. This is an advanced technique for building complex parsers by combining simple, specialized parsers for different parts of a grammar. It's a powerful real-world application of combinator logic.
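+
+A minimal sketch of the idea in plain JavaScript (the names `literal`, `seq`, and `alt` are illustrative, not from a specific library):
+
+```javascript
+// A "parser" here is a function from an input string to either
+// { value, rest } on success or null on failure.
+
+// literal(s): parser that matches the exact string s.
+const literal = (s) => (input) =>
+  input.startsWith(s) ? { value: s, rest: input.slice(s.length) } : null;
+
+// seq(p, q): run p, then run q on the remaining input.
+const seq = (p, q) => (input) => {
+  const r1 = p(input);
+  if (r1 === null) return null;
+  const r2 = q(r1.rest);
+  if (r2 === null) return null;
+  return { value: [r1.value, r2.value], rest: r2.rest };
+};
+
+// alt(p, q): try p, fall back to q.
+const alt = (p, q) => (input) => p(input) ?? q(input);
+
+// Larger parsers are built by combining smaller ones.
+const greeting = seq(alt(literal("hello"), literal("hi")), literal(" world"));
+
+greeting("hello world"); // { value: ["hello", " world"], rest: "" }
+greeting("goodbye");     // null
+```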
+
+## Best Practices
+
+  - **Prioritize Readability**: While point-free style can be elegant, it can also become cryptic. If a composition is too long or complex, break it down and give intermediate functions meaningful names.
+  - **Know Your Library**: If you are using a functional programming library like Ramda or fp-ts, invest time in learning the combinators it provides. They are the building blocks for effective use of the library.
+  - **Use Currying**: Combinators are most powerful in a language that supports currying, as it allows for partial application, creating specialized functions from general ones.
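+
+A short sketch of that last point (`curry2` is an illustrative helper, not a library function):
+
+```javascript
+// A curried function takes its arguments one at a time, so supplying
+// only the first argument yields a new, specialized function.
+const curry2 = (f) => (a) => (b) => f(a, b);
+
+const multiply = (a, b) => a * b;
+const double = curry2(multiply)(2); // partial application
+
+double(21);            // 42
+[1, 2, 3].map(double); // [2, 4, 6]
+```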
+
+## Common Pitfalls
+
+  - **"Pointless" Code**: Overuse of point-free style can lead to code that is very difficult to read and debug. The goal is clarity through abstraction, not just character count reduction.
+  - **Debugging Complexity**: Debugging a long chain of composed functions is challenging because there are no named intermediate values to inspect. You often have to break the chain apart to find the source of a bug.
+
+## Performance Considerations
+
+  - **Function Call Overhead**: In theory, a deeply nested composition of combinators can introduce a small overhead from the additional function calls.
+  - **Negligible in Practice**: In most real-world applications, this overhead is negligible; modern JavaScript engines inline small functions aggressively. Code clarity and correctness are far more important concerns.
+
+## Integration Points
+
+  - **Functional Programming Libraries**: Libraries like **Ramda**, **Lodash/fp**, and the **Haskell Prelude** are essentially collections of combinators and other higher-order functions.
+  - **Lambda Calculus**: Combinatory logic, the formal study of combinators, is computationally equivalent to lambda calculus. The famous **SKI combinator calculus** (using only S, K, and I combinators) can be used to express any computable algorithm.
+  - **Parser Combinator Libraries**: Libraries like `parsec` in Haskell apply these principles to build robust parsers; `fast-check` in JavaScript applies the same combinator style to property-based testing.
+
+## Troubleshooting
+
+### Problem 1: A Composed Function Behaves Incorrectly
+
+**Symptoms:** The final output of a point-free pipeline is `undefined`, `NaN`, or simply the wrong value.
+**Solution:** Temporarily "re-point" the function to debug. Break the composition and insert `console.log` statements (or a `tap` utility function) to inspect the data as it flows from one function to the next.
+
+```javascript
+// A "tap" combinator is useful for debugging.
+const tap = (fn) => (x) => {
+  fn(x);
+  return x;
+};
+
+// Insert it into a pipeline to inspect intermediate values.
+const problematicPipe = pipe(
+  increment,
+  tap(console.log), // See the value after incrementing
+  double
+);
+```
+
+## Examples in Context
+
+  - **Configuration Objects**: Using the K-combinator (constant function) to provide default configuration values.
+  - **Data Validation**: Building a validator by composing smaller validation rule functions, where each function takes data and returns either a success or failure indicator.
+  - **Web Development**: A point-free pipeline in a frontend application that takes a raw API response, filters out inactive items, extracts a specific field, and formats it for display.
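+
+The data-validation case can be sketched like this (all names here are illustrative):
+
+```javascript
+// Each rule takes a value and returns a list of error messages (empty = valid).
+const required = (v) => (v == null || v === "" ? ["value is required"] : []);
+const minLength = (n) => (v) =>
+  typeof v === "string" && v.length < n ? [`must be at least ${n} chars`] : [];
+
+// Compose rules by concatenating their error lists.
+const validate = (...rules) => (v) => rules.flatMap((rule) => rule(v));
+
+const validateUsername = validate(required, minLength(3));
+
+validateUsername("al");    // ["must be at least 3 chars"]
+validateUsername("alice"); // []
+```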
+
+## References
+
+  - [To Mock a Mockingbird by Raymond Smullyan](https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird) - An accessible and famous book that teaches combinatory logic through recreational puzzles.
+  - [Wikipedia: Combinatory Logic](https://en.wikipedia.org/wiki/Combinatory_logic)
+  - [Ramda Documentation](https://ramdajs.com/docs/)
+
+## Related Topics
+
+  - Point-Free Style
+  - Lambda Calculus
+  - Functional Programming
+  - Currying
+  - Higher-Order Functions
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/command_line_data_processing.md b/bash/talk-to-computer/corpus/programming/command_line_data_processing.md
new file mode 100644
index 0000000..c5ce5f5
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/command_line_data_processing.md
@@ -0,0 +1,200 @@
+# Local Data Processing With Unix Tools - Shell-based data wrangling
+
+## Introduction
+
+Leveraging standard Unix command-line tools for data processing is a powerful, efficient, and universally available method for handling text-based data. This guide focuses on the **Unix philosophy** of building complex data processing **pipelines** by composing small, single-purpose utilities. This approach is invaluable for ad-hoc data exploration, log analysis, and pre-processing tasks directly within the shell, often outperforming more complex scripts or dedicated software for common data wrangling operations.
+
+Key applications include analyzing web server logs, filtering and transforming CSV/TSV files, and batch-processing any line-oriented text data.
+
+## Core Concepts
+
+### Streams and Redirection
+
+At the core of Unix inter-process communication are three standard streams:
+
+1.  `stdin` (standard input): The stream of data going into a program.
+2.  `stdout` (standard output): The primary stream of data coming out of a program.
+3.  `stderr` (standard error): A secondary output stream for error messages and diagnostics.
+
+**Redirection** controls these streams. The pipe `|` operator is the most important, as it connects one command's `stdout` to the next command's `stdin`, forming a pipeline.
+
+```bash
+# Redirect stdout to a file (overwrite)
+command > output.txt
+
+# Redirect stdout to a file (append)
+command >> output.txt
+
+# Redirect a file to stdin
+command < input.txt
+
+# Redirect stderr to a file
+command 2> error.log
+
+# Redirect stderr to stdout
+command 2>&1
+```
+
+### The Core Toolkit
+
+A small set of highly-specialized tools forms the foundation of most data pipelines.
+
+  - **`grep`**: Filters lines that match a regular expression.
+  - **`awk`**: A powerful pattern-scanning and processing language. It excels at columnar data, allowing you to manipulate fields within each line.
+  - **`sed`**: A "stream editor" for performing text transformations on an input stream (e.g., search and replace).
+  - **`sort`**: Sorts lines of text files.
+  - **`uniq`**: Reports or omits repeated lines. Often used with `-c` to count occurrences.
+  - **`cut`**: Removes sections from each line of files (e.g., select specific columns).
+  - **`tr`**: Translates or deletes characters.
+  - **`xargs`**: Builds and executes command lines from standard input. It bridges the gap between commands that produce lists of files and commands that operate on them.
+
+## Key Principles
+
+The effectiveness of this approach stems from the **Unix Philosophy**:
+
+1.  **Do one thing and do it well**: Each tool is specialized for a single task (e.g., `grep` only filters, `sort` only sorts).
+2.  **Write programs that work together**: The universal text stream interface (`stdin`/`stdout`) allows for near-infinite combinations of tools.
+3.  **Handle text streams**: Text is a universal interface, making the tools broadly applicable to a vast range of data formats.
+
+## Implementation/Usage
+
+Let's assume we have a web server access log file, `access.log`, with the following format:
+`IP_ADDRESS - - [TIMESTAMP] "METHOD /path HTTP/1.1" STATUS_CODE RESPONSE_SIZE`
+
+Example line:
+`192.168.1.10 - - [20/Aug/2025:15:30:00 -0400] "GET /home HTTP/1.1" 200 5120`
+
+### Basic Example
+
+**Goal**: Find the top 5 IP addresses that accessed the server.
+
+```bash
+# This pipeline extracts, groups, counts, and sorts the IP addresses.
+cat access.log | \
+  awk '{print $1}' | \
+  sort | \
+  uniq -c | \
+  sort -nr | \
+  head -n 5
+```
+
+**Breakdown:**
+
+1.  `cat access.log`: Reads the file and sends its content to `stdout`.
+2.  `awk '{print $1}'`: For each line, print the first field (the IP address).
+3.  `sort`: Sorts the IPs alphabetically, which is necessary for `uniq` to group them.
+4.  `uniq -c`: Collapses adjacent identical lines into one and prepends the count.
+5.  `sort -nr`: Sorts the result numerically (`-n`) and in reverse (`-r`) order to get the highest counts first.
+6.  `head -n 5`: Takes the first 5 lines of the sorted output.
+
+### Advanced Example
+
+**Goal**: Calculate the total bytes served for all successful (`2xx` status code) `POST` requests.
+
+```bash
+# This pipeline filters for specific requests and sums a column.
+grep '"POST ' access.log | \
+  grep ' 2[0-9][0-9] ' | \
+  awk '{total += $10} END {print total}'
+```
+
+**Breakdown:**
+
+1.  `grep '"POST ' access.log`: Filters the log for lines containing `"POST ` (note the trailing space, which keeps the match anchored to the method name).
+2.  `grep ' 2[0-9][0-9] '`: Filters the remaining lines for a 2xx status code. The spaces ensure we match the status code field specifically.
+3.  `awk '{total += $10} END {print total}'`: For each line that passes the filters, `awk` adds the value of the 10th field (response size) to a running `total`. The `END` block executes after all lines are processed, printing the final sum.
+
+## Common Patterns
+
+### Pattern 1: Filter-Map-Reduce
+
+This is a functional programming pattern that maps directly to Unix pipelines.
+
+  - **Filter**: Select a subset of data (`grep`, `head`, `tail`, `awk '/pattern/'`).
+  - **Map**: Transform each line of data (`awk '{...}'`, `sed 's/.../.../'`, `cut`).
+  - **Reduce**: Aggregate data into a summary result (`sort | uniq -c`, `wc -l`, `awk '{sum+=$1} END {print sum}'`).
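+
+The three stages compose directly in a pipeline. A self-contained sketch, using `printf` to stand in for a log file with IP and status columns:
+
+```bash
+printf '%s\n' \
+  '10.0.0.1 200' \
+  '10.0.0.2 404' \
+  '10.0.0.1 200' |
+  grep ' 200$' |     # Filter: keep successful requests
+  awk '{print $1}' | # Map: extract the IP column
+  sort | uniq -c     # Reduce: count occurrences per IP
+# Output: "2 10.0.0.1" (with uniq's leading padding)
+```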
+
+### Pattern 2: Shuffling (Sort-Based Grouping)
+
+This is the command-line equivalent of a `GROUP BY` operation in SQL. The pattern is to extract a key, sort by that key to group related records together, and then process each group.
+
+```bash
+# Example: Find the most frequent user agent for each IP address.
+# The key here is the IP address ($1).
+awk '{print $1, $12}' access.log | \
+  sort | \
+  uniq -c | \
+  sort -k2,2 -k1,1nr | \
+  awk 'BEGIN{last=""} {if ($2 != last) {print} last=$2}'
+```
+
+This advanced pipeline sorts by IP, then by count, and finally uses `awk` to pick the first (highest count) entry for each unique IP.
+
+## Best Practices
+
+  - **Develop Incrementally**: Build pipelines one command at a time. After adding a `|` and a new command, run it to see if the intermediate output is what you expect.
+  - **Filter Early**: Place `grep` or other filtering commands as early as possible in the pipeline. This reduces the amount of data that subsequent, potentially more expensive commands like `sort` have to process.
+  - **Use `set -o pipefail`**: In shell scripts, this option causes a pipeline to return a failure status if *any* command in the pipeline fails, not just the last one.
+  - **Prefer `awk` for Columns**: For tasks involving multiple columns, `awk` is generally more powerful, readable, and performant than a complex chain of `cut`, `paste`, and shell loops.
+  - **Beware of Locales**: The `sort` command's behavior is affected by the `LC_ALL` environment variable. For byte-wise sorting, use `LC_ALL=C sort`.
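+
+The locale effect on `sort` is easy to demonstrate:
+
+```bash
+# In the C locale, sort compares raw bytes: 'B' (0x42) < 'a' (0x61).
+printf 'a\nB\n' | LC_ALL=C sort
+# → B
+# → a
+
+# Under a typical UTF-8 locale, collation is closer to dictionary order,
+# so the same input may instead sort as: a, B
+```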
+
+## Common Pitfalls
+
+  - **Forgetting to Sort Before `uniq`**: `uniq` only operates on adjacent lines. If the data is not sorted, it will not produce correct counts.
+  - **Greedy Regular Expressions**: A pattern like `.*` is greedy and can match far more of a line than intended. Be as specific as possible with your regex.
+  - **Shell Globbing vs. `grep` Regex**: The wildcards used by the shell (`*`, `?`) are different from those used in regular expressions (`.*`, `.`).
+  - **Word Splitting on Unquoted Variables**: When used in scripts, variables containing spaces can be split into multiple arguments if not quoted (`"my var"` vs `my var`).
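+
+The glob-vs-regex pitfall in action:
+
+```bash
+# Shell glob 'a*b' means: 'a', then anything, then 'b'.
+# Regex 'a*b' means: zero or more 'a's, then 'b', so it matches plain "xb".
+printf 'xb\n' | grep -c 'a*b'    # prints 1: the regex matches "b" with zero 'a's
+printf 'xb\n' | grep -c 'a.*b'   # prints 0: this regex really does require an 'a'
+```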
+
+## Performance Considerations
+
+  - **I/O is King**: These tools are often I/O-bound. Reading from and writing to disk is the slowest part. Use pipelines to avoid creating intermediate files.
+  - **`awk` vs. `sed` vs. `grep`**: For simple filtering, `grep` is fastest. For simple substitutions, `sed` is fastest. For any field-based logic, `awk` is the right tool, and it is extremely fast because the whole transformation runs in a single process.
+  - **GNU Parallel**: For tasks that can be broken into independent chunks (e.g., processing thousands of files), `GNU parallel` can be used to execute pipelines in parallel, dramatically speeding up the work on multi-core systems.
+
+## Integration Points
+
+  - **Shell Scripting**: These tools are the fundamental building blocks for automation and data processing scripts in `bash`, `zsh`, etc.
+  - **Data Ingestion Pipelines**: Unix tools are often used as an early transform step (the "T" in an ETL process) to clean, filter, and normalize raw log files before they are loaded into a database or data warehouse.
+  - **Other Languages**: Languages like Python (`subprocess`) and Go (`os/exec`) can invoke these command-line tools to leverage their performance and functionality without having to re-implement them.
+
+## Troubleshooting
+
+### Problem 1: Pipeline hangs or is extremely slow
+
+**Symptoms:** The command prompt doesn't return, and there's no output.
+**Solution:** This is often caused by a command like `sort` or another tool that needs to read all of its input before producing any output. It may be processing a massive amount of data.
+
+1.  Test your pipeline on a small subset of the data first using `head -n 1000`.
+2.  Use a tool like `pv` (pipe viewer) in the middle of your pipeline (`... | pv | ...`) to monitor the flow of data and see where it's getting stuck.
+
+### Problem 2: `xargs` fails on filenames with spaces
+
+**Symptoms:** An `xargs` command fails with "file not found" errors for files with spaces or special characters in their names.
+**Solution:** Use the "null-delimited" mode of `find` and `xargs`, which is designed to handle all possible characters in filenames safely.
+
+```bash
+# Wrong way, will fail on "file name with spaces.txt"
+find . -name "*.txt" | xargs rm
+
+# Correct, safe way
+find . -name "*.txt" -print0 | xargs -0 rm
+```
+
+## Examples in Context
+
+  - **DevOps/SRE**: Quickly grepping through gigabytes of Kubernetes logs to find error messages related to a specific request ID.
+  - **Bioinformatics**: Processing massive FASTA/FASTQ text files to filter, reformat, or extract sequence data.
+  - **Security Analysis**: Analyzing `auth.log` files to find failed login attempts, group them by IP, and identify brute-force attacks.
+
+## References
+
+  - [The GNU Coreutils Manual](https://www.gnu.org/software/coreutils/manual/coreutils.html)
+  - [The AWK Programming Language (Book by Aho, Kernighan, Weinberger)](https://archive.org/details/pdfy-MgN0H1joIoDVoIC7)
+  - [Greg's Wiki - Bash Pitfalls](https://mywiki.wooledge.org/BashPitfalls)
+
+## Related Topics
+
+  - Shell Scripting
+  - Regular Expressions (Regex)
+  - AWK Programming
+  - Data Wrangling
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/functional_programming.md b/bash/talk-to-computer/corpus/programming/functional_programming.md
new file mode 100644
index 0000000..2572442
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/functional_programming.md
@@ -0,0 +1,234 @@
+# Functional Programming - A paradigm for declarative, predictable code
+
+## Introduction
+
+**Functional Programming (FP)** is a programming paradigm where software is built by composing **pure functions**, avoiding shared state, mutable data, and side-effects. It treats computation as the evaluation of mathematical functions. Instead of describing *how* to achieve a result (imperative programming), you describe *what* the result is (declarative programming).
+
+This paradigm has gained significant traction because it helps manage the complexity of modern applications, especially those involving concurrency and complex state management. Programs written in a functional style are often easier to reason about, test, and debug.
+
+## Core Concepts
+
+### Pure Functions
+
+A function is **pure** if it adheres to two rules:
+
+1.  **The same input always returns the same output.** The function's return value depends solely on its input arguments.
+2.  **It produces no side effects.** A side effect is any interaction with the "outside world" from within the function. This includes modifying a global variable, changing an argument, logging to the console, or making a network request.
+
+```javascript
+// Pure function: predictable and testable
+const add = (a, b) => a + b;
+add(2, 3); // Always returns 5
+
+// Impure function: has a side effect (console.log)
+let count = 0;
+const incrementWithLog = () => {
+  count++; // And mutates external state
+  console.log(`The count is ${count}`);
+  return count;
+};
+```
+
+### Immutability
+
+**Immutability** means that data, once created, cannot be changed. If you need to modify a data structure (like an object or array), you create a new one with the updated values instead of altering the original. This prevents bugs caused by different parts of your application unexpectedly changing the same piece of data.
+
+```javascript
+// Bad: Mutating an object
+const user = { name: "Alice", age: 30 };
+const celebrateBirthdayMutable = (person) => {
+  person.age++; // This modifies the original user object
+  return person;
+};
+
+// Good: Returning a new object
+const celebrateBirthdayImmutable = (person) => {
+  return { ...person, age: person.age + 1 }; // Creates a new object
+};
+
+const newUser = celebrateBirthdayImmutable(user);
+// user is still { name: "Alice", age: 30 }
+// newUser is { name: "Alice", age: 31 }
+```
+
+### First-Class and Higher-Order Functions
+
+In FP, functions are **first-class citizens**. This means they can be treated like any other value:
+
+  * Assigned to variables
+  * Stored in data structures
+  * Passed as arguments to other functions
+  * Returned as values from other functions
+
+A function that either takes another function as an argument or returns a function is called a **Higher-Order Function**. Common examples are `map`, `filter`, and `reduce`.
+
+```javascript
+const numbers = [1, 2, 3, 4];
+const isEven = (n) => n % 2 === 0;
+const double = (n) => n * 2;
+
+// `filter` and `map` are Higher-Order Functions
+const evenDoubled = numbers.filter(isEven).map(double); // [4, 8]
+```
+
+## Key Principles
+
+  - **Declarative Style**: Focus on *what* the program should accomplish, not *how* it should accomplish it. An SQL query is a great example of a declarative style.
+  - **No Side Effects**: Isolate side effects from the core logic of your application. This makes your code more predictable.
+  - **Function Composition**: Build complex functionality by combining small, reusable functions.
+  - **Referential Transparency**: An expression can be replaced with its value without changing the behavior of the program. This is a natural outcome of using pure functions and immutable data.
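+
+Referential transparency in a nutshell:
+
+```javascript
+// A call to a pure function can be replaced by its value without
+// changing the program's behavior.
+const add = (a, b) => a + b;
+
+const viaCall  = add(2, 3) * add(2, 3); // 25
+const viaValue = 5 * 5;                 // 25, identical behavior
+
+// Math.random() * Math.random() is NOT referentially transparent:
+// it cannot be replaced by any single fixed value.
+```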
+
+## Implementation/Usage
+
+The core idea is to create data transformation pipelines. You start with initial data and pass it through a series of functions to produce the final result.
+
+### Basic Example
+
+```javascript
+// A simple pipeline for processing a list of users
+const users = [
+  { name: "Alice", active: true, score: 90 },
+  { name: "Bob", active: false, score: 80 },
+  { name: "Charlie", active: true, score: 95 },
+];
+
+/**
+ * @param {object[]} users
+ * @returns {string[]}
+ */
+const getHighScoringActiveUserNames = (users) => {
+  return users
+    .filter((user) => user.active)
+    .filter((user) => user.score > 85)
+    .map((user) => user.name.toUpperCase());
+};
+
+console.log(getHighScoringActiveUserNames(users)); // ["ALICE", "CHARLIE"]
+```
+
+### Advanced Example
+
+A common advanced pattern is to use a reducer function to manage application state, a core concept in The Elm Architecture and libraries like Redux.
+
+```javascript
+// The state of our simple counter application
+const initialState = { count: 0 };
+
+// A pure function that describes how state changes in response to an action
+const counterReducer = (state, action) => {
+  switch (action.type) {
+    case 'INCREMENT':
+      return { ...state, count: state.count + 1 };
+    case 'DECREMENT':
+      return { ...state, count: state.count - 1 };
+    case 'RESET':
+      return { ...state, count: 0 };
+    default:
+      return state;
+  }
+};
+
+// Simulate dispatching actions
+let state = initialState;
+state = counterReducer(state, { type: 'INCREMENT' }); // { count: 1 }
+state = counterReducer(state, { type: 'INCREMENT' }); // { count: 2 }
+state = counterReducer(state, { type: 'DECREMENT' }); // { count: 1 }
+
+console.log(state); // { count: 1 }
+```
+
+## Common Patterns
+
+### Pattern 1: Functor
+
+A **Functor** is a design pattern for a data structure that can be "mapped over." It's a container that holds a value and has a `map` method for applying a function to that value without changing the container's structure. The most common example is the `Array`.
+
+```javascript
+// Array is a Functor because it has a .map() method
+const numbers = [1, 2, 3];
+const addOne = (n) => n + 1;
+const result = numbers.map(addOne); // [2, 3, 4]
+```
+
+### Pattern 2: Monad
+
+A **Monad** is a pattern for sequencing computations. Think of it as a "safer" functor that knows how to handle nested contexts or operations that can fail (like Promises or the `Maybe` type). `Promise` is a good practical example; its `.then()` method (or `flatMap`) lets you chain asynchronous operations together seamlessly.
+
+```javascript
+// Promise is a Monad, allowing chaining of async operations
+const fetchUser = (id) => Promise.resolve({ id, name: "Alice" });
+const fetchUserPosts = (user) => Promise.resolve([ { userId: user.id, title: "Post 1" } ]);
+
+fetchUser(1)
+  .then(fetchUserPosts) // .then acts like flatMap here
+  .then(posts => console.log(posts))
+  .catch(err => console.error(err));
+```
+
+## Best Practices
+
+  - **Keep Functions Small**: Each function should do one thing well.
+  - **Use Function Composition**: Use utilities like `pipe` or `compose` to build complex logic from simple building blocks.
+  - **Embrace Immutability**: Use `const` by default. Avoid reassigning variables. When updating objects or arrays, create new ones.
+  - **Isolate Impurity**: Side effects are necessary. Keep them at the boundaries of your application (e.g., in the function that handles an API call) and keep your core business logic pure.
+
+## Common Pitfalls
+
+  - **Accidental Mutation**: JavaScript objects and arrays are passed by reference, making it easy to mutate them accidentally. Be vigilant, especially with nested data.
+  - **Over-Abstraction**: Don't use complex FP concepts like monad transformers if a simple function will do. Prioritize readability.
+  - **Performance Misconceptions**: While creating many short-lived objects can have a performance cost, modern JavaScript engines are highly optimized for this pattern. Don't prematurely optimize; measure first.
+
+## Performance Considerations
+
+  - **Object/Array Creation**: In performance-critical code (e.g., animations, large data processing), the overhead of creating new objects/arrays in a tight loop can be significant.
+  - **Structural Sharing**: Libraries like `Immer` and `Immutable.js` use a technique called structural sharing. When you "change" an immutable data structure, only the parts that changed are created anew; the rest of the structure points to the same old data, saving memory and CPU time.
+  - **Recursion**: Deep recursion can lead to stack overflow errors. While some languages support **Tail Call Optimization (TCO)** to prevent this, JavaScript engines have limited support. Prefer iteration for very large data sets.
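+
+The recursion point, sketched:
+
+```javascript
+// Recursive sum: each element adds a stack frame.
+const sumRec = (xs) => (xs.length === 0 ? 0 : xs[0] + sumRec(xs.slice(1)));
+
+// Iterative (reduce-based) sum: constant stack depth, safe for large inputs.
+const sumIter = (xs) => xs.reduce((acc, x) => acc + x, 0);
+
+const big = Array.from({ length: 100000 }, (_, i) => i);
+sumIter(big); // 4999950000
+// sumRec(big) would risk a RangeError (call stack exceeded) in most engines.
+```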
+
+## Integration Points
+
+  - **UI Frameworks**: FP concepts are central to modern UI libraries. **React** encourages pure components and uses immutable state patterns with Hooks (`useState`, `useReducer`).
+  - **State Management**: Libraries like **Redux** and **Zustand** are built entirely on FP principles, particularly the use of pure reducer functions.
+  - **Data Processing**: FP is excellent for data transformation pipelines. It's often used in backend services for processing streams of data.
+  - **Utility Libraries**: Libraries like **Lodash/fp** and **Ramda** provide a rich toolkit of pre-built, curried, and pure functions for everyday tasks.
+
+## Troubleshooting
+
+### Problem 1: Debugging composed function pipelines
+
+**Symptoms:** A chain of `.map().filter().reduce()` produces an incorrect result, and it's hard to see where it went wrong.
+**Solution:** Break the chain apart. Log the intermediate result after each step to inspect the data as it flows through the pipeline.
+
+```javascript
+const result = users
+  .filter((user) => user.active)
+  // console.log('After active filter:', resultFromActiveFilter)
+  .filter((user) => user.score > 85)
+  // console.log('After score filter:', resultFromScoreFilter)
+  .map((user) => user.name.toUpperCase());
+```
+
+### Problem 2: State changes unexpectedly
+
+**Symptoms:** A piece of state (e.g., in a React component or Redux store) changes when it shouldn't have, leading to bugs or infinite re-renders.
+**Solution:** This is almost always due to accidental mutation. Audit your code to ensure you are not modifying state directly. Use the spread syntax (`...`) for objects and arrays (`[...arr, newItem]`) to create copies. Libraries like `Immer` can make this process safer and more concise.
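+
+A minimal before/after for the nested-update case:
+
+```javascript
+const state = { user: { name: "Alice" }, items: [1, 2] };
+
+// Bad: state.items.push(3) would mutate the original.
+
+// Good: copy each level you change; untouched branches are shared as-is.
+const next = { ...state, items: [...state.items, 3] };
+
+state.items;              // [1, 2], original untouched
+next.items;               // [1, 2, 3]
+next.user === state.user; // true: the unchanged branch is shared, not copied
+```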
+
+## Examples in Context
+
+  - **Frontend Web Development**: The **Elm Architecture** (Model, Update, View) is a purely functional pattern for building web apps. It has heavily influenced libraries like Redux.
+  - **Data Analysis**: Running a series of transformations on a large dataset to filter, shape, and aggregate it for a report.
+  - **Concurrency**: Handling multiple events or requests simultaneously without running into race conditions, because data is immutable and shared state is avoided.
+
+## References
+
+  - [MDN Web Docs: Functional Programming](https://developer.mozilla.org/en-US/docs/Glossary/Functional_programming)
+  - [Professor Frisby's Mostly Adequate Guide to Functional Programming](https://mostly-adequate.gitbook.io/mostly-adequate-guide/)
+  - [Ramda Documentation](https://ramdajs.com/docs/)
+
+## Related Topics
+
+  - Immutability
+  - Functional Reactive Programming (FRP)
+  - The Elm Architecture
+  - Algebraic Data Types
\ No newline at end of file
diff --git a/bash/talk-to-computer/corpus/programming/lil_guide.md b/bash/talk-to-computer/corpus/programming/lil_guide.md
new file mode 100644
index 0000000..72df8df
--- /dev/null
+++ b/bash/talk-to-computer/corpus/programming/lil_guide.md
@@ -0,0 +1,277 @@
+# Multi-paradigm Programming with Lil - A Guide to Lil's Diverse Styles
+
+## Introduction
+
+Lil is a richly multi-paradigm scripting language designed for the Decker creative environment. It seamlessly blends concepts from **imperative**, **functional**, **declarative**, and **vector-oriented** programming languages. This flexibility allows developers to choose the most effective and ergonomic approach for a given task, whether it's managing application state, manipulating complex data structures, or performing efficient bulk computations. Understanding these paradigms is key to writing elegant, efficient, and idiomatic Lil code.
+
+## Core Concepts
+
+Lil's power comes from the way it integrates four distinct programming styles.
+
+### Imperative Programming
+
+This is the traditional, statement-by-statement style of programming. It involves creating variables, assigning values to them, and using loops and conditionals to control the flow of execution.
+
+  - **Assignment:** The colon (`:`) is used for assignment.
+  - **Control Flow:** Lil provides `if`/`elseif`/`else` for conditionals and `while` and `each` for loops.
+  - **State Management:** State is typically managed by assigning and re-assigning values to variables, often stored in the properties of Decker widgets between event handlers.
+
+```lil
+# Imperative approach to summing a list
+total: 0
+numbers: [10, 20, 30]
+each n in numbers do
+  total: total + n
+end
+# total is now 60
+```
+
+### Functional Programming
+
+The functional style emphasizes pure functions, immutability, and the composition of functions without side-effects.
+
+  - **Immutability:** All core data structures (lists, dictionaries, tables) have copy-on-write semantics. Modifying one does not alter the original value but instead returns a new, amended value.
+  - **First-Class Functions:** Functions are values that can be defined with `on`, assigned to variables, and passed as arguments to other functions.
+  - **Expressions over Statements:** Every statement in Lil is an expression that returns a value. An `if` block returns the value of its executed branch, and an `each` loop returns a new collection containing the result of each iteration.
+
+```lil
+# Functional approach using a higher-order function
+on twice f x do
+  f[f[x]]
+end
+
+on double x do
+  x * 2
+end
+
+result: twice[double, 10]  # result is 40
+```
+
+### Declarative (Query-based) Programming
+
+For data manipulation, Lil provides a powerful declarative query engine that resembles SQL. Instead of describing *how* to loop through and filter data, you declare *what* data you want.
+
+  - **Queries:** Use `select`, `update`, and `extract` to query tables (and other collection types).
+  - **Clauses:** Filter, group, and sort data with `where`, `by`, and `orderby` clauses.
+  - **Readability:** Queries often result in more concise and readable code for data transformation tasks compared to imperative loops.
+
+```lil
+# Declarative query to find developers
+people: insert name age job with
+ "Alice"  25 "Development"
+ "Sam"    28 "Sales"
+ "Thomas" 40 "Development"
+end
+
+devs: select name from people where job="Development"
+# devs is now a table with the names "Alice" and "Thomas"
+```
+
+### Vector-Oriented Programming
+
+Influenced by languages like APL and K, this paradigm focuses on applying operations to entire arrays or lists (vectors) at once, a concept known as **conforming**.
+
+  - **Conforming Operators:** Standard arithmetic operators (`+`, `-`, `*`, `/`) work element-wise on lists.
+  - **Efficiency:** Vector operations are significantly more performant than writing equivalent imperative loops.
+  - **The `@` Operator:** The "apply" operator (`@`) can be used to apply a function to each element of a list or to select multiple elements from a list by index.
+
+```lil
+# Vector-oriented approach to add 10 to each number
+numbers: [10, 20, 30]
+result: numbers + 10 # result is [20, 30, 40]
+```
+
+-----
+
+## Key Principles
+
+  - **Right-to-Left Evaluation:** Expressions are evaluated from right to left unless overridden by parentheses `()`. This is a fundamental rule that affects how all expressions are composed.
+  - **Copy-on-Write Immutability:** Lists, Dictionaries, and Tables are immutable. Operations like `update` or indexed assignments on an expression `(foo)[1]:44` return a new value, leaving the original unchanged. Direct assignment `foo[1]:44` is required to modify the variable `foo` itself.
+  - **Data-Centric Design:** The language provides powerful, built-in tools for data manipulation, especially through its query engine and vector operations.
+  - **Lexical Scoping:** Variables are resolved based on their location in the code's structure. Functions "close over" variables from their containing scope, enabling patterns like counters and encapsulated state.
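+
+A minimal sketch of the first two principles in action:
+
+```lil
+# Right-to-left evaluation: the rightmost operation happens first
+a: 3*2+5             # a is 21, i.e. 3*(2+5)
+b: (3*2)+5           # b is 11
+
+# Copy-on-write: amending a parenthesized expression
+# yields a new value and leaves the original untouched
+foo: [11, 22, 33]
+bar: (foo)[1]:44     # bar is [11, 44, 33]
+# foo is still [11, 22, 33]
+```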
+
+-----
+
+## Implementation/Usage
+
+The true power of Lil emerges when you mix these paradigms to solve problems cleanly and efficiently.
+
+### Basic Example
+
+Here, we combine an imperative loop with a vector-oriented operation to process a list of lists.
+
+```lil
+# Calculate the magnitude of several 2D vectors
+vectors: [[3,4], [5,12], [8,15]]
+magnitudes: []
+
+# Imperative loop over the list of vectors
+each v in vectors
+  # mag is a vector-oriented unary operator
+  magnitudes: magnitudes,mag v
+end
+
+# magnitudes is now [5, 13, 17]
+```
+
+### Advanced Example
+
+This example defines a functional-style utility function (`avg`) and uses it within a declarative query to summarize data, an approach common in data analysis.
+
+```lil
+# Functional helper function
+on avg x do
+  (sum x) / count x
+end
+
+# A table of sales data
+sales: insert product category price with
+ "Apple"  "Fruit"  0.5
+ "Banana" "Fruit"  0.4
+ "Bread"  "Grain"  2.5
+ "Rice"   "Grain"  3.0
+end
+
+# Declarative query that uses the functional helper
+avgPriceByCategory: select category:first category avg_price:avg[price] by category from sales
+
+# avgPriceByCategory is now:
+# +----------+-----------+
+# | category | avg_price |
+# +----------+-----------+
+# | "Fruit"  | 0.45      |
+# | "Grain"  | 2.75      |
+# +----------+-----------+
+```
+
+-----
+
+## Common Patterns
+
+### Pattern 1: Query over Loop
+
+Instead of manually iterating with `each` to filter or transform a collection, use a declarative `select` or `extract` query. This is more concise, often faster, and less error-prone.
+
+```lil
+# Instead of this imperative loop...
+high_scores: []
+scores: [88, 95, 72, 100, 91]
+each s in scores
+  if s > 90
+    high_scores: high_scores,s
+  end
+end
+
+# ...use a declarative query.
+high_scores: extract value where value > 90 from scores
+# high_scores is now [95, 100, 91]
+```
+
+### Pattern 2: Function Application with `@`
+
+For simple element-wise transformations on a list, using the `@` operator with a function is cleaner than writing an `each` loop.
+
+```lil
+# Instead of this...
+names: ["alice", "bob", "charlie"]
+on shout s do s,"!" end
+shouted: []
+each n in names
+  shouted: shouted,shout[n]
+end
+
+# ...use the more functional and concise @ operator.
+shouted: shout @ names
+# shouted is now ["alice!", "bob!", "charlie!"]
+```
+
+-----
+
+## Best Practices
+
+  - **Embrace Queries:** For any non-trivial data filtering, grouping, or transformation, reach for the query engine first.
+  - **Use Vector Operations:** When performing arithmetic or logical operations on lists, use conforming operators (`+`, `<`, `=`) instead of loops for better performance and clarity.
+  - **Distinguish Equality:** Use the conforming equals `=` within query expressions. Use the non-conforming match `~` in `if` or `while` conditions to avoid accidentally getting a list result.
+  - **Encapsulate with Functions:** Use functions to create reusable components and manage scope, especially for complex logic within Decker event handlers.
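+
+A minimal sketch of the equality distinction:
+
+```lil
+x: [1, 2, 3]
+y: [1, 9, 3]
+x = y                # conforming: [1, 0, 1]
+x ~ y                # match: 0 (a single boolean)
+if x ~ y "same" else "different" end
+```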
+
+-----
+
+## Common Pitfalls
+
+  - **Right-to-Left Confusion:** Forgetting that `3*2+5` evaluates to `21`, not `11`. Use parentheses `(3*2)+5` to enforce the desired order of operations.
+  - **Expecting Mutation:** Believing that `update ... from my_table` changes `my_table`. It returns a *new* table. You must reassign it: `my_table: update ... from my_table`.
+  - **Comma as Argument Separator:** Writing `myfunc[arg1, arg2]`. This creates a list of two items and passes it as a single argument. The correct syntax is `myfunc[arg1 arg2]`.
+  - **Using `=` in `if`:** Writing `if some_list = some_value` can produce a list of `0`s and `1`s. An empty list `()` is falsey, but a list like `[0,0]` is truthy. Use `~` for a single boolean result in control flow.
+
+-----
+
+## Performance Considerations
+
+Vector-oriented algorithms are significantly faster and more memory-efficient than their imperative, element-by-element counterparts. The Lil interpreter is optimized for these bulk operations. For example, replacing values in a list using a calculated mask is preferable to an `each` loop with a conditional inside.
+
+```lil
+# Slow, iterative approach
+x: [1, 10, 2, 20, 3, 30]
+result: each v in x
+  if v < 5 99 else v end
+end
+
+# Fast, vector-oriented approach
+mask: x < 5                 # results in [1,0,1,0,1,0]
+result: (99 * mask) + (x * !mask)
+```
+
+-----
+
+## Integration Points
+
+The primary integration point for Lil is **Decker**. Lil scripts are attached to Decker widgets, cards, and the deck itself to respond to user events (`on click`, `on keydown`, etc.). All paradigms are useful within Decker:
+
+  - **Imperative:** To sequence actions, like showing a dialog and then navigating to another card.
+  - **Declarative:** To query data stored in a `grid` widget or to find specific cards in the deck, e.g., `extract value where value..widgets.visited.value from deck.cards`.
+  - **Functional/Vector:** To process data before displaying it, without needing slow loops.
+
+-----
+
+## Troubleshooting
+
+### Problem 1: An `if` statement behaves unpredictably with list comparisons.
+
+  - **Symptoms:** An `if` block either never runs or always runs when comparing a value against a list.
+  - **Solution:** You are likely using the conforming equals operator (`=`), which returns a list of boolean results. In a conditional, you almost always want the non-conforming match operator (`~`), which returns a single `1` or `0`.
+
+### Problem 2: A recursive function crashes with a stack overflow on large inputs.
+
+  - **Symptoms:** The script terminates unexpectedly when a recursive function is called with a large number or deep data structure.
+  - **Solution:** Lil supports **tail-call elimination**. Ensure your recursive call is the very last operation performed in the function. If it's part of a larger expression (e.g., `1 + my_func[...]`), it is not in a tail position. Rewrite the function to accumulate its result in an argument.
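+
+A sketch of the accumulator rewrite described above (function names are illustrative):
+
+```lil
+# Not a tail call: the + runs after the recursive call returns
+on sum_to n do
+  if n < 1 0 else n + sum_to[n - 1] end
+end
+
+# Tail call: the recursive call is the last operation,
+# with the running total carried in an accumulator argument
+on sum_acc n acc do
+  if n < 1 acc else sum_acc[(n - 1) (acc + n)] end
+end
+```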
+
+-----
+
+## Examples in Context
+
+**Use Case: A Simple To-Do List in Decker**
+
+Imagine a Decker card with a `grid` widget named "tasks" (with columns "desc" and "done") and a `field` widget named "summary".
+
+```lil
+# In the script of the "tasks" grid, to update when it's changed:
+on change do
+  # Use a DECLARATIVE query to get the done/total counts.
+  # The query source is "me.value", the table in the grid.
+  stats: extract done:sum done total:count done from me.value
+
+  # Use IMPERATIVE assignment to update the summary field.
+  summary.text: "%i of %i tasks complete." format stats.done,stats.total
+end
+```
+
+This tiny script uses a declarative query to read the state and an imperative command to update the UI, demonstrating a practical mix of paradigms.
diff --git a/bash/talk-to-computer/corpus/science/physics_basics.txt b/bash/talk-to-computer/corpus/science/physics_basics.txt
new file mode 100644
index 0000000..5ae092b
--- /dev/null
+++ b/bash/talk-to-computer/corpus/science/physics_basics.txt
@@ -0,0 +1,94 @@
+PHYSICS BASICS - Core Concepts and Principles
+
+CLASSICAL MECHANICS
+==================
+
+Newton's Laws of Motion:
+1. An object at rest stays at rest, and an object in motion stays in motion with the same speed and direction unless acted upon by an unbalanced force. (Inertia)
+
+2. Force equals mass times acceleration (F = ma)
+
+3. For every action, there is an equal and opposite reaction.
+
+Key Equations:
+- Distance: d = vt (constant velocity)
+- Velocity: v = v0 + at (constant acceleration)
+- Distance with acceleration: d = v0t + (1/2)at²
+- Momentum: p = mv
+- Kinetic Energy: KE = (1/2)mv²
+- Potential Energy: PE = mgh
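+
+Worked Example (illustrative values):
+A car accelerates from rest (v0 = 0) at a = 2 m/s² for t = 5 s:
+- Velocity: v = v0 + at = 0 + (2)(5) = 10 m/s
+- Distance: d = v0t + (1/2)at² = 0 + (1/2)(2)(5²) = 25 m
+- Momentum (if m = 1000 kg): p = mv = (1000)(10) = 10,000 kg·m/s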
+
+ELECTRICITY AND MAGNETISM
+=========================
+
+Basic Concepts:
+- Charge: Fundamental property of matter (positive/negative)
+- Electric field: Force per unit charge
+- Current: Rate of charge flow (I = Q/t)
+- Voltage: Electric potential difference (energy per unit charge)
+- Resistance: Opposition to current flow
+
+Key Equations:
+- Ohm's Law: V = IR
+- Power: P = IV = I²R = V²/R
+- Energy: E = Pt = VIt
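+
+Worked Example (illustrative values):
+A 12 V battery drives a 6 ohm resistor:
+- Current: I = V/R = 12/6 = 2 A
+- Power: P = IV = (2)(12) = 24 W
+- Energy over 10 s: E = Pt = (24)(10) = 240 J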
+
+THERMODYNAMICS
+=============
+
+Laws of Thermodynamics:
+0. Zeroth law: Two systems each in thermal equilibrium with a third are in equilibrium with each other
+1. First law: Energy cannot be created or destroyed, only converted between forms
+2. Second law: The entropy of an isolated system never decreases
+3. Third law: As temperature approaches absolute zero, entropy approaches a constant minimum
+
+Key Concepts:
+- Heat transfer: Conduction, convection, radiation
+- Specific heat capacity: Energy required to change temperature
+- Phase changes: Melting, freezing, boiling, condensation
+- Ideal gas law: PV = nRT
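+
+Worked Example (one mole at standard conditions):
+For n = 1 mol, T = 273 K, P = 101,325 Pa, and R = 8.314 J/(mol·K):
+- V = nRT/P = (1)(8.314)(273)/(101,325) ≈ 0.0224 m³ ≈ 22.4 L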
+
+QUANTUM PHYSICS
+==============
+
+Core Principles:
+- Wave-particle duality: Particles can exhibit wave-like behavior
+- Uncertainty principle: Cannot know both position and momentum precisely
+- Quantization: Energy levels are discrete, not continuous
+- Superposition: Quantum systems can exist in multiple states
+
+Key Equations:
+- Energy of photon: E = hf (h = Planck's constant)
+- de Broglie wavelength: λ = h/p
+- Heisenberg uncertainty: ΔxΔp ≥ h/4π
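+
+Worked Example (red light):
+For f = 4.6 × 10^14 Hz and h = 6.63 × 10^-34 J·s:
+- Photon energy: E = hf = (6.63 × 10^-34)(4.6 × 10^14) ≈ 3.0 × 10^-19 J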
+
+MODERN PHYSICS
+=============
+
+Special Relativity:
+- Speed of light is constant for all observers
+- Time dilation: Moving clocks run slower
+- Length contraction: Moving objects appear shorter
+- Mass-energy equivalence: E = mc²
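+
+Worked Example (mass-energy equivalence):
+Converting m = 1 kg of mass entirely to energy (c = 3 × 10^8 m/s):
+- E = mc² = (1)(3 × 10^8)² = 9 × 10^16 J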
+
+General Relativity:
+- Gravity as curvature of spacetime
+- Equivalence principle: Gravitational and inertial mass are identical
+- Black holes and gravitational waves
+
+PRACTICAL APPLICATIONS
+====================
+
+Real-world Physics:
+- Engineering: Bridges, buildings, vehicles
+- Medicine: Imaging, radiation therapy, medical devices
+- Technology: Computers, smartphones, satellites
+- Energy: Solar panels, wind turbines, nuclear power
+
+Measurement Units:
+- Length: meter (m)
+- Mass: kilogram (kg)
+- Time: second (s)
+- Force: newton (N) = kg·m/s²
+- Energy: joule (J) = N·m
+- Power: watt (W) = J/s
diff --git a/bash/talk-to-computer/corpus/topic_template.md b/bash/talk-to-computer/corpus/topic_template.md
new file mode 100644
index 0000000..2ea9653
--- /dev/null
+++ b/bash/talk-to-computer/corpus/topic_template.md
@@ -0,0 +1,39 @@
+# Topic Name - Comprehensive Guide
+
+## Introduction
+
+[Brief introduction to the topic and its importance]
+
+## Core Concepts
+
+### [Main Concept 1]
+[Explanation and details]
+
+### [Main Concept 2]
+[Explanation and details]
+
+## Key Principles
+
+[Important principles and rules]
+
+## Examples
+
+[Code examples, diagrams, or practical applications]
+
+## Best Practices
+
+[Recommended approaches and common pitfalls]
+
+## References
+
+[Links to additional resources and further reading]
+
+---
+
+**Template Instructions:**
+1. Replace [bracketed text] with actual content
+2. Add code blocks using markdown syntax
+3. Include examples and practical applications
+4. Save as .md, .txt, or .html file
+5. Run `./corpus_manager.sh update` to refresh registry
+6. Test with corpus queries
diff --git a/bash/talk-to-computer/corpus_manager.sh b/bash/talk-to-computer/corpus_manager.sh
new file mode 100755
index 0000000..47c743c
--- /dev/null
+++ b/bash/talk-to-computer/corpus_manager.sh
@@ -0,0 +1,303 @@
+#!/bin/bash
+
+# Corpus Manager - Manages RAG corpus discovery and maintenance
+# This script provides utilities for managing the knowledge corpus
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+CORPUS_DIR="${SCRIPT_DIR}/corpus"
+REGISTRY_FILE="${CORPUS_DIR}/corpus_registry.txt"
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+BLUE='\033[0;34m'
+NC='\033[0m' # No Color
+
+# --- Corpus Discovery Functions ---
+
+discover_corpus() {
+    echo -e "${BLUE}🔍 Discovering corpus structure...${NC}"
+
+    # Find all directories under corpus/
+    find "$CORPUS_DIR" -mindepth 1 -type d | while read -r dir; do
+        local topic_name=$(basename "$dir")
+        local parent_topic=$(basename "$(dirname "$dir")")
+
+        # Skip if this is the corpus root
+        if [ "$parent_topic" = "corpus" ]; then
+            echo "Found topic directory: $topic_name"
+        fi
+    done
+}
+
+# Generate topic keywords based on directory name and content
+generate_topic_keywords() {
+    local topic_name="$1"
+    local keywords=""
+
+    case "$topic_name" in
+        "programming")
+            keywords="bash,shell,scripting,programming,lil,algorithm,code,software,development"
+            ;;
+        "science")
+            keywords="physics,chemistry,biology,science,research,scientific"
+            ;;
+        "literature")
+            keywords="books,authors,literature,writing,analysis"
+            ;;
+        "lil")
+            keywords="decker,lil,language,programming,scripting,terse,deck"
+            ;;
+        "physics")
+            keywords="quantum,relativity,physics,mechanics,thermodynamics"
+            ;;
+        *)
+            # Generate keywords from directory name
+            keywords=$(echo "$topic_name" | sed 's/[-_]/,/g')
+            ;;
+    esac
+
+    echo "$keywords"
+}
+
+# Update the corpus registry
+update_registry() {
+    echo -e "${BLUE}📝 Updating corpus registry...${NC}"
+
+    # Backup existing registry
+    if [ -f "$REGISTRY_FILE" ]; then
+        cp "$REGISTRY_FILE" "${REGISTRY_FILE}.backup"
+    fi
+
+    # Create new registry header
+    cat > "$REGISTRY_FILE" << 'EOF'
+# Corpus Registry - Auto-generated by corpus_manager.sh
+# Format: TOPIC|PATH|KEYWORDS|DESCRIPTION
+# This file is automatically maintained - do not edit manually
+
+EOF
+
+    # Find all directories and generate registry entries
+    find "$CORPUS_DIR" -mindepth 1 -type d | sort | while read -r dir; do
+        local topic_name=$(basename "$dir")
+        local relative_path="${dir#${SCRIPT_DIR}/}"
+        local keywords=$(generate_topic_keywords "$topic_name")
+        local description="$(echo "${topic_name:0:1}" | tr '[:lower:]' '[:upper:]')${topic_name:1} topics and resources"
+
+        # Determine parent topic for hierarchical structure
+        local parent_dir=$(dirname "$dir")
+        local parent_topic=""
+
+        if [ "$parent_dir" != "$CORPUS_DIR" ]; then
+            parent_topic=$(basename "$parent_dir")
+            description="$(echo "${topic_name:0:1}" | tr '[:lower:]' '[:upper:]')${topic_name:1} subset of ${parent_topic}"
+        fi
+
+        # Add to registry
+        echo "${parent_topic:-$topic_name}|$relative_path|$keywords|$description" >> "$REGISTRY_FILE"
+    done
+
+    echo -e "${GREEN}✅ Registry updated successfully${NC}"
+}
+
+# --- Corpus Query Functions ---
+
+# Check if corpus exists for a given topic
+corpus_exists() {
+    local topic="$1"
+    grep -q "^[^|]*${topic}|" "$REGISTRY_FILE" 2>/dev/null
+    return $?
+}
+
+# Get corpus path for a topic
+get_corpus_path() {
+    local topic="$1"
+    grep "^[^|]*${topic}|" "$REGISTRY_FILE" | head -1 | cut -d'|' -f2
+}
+
+# Get corpus keywords for a topic
+get_corpus_keywords() {
+    local topic="$1"
+    grep "^[^|]*${topic}|" "$REGISTRY_FILE" | head -1 | cut -d'|' -f3
+}
+
+# List all available topics
+list_topics() {
+    echo -e "${BLUE}📚 Available Corpus Topics:${NC}"
+    echo "----------------------------------------"
+
+    if [ ! -f "$REGISTRY_FILE" ]; then
+        echo -e "${RED}No corpus registry found. Run 'update' first.${NC}"
+        return 1
+    fi
+
+    awk -F'|' '!/^#/ && NF {print "• " $1 "/" $2 " - " $4}' "$REGISTRY_FILE" | sort
+}
+
+# --- Corpus Content Functions ---
+
+# Count files in a corpus directory
+count_corpus_files() {
+    local topic="$1"
+    local corpus_path=$(get_corpus_path "$topic")
+
+    if [ -d "$corpus_path" ]; then
+        find "$corpus_path" -type f \( -name "*.txt" -o -name "*.md" -o -name "*.html" \) | wc -l
+    else
+        echo "0"
+    fi
+}
+
+# Get corpus file list
+list_corpus_files() {
+    local topic="$1"
+    local corpus_path=$(get_corpus_path "$topic")
+
+    if [ -d "$corpus_path" ]; then
+        echo -e "${BLUE}📄 Files in $topic corpus:${NC}"
+        find "$corpus_path" -type f \( -name "*.txt" -o -name "*.md" -o -name "*.html" \) | sort
+    else
+        echo -e "${RED}Corpus directory not found: $corpus_path${NC}"
+    fi
+}
+
+# --- Template and Setup Functions ---
+
+# Create template files for a new topic
+create_topic_template() {
+    local topic="$1"
+    local corpus_path="$CORPUS_DIR/$topic"
+
+    echo -e "${BLUE}🛠️  Creating template for topic: $topic${NC}"
+
+    # Create directory if it doesn't exist
+    mkdir -p "$corpus_path"
+
+    # Create template files
+    cat > "$corpus_path/README.md" << EOF
+# $topic Corpus
+
+This directory contains documentation and resources for $topic.
+
+## File Format Guidelines
+
+- Use **Markdown (.md)** for structured content with headers
+- Use **Plain text (.txt)** for simple notes and documentation
+- Use **HTML (.html)** for rich content and formatting
+- File names should be descriptive: \`topic_concept_name.md\`
+
+## Content Organization
+
+- Group related concepts in single files
+- Use clear, descriptive headers
+- Include code examples where relevant
+- Add cross-references between related topics
+
+## Adding New Content
+
+1. Create new .md, .txt, or .html files in this directory
+2. Run \`./corpus_manager.sh update\` to update the registry
+3. Test with corpus queries
+EOF
+
+    cat > "$corpus_path/example.md" << EOF
+# Example $topic Content
+
+This is an example file showing the expected format for $topic content.
+
+## Introduction
+
+Add your content here using standard Markdown formatting.
+
+## Key Concepts
+
+- Concept 1
+- Concept 2
+- Concept 3
+
+## Examples
+
+\`\`\`bash
+# Code examples go here
+echo "Hello, $topic!"
+\`\`\`
+
+## References
+
+- Link to relevant resources
+- Additional reading materials
+EOF
+
+    echo -e "${GREEN}✅ Template created in: $corpus_path${NC}"
+    echo -e "${YELLOW}💡 Tip: Edit the files and run 'update' to refresh the registry${NC}"
+}
+
+# --- Main Command Interface ---
+
+case "${1:-help}" in
+    "discover")
+        discover_corpus
+        ;;
+    "update")
+        update_registry
+        ;;
+    "list")
+        list_topics
+        ;;
+    "files")
+        if [ -n "$2" ]; then
+            list_corpus_files "$2"
+        else
+            echo -e "${RED}Usage: $0 files <topic>${NC}"
+        fi
+        ;;
+    "count")
+        if [ -n "$2" ]; then
+            count=$(count_corpus_files "$2")
+            echo -e "${BLUE}📊 $2 corpus has $count files${NC}"
+        else
+            echo -e "${RED}Usage: $0 count <topic>${NC}"
+        fi
+        ;;
+    "template")
+        if [ -n "$2" ]; then
+            create_topic_template "$2"
+        else
+            echo -e "${RED}Usage: $0 template <topic>${NC}"
+        fi
+        ;;
+    "exists")
+        if [ -n "$2" ]; then
+            if corpus_exists "$2"; then
+                echo -e "${GREEN}✅ Corpus exists for topic: $2${NC}"
+            else
+                echo -e "${RED}❌ No corpus found for topic: $2${NC}"
+            fi
+        else
+            echo -e "${RED}Usage: $0 exists <topic>${NC}"
+        fi
+        ;;
+    "help"|*)
+        echo -e "${BLUE}📚 Corpus Manager${NC}"
+        echo "Manage the RAG knowledge corpus"
+        echo ""
+        echo -e "${YELLOW}Usage: $0 <command> [arguments]${NC}"
+        echo ""
+        echo "Commands:"
+        echo "  discover         Discover corpus structure"
+        echo "  update           Update corpus registry"
+        echo "  list             List all available topics"
+        echo "  files <topic>    List files in a topic corpus"
+        echo "  count <topic>    Count files in a topic corpus"
+        echo "  exists <topic>   Check if corpus exists for topic"
+        echo "  template <topic> Create template files for new topic"
+        echo "  help             Show this help message"
+        echo ""
+        echo "Examples:"
+        echo "  $0 update"
+        echo "  $0 list"
+        echo "  $0 template physics"
+        echo "  $0 exists programming"
+        ;;
+esac
diff --git a/bash/talk-to-computer/corpus_prompt_template.md b/bash/talk-to-computer/corpus_prompt_template.md
new file mode 100644
index 0000000..f4bb91e
--- /dev/null
+++ b/bash/talk-to-computer/corpus_prompt_template.md
@@ -0,0 +1,125 @@
+You are an expert technical writer and subject matter expert. Your task is to create comprehensive, accurate, and well-structured documentation for a RAG (Retrieval-Augmented Generation) knowledge corpus.
+
+**TOPIC:** [SPECIFIC_TOPIC_NAME]
+**DOMAIN:** [GENERAL_DOMAIN] (e.g., programming, science, literature, technology)
+**TARGET_AUDIENCE:** [BEGINNER/INTERMEDIATE/ADVANCED]
+**CONTENT_TYPE:** [CONCEPTS/REFERENCE/GUIDE/TUTORIAL]
+
+**REQUIREMENTS:**
+
+1. **Format**: Write in clean Markdown with proper headers (# ## ###)
+2. **Structure**: Follow the established corpus content structure
+3. **Accuracy**: Ensure technical accuracy and completeness
+4. **Clarity**: Use clear, concise language with examples
+5. **Searchability**: Include key terms and concepts that users might search for
+6. **Cross-references**: Mention related topics where relevant
+
+**OUTPUT STRUCTURE:**
+
+# [TOPIC_NAME] - [BRIEF_DESCRIPTION]
+
+## Introduction
+
+[Provide a clear, concise introduction to the topic. Explain what it is, why it's important, and its main applications.]
+
+## Core Concepts
+
+### [Main Concept 1]
+[Detailed explanation with examples]
+
+### [Main Concept 2]
+[Detailed explanation with examples]
+
+## Key Principles
+
+[List and explain the fundamental principles or rules]
+
+## Implementation/Usage
+
+[Show how to apply the concepts with practical examples]
+
+### Basic Example
+```language
+// Code examples in appropriate languages
+[EXAMPLE_CODE]
+```
+
+### Advanced Example
+```language
+// More complex implementation
+[EXAMPLE_CODE]
+```
+
+## Common Patterns
+
+### Pattern 1: [Name]
+[Description and when to use]
+
+### Pattern 2: [Name]
+[Description and when to use]
+
+## Best Practices
+
+[Guidelines for effective usage]
+
+## Common Pitfalls
+
+[Things to avoid and how to recognize problems]
+
+## Performance Considerations
+
+[Performance implications and optimization tips]
+
+## Integration Points
+
+[How this integrates with related technologies/concepts]
+
+## Troubleshooting
+
+### Problem 1: [Common Issue]
+**Symptoms:** [What to look for]
+**Solution:** [How to fix]
+
+### Problem 2: [Common Issue]
+**Symptoms:** [What to look for]
+**Solution:** [How to fix]
+
+## Examples in Context
+
+[Real-world examples and use cases]
+
+## References
+
+[Links to official documentation, standards, or additional resources]
+
+## Related Topics
+
+- [Related Topic 1]
+- [Related Topic 2]
+- [Related Topic 3]
+
+---
+
+**CONTENT GUIDELINES:**
+
+- Use **bold** for emphasis on key terms
+- Use `inline code` for technical terms, function names, commands
+- Use code blocks with syntax highlighting for examples
+- Include both simple and complex examples
+- Provide practical, actionable information
+- Focus on clarity over complexity
+- Include error cases and solutions
+- Make content searchable with relevant keywords
+- Structure content for easy navigation
+- Ensure examples are complete and runnable
+
+**QUALITY CHECKS:**
+
+- [ ] Content is technically accurate
+- [ ] Examples are complete and correct
+- [ ] Structure follows corpus guidelines
+- [ ] Headers are descriptive and hierarchical
+- [ ] Cross-references are included where helpful
+- [ ] Content is appropriate for target audience level
+- [ ] Language is clear and professional
+- [ ] All code examples are properly formatted
\ No newline at end of file
diff --git a/bash/talk-to-computer/critique b/bash/talk-to-computer/critique
new file mode 100755
index 0000000..22a5fc6
--- /dev/null
+++ b/bash/talk-to-computer/critique
@@ -0,0 +1,171 @@
+#!/bin/bash
+
+# Critique System
+# This script uses a sequence of LLM calls to refine an initial response through critique and revision.
+#
+# APPLICATION LOGIC:
+# The critique process implements an iterative refinement system where AI models
+# collaborate to improve response quality through critique and revision. The system
+# operates through three distinct phases designed to enhance clarity and accuracy:
+#
+# PHASE 1 - INITIAL RESPONSE GENERATION:
+#   - A response model generates the first answer to the user's prompt
+#   - The model is instructed to be honest about knowledge limitations
+#   - This creates a baseline response that can be improved through iteration
+#   - The initial response serves as the foundation for refinement
+#
+# PHASE 2 - CRITICAL REVIEW:
+#   - A critic model analyzes the current response for potential issues
+#   - The critic identifies misunderstandings, unclear areas, and improvement opportunities
+#   - Constructive feedback focuses on specific problems rather than general criticism
+#   - The critique provides targeted guidance for the refinement phase
+#
+# PHASE 3 - RESPONSE REFINEMENT:
+#   - A refine model incorporates the critique to generate an improved response
+#   - The refine model considers both the original prompt and the feedback
+#   - Iterative improvement may address clarity, accuracy, or completeness issues
+#   - Multiple refinement loops may progressively enhance response quality
+#
+# REFINEMENT MODELING:
+# The system applies iterative improvement principles to AI response generation:
+#   - Separate models for different roles may provide specialized perspectives
+#   - Critical review helps identify blind spots in the initial response
+#   - Iterative refinement allows for progressive improvement over multiple cycles
+#   - Transparency through logging shows the evolution of the response
+#   - The process may help catch errors or improve clarity that single-pass generation misses
+#
+# The refinement process continues for a configurable number of loops,
+# with each iteration potentially improving upon the previous response.
+# The system emphasizes quality improvement through structured feedback and revision.
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Source the logging system
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# --- Model Configuration ---
+RESPONSE_MODEL="llama3:8b-instruct-q4_K_M"
+CRITIC_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+REFINE_MODEL="llama3:8b-instruct-q4_K_M"
+
+# --- Defaults ---
+DEFAULT_LOOPS=2
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tCritique"
+    echo -e "\tThis script uses a sequence of LLM calls to refine an initial response through critique and revision."
+    echo -e "\n\tUsage: $0 [-f <file_path>] \"<your prompt>\" [number_of_refinement_loops]"
+    echo -e "\n\tExample: $0 -f ./input.txt \"Please summarize this text file\" 2"
+    echo -e "\n\tIf number_of_refinement_loops is not provided, the program will default to $DEFAULT_LOOPS loops."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+while getopts "f:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    LOOPS=$DEFAULT_LOOPS
+else
+    LOOPS=$2
+fi
+
+# If file path is provided, append its contents to the prompt
+if [ -n "$FILE_PATH" ]; then
+    if [ ! -f "$FILE_PATH" ]; then
+        echo "File not found: $FILE_PATH" >&2
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT=$(printf '%s\n[FILE CONTENTS]\n%s\n[END FILE]' "$PROMPT" "$FILE_CONTENTS")
+fi
+
+# --- File Initialization ---
+# Create a temporary directory if it doesn't exist
+mkdir -p ~/tmp
+# Create a unique file for this session based on the timestamp
+SESSION_FILE=~/tmp/critique_$(date +%Y%m%d_%H%M%S).txt
+
+echo "Critique Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# --- Initial Prompt & Response ---
+
+# 1. Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}" # Add a newline for readability
+echo "Processing initial response..."
+
+# 2. The RESPONSE model generates the first answer
+RESPONSE_PROMPT="You are an expert, curious assistant who isn't afraid to say when they don't know something. Please respond directly to the following prompt: ${PROMPT}"
+RESPONSE_OUTPUT=$(ollama run "${RESPONSE_MODEL}" "${RESPONSE_PROMPT}")
+RESPONSE_OUTPUT=$(guard_output_quality "$RESPONSE_OUTPUT" "$PROMPT" "$MECHANISM_NAME" "$RESPONSE_MODEL")
+
+# Append the response to the session file
+echo "INITIAL RESPONSE (${RESPONSE_MODEL}):" >> "${SESSION_FILE}"
+echo "${RESPONSE_OUTPUT}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Refinement Loop ---
+
+# This variable will hold the most recent response for the next loop iteration
+CURRENT_RESPONSE="${RESPONSE_OUTPUT}"
+
+for i in $(seq 1 "${LOOPS}"); do
+    echo "Starting refinement loop ${i} of ${LOOPS}..."
+
+    # 3. The CRITIC model reviews the last response
+    CRITIC_PROMPT="You are a detail oriented, close reading, keenly critical reviewer. Your task is to raise questions, flag potential misunderstandings, and areas for improved clarity in the following text. Provide concise, constructive criticism. Do not rewrite the text, only critique it. TEXT TO CRITIQUE: ${CURRENT_RESPONSE}"
+    CRITIC_OUTPUT=$(ollama run "${CRITIC_MODEL}" "${CRITIC_PROMPT}")
+    CRITIC_OUTPUT=$(guard_output_quality "$CRITIC_OUTPUT" "$PROMPT" "$MECHANISM_NAME" "$CRITIC_MODEL")
+
+    # Append the critique to the session file
+    echo "CRITICISM ${i} (${CRITIC_MODEL}):" >> "${SESSION_FILE}"
+    echo "${CRITIC_OUTPUT}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+
+    # 4. The REFINE model reads the original prompt and the critique to generate a new response
+    REFINE_PROMPT="You are an expert assistant. Your previous response was reviewed and critiqued. Your task now is to generate a refined, improved response to the original prompt based on the feedback provided. ORIGINAL PROMPT: ${PROMPT} CONSTRUCTIVE CRITICISM: ${CRITIC_OUTPUT} Generate the refined response now."
+    REFINE_OUTPUT=$(ollama run "${REFINE_MODEL}" "${REFINE_PROMPT}")
+    REFINE_OUTPUT=$(guard_output_quality "$REFINE_OUTPUT" "$PROMPT" "$MECHANISM_NAME" "$REFINE_MODEL")
+
+    # Append the refined response to the session file
+    echo "REFINED RESPONSE ${i} (${REFINE_MODEL}):" >> "${SESSION_FILE}"
+    echo "${REFINE_OUTPUT}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+
+    # Update the current response for the next loop or for the final output
+    CURRENT_RESPONSE="${REFINE_OUTPUT}"
+done
+
+# --- Final Output ---
+
+echo "---------------------------------"
+echo "Critique process complete."
+echo "Final refined answer:"
+echo "---------------------------------"
+
+# Print the final, most refined answer to standard output
+echo "${CURRENT_RESPONSE}"
diff --git a/bash/talk-to-computer/exploration b/bash/talk-to-computer/exploration
new file mode 100755
index 0000000..ff62a31
--- /dev/null
+++ b/bash/talk-to-computer/exploration
@@ -0,0 +1,304 @@
+#!/bin/bash
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Exploration System
+# This script systematically explores multiple solution paths and compares alternatives.
+#
+# APPLICATION LOGIC:
+# The exploration process implements a branching analysis system that systematically
+# explores multiple approaches to a problem and compares their merits. The system
+# operates through three distinct phases designed to maximize discovery and comparison:
+#
+# PHASE 1 - PATH GENERATION:
+#   - Identifies multiple possible approaches to the problem
+#   - Generates alternative solution paths
+#   - Ensures comprehensive coverage of the solution space
+#   - Creates a foundation for systematic comparison
+#
+# PHASE 2 - PATH EXPLORATION:
+#   - Explores each identified path in detail
+#   - Develops the implications and consequences of each approach
+#   - Identifies strengths, weaknesses, and trade-offs
+#   - Provides detailed analysis of each alternative
+#
+# PHASE 3 - COMPARATIVE ANALYSIS:
+#   - Systematically compares all explored paths
+#   - Evaluates relative merits and trade-offs
+#   - Identifies optimal approaches or combinations
+#   - Provides recommendations based on different criteria
+#
+# EXPLORATION MODELING:
+# The system applies systematic exploration principles to AI response generation:
+#   - Multiple paths ensure comprehensive problem coverage
+#   - Systematic comparison reveals optimal approaches
+#   - Trade-off analysis helps users make informed decisions
+#   - The process may reveal unexpected insights or approaches
+#   - Transparency shows how different paths were evaluated
+#   - The method may identify novel solutions that single-path approaches miss
+#
+# The exploration process emphasizes systematic analysis and comparison,
+# ensuring users understand the full range of possible approaches and their implications.
+
+# Source the logging system using absolute path
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Source the RAG integration for corpus queries
+source "${SCRIPT_DIR}/rag_integration.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# Ensure validation functions are available
+if ! command -v validate_prompt >/dev/null 2>&1; then
+    echo "Error: Validation functions not loaded properly" >&2
+    exit 1
+fi
+
+# --- Model Configuration ---
+EXPLORATION_MODEL="llama3:8b-instruct-q4_K_M"
+ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Validate and set models with fallbacks
+EXPLORATION_MODEL=$(validate_model "$EXPLORATION_MODEL" "gemma3n:e2b")
+if [ $? -ne 0 ]; then
+    log_error "No valid exploration model available"
+    exit 1
+fi
+
+ANALYSIS_MODEL=$(validate_model "$ANALYSIS_MODEL" "gemma3n:e2b")
+if [ $? -ne 0 ]; then
+    log_error "No valid analysis model available"
+    exit 1
+fi
+
+# --- Defaults ---
+DEFAULT_PATHS=3
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tExploration"
+    echo -e "\tThis script systematically explores multiple solution paths and compares alternatives."
+    echo -e "\n\tUsage: $0 [-f <file_path>] [-p <number_of_paths>] \"<your prompt>\" [number_of_rounds]"
+    echo -e "\n\tExample: $0 -f ./input.txt -p 4 \"How can we solve this problem?\" 2"
+    echo -e "\n\tIf number_of_rounds is not provided, the program will default to 2 rounds."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n\t-p <paths> (optional): Number of solution paths to explore (default: 3)."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+NUM_PATHS=$DEFAULT_PATHS
+while getopts "f:p:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    p)
+      NUM_PATHS="$OPTARG"
+      ;;
+    *)
+      log_error "Invalid option: -$OPTARG"
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ROUNDS=2
+else
+    ROUNDS=$2
+fi
+
+# Validate prompt
+PROMPT=$(validate_prompt "$PROMPT")
+if [ $? -ne 0 ]; then
+    exit 1
+fi
+
+# Validate file path if provided
+if [ -n "$FILE_PATH" ]; then
+    if ! validate_file_path "$FILE_PATH"; then
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT="$PROMPT"$'\n'"[FILE CONTENTS]"$'\n'"$FILE_CONTENTS"$'\n'"[END FILE]"
+fi
+
+# Validate number of paths
+if ! [[ "$NUM_PATHS" =~ ^[1-9][0-9]*$ ]] || [ "$NUM_PATHS" -gt 10 ]; then
+    log_error "Invalid number of paths: $NUM_PATHS (must be 1-10)"
+    exit 1
+fi
+
+# Validate rounds
+if ! [[ "$ROUNDS" =~ ^[1-9][0-9]*$ ]] || [ "$ROUNDS" -gt 5 ]; then
+    log_error "Invalid number of rounds: $ROUNDS (must be 1-5)"
+    exit 1
+fi
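Both numeric checks above share one shape: a regex that admits only positive integers, plus an upper bound. A minimal standalone sketch of that pattern, using a hypothetical `is_in_range` helper:

```shell
# Hypothetical helper mirroring the range checks above: accept a positive
# integer no greater than the given maximum.
is_in_range() {
    [[ "$1" =~ ^[1-9][0-9]*$ ]] && [ "$1" -le "$2" ]
}

is_in_range 3 10 && echo "valid" || echo "invalid"
is_in_range 0 10 && echo "valid" || echo "invalid"
```

Anchoring the regex (`^...$`) rejects signs, decimals, and leading zeros in one step.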
+
+# --- File Initialization ---
+mkdir -p ~/tmp
+SESSION_FILE=~/tmp/exploration_$(date +%Y%m%d_%H%M%S).txt
+
+# Initialize timing
+SESSION_ID=$(generate_session_id)
+start_timer "$SESSION_ID" "exploration"
+
+echo "Exploration Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "NUMBER OF PATHS: ${NUM_PATHS}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 1: Path Generation ---
+echo "Phase 1: Generating solution paths..."
+echo "PHASE 1 - PATH GENERATION:" >> "${SESSION_FILE}"
+
+# Check for RAG context
+RAG_CONTEXT=$(use_rag_if_available "${PROMPT}" "${MECHANISM_NAME}")
+
+PATH_GENERATION_PROMPT="You are a strategic thinker. Your task is to identify ${NUM_PATHS} distinct, viable approaches to the following problem. Each path should represent a different strategy or methodology.
+
+PROBLEM: ${PROMPT}
+
+$(if [[ "$RAG_CONTEXT" != "$PROMPT" ]]; then
+echo "ADDITIONAL CONTEXT FROM KNOWLEDGE BASE:
+$RAG_CONTEXT
+
+Use this context to inform your solution paths and provide more relevant alternatives."
+fi)
+
+Please identify ${NUM_PATHS} different solution paths. For each path, provide:
+1. A clear name/title for the approach
+2. A brief description of the strategy
+3. The key principles or methodology it follows
+
+Format your response as:
+PATH 1: [Name]
+[Description]
+
+PATH 2: [Name]
+[Description]
+
+etc.
+
+Ensure the paths are genuinely different approaches, not just variations of the same idea."
+
+paths_output=$(ollama run "${EXPLORATION_MODEL}" "${PATH_GENERATION_PROMPT}")
+paths_output=$(guard_output_quality "$paths_output" "$PROMPT" "$MECHANISM_NAME" "$EXPLORATION_MODEL")
+
+echo "GENERATED PATHS:" >> "${SESSION_FILE}"
+echo "${paths_output}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 2: Path Exploration ---
+echo "Phase 2: Exploring each path in detail..."
+echo "PHASE 2 - PATH EXPLORATION:" >> "${SESSION_FILE}"
+
+declare -a path_analyses
+declare -a path_names
+
+# Extract path names and explore each one
+for i in $(seq 1 "${NUM_PATHS}"); do
+    echo "  Exploring path ${i}..."
+    
+    # Extract the path description (simplified approach)
+    path_section=$(echo "${paths_output}" | sed -n "/PATH ${i}:/,/PATH $((i+1)):/p" | sed '$d')
+    if [ -z "$path_section" ]; then
+        path_section=$(echo "${paths_output}" | sed -n "/PATH ${i}:/,\$p")
+    fi
+    
+    # Generate path name from the section
+    path_name=$(echo "${path_section}" | head -n1 | sed 's/^PATH [0-9]*: //')
+    
+    EXPLORATION_PROMPT="You are a detailed analyst. Explore the following solution path in depth, considering its implications, requirements, and potential outcomes.
+
+ORIGINAL PROBLEM: ${PROMPT}
+
+PATH TO EXPLORE: ${path_section}
+
+Please provide a comprehensive analysis of this path including:
+1. Detailed implementation approach
+2. Potential benefits and advantages
+3. Potential challenges and risks
+4. Resource requirements and constraints
+5. Expected outcomes and timeline
+6. Success factors and critical elements
+7. Potential variations or adaptations
+
+Provide a thorough, well-structured analysis."
+
+    path_analysis=$(ollama run "${EXPLORATION_MODEL}" "${EXPLORATION_PROMPT}")
+    path_analysis=$(guard_output_quality "$path_analysis" "$PROMPT" "$MECHANISM_NAME" "$EXPLORATION_MODEL")
+    
+    path_analyses[$((i-1))]="${path_analysis}"
+    path_names[$((i-1))]="${path_name}"
+    
+    echo "PATH ${i} ANALYSIS (${path_name}):" >> "${SESSION_FILE}"
+    echo "${path_analysis}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+done
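The per-path extraction in the loop above relies on a sed address range (`/start/,/stop/p`) followed by `sed '$d'` to drop the next path's header line. A minimal sketch of that technique with illustrative data:

```shell
# Illustrative data standing in for the model's generated paths.
paths_output='PATH 1: Incremental
Refactor in small steps.
PATH 2: Rewrite
Start from scratch.'

i=1
# Print from "PATH 1:" through "PATH 2:", then delete the trailing header line.
section=$(echo "$paths_output" | sed -n "/PATH ${i}:/,/PATH $((i+1)):/p" | sed '$d')
echo "$section"
```

When the requested path is the last one, the second address never matches and sed prints through end-of-input, which is why the script falls back to `sed -n "/PATH ${i}:/,\$p"`.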
+
+# --- Phase 3: Comparative Analysis ---
+echo "Phase 3: Comparing and evaluating paths..."
+echo "PHASE 3 - COMPARATIVE ANALYSIS:" >> "${SESSION_FILE}"
+
+# Create comparison prompt
+COMPARISON_PROMPT="You are a strategic analyst. Compare and evaluate the following solution paths for the given problem.
+
+ORIGINAL PROBLEM: ${PROMPT}
+
+PATH ANALYSES:"
+
+for i in $(seq 0 $((NUM_PATHS-1))); do
+    COMPARISON_PROMPT="${COMPARISON_PROMPT}
+
+PATH $((i+1)): ${path_names[$i]}
+${path_analyses[$i]}"
+done
+
+COMPARISON_PROMPT="${COMPARISON_PROMPT}
+
+Please provide a comprehensive comparative analysis including:
+1. Direct comparison of approaches across key criteria
+2. Relative strengths and weaknesses of each path
+3. Trade-offs and opportunity costs
+4. Risk assessment for each approach
+5. Recommendations based on different scenarios
+6. Potential for combining elements from multiple paths
+7. Final recommendation with justification
+
+Provide a clear, structured comparison that helps decision-making."
+
+comparative_analysis=$(ollama run "${ANALYSIS_MODEL}" "${COMPARISON_PROMPT}")
+comparative_analysis=$(guard_output_quality "$comparative_analysis" "$PROMPT" "$MECHANISM_NAME" "$ANALYSIS_MODEL")
+
+echo "COMPARATIVE ANALYSIS:" >> "${SESSION_FILE}"
+echo "${comparative_analysis}" >> "${SESSION_FILE}"
+
+# End timing
+duration=$(end_timer "$SESSION_ID" "exploration")
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Exploration process complete."
+echo "Comparative analysis:"
+echo "---------------------------------"
+
+echo "${comparative_analysis}"
+echo ""
+echo "Paths explored: ${NUM_PATHS}"
+echo "Execution time: ${duration} seconds"
+echo ""
+echo "Full exploration log: ${SESSION_FILE}" 
\ No newline at end of file
diff --git a/bash/talk-to-computer/lil_tester.sh b/bash/talk-to-computer/lil_tester.sh
new file mode 100755
index 0000000..8bdcc41
--- /dev/null
+++ b/bash/talk-to-computer/lil_tester.sh
@@ -0,0 +1,288 @@
+#!/bin/bash
+
+# Lil Script Tester - Secure Sandbox Testing Module
+# This module provides secure testing capabilities for Lil scripts generated by the puzzle mechanism.
+#
+# SECURITY FEATURES:
+# - Sandboxed execution environment
+# - Resource limits (CPU, memory, time)
+# - File system isolation
+# - Network access prevention
+# - Safe error handling
+# - Result validation and sanitization
+
+# --- Configuration ---
+TEST_TIMEOUT=10          # Maximum execution time in seconds
+MAX_OUTPUT_SIZE=10000    # Maximum output size in characters
+TEMP_DIR_BASE="/tmp/lil_test"  # Base temporary directory
+SAFE_COMMANDS=("print" "echo" "count" "first" "last" "sum" "min" "max" "range" "keys" "list" "table" "typeof" "mag" "unit")
+
+# --- Security Functions ---
+
+# Create secure temporary directory
+create_secure_temp_dir() {
+    local dir="$1"
+    mkdir -p "$dir"
+    chmod 700 "$dir"
+    
+    # Create a minimal environment
+    echo "()" > "$dir/empty.lil"
+    echo "nil" > "$dir/nil.lil"
+}
+
+# Clean up temporary directory
+cleanup_temp_dir() {
+    local dir="$1"
+    if [ -d "$dir" ]; then
+        rm -rf "$dir" 2>/dev/null
+    fi
+}
+
+# Validate Lil code for potentially dangerous operations
+validate_lil_code() {
+    local code="$1"
+    
+    # Check for potentially dangerous patterns
+    local dangerous_patterns=(
+        'system\['            # System calls
+        'exec\['              # Execution
+        'file\.'              # File operations
+        'network\.'           # Network operations
+        'http\.'              # HTTP requests
+        'shell\['             # Shell execution
+        '[$][(]'              # Command substitution
+        '`'                   # Backtick execution
+    )
+    
+    for pattern in "${dangerous_patterns[@]}"; do
+        if echo "$code" | grep -q "$pattern" 2>/dev/null; then
+            echo "DANGEROUS_CODE_DETECTED: $pattern"
+            return 1
+        fi
+    done
+    
+    # Check for reasonable complexity (prevent infinite loops)
+    local line_count=$(echo "$code" | wc -l)
+    if [ "$line_count" -gt 100 ]; then
+        echo "CODE_TOO_COMPLEX: $line_count lines (max: 100)"
+        return 1
+    fi
+    
+    echo "CODE_VALIDATED"
+    return 0
+}
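The loop above is a plain blocklist scan: reject the input as soon as any pattern matches. The same idea as a standalone sketch, with a hypothetical `is_blocked` helper and a shortened pattern list:

```shell
# Reject input that matches any entry in a blocklist of regular expressions.
is_blocked() {
    local text="$1"
    local patterns=('system\[' '[$][(]' '`')
    for p in "${patterns[@]}"; do
        if echo "$text" | grep -q "$p"; then
            return 0    # blocked
        fi
    done
    return 1            # clean
}

is_blocked 'print[1+2]' && echo "blocked" || echo "clean"
is_blocked 'system["rm -rf /"]' && echo "blocked" || echo "clean"
```

Single-quoting the patterns avoids a round of shell escaping before grep ever sees them.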
+
+# Create a safe test wrapper
+create_safe_test_wrapper() {
+    local code="$1"
+    local test_name="$2"
+    local temp_dir="$3"
+    
+    # Create a safe test file
+    cat > "$temp_dir/test_$test_name.lil" << EOF
+# Safe test wrapper for: $test_name
+# Generated by Lil Tester
+
+# Set safe defaults
+on safe_test do
+    local result
+    local error_occurred
+    
+    # Wrap execution in error handling
+    on execute_safely do
+        $code
+    end
+    
+    # Execute and capture result
+    result:execute_safely[]
+    
+    # Return result or error indicator
+    if result = nil
+        "ERROR: Execution failed or returned nil"
+    else
+        result
+    end
+end
+
+# Run the test
+safe_test[]
+EOF
+}
+
+# Execute Lil code safely
+execute_lil_safely() {
+    local code="$1"
+    local test_name="$2"
+    local temp_dir="$3"
+    
+    # Validate code first; assign separately so $? reflects validate_lil_code,
+    # not the 'local' builtin (which always succeeds)
+    local validation_result
+    validation_result=$(validate_lil_code "$code")
+    if [ $? -ne 0 ]; then
+        echo "VALIDATION_FAILED: $validation_result"
+        return 1
+    fi
+    
+    # Create safe test wrapper
+    create_safe_test_wrapper "$code" "$test_name" "$temp_dir"
+    
+    # Try lilt first, fallback to lila
+    local result=""
+    local exit_code=1
+    
+    # Test with lilt
+    if command -v lilt >/dev/null 2>&1; then
+        echo "Testing with lilt..."
+        result=$(timeout "$TEST_TIMEOUT" lilt "$temp_dir/test_$test_name.lil" 2>&1)
+        exit_code=$?
+        
+        if [ $exit_code -eq 0 ]; then
+            echo "SUCCESS: lilt execution completed"
+        else
+            echo "lilt failed, trying lila..."
+        fi
+    fi
+    
+    # Fallback to lila if lilt failed
+    if [ $exit_code -ne 0 ] && command -v lila >/dev/null 2>&1; then
+        echo "Testing with lila..."
+        result=$(timeout "$TEST_TIMEOUT" lila "$temp_dir/test_$test_name.lil" 2>&1)
+        exit_code=$?
+        
+        if [ $exit_code -eq 0 ]; then
+            echo "SUCCESS: lila execution completed"
+        else
+            echo "Both lilt and lila failed"
+        fi
+    fi
+    
+    # Check output size
+    local output_size=${#result}
+    if [ "$output_size" -gt "$MAX_OUTPUT_SIZE" ]; then
+        result="$(echo "$result" | head -c "$MAX_OUTPUT_SIZE")... [TRUNCATED]"
+    fi
+    
+    echo "$result"
+    return $exit_code
+}
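The try-lilt-then-lila flow above is a general try-primary-then-fall-back pattern. A sketch of just that skeleton, where `primary_tool` and `fallback_tool` are hypothetical placeholder commands, not real interpreters:

```shell
# Try a primary command under a timeout; fall back to a second command only
# if the primary is missing or fails. Returns the last exit code seen.
run_with_fallback() {
    local file="$1" result="" exit_code=1
    if command -v primary_tool >/dev/null 2>&1; then
        result=$(timeout 10 primary_tool "$file" 2>&1)
        exit_code=$?
    fi
    if [ $exit_code -ne 0 ] && command -v fallback_tool >/dev/null 2>&1; then
        result=$(timeout 10 fallback_tool "$file" 2>&1)
        exit_code=$?
    fi
    echo "$result"
    return $exit_code
}
```

Initializing `exit_code=1` is what makes "primary not installed" fall through to the fallback branch.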
+
+# Run comprehensive tests
+run_lil_tests() {
+    local code="$1"
+    local test_name="$2"
+    
+    # Create unique temporary directory
+    local temp_dir="${TEMP_DIR_BASE}_$$_$(date +%s)"
+    
+    echo "=== Lil Script Testing ==="
+    echo "Test Name: $test_name"
+    echo "Code Length: $(echo "$code" | wc -c) characters"
+    echo "----------------------------------------"
+    
+    # Create secure temporary directory
+    create_secure_temp_dir "$temp_dir"
+    
+    # Trap cleanup on exit
+    trap 'cleanup_temp_dir "$temp_dir"' EXIT
+    
+    # Execute the code safely
+    local start_time=$(date +%s.%N)
+    local result
+    result=$(execute_lil_safely "$code" "$test_name" "$temp_dir")
+    local exit_code=$?
+    local end_time=$(date +%s.%N)
+    
+    # Calculate execution time
+    local duration=$(echo "$end_time - $start_time" | bc -l 2>/dev/null || echo "0")
+    
+    # Report results
+    echo "----------------------------------------"
+    echo "Test Results:"
+    echo "Exit Code: $exit_code"
+    echo "Execution Time: ${duration}s"
+    echo "Output:"
+    echo "$result"
+    
+    if [ $exit_code -eq 0 ]; then
+        echo "✅ Test PASSED"
+        return 0
+    else
+        echo "❌ Test FAILED"
+        return 1
+    fi
+}
+
+# Test specific Lil constructs
+test_lil_constructs() {
+    local code="$1"
+    local test_name="$2"
+    
+    # Create unique temporary directory for construct testing
+    local temp_dir="${TEMP_DIR_BASE}_constructs_$$_$(date +%s)"
+    
+    echo "=== Lil Construct Testing ==="
+    echo "Testing specific Lil language features..."
+    
+    # Create and cleanup temp dir
+    create_secure_temp_dir "$temp_dir"
+    trap 'cleanup_temp_dir "$temp_dir"' EXIT
+    
+    # Test basic operations
+    local basic_tests=(
+        "Basic arithmetic: 2+3*4"
+        "List operations: (1,2,3) take 2"
+        "Dictionary: dict (\"a\",1) (\"b\",2)"
+        "Function definition: on test do 42 end"
+    )
+    
+    for test in "${basic_tests[@]}"; do
+        local test_desc=$(echo "$test" | cut -d: -f1)
+        local test_code=$(echo "$test" | cut -d: -f2)
+        
+        echo "Testing: $test_desc"
+        local result exit_code
+        result=$(execute_lil_safely "$test_code" "basic_$test_desc" "$temp_dir")
+        exit_code=$?
+
+        if [ $exit_code -eq 0 ]; then
+            echo "  ✅ $test_desc: PASSED"
+        else
+            echo "  ❌ $test_desc: FAILED"
+        fi
+    done
+}
+
+# Main testing interface
+test_lil_script() {
+    local code="$1"
+    local test_name="${2:-unnamed_test}"
+    
+    if [ -z "$code" ]; then
+        echo "Error: No code provided for testing"
+        return 1
+    fi
+    
+    # Run the main test
+    run_lil_tests "$code" "$test_name"
+    local main_result=$?
+    
+    # Run construct-specific tests
+    test_lil_constructs "$code" "$test_name"
+    
+    return $main_result
+}
+
+# Export functions for use by other scripts
+export -f test_lil_script
+export -f run_lil_tests
+export -f execute_lil_safely
+export -f validate_lil_code
+export -f create_secure_temp_dir
+export -f cleanup_temp_dir
+
+# If run directly, provide usage information
+if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
+    if [ "$#" -lt 1 ]; then
+        echo "Usage: $0 <lil_code> [test_name]"
+        echo "Example: $0 'on test do 42 end' 'simple_function'"
+        exit 1
+    fi
+    
+    test_lil_script "$1" "${2:-unnamed_test}"
+fi
diff --git a/bash/talk-to-computer/logging.sh b/bash/talk-to-computer/logging.sh
new file mode 100755
index 0000000..c8a61d1
--- /dev/null
+++ b/bash/talk-to-computer/logging.sh
@@ -0,0 +1,247 @@
+#!/bin/bash
+
+# Unified Logging System
+# This script provides consistent logging and performance metrics across all thinking mechanisms.
+
+# --- Logging Configuration ---
+LOG_DIR=~/tmp/ai_thinking
+METRICS_FILE="${LOG_DIR}/performance_metrics.json"
+SESSION_LOG="${LOG_DIR}/session_$(date +%Y%m%d_%H%M%S).json"
+ERROR_LOG="${LOG_DIR}/errors.log"
+
+# Create logging directory
+mkdir -p "${LOG_DIR}"
+
+# --- Error Logging ---
+log_error() {
+    local message="$1"
+    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
+    echo "[ERROR] ${timestamp}: ${message}" >> "${ERROR_LOG}"
+    echo "Error: ${message}" >&2
+}
+
+log_warning() {
+    local message="$1"
+    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
+    echo "[WARNING] ${timestamp}: ${message}" >> "${ERROR_LOG}"
+    echo "Warning: ${message}" >&2
+}
+
+# --- Input Validation Functions ---
+validate_file_path() {
+    local file_path="$1"
+    
+    if [ -z "$file_path" ]; then
+        return 0  # Empty path is valid (optional)
+    fi
+    
+    if [ ! -f "$file_path" ]; then
+        log_error "File not found: $file_path"
+        return 1
+    fi
+    
+    if [ ! -r "$file_path" ]; then
+        log_error "File not readable: $file_path"
+        return 1
+    fi
+    
+    return 0
+}
+
+validate_prompt() {
+    local prompt="$1"
+    local max_length=10000
+    
+    if [ -z "$prompt" ] || [[ "$prompt" =~ ^[[:space:]]*$ ]]; then
+        log_error "Empty or whitespace-only prompt"
+        return 1
+    fi
+    
+    if [ ${#prompt} -gt $max_length ]; then
+        log_error "Prompt too long (${#prompt} chars, max: $max_length)"
+        return 1
+    fi
+    
+    # Basic sanitization - remove potentially dangerous characters
+    local sanitized=$(echo "$prompt" | sed 's/[<>"'\''&]//g' 2>/dev/null || echo "$prompt")
+    if [ "$sanitized" != "$prompt" ]; then
+        log_warning "Prompt contained special characters that were sanitized"
+        echo "$sanitized"
+        return 0
+    fi
+    
+    echo "$prompt"
+    return 0
+}
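The sanitization step is a single character-class deletion in sed; isolated, with sample input showing exactly what it strips:

```shell
# Delete the shell-sensitive characters < > " ' & from a string.
raw='hello <world> & "friends"'
clean=$(echo "$raw" | sed 's/[<>"'\''&]//g')
echo "$clean"
```

The awkward `'\''` sequence is the standard idiom for embedding a single quote inside a single-quoted shell string.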
+
+# --- Model Validation ---
+validate_model() {
+    local model="$1"
+    local fallback_model="$2"
+    
+    if ! command -v ollama >/dev/null 2>&1; then
+        log_error "Ollama not found in PATH"
+        return 1
+    fi
+    
+    if ! ollama list | grep -q "^${model}"; then
+        log_warning "Model '$model' not available"
+        if [ -n "$fallback_model" ] && ollama list | grep -q "^${fallback_model}"; then
+            log_warning "Falling back to '$fallback_model'"
+            echo "$fallback_model"
+            return 0
+        else
+            log_error "No fallback model available"
+            return 1
+        fi
+    fi
+    
+    echo "$model"
+    return 0
+}
+
+# --- Timing Functions ---
+start_timer() {
+    local session_id="$1"
+    local mechanism="$2"
+    local start_time=$(date +%s.%N)
+    
+    # Store start time
+    echo "$start_time" > "/tmp/${session_id}_start"
+    
+    # Log session start
+    log_session_start "$session_id" "$mechanism" "$start_time"
+}
+
+end_timer() {
+    local session_id="$1"
+    local mechanism="$2"
+    local end_time=$(date +%s.%N)
+    local start_time=$(cat "/tmp/${session_id}_start" 2>/dev/null || echo "$end_time")
+    
+    # Calculate duration
+    local duration=$(echo "$end_time - $start_time" | bc -l 2>/dev/null || echo "0")
+    
+    # Log session end
+    log_session_end "$session_id" "$mechanism" "$end_time" "$duration"
+    
+    # Clean up
+    rm -f "/tmp/${session_id}_start"
+    
+    echo "$duration"
+}
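`start_timer`/`end_timer` reduce to subtracting two fractional epoch timestamps. The arithmetic in isolation, with fixed sample values in place of `date +%s.%N`:

```shell
# Subtract two fractional timestamps with bc; fall back to 0 if bc is missing.
start_time="100.25"
end_time="103.75"
duration=$(echo "$end_time - $start_time" | bc -l 2>/dev/null || echo "0")
echo "$duration"
```

`bc -l` is used because plain shell arithmetic (`$(( ))`) is integer-only and would choke on the nanosecond fractions.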
+
+# --- Session Logging ---
+log_session_start() {
+    local session_id="$1"
+    local mechanism="$2"
+    local start_time="$3"
+    
+    cat > "${SESSION_LOG}" << EOF
+{
+  "session_id": "${session_id}",
+  "mechanism": "${mechanism}",
+  "start_time": "${start_time}",
+  "prompt": "${PROMPT:-""}",
+  "status": "started"
+}
+EOF
+}
+
+log_session_end() {
+    local session_id="$1"
+    local mechanism="$2"
+    local end_time="$3"
+    local duration="$4"
+    
+    # Read the start time before the > redirection truncates the session log;
+    # expanding it inside the here-document would read an already-empty file
+    local start_time=$(jq -r '.start_time' "${SESSION_LOG}" 2>/dev/null || echo "")
+
+    # Update session log
+    cat > "${SESSION_LOG}" << EOF
+{
+  "session_id": "${session_id}",
+  "mechanism": "${mechanism}",
+  "start_time": "${start_time}",
+  "end_time": "${end_time}",
+  "duration": "${duration}",
+  "prompt": "${PROMPT:-""}",
+  "status": "completed"
+}
+EOF
+    
+    # Update metrics file
+    update_metrics "$mechanism" "$duration"
+}
+
+# --- Metrics Management ---
+update_metrics() {
+    local mechanism="$1"
+    local duration="$2"
+    
+    # Create metrics file if it doesn't exist
+    if [ ! -f "${METRICS_FILE}" ]; then
+        cat > "${METRICS_FILE}" << EOF
+{
+  "total_sessions": 0,
+  "mechanisms": {},
+  "average_durations": {}
+}
+EOF
+    fi
+    
+    # Update metrics using jq (if available) or simple text processing
+    if command -v jq >/dev/null 2>&1; then
+        # Use jq for proper JSON handling
+        local temp_file=$(mktemp)
+        jq --arg mechanism "$mechanism" --arg duration "$duration" '
+            .total_sessions += 1 |
+            .mechanisms[$mechanism] = (.mechanisms[$mechanism] // 0) + 1 |
+            .average_durations[$mechanism] = (
+                (.average_durations[$mechanism] // 0) * (.mechanisms[$mechanism] - 1) + ($duration | tonumber)
+            ) / .mechanisms[$mechanism]
+        ' "${METRICS_FILE}" > "$temp_file"
+        mv "$temp_file" "${METRICS_FILE}"
+    else
+        # Fallback: simple text-based metrics
+        echo "$(date +%Y%m%d_%H%M%S),${mechanism},${duration}" >> "${LOG_DIR}/simple_metrics.csv"
+    fi
+}
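The jq filter above maintains a running average without storing individual durations: new_avg = (old_avg * (n - 1) + d) / n, where n is the mechanism's count after incrementing. The same filter isolated with sample data (this sketch requires jq):

```shell
# Skip the sketch entirely when jq is not installed.
command -v jq >/dev/null 2>&1 || exit 0

# Two prior sessions averaging 10s; record a third taking 16s -> average 12s.
metrics='{"total_sessions":2,"mechanisms":{"critique":2},"average_durations":{"critique":10}}'
updated=$(echo "$metrics" | jq --arg mechanism "critique" --arg duration "16" '
    .total_sessions += 1 |
    .mechanisms[$mechanism] = (.mechanisms[$mechanism] // 0) + 1 |
    .average_durations[$mechanism] = (
        (.average_durations[$mechanism] // 0) * (.mechanisms[$mechanism] - 1) + ($duration | tonumber)
    ) / .mechanisms[$mechanism]
')
echo "$updated" | jq -r '.average_durations.critique'
```

The `// 0` alternative operator is what lets the first session for a mechanism start cleanly from a missing key.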
+
+# --- Utility Functions ---
+generate_session_id() {
+    echo "session_$(date +%Y%m%d_%H%M%S)_$$"
+}
+
+get_metrics_summary() {
+    if [ -f "${METRICS_FILE}" ]; then
+        echo "=== Performance Metrics ==="
+        if command -v jq >/dev/null 2>&1; then
+            jq -r '.mechanisms | to_entries[] | "\(.key): \(.value) sessions"' "${METRICS_FILE}"
+            echo ""
+            jq -r '.average_durations | to_entries[] | "\(.key): \(.value | tonumber | floor)s average"' "${METRICS_FILE}"
+        else
+            echo "Metrics available in: ${METRICS_FILE}"
+        fi
+    else
+        echo "No metrics available yet."
+    fi
+}
+
+# --- Export Functions for Other Scripts ---
+export -f start_timer
+export -f end_timer
+export -f log_session_start
+export -f log_session_end
+export -f update_metrics
+export -f generate_session_id
+export -f get_metrics_summary
+export -f log_error
+export -f log_warning
+export -f validate_file_path
+export -f validate_prompt
+export -f validate_model
+
+# Alternative export method for compatibility
+if [ -n "$BASH_VERSION" ]; then
+    export start_timer end_timer log_session_start log_session_end update_metrics
+    export generate_session_id get_metrics_summary log_error log_warning
+    export validate_file_path validate_prompt validate_model
+fi 
\ No newline at end of file
diff --git a/bash/talk-to-computer/metrics b/bash/talk-to-computer/metrics
new file mode 100755
index 0000000..ad430b5
--- /dev/null
+++ b/bash/talk-to-computer/metrics
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+# Metrics Viewer
+# This script displays performance metrics from the unified logging system.
+
+# Source the logging system
+source ./logging.sh
+
+echo "AI Thinking System - Performance Metrics"
+echo "========================================"
+echo ""
+
+# Display metrics summary
+get_metrics_summary
+
+echo ""
+echo "Detailed metrics file: ${METRICS_FILE}"
+echo "Session logs directory: ${LOG_DIR}" 
\ No newline at end of file
diff --git a/bash/talk-to-computer/model_selector.sh b/bash/talk-to-computer/model_selector.sh
new file mode 100755
index 0000000..d3a46a1
--- /dev/null
+++ b/bash/talk-to-computer/model_selector.sh
@@ -0,0 +1,380 @@
+#!/bin/bash
+
+# Dynamic Model Selector
+# Intelligently selects models based on task type, availability, and capabilities
+
+# --- Model Capability Database ---
+
+# Model database using simple variables (compatible with older bash)
+MODEL_DB_DIR="${LOG_DIR:-/tmp/ai_thinking}/model_db"
+
+# Initialize model database with simple variables
+init_model_database() {
+    # Create model database directory
+    mkdir -p "$MODEL_DB_DIR"
+
+    # Model capabilities by task type (using file-based storage for compatibility)
+
+    # === CONFIGURED MODELS ===
+    cat > "$MODEL_DB_DIR/llama3_8b_instruct_q4_K_M" << 'EOF'
+coding=0.8
+reasoning=0.9
+creative=0.7
+size=8
+speed=0.8
+EOF
+
+    cat > "$MODEL_DB_DIR/phi3_3_8b_mini_4k_instruct_q4_K_M" << 'EOF'
+coding=0.7
+reasoning=0.8
+creative=0.6
+size=3.8
+speed=0.9
+EOF
+
+    cat > "$MODEL_DB_DIR/deepseek_r1_1_5b" << 'EOF'
+coding=0.6
+reasoning=0.9
+creative=0.5
+size=1.5
+speed=0.95
+EOF
+
+    cat > "$MODEL_DB_DIR/gemma3n_e2b" << 'EOF'
+coding=0.8
+reasoning=0.8
+creative=0.8
+size=2
+speed=0.85
+EOF
+
+    cat > "$MODEL_DB_DIR/dolphin3_latest" << 'EOF'
+coding=0.6
+reasoning=0.7
+creative=0.8
+size=7
+speed=0.7
+EOF
+
+    # === ADDITIONAL MODELS FROM OLLAMA LIST ===
+
+    # Llama 3.1 - Newer version, should be similar to Llama 3 but potentially better
+    cat > "$MODEL_DB_DIR/llama3_1_8b" << 'EOF'
+coding=0.82
+reasoning=0.92
+creative=0.72
+size=8
+speed=0.82
+EOF
+
+    # DeepSeek R1 7B - Larger reasoning model
+    cat > "$MODEL_DB_DIR/deepseek_r1_7b" << 'EOF'
+coding=0.65
+reasoning=0.95
+creative=0.55
+size=7
+speed=0.7
+EOF
+
+    # Gemma 3N Latest - Larger version of e2b
+    cat > "$MODEL_DB_DIR/gemma3n_latest" << 'EOF'
+coding=0.82
+reasoning=0.82
+creative=0.82
+size=7.5
+speed=0.8
+EOF
+
+    # Gemma 3 4B - Different model family
+    cat > "$MODEL_DB_DIR/gemma3_4b" << 'EOF'
+coding=0.75
+reasoning=0.78
+creative=0.75
+size=4
+speed=0.85
+EOF
+
+    # Qwen2.5 7B - Alibaba model, general purpose
+    cat > "$MODEL_DB_DIR/qwen2_5_7b" << 'EOF'
+coding=0.78
+reasoning=0.85
+creative=0.7
+size=7
+speed=0.75
+EOF
+
+    # Qwen3 8B - Latest Qwen model
+    cat > "$MODEL_DB_DIR/qwen3_8b" << 'EOF'
+coding=0.8
+reasoning=0.88
+creative=0.72
+size=8
+speed=0.78
+EOF
+
+    # Qwen3 4B - Smaller Qwen model
+    cat > "$MODEL_DB_DIR/qwen3_4b" << 'EOF'
+coding=0.75
+reasoning=0.82
+creative=0.68
+size=4
+speed=0.85
+EOF
+
+    # Qwen3 1.7B - Smallest Qwen model
+    cat > "$MODEL_DB_DIR/qwen3_1_7b" << 'EOF'
+coding=0.65
+reasoning=0.7
+creative=0.6
+size=1.7
+speed=0.95
+EOF
+
+    # DeepScaler - Performance optimization focus
+    cat > "$MODEL_DB_DIR/deepscaler_latest" << 'EOF'
+coding=0.7
+reasoning=0.8
+creative=0.65
+size=3.6
+speed=0.88
+EOF
+
+    # Yasser Qwen2.5 - Fine-tuned variant
+    cat > "$MODEL_DB_DIR/yasserrmd_Qwen2_5_7B_Instruct_1M_latest" << 'EOF'
+coding=0.82
+reasoning=0.9
+creative=0.75
+size=7
+speed=0.75
+EOF
+
+    # Nomic Embed Text - Specialized for embeddings, not general tasks
+    cat > "$MODEL_DB_DIR/nomic_embed_text_latest" << 'EOF'
+coding=0.1
+reasoning=0.1
+creative=0.1
+size=0.274
+speed=0.95
+EOF
+}
+
+# Get model capability score
+get_model_capability() {
+    local model_key="$1"
+    local task_type="$2"
+
+    # Convert model name to filename-friendly format
+    local safe_name=$(echo "$model_key" | tr ':' '_' | tr '.' '_')
+    local db_file="$MODEL_DB_DIR/$safe_name"
+
+    if [ -f "$db_file" ]; then
+        grep "^${task_type}=" "$db_file" | cut -d'=' -f2
+    else
+        echo "0.5"  # Default capability score
+    fi
+}
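The capability store is just one key=value file per model, read back with grep and cut. A self-contained sketch of the lookup against a throwaway database directory:

```shell
# Build a one-model database in a temp dir and look up a score.
db=$(mktemp -d)
cat > "$db/gemma3n_e2b" << 'EOF'
coding=0.8
reasoning=0.8
size=2
EOF

# Anchor on the key name, then take everything after the '='.
score=$(grep "^reasoning=" "$db/gemma3n_e2b" | cut -d'=' -f2)
echo "$score"
rm -rf "$db"
```

The `^` anchor matters: without it, a key like `size` would also match a hypothetical `max_size` line.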
+
+# Get model size
+get_model_size() {
+    local model_key="$1"
+    local safe_name=$(echo "$model_key" | tr ':' '_' | tr '.' '_')
+    local db_file="$MODEL_DB_DIR/$safe_name"
+
+    if [ -f "$db_file" ]; then
+        grep "^size=" "$db_file" | cut -d'=' -f2
+    else
+        echo "5"  # Default size
+    fi
+}
+
+# Get model speed
+get_model_speed() {
+    local model_key="$1"
+    local safe_name=$(echo "$model_key" | tr ':' '_' | tr '.' '_')
+    local db_file="$MODEL_DB_DIR/$safe_name"
+
+    if [ -f "$db_file" ]; then
+        grep "^speed=" "$db_file" | cut -d'=' -f2
+    else
+        echo "0.5"  # Default speed
+    fi
+}
+
+# --- Model Discovery ---
+
+# Get list of available models
+get_available_models() {
+    ollama list 2>/dev/null | tail -n +2 | awk '{print $1}' | sort
+}
+
+# Check if a model is available
+is_model_available() {
+    local model="$1"
+    ollama list 2>/dev/null | grep -q "^${model}[[:space:]]"
+}
+
+# --- Task Type Classification ---
+
+# Classify task type from prompt and mechanism
+classify_task_type() {
+    local prompt="$1"
+    local mechanism="$2"
+
+    # Task type classification based on mechanism
+    case "$mechanism" in
+        "puzzle")
+            echo "coding"
+            ;;
+        "socratic")
+            echo "reasoning"
+            ;;
+        "exploration")
+            echo "reasoning"
+            ;;
+        "consensus")
+            echo "reasoning"
+            ;;
+        "critique")
+            echo "reasoning"
+            ;;
+        "synthesis")
+            echo "reasoning"
+            ;;
+        "peer-review")
+            echo "reasoning"
+            ;;
+        *)
+            # Fallback to keyword-based classification
+            if echo "$prompt" | grep -q -i "code\|algorithm\|function\|program\|implement"; then
+                echo "coding"
+            elif echo "$prompt" | grep -q -i "write\|story\|creative\|poem\|essay"; then
+                echo "creative"
+            else
+                echo "reasoning"
+            fi
+            ;;
+    esac
+}
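+
+# Example classifications (hypothetical prompts; a mechanism not listed in the
+# case statement falls through to the keyword check):
+#   classify_task_type "..." "puzzle"                        # -> coding
+#   classify_task_type "implement a sort function" "chat"    # -> coding (keyword match)
+#   classify_task_type "write a poem about rain" "chat"      # -> creative
+#   classify_task_type "explain quantum entanglement" "chat" # -> reasoning (fallback)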
+
+# --- Model Selection Logic ---
+
+# Select best model for task
+select_best_model() {
+    local task_type="$1"
+    local available_models="$2"
+    local preferred_models="$3"
+
+    local best_model=""
+    local best_score=0
+
+    # First, try preferred models if available
+    if [ -n "$preferred_models" ]; then
+        for model in $preferred_models; do
+            if echo "$available_models" | grep -q "^${model}$" && is_model_available "$model"; then
+                local capability_score=$(get_model_capability "$model" "$task_type")
+                local speed_score=$(get_model_speed "$model")
+                local model_size=$(get_model_size "$model")
+                local size_score=$(echo "scale=2; $model_size / 10" | bc -l 2>/dev/null || echo "0.5")
+
+                # Calculate weighted score (capability is most important)
+                local total_score=$(echo "scale=2; ($capability_score * 0.6) + ($speed_score * 0.3) + ($size_score * 0.1)" | bc -l 2>/dev/null || echo "0.5")
+
+                if (( $(echo "$total_score > $best_score" | bc -l 2>/dev/null || echo "0") )); then
+                    best_score=$total_score
+                    best_model=$model
+                fi
+            fi
+        done
+    fi
+
+    # If no preferred model is good, find best available model
+    if [ -z "$best_model" ]; then
+        for model in $available_models; do
+            if is_model_available "$model"; then
+                local capability_score=$(get_model_capability "$model" "$task_type")
+                local speed_score=$(get_model_speed "$model")
+                local model_size=$(get_model_size "$model")
+                local size_score=$(echo "scale=2; $model_size / 10" | bc -l 2>/dev/null || echo "0.5")
+
+                local total_score=$(echo "scale=2; ($capability_score * 0.6) + ($speed_score * 0.3) + ($size_score * 0.1)" | bc -l 2>/dev/null || echo "0.5")
+
+                if (( $(echo "$total_score > $best_score" | bc -l 2>/dev/null || echo "0") )); then
+                    best_score=$total_score
+                    best_model=$model
+                fi
+            fi
+        done
+    fi
+
+    if [ -n "$best_model" ]; then
+        echo "Selected model: $best_model (score: $best_score, task: $task_type)" >&2
+        echo "$best_model"
+    else
+        echo "No suitable model found" >&2
+        echo ""
+    fi
+}
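+
+# Worked scoring example (hypothetical values): capability=0.9, speed=0.6, size=8
+#   size_score  = 8 / 10                      = 0.80
+#   total_score = 0.9*0.6 + 0.6*0.3 + 0.8*0.1 = 0.54 + 0.18 + 0.08 = 0.80
+# Capability dominates the weighting, so a strong small model can beat a weak large one.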
+
+# --- Main Selection Function ---
+
+# Smart model selection
+select_model_for_task() {
+    local prompt="$1"
+    local mechanism="$2"
+    local preferred_models="$3"
+
+    # Initialize database
+    init_model_database
+
+    # Get available models
+    local available_models
+    available_models=$(get_available_models)
+
+    if [ -z "$available_models" ]; then
+        echo "No models available via Ollama" >&2
+        echo ""
+        return 1
+    fi
+
+    # Classify task type
+    local task_type
+    task_type=$(classify_task_type "$prompt" "$mechanism")
+
+    # Select best model
+    local selected_model
+    selected_model=$(select_best_model "$task_type" "$available_models" "$preferred_models")
+
+    if [ -n "$selected_model" ]; then
+        echo "$selected_model"
+        return 0
+    else
+        echo ""
+        return 1
+    fi
+}
+
+# --- Utility Functions ---
+
+# Get model info
+get_model_info() {
+    local model="$1"
+
+    echo "Model: $model"
+    echo "Size: $(get_model_size "$model")B"
+    echo "Speed: $(get_model_speed "$model")"
+    echo "Coding: $(get_model_capability "$model" "coding")"
+    echo "Reasoning: $(get_model_capability "$model" "reasoning")"
+    echo "Creative: $(get_model_capability "$model" "creative")"
+}
+
+# Export functions
+export -f init_model_database
+export -f get_available_models
+export -f is_model_available
+export -f classify_task_type
+export -f select_best_model
+export -f select_model_for_task
+export -f get_model_info
+export -f get_model_capability
+export -f get_model_size
+export -f get_model_speed
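+
+# Example usage from a mechanism script (illustrative; the preferred-model name
+# is a placeholder and must appear in `ollama list` to be selected):
+#   source "${SCRIPT_DIR}/model_selector.sh"
+#   model=$(select_model_for_task "$PROMPT" "puzzle" "llama3:8b-instruct-q4_K_M")
+#   [ -n "$model" ] && response=$(ollama run "$model" "$PROMPT")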
diff --git a/bash/talk-to-computer/peer-review b/bash/talk-to-computer/peer-review
new file mode 100755
index 0000000..e674375
--- /dev/null
+++ b/bash/talk-to-computer/peer-review
@@ -0,0 +1,275 @@
+#!/bin/bash
+
+# Peer Review System
+# This script implements a peer review process where one model provides an initial response
+# and other models review and suggest improvements through iterative refinement.
+#
+# APPLICATION LOGIC:
+# The peer review process implements a collaborative refinement system where AI models
+# provide structured feedback to improve response quality. The system operates through
+# three distinct phases designed to enhance clarity, accuracy, and completeness:
+#
+# PHASE 1 - INITIAL RESPONSE GENERATION:
+#   - A randomly selected model generates the first response to the user's prompt
+#   - Random selection helps prevent bias toward any particular model
+#   - The initial response serves as the foundation for peer review
+#   - This creates a starting point that can be improved through collective feedback
+#
+# PHASE 2 - PEER REVIEW:
+#   - Other models analyze the initial response and provide structured feedback
+#   - Reviewers suggest specific edits, clarifications, and improvements
+#   - Feedback focuses on clarity, completeness, accuracy, and organization
+#   - Multiple perspectives may identify different areas for improvement
+#
+# PHASE 3 - RESPONSE REFINEMENT:
+#   - The original responding model incorporates peer feedback to create an improved response
+#   - The model may revise, expand, clarify, or reorganize based on suggestions
+#   - Iterative improvement may address multiple rounds of feedback
+#   - The process emphasizes collaborative enhancement rather than replacement
+#
+# PEER REVIEW MODELING:
+# The system applies academic peer review principles to AI response refinement:
+#   - Random author selection helps prevent systematic bias in initial responses
+#   - Multiple reviewers provide diverse perspectives on the same work
+#   - Structured feedback focuses on specific improvements rather than general criticism
+#   - Author retains control over final revisions while considering peer input
+#   - Transparency through logging shows the evolution of the response
+#   - The process may help catch errors, improve clarity, or enhance completeness
+#
+# The peer review process continues for a configurable number of iterations,
+# with each cycle potentially improving upon the previous version.
+# The system emphasizes collaborative improvement through structured feedback and revision.
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Source the logging system
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# --- Model Configuration ---
+MODELS=(
+    "llama3:8b-instruct-q4_K_M"
+    "phi3:3.8b-mini-4k-instruct-q4_K_M"
+    "deepseek-r1:1.5b"
+    "gemma3n:e2b"
+    "dolphin3:latest"
+)
+
+# --- Defaults ---
+DEFAULT_ITERATIONS=1
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tPeer Review"
+    echo -e "\tThis script implements a peer review process where one model provides an initial response"
+    echo -e "\tand other models review and suggest improvements through iterative refinement."
+    echo -e "\n\tUsage: $0 [-f <file_path>] \"<your prompt>\" [number_of_review_iterations]"
+    echo -e "\n\tExample: $0 -f ./input.txt \"Please analyze this text\" 1"
+    echo -e "\n\tIf number_of_review_iterations is not provided, the program will default to $DEFAULT_ITERATIONS iterations."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+while getopts "f:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ITERATIONS=$DEFAULT_ITERATIONS
+else
+    ITERATIONS=$2
+fi
+
+# If file path is provided, append its contents to the prompt
+if [ -n "$FILE_PATH" ]; then
+    if [ ! -f "$FILE_PATH" ]; then
+        echo "File not found: $FILE_PATH" >&2
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT="${PROMPT}"$'\n'"[FILE CONTENTS]"$'\n'"${FILE_CONTENTS}"$'\n'"[END FILE]"
+fi
+
+# --- File Initialization ---
+# Create a temporary directory if it doesn't exist
+mkdir -p ~/tmp
+# Create a unique file for this session based on the timestamp
+SESSION_FILE=~/tmp/peer-review_$(date +%Y%m%d_%H%M%S).txt
+
+echo "Peer Review Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+echo "Processing peer review with ${ITERATIONS} iteration(s)..."
+
+# --- Random Author Selection ---
+AUTHOR_INDEX=$((RANDOM % ${#MODELS[@]}))
+AUTHOR_MODEL="${MODELS[$AUTHOR_INDEX]}"
+
+echo "Author model selected: ${AUTHOR_MODEL}"
+echo "AUTHOR MODEL: ${AUTHOR_MODEL}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Initial Response Generation ---
+echo "Generating initial response..."
+echo "INITIAL RESPONSE GENERATION:" >> "${SESSION_FILE}"
+echo "============================" >> "${SESSION_FILE}"
+
+INITIAL_PROMPT="You are an expert assistant. Please provide a comprehensive response to the following prompt. Be thorough, clear, and well-organized in your response.
+
+PROMPT: ${PROMPT}"
+
+INITIAL_RESPONSE=$(ollama run "${AUTHOR_MODEL}" "${INITIAL_PROMPT}")
+INITIAL_RESPONSE=$(guard_output_quality "$INITIAL_RESPONSE" "$PROMPT" "$MECHANISM_NAME" "$AUTHOR_MODEL")
+
+echo "INITIAL RESPONSE (${AUTHOR_MODEL}):" >> "${SESSION_FILE}"
+echo "${INITIAL_RESPONSE}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Peer Review Iterations ---
+CURRENT_RESPONSE="${INITIAL_RESPONSE}"
+
+for iteration in $(seq 1 "${ITERATIONS}"); do
+    echo "Starting peer review iteration ${iteration} of ${ITERATIONS}..."
+    echo "PEER REVIEW ITERATION ${iteration}:" >> "${SESSION_FILE}"
+    echo "=================================" >> "${SESSION_FILE}"
+    
+    # --- Step 1: Generate Peer Reviews ---
+    echo "Step 1: Generating peer reviews..."
+    echo "STEP 1 - PEER REVIEWS:" >> "${SESSION_FILE}"
+    
+    declare -a reviews
+    declare -a reviewer_names
+    
+    review_count=0
+    
+    for i in "${!MODELS[@]}"; do
+        # Skip the author model
+        if [ "$i" -eq "$AUTHOR_INDEX" ]; then
+            continue
+        fi
+        
+        model="${MODELS[$i]}"
+        echo "  Getting peer review from ${model}..."
+        
+        REVIEW_PROMPT="You are a peer reviewer. Your task is to provide constructive feedback on the following response. Focus on specific suggestions for improvement.
+
+REVIEW GUIDELINES:
+- Suggest specific edits or clarifications
+- Identify areas that could be made clearer or more concise
+- Point out any inaccuracies or missing information
+- Suggest improvements to organization or structure
+- Be constructive and specific in your feedback
+
+RESPONSE TO REVIEW: ${CURRENT_RESPONSE}
+
+Please provide your peer review feedback in a clear, structured format. Focus on actionable suggestions for improvement."
+
+        review_output=$(ollama run "${model}" "${REVIEW_PROMPT}")
+        review_output=$(guard_output_quality "$review_output" "$PROMPT" "$MECHANISM_NAME" "$model")
+        
+        reviews[$review_count]="${review_output}"
+        reviewer_names[$review_count]="${model}"
+        
+        echo "REVIEW $((review_count + 1)) (${model}):" >> "${SESSION_FILE}"
+        echo "${review_output}" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+        
+        review_count=$((review_count + 1))
+    done
+    
+    # --- Step 2: Generate Refined Response ---
+    echo "Step 2: Generating refined response based on peer feedback..."
+    echo "STEP 2 - RESPONSE REFINEMENT:" >> "${SESSION_FILE}"
+    
+    # Combine all reviews for the author
+    COMBINED_REVIEWS=""
+    for i in $(seq 0 $((review_count - 1))); do
+        COMBINED_REVIEWS="${COMBINED_REVIEWS}
+
+REVIEW FROM ${reviewer_names[$i]}:
+${reviews[$i]}"
+    done
+    
+    REFINE_PROMPT="You are the author of the following response. Your peers have provided constructive feedback to help improve your work. Please revise your response based on their suggestions.
+
+ORIGINAL PROMPT: ${PROMPT}
+YOUR CURRENT RESPONSE: ${CURRENT_RESPONSE}
+PEER REVIEW FEEDBACK: ${COMBINED_REVIEWS}
+
+Please provide a revised version of your response that:
+- Incorporates the constructive feedback from your peers
+- Addresses specific suggestions for improvement
+- Maintains your original insights while enhancing clarity and completeness
+- Shows how you've responded to the peer review process"
+
+    REFINED_RESPONSE=$(ollama run "${AUTHOR_MODEL}" "${REFINE_PROMPT}")
+    REFINED_RESPONSE=$(guard_output_quality "$REFINED_RESPONSE" "$PROMPT" "$MECHANISM_NAME" "$AUTHOR_MODEL")
+    
+    echo "REFINED RESPONSE (${AUTHOR_MODEL}):" >> "${SESSION_FILE}"
+    echo "${REFINED_RESPONSE}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+    
+    # Update the current response for the next iteration
+    CURRENT_RESPONSE="${REFINED_RESPONSE}"
+    
+    echo "Peer review iteration ${iteration} complete."
+    echo "" >> "${SESSION_FILE}"
+done
+
+# --- Final Summary Generation ---
+echo "Generating final summary..."
+echo "FINAL SUMMARY GENERATION:" >> "${SESSION_FILE}"
+echo "========================" >> "${SESSION_FILE}"
+
+SUMMARY_PROMPT="You are an expert analyst. Based on the peer review process below, please provide a concise summary of the key improvements made and the overall quality of the final response.
+
+ORIGINAL PROMPT: ${PROMPT}
+FINAL REFINED RESPONSE: ${CURRENT_RESPONSE}
+
+Please provide a summary that:
+- Highlights the most significant improvements made through peer review
+- Notes the quality and effectiveness of the final response
+- Captures the collaborative nature of the peer review process
+- Is clear, concise, and well-organized"
+
+FINAL_SUMMARY=$(ollama run "${AUTHOR_MODEL}" "${SUMMARY_PROMPT}")
+FINAL_SUMMARY=$(guard_output_quality "$FINAL_SUMMARY" "$PROMPT" "$MECHANISM_NAME" "$AUTHOR_MODEL")
+
+echo "FINAL SUMMARY (${AUTHOR_MODEL}):" >> "${SESSION_FILE}"
+echo "${FINAL_SUMMARY}" >> "${SESSION_FILE}"
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Peer review process complete."
+echo "Final response:"
+echo "---------------------------------"
+
+echo "${CURRENT_RESPONSE}"
+echo ""
+echo "Author: ${AUTHOR_MODEL}"
+echo "Peer Review Summary:"
+echo "${FINAL_SUMMARY}"
+echo ""
+echo "Full peer review log: ${SESSION_FILE}" 
\ No newline at end of file
diff --git a/bash/talk-to-computer/puzzle b/bash/talk-to-computer/puzzle
new file mode 100755
index 0000000..b9ab040
--- /dev/null
+++ b/bash/talk-to-computer/puzzle
@@ -0,0 +1,442 @@
+#!/bin/bash
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Source the logging system using absolute path
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the Lil tester for secure code testing
+source "${SCRIPT_DIR}/lil_tester.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Source the RAG integration for corpus queries
+source "${SCRIPT_DIR}/rag_integration.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# --- Lil Knowledge Base ---
+get_lil_knowledge() {
+    cat << 'EOF'
+Lil is a multi-paradigm scripting language used in the Decker multimedia tool. Here is comprehensive knowledge about Lil:
+
+## Core Language Features
+
+### Types and Values
+- **Numbers**: Floating-point values (42, 37.5, -29999)
+- **Strings**: Double-quoted with escape sequences ("hello\nworld", "foo\"bar")
+- **Lists**: Ordered sequences using comma: (1,2,3), empty list ()
+- **Dictionaries**: Key-value pairs: ("a","b") dict (11,22) or {}.fruit:"apple"
+- **Tables**: Rectangular data with named columns (created with insert or table)
+- **Functions**: Defined with 'on name do ... end', called with []
+- **Interfaces**: Opaque values for system resources
+
+### Basic Syntax
+- **Variables**: Assignment uses ':' (x:42, y:"hello")
+- **Indexing**: Lists with [index], dicts with .key or ["key"]
+- **Operators**: Right-to-left precedence, 2*3+5 = 16, (2*3)+5 = 11
+- **Comments**: # line comments only
+- **Expressions**: Everything is an expression, returns values
+
+### Control Flow
+- **Conditionals**: if condition ... end, with optional elseif/else
+- **Loops**: each value key index in collection ... end
+- **While**: while condition ... end
+- **Functions**: on func x y do ... end, called as func[args]
+
+### Query Language (SQL-like)
+- **select**: select columns from table where condition orderby column
+- **update**: update column:value from table where condition
+- **extract**: extract values from table (returns simple types)
+- **insert**: insert columns with values end
+- **Clauses**: where, by, orderby asc/desc
+- **Joins**: table1 join table2 (natural), table1 cross table2 (cartesian)
+
+### Vector Operations (Conforming)
+- **Arithmetic spreads**: 1+(2,3,4) = (3,4,5)
+- **List operations**: (1,2,3)+(4,5,6) = (5,7,9)
+- **Equality**: 5=(1,5,10) = (0,1,0)  # Use ~ for exact match
+- **Application**: func @ (1,2,3) applies func to each element
+
+### Key Operators
+- **Arithmetic**: + - * / % ^ & | (min/max)
+- **Comparison**: < > = (conforming), ~ (exact match)
+- **Logic**: ! (not), & | (and/or with numbers)
+- **Data**: , (concat), @ (index each), dict, take, drop
+- **String**: fuse (concat), split, format, parse, like (glob matching)
+- **Query**: join, cross, limit, window
+
+### Important Patterns
+- **Function definition**: on add x y do x+y end
+- **List comprehension**: each x in data x*2 end
+- **Table query**: select name age where age>21 from people
+- **Dictionary building**: d:(); d.key:"value"
+- **String formatting**: "%i %s" format (42,"answer")
+
+### Common Functions
+- **Math**: cos, sin, tan, exp, ln, sqrt, floor, mag, unit, heading
+- **Aggregation**: sum, prod, min, max, count, first, last
+- **Data**: range, keys, list, table, flip, raze, typeof
+- **IO**: read, write, show, print (in Lilt environment)
+
+### Best Practices
+- Use functional style when possible (immutable operations)
+- Leverage vector operations for data manipulation
+- Use queries for complex data transformations
+- Functions are first-class values
+- Lexical scoping with closures
+- Tail-call optimization supported
+
+### Common Patterns
+- **Mode finding**: extract first value by value orderby count value desc from data
+- **Filtering**: select from table where condition
+- **Grouping**: select agg_func column by group_column from table
+- **List processing**: each x in data transform[x] end
+- **Dictionary operations**: keys dict, range dict, dict operations
+- **String manipulation**: split, fuse, format, parse, like
+
+Lil emphasizes expressive, concise code with powerful built-in operations for data manipulation, making it excellent for algorithmic puzzles and data processing tasks.
+EOF
+}
+
+# --- Model Configuration ---
+PUZZLE_MODEL="llama3:8b-instruct-q4_K_M"
+ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Validate and set models with fallbacks
+PUZZLE_MODEL=$(validate_model "$PUZZLE_MODEL" "gemma3n:e2b")
+if [ $? -ne 0 ]; then
+    log_error "No valid puzzle model available"
+    exit 1
+fi
+
+ANALYSIS_MODEL=$(validate_model "$ANALYSIS_MODEL" "gemma3n:e2b")
+if [ $? -ne 0 ]; then
+    log_error "No valid analysis model available"
+    exit 1
+fi
+
+# --- Defaults ---
+DEFAULT_ROUNDS=2
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tPuzzle"
+    echo -e "\tThis script specializes in puzzle solving and coding challenges with Lil programming language expertise."
+    echo -e "\n\tUsage: $0 [-f <file_path>] [-l <language>] \"<your puzzle/challenge>\" [number_of_rounds]"
+    echo -e "\n\tExample: $0 -f ./challenge.txt -l lil \"How can I implement a sorting algorithm?\" 2"
+    echo -e "\n\tIf number_of_rounds is not provided, the program will default to $DEFAULT_ROUNDS rounds."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n\t-l <language> (optional): Specify programming language focus (default: lil)."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+LANGUAGE="lil"
+while getopts "f:l:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    l)
+      LANGUAGE="$OPTARG"
+      ;;
+    *)
+      log_error "Invalid option: -$OPTARG"
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ROUNDS=$DEFAULT_ROUNDS
+else
+    ROUNDS=$2
+fi
+
+# Validate prompt
+PROMPT=$(validate_prompt "$PROMPT")
+if [ $? -ne 0 ]; then
+    exit 1
+fi
+
+# Validate file path if provided
+if [ -n "$FILE_PATH" ]; then
+    if ! validate_file_path "$FILE_PATH"; then
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT="${PROMPT}"$'\n'"[FILE CONTENTS]"$'\n'"${FILE_CONTENTS}"$'\n'"[END FILE]"
+fi
+
+# Validate rounds
+if ! [[ "$ROUNDS" =~ ^[1-9][0-9]*$ ]] || [ "$ROUNDS" -gt 5 ]; then
+    log_error "Invalid number of rounds: $ROUNDS (must be 1-5)"
+    exit 1
+fi
+
+# --- File Initialization ---
+mkdir -p ~/tmp
+SESSION_FILE=~/tmp/puzzle_$(date +%Y%m%d_%H%M%S).txt
+
+# Initialize timing
+SESSION_ID=$(generate_session_id)
+start_timer "$SESSION_ID" "puzzle"
+
+echo "Puzzle Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "LANGUAGE FOCUS: ${LANGUAGE}" >> "${SESSION_FILE}"
+echo "NUMBER OF ROUNDS: ${ROUNDS}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 1: Problem Analysis ---
+echo "Phase 1: Analyzing the puzzle or coding challenge..."
+echo "PHASE 1 - PROBLEM ANALYSIS:" >> "${SESSION_FILE}"
+
+# Check for RAG context
+RAG_CONTEXT=$(use_rag_if_available "${PROMPT}" "${MECHANISM_NAME}")
+
+PROBLEM_ANALYSIS_PROMPT="You are an expert puzzle solver and programming mentor specializing in the ${LANGUAGE} programming language. Analyze the following puzzle or coding challenge.
+
+CHALLENGE: ${PROMPT}
+
+$(if [[ "$LANGUAGE" == "lil" ]]; then
+echo "LIL PROGRAMMING LANGUAGE KNOWLEDGE:
+$(get_lil_knowledge)
+
+Use this comprehensive knowledge of Lil to provide accurate, helpful analysis."
+fi)
+
+$(if [[ "$RAG_CONTEXT" != "$PROMPT" ]]; then
+echo "ADDITIONAL CONTEXT FROM KNOWLEDGE BASE:
+$RAG_CONTEXT
+
+Use this context to enhance your analysis if it's relevant to the challenge."
+fi)
+
+Please provide a comprehensive analysis including:
+1. Problem type classification (algorithm, data structure, logic, etc.)
+2. Complexity assessment (time/space requirements)
+3. Key concepts and patterns involved
+4. Relevant ${LANGUAGE} language features that could help
+5. Potential solution approaches
+6. Common pitfalls or edge cases to consider
+
+Provide a clear, structured analysis that helps understand the problem."
+
+problem_analysis=$(ollama run "${PUZZLE_MODEL}" "${PROBLEM_ANALYSIS_PROMPT}")
+problem_analysis=$(guard_output_quality "$problem_analysis" "$PROMPT" "$MECHANISM_NAME" "$PUZZLE_MODEL")
+
+echo "PROBLEM ANALYSIS:" >> "${SESSION_FILE}"
+echo "${problem_analysis}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 2: Solution Strategy ---
+echo "Phase 2: Developing solution strategies..."
+echo "PHASE 2 - SOLUTION STRATEGY:" >> "${SESSION_FILE}"
+
+SOLUTION_STRATEGY_PROMPT="Based on the problem analysis, develop multiple solution strategies for this puzzle or coding challenge.
+
+ORIGINAL CHALLENGE: ${PROMPT}
+
+PROBLEM ANALYSIS: ${problem_analysis}
+
+$(if [[ "$LANGUAGE" == "lil" ]]; then
+echo "LIL PROGRAMMING LANGUAGE KNOWLEDGE:
+$(get_lil_knowledge)
+
+Use this comprehensive knowledge of Lil to provide accurate, helpful analysis."
+fi)
+
+$(if [[ "$RAG_CONTEXT" != "$PROMPT" ]]; then
+echo "ADDITIONAL CONTEXT FROM KNOWLEDGE BASE:
+$RAG_CONTEXT
+
+Use this context to develop more informed and accurate solution strategies."
+fi)
+
+Please provide:
+1. At least 2-3 different solution approaches
+2. Algorithmic complexity analysis for each approach
+3. Trade-offs between different solutions
+4. Specific ${LANGUAGE} language constructs that would be useful
+5. Implementation considerations and challenges
+6. Testing and validation strategies
+
+Focus on practical, implementable solutions with clear reasoning."
+
+solution_strategy=$(ollama run "${PUZZLE_MODEL}" "${SOLUTION_STRATEGY_PROMPT}")
+solution_strategy=$(guard_output_quality "$solution_strategy" "$PROMPT" "$MECHANISM_NAME" "$PUZZLE_MODEL")
+
+echo "SOLUTION STRATEGY:" >> "${SESSION_FILE}"
+echo "${solution_strategy}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 3: Implementation Guidance ---
+echo "Phase 3: Providing implementation guidance..."
+echo "PHASE 3 - IMPLEMENTATION GUIDANCE:" >> "${SESSION_FILE}"
+
+IMPLEMENTATION_PROMPT="Provide detailed implementation guidance for the best solution approach to this puzzle or coding challenge.
+
+ORIGINAL CHALLENGE: ${PROMPT}
+PROBLEM ANALYSIS: ${problem_analysis}
+SOLUTION STRATEGY: ${solution_strategy}
+
+$(if [[ "$LANGUAGE" == "lil" ]]; then
+echo "LIL PROGRAMMING LANGUAGE KNOWLEDGE:
+$(get_lil_knowledge)
+
+Use this comprehensive knowledge of Lil to provide accurate, helpful analysis."
+fi)
+
+$(if [[ "$RAG_CONTEXT" != "$PROMPT" ]]; then
+echo "ADDITIONAL CONTEXT FROM KNOWLEDGE BASE:
+$RAG_CONTEXT
+
+Use this context to provide more accurate and comprehensive implementation guidance."
+fi)
+
+Please provide:
+1. Step-by-step implementation plan
+2. Complete code example in ${LANGUAGE} (if applicable)
+3. Explanation of key code sections and patterns
+4. Variable naming and structure recommendations
+5. Error handling and edge case considerations
+6. Performance optimization tips
+7. Testing and debugging guidance
+
+Make the implementation clear and educational, explaining the reasoning behind each decision."
+
+implementation_guidance=$(ollama run "${PUZZLE_MODEL}" "${IMPLEMENTATION_PROMPT}")
+implementation_guidance=$(guard_output_quality "$implementation_guidance" "$PROMPT" "$MECHANISM_NAME" "$PUZZLE_MODEL")
+
+echo "IMPLEMENTATION GUIDANCE:" >> "${SESSION_FILE}"
+echo "${implementation_guidance}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 4: Code Testing (if applicable) ---
+if [[ "$LANGUAGE" == "lil" ]] && [[ "$implementation_guidance" =~ (on [a-zA-Z_][a-zA-Z0-9_]* do|function|procedure) ]]; then
+    echo "Phase 4: Testing the Lil code implementation..."
+    echo "PHASE 4 - CODE TESTING:" >> "${SESSION_FILE}"
+    
+    # Simple code extraction - look for function definitions
+    lil_code=""
+    if echo "$implementation_guidance" | grep -q "on [a-zA-Z_][a-zA-Z0-9_]* do"; then
+        lil_code=$(echo "$implementation_guidance" | grep -A 20 "on [a-zA-Z_][a-zA-Z0-9_]* do" | head -20)
+    fi
+    
+    if [ -n "$lil_code" ]; then
+        echo "Extracted Lil code for testing:"
+        echo "----------------------------------------"
+        echo "$lil_code"
+        echo "----------------------------------------"
+        
+        # Test the extracted code
+        test_result=$(test_lil_script "$lil_code" "puzzle_implementation")
+        test_exit_code=$?
+        
+        echo "CODE TESTING RESULTS:" >> "${SESSION_FILE}"
+        echo "$test_result" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+        
+        if [ $test_exit_code -eq 0 ]; then
+            echo "✅ Lil code testing PASSED"
+        else
+            echo "❌ Lil code testing FAILED"
+            echo "Note: The code may have syntax errors or runtime issues."
+        fi
+    else
+        echo "No executable Lil code found in implementation guidance."
+        echo "CODE TESTING: No executable code found" >> "${SESSION_FILE}"
+    fi
+else
+    echo "Phase 4: Skipping code testing (not Lil language or no executable code)"
+    echo "CODE TESTING: Skipped (not applicable)" >> "${SESSION_FILE}"
+fi
+
+# --- Phase 5: Solution Validation ---
+echo "Phase 5: Validating and reviewing the solution..."
+echo "PHASE 5 - SOLUTION VALIDATION:" >> "${SESSION_FILE}"
+
+VALIDATION_PROMPT="Review and validate the proposed solution to ensure it's correct, efficient, and well-implemented.
+
+ORIGINAL CHALLENGE: ${PROMPT}
+PROBLEM ANALYSIS: ${problem_analysis}
+SOLUTION STRATEGY: ${solution_strategy}
+IMPLEMENTATION: ${implementation_guidance}
+
+Please provide:
+1. Code review and correctness verification
+2. Edge case analysis and testing scenarios
+3. Performance analysis and optimization opportunities
+4. Alternative approaches or improvements
+5. Common mistakes to avoid
+6. Learning resources and next steps
+7. Final recommendations and best practices
+
+Ensure the solution is robust, maintainable, and follows ${LANGUAGE} best practices."
+
+solution_validation=$(ollama run "${ANALYSIS_MODEL}" "${VALIDATION_PROMPT}")
+solution_validation=$(guard_output_quality "$solution_validation" "$PROMPT" "$MECHANISM_NAME" "$ANALYSIS_MODEL")
+
+echo "SOLUTION VALIDATION:" >> "${SESSION_FILE}"
+echo "${solution_validation}" >> "${SESSION_FILE}"
+
+# End timing
+duration=$(end_timer "$SESSION_ID" "puzzle")
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Puzzle-solving process complete."
+echo "---------------------------------"
+echo ""
+echo "PROBLEM ANALYSIS:"
+echo "================="
+echo "${problem_analysis}"
+echo ""
+echo "SOLUTION STRATEGY:"
+echo "=================="
+echo "${solution_strategy}"
+echo ""
+echo "IMPLEMENTATION GUIDANCE:"
+echo "========================"
+echo "${implementation_guidance}"
+echo ""
+if [[ "$LANGUAGE" == "lil" ]] && [[ "$implementation_guidance" =~ (on [a-zA-Z_][a-zA-Z0-9_]* do|function|procedure) ]]; then
+    echo "CODE TESTING:"
+    echo "============="
+    if [ -n "$lil_code" ]; then
+        echo "Lil code was extracted and tested"
+        echo "Test results (pass/fail) are logged in the session file"
+    else
+        echo "No executable code found for testing"
+    fi
+    echo ""
+fi
+echo "SOLUTION VALIDATION:"
+echo "===================="
+echo "${solution_validation}"
+echo ""
+echo "Language focus: ${LANGUAGE}"
+echo "Rounds completed: ${ROUNDS}"
+echo "Execution time: ${duration} seconds"
+echo ""
+echo "Full puzzle-solving log: ${SESSION_FILE}"
diff --git a/bash/talk-to-computer/quality_guard.sh b/bash/talk-to-computer/quality_guard.sh
new file mode 100755
index 0000000..06f8aec
--- /dev/null
+++ b/bash/talk-to-computer/quality_guard.sh
@@ -0,0 +1,366 @@
+#!/bin/bash
+
+# Quality Guard - System-Wide Output Quality Protection
+# This module provides comprehensive quality monitoring for all AI thinking mechanisms
+# to prevent output degradation, nonsense, and repetitive responses.
+
+# --- Configuration ---
+MIN_RESPONSE_LENGTH=30
+MAX_REPETITION_RATIO=0.4
+MAX_NONSENSE_SCORE=0.6
+DEGRADATION_THRESHOLD=0.65
+MAX_CORRECTION_ATTEMPTS=2
+FALLBACK_ENABLED=true
+
+# --- Quality Assessment Functions ---
+
+# Main quality assessment function
+assess_quality() {
+    local response="$1"
+    local context="$2"
+    local mechanism="$3"
+    
+    # Calculate quality metrics
+    local length_score=$(assess_length "$response")
+    local coherence_score=$(assess_coherence "$response")
+    local repetition_score=$(assess_repetition "$response")
+    local relevance_score=$(assess_relevance "$response" "$context" "$mechanism")
+    local structure_score=$(assess_structure "$response")
+    
+    # Weighted quality score
+    local overall_score=$(echo "scale=2; ($length_score * 0.15 + $coherence_score * 0.25 + $repetition_score * 0.2 + $relevance_score * 0.25 + $structure_score * 0.15)" | bc -l 2>/dev/null || echo "0.5")
+    
+    echo "$overall_score"
+}
+
+# Assess response length
+assess_length() {
+    local response="$1"
+    local word_count=$(echo "$response" | wc -w)
+    
+    if [ "$word_count" -lt "$MIN_RESPONSE_LENGTH" ]; then
+        echo "0.2"
+    elif [ "$word_count" -lt 80 ]; then
+        echo "0.6"
+    elif [ "$word_count" -lt 200 ]; then
+        echo "0.9"
+    elif [ "$word_count" -lt 500 ]; then
+        echo "0.8"
+    else
+        echo "0.7"
+    fi
+}
+
+# Assess coherence
+assess_coherence() {
+    local response="$1"
+    
+    # Check for reasonable sentence structure
+    local sentences=$(echo "$response" | tr '.' '\n' | grep -v '^[[:space:]]*$' | wc -l)
+    local avg_length=$(echo "$response" | tr '.' '\n' | grep -v '^[[:space:]]*$' | awk '{sum += length($0)} END {if (NR > 0) print sum/NR; else print 50}' 2>/dev/null || echo "50")
+    
+    # Penalize extremely long or short sentences
+    if (( $(echo "$avg_length > 300" | bc -l 2>/dev/null || echo "0") )); then
+        echo "0.3"
+    elif (( $(echo "$avg_length < 15" | bc -l 2>/dev/null || echo "0") )); then
+        echo "0.4"
+    elif [ "$sentences" -lt 2 ]; then
+        echo "0.5"
+    else
+        echo "0.8"
+    fi
+}
+
+# Assess repetition
+assess_repetition() {
+    local response="$1"
+    local unique_words=$(echo "$response" | tr ' ' '\n' | sort | uniq | wc -l)
+    local total_words=$(echo "$response" | wc -w)
+    
+    if [ "$total_words" -eq 0 ]; then
+        echo "0.0"
+    else
+        # Uniqueness ratio (unique/total): higher means LESS repetition,
+        # so values below MAX_REPETITION_RATIO indicate heavy repetition
+        local repetition_ratio=$(echo "scale=2; $unique_words / $total_words" | bc -l 2>/dev/null || echo "0.5")
+        
+        if (( $(echo "$repetition_ratio < $MAX_REPETITION_RATIO" | bc -l 2>/dev/null || echo "0") )); then
+            echo "0.1"
+        elif (( $(echo "$repetition_ratio < 0.6" | bc -l 2>/dev/null || echo "0") )); then
+            echo "0.5"
+        else
+            echo "0.9"
+        fi
+    fi
+}
+
+# Assess relevance to context and mechanism
+assess_relevance() {
+    local response="$1"
+    local context="$2"
+    local mechanism="$3"
+    
+    # Mechanism-specific relevance checks
+    case "$mechanism" in
+        "puzzle")
+            if echo "$response" | grep -q -i "algorithm\|code\|implement\|function\|solution"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "socratic")
+            if echo "$response" | grep -q -i "question\|analyze\|investigate\|examine\|why\|how"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "exploration")
+            if echo "$response" | grep -q -i "compare\|alternative\|option\|approach\|strategy"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "consensus")
+            if echo "$response" | grep -q -i "perspective\|view\|opinion\|agree\|disagree\|multiple"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "critique")
+            if echo "$response" | grep -q -i "improve\|enhance\|fix\|refine\|better\|optimize"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "synthesis")
+            if echo "$response" | grep -q -i "combine\|integrate\|merge\|unify\|synthesize"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        "peer_review")
+            if echo "$response" | grep -q -i "review\|feedback\|suggest\|advice\|collaborate"; then
+                echo "0.9"
+            else
+                echo "0.6"
+            fi
+            ;;
+        *)
+            echo "0.7"
+            ;;
+    esac
+}
+
+# Assess structural quality
+assess_structure() {
+    local response="$1"
+    
+    # Check for proper formatting and structure
+    local has_paragraphs=$(echo "$response" | grep -c '^[[:space:]]*$' 2>/dev/null | tr -d '[:space:]' || echo "0")
+    local has_lists=$(echo "$response" | grep -c '^[[:space:]]*[-]' 2>/dev/null | tr -d '[:space:]' || echo "0")
+    local has_numbers=$(echo "$response" | grep -c '^[[:space:]]*[0-9]' 2>/dev/null | tr -d '[:space:]' || echo "0")
+    
+    local structure_score=0.5
+    
+    if [ "${has_paragraphs:-0}" -gt 0 ]; then structure_score=$(echo "$structure_score + 0.2" | bc -l 2>/dev/null || echo "$structure_score"); fi
+    if [ "${has_lists:-0}" -gt 0 ]; then structure_score=$(echo "$structure_score + 0.15" | bc -l 2>/dev/null || echo "$structure_score"); fi
+    if [ "${has_numbers:-0}" -gt 0 ]; then structure_score=$(echo "$structure_score + 0.15" | bc -l 2>/dev/null || echo "$structure_score"); fi
+    
+    echo "$structure_score"
+}
+
+# --- Degradation Detection ---
+
+# Detect various degradation patterns
+detect_degradation_patterns() {
+    local response="$1"
+    local degradation_score=0
+    
+    # Check for nonsense patterns
+    if echo "$response" | grep -q -i "lorem ipsum\|asdf\|qwerty\|random text\|test test test"; then
+        degradation_score=$(echo "$degradation_score + 0.9" | bc -l 2>/dev/null || echo "$degradation_score")
+    fi
+    
+    # Check for excessive repetition: any non-blank line appearing three or more times
+    local repeated_lines=$(echo "$response" | grep -v '^[[:space:]]*$' | sort | uniq -c | awk '$1 >= 3' | wc -l)
+    if [ "$repeated_lines" -gt 0 ]; then
+        degradation_score=$(echo "$degradation_score + 0.8" | bc -l 2>/dev/null || echo "$degradation_score")
+    fi
+    
+    # Check for incoherent punctuation: lines containing nothing but punctuation
+    # (requires at least one punctuation character so blank lines are not counted)
+    local punct_only_lines=$(echo "$response" | grep -c "^[[:space:]]*[[:punct:]]\{1,\}[[:space:]]*$")
+    local total_lines=$(echo "$response" | wc -l)
+    if [ "$total_lines" -gt 0 ]; then
+        local punct_ratio=$(( punct_only_lines * 100 / total_lines ))
+        if [ "$punct_ratio" -gt 50 ]; then
+            # Only flag if more than half the lines are punctuation-only
+            degradation_score=$(echo "$degradation_score + 0.4" | bc -l 2>/dev/null || echo "$degradation_score")
+        fi
+    fi
+    
+    # Check for extremely short responses
+    local word_count=$(echo "$response" | wc -w)
+    if [ "$word_count" -lt 15 ]; then
+        degradation_score=$(echo "$degradation_score + 0.5" | bc -l 2>/dev/null || echo "$degradation_score")
+    fi
+    
+    # Check for gibberish (simplified pattern)
+    if echo "$response" | grep -q "aaaaa\|bbbbb\|ccccc\|ddddd\|eeeee"; then
+        degradation_score=$(echo "$degradation_score + 0.6" | bc -l 2>/dev/null || echo "$degradation_score")
+    fi
+    
+    # Note: Removed problematic markdown check to eliminate syntax warnings
+    
+    echo "$degradation_score"
+}
+
+# --- Correction Mechanisms ---
+
+# Attempt to correct degraded output
+correct_degraded_output() {
+    local degraded_response="$1"
+    local context="$2"
+    local mechanism="$3"
+    local model="$4"
+    local attempt=1
+    
+    while [ "$attempt" -le "$MAX_CORRECTION_ATTEMPTS" ]; do
+        echo "🔄 Correction attempt $attempt/$MAX_CORRECTION_ATTEMPTS..." >&2
+        
+        # Create correction prompt
+        local correction_prompt="The previous response was degraded or nonsensical. Please provide a clear, coherent response to:
+
+ORIGINAL REQUEST: $context
+
+RESPONSE TYPE: $mechanism
+
+Please ensure your response is:
+- Relevant and focused on the request
+- Well-structured with proper paragraphs and formatting
+- Free of repetition, nonsense, or gibberish
+- Appropriate length (at least 50 words)
+- Clear and understandable
+
+Provide a fresh, high-quality response:"
+        
+        # Get corrected response
+        local corrected_response=$(ollama run "$model" "$correction_prompt")
+        
+        # Assess correction quality
+        local correction_quality=$(assess_quality "$corrected_response" "$context" "$mechanism")
+        local degradation_score=$(detect_degradation_patterns "$corrected_response")
+        
+        echo "Correction quality: $correction_quality, Degradation: $degradation_score" >&2
+        
+        # Check if correction is successful
+        if (( $(echo "$correction_quality > $DEGRADATION_THRESHOLD" | bc -l 2>/dev/null || echo "0") )) && \
+           (( $(echo "$degradation_score < $MAX_NONSENSE_SCORE" | bc -l 2>/dev/null || echo "0") )); then
+            
+            echo "✅ Output corrected successfully (quality: $correction_quality)" >&2
+            echo "$corrected_response"
+            return 0
+        fi
+        
+        attempt=$((attempt + 1))
+    done
+    
+    echo "❌ All correction attempts failed. Using fallback response." >&2
+    generate_fallback_response "$mechanism" "$context"
+    return 1
+}
+
+# Generate appropriate fallback response
+generate_fallback_response() {
+    local mechanism="$1"
+    local context="$2"
+    
+    case "$mechanism" in
+        "puzzle")
+            echo "I apologize, but I'm experiencing difficulties providing a proper response to your puzzle or coding challenge. Please try rephrasing your question or ask for a different type of assistance. You might also want to try breaking down your request into smaller, more specific questions."
+            ;;
+        "socratic")
+            echo "I'm unable to provide the deep analysis you're looking for at this time. Please try asking your question again with more specific details, or consider rephrasing it in a different way."
+            ;;
+        "exploration")
+            echo "I'm having trouble exploring alternatives and strategies for your request. Please try asking your question again or provide more context about what you're looking to explore."
+            ;;
+        "consensus")
+            echo "I cannot provide multiple perspectives or consensus-building guidance currently. Please try rephrasing your request or ask for a different type of assistance."
+            ;;
+        "critique")
+            echo "I'm unable to provide improvement suggestions or critique at this time. Please try asking your question again or request a different approach."
+            ;;
+        "synthesis")
+            echo "I cannot synthesize or combine approaches currently. Please try rephrasing your request or ask for a different form of assistance."
+            ;;
+        "peer_review")
+            echo "I'm having trouble providing collaborative feedback or review. Please try asking your question again or request a different type of help."
+            ;;
+        *)
+            echo "I'm experiencing difficulties providing a proper response. Please try rephrasing your question or ask for a different type of assistance."
+            ;;
+    esac
+}
+
+# --- Main Quality Guard Function ---
+
+# Main function to guard against output degradation
+guard_output_quality() {
+    local response="$1"
+    local context="$2"
+    local mechanism="$3"
+    local model="$4"
+    
+    # Assess quality
+    local quality_score=$(assess_quality "$response" "$context" "$mechanism")
+    local degradation_score=$(detect_degradation_patterns "$response")
+    
+    echo "Quality Score: $quality_score, Degradation Score: $degradation_score" >&2
+    
+    # Check if correction is needed
+    if (( $(echo "$quality_score < $DEGRADATION_THRESHOLD" | bc -l 2>/dev/null || echo "0") )) || \
+       (( $(echo "$degradation_score > $MAX_NONSENSE_SCORE" | bc -l 2>/dev/null || echo "0") )); then
+        
+        echo "⚠️  Output quality below threshold. Initiating correction..." >&2
+        
+        if [ "$FALLBACK_ENABLED" = "true" ]; then
+            correct_degraded_output "$response" "$context" "$mechanism" "$model"
+        else
+            echo "❌ Quality below threshold but fallback disabled." >&2
+            echo "$response"
+        fi
+    else
+        echo "✅ Response quality acceptable (score: $quality_score)" >&2
+        echo "$response"
+    fi
+}
+
+# --- Utility Functions ---
+
+# Get mechanism name from script path
+get_mechanism_name() {
+    local script_path="$1"
+    local script_name=$(basename "$script_path")
+    
+    case "$script_name" in
+        "puzzle") echo "puzzle" ;;
+        "socratic") echo "socratic" ;;
+        "exploration") echo "exploration" ;;
+        "consensus") echo "consensus" ;;
+        "critique") echo "critique" ;;
+        "synthesis") echo "synthesis" ;;
+        "peer-review") echo "peer_review" ;;
+        *) echo "unknown" ;;
+    esac
+}
+
+# Export functions for use by other scripts
+export -f assess_quality
+export -f detect_degradation_patterns
+export -f guard_output_quality
+export -f get_mechanism_name
+export -f correct_degraded_output
+export -f generate_fallback_response
diff --git a/bash/talk-to-computer/rag_config.sh b/bash/talk-to-computer/rag_config.sh
new file mode 100644
index 0000000..27b724b
--- /dev/null
+++ b/bash/talk-to-computer/rag_config.sh
@@ -0,0 +1,118 @@
+#!/bin/bash
+
+# RAG (Retrieval-Augmented Generation) Configuration
+# This file configures the RAG system for corpus-based knowledge augmentation
+
+# --- Corpus Configuration ---
+CORPUS_DIR="corpus"
+CORPUS_REGISTRY="${CORPUS_DIR}/corpus_registry.txt"
+CORPUS_CACHE_FILE="${CORPUS_DIR}/.corpus_cache"
+CORPUS_CACHE_TTL=3600  # Cache TTL in seconds (1 hour)
+
+# --- Search Configuration ---
+MAX_SEARCH_RESULTS=5
+MIN_CONTENT_LENGTH=50
+MAX_CONTENT_LENGTH=5000
+SEARCH_CONTEXT_LINES=3  # Lines of context around search matches
+
+# --- Topic Classification ---
+# Keywords that trigger specific topic matching (format: topic|keywords)
+TOPIC_KEYWORDS_FILE="${CORPUS_DIR}/.topic_keywords"
+
+# Initialize topic keywords file if it doesn't exist
+if [ ! -f "$TOPIC_KEYWORDS_FILE" ]; then
+    cat > "$TOPIC_KEYWORDS_FILE" << 'EOF'
+programming|bash shell scripting code algorithm programming software development
+lil|decker lil language terse programming scripting deck
+science|physics chemistry biology research scientific experiment
+physics|quantum relativity mechanics thermodynamics energy force
+literature|book author writing novel poem analysis criticism
+general|knowledge fact information general misc miscellaneous
+EOF
+fi
+
+# --- File Processing ---
+# Supported file extensions and their processing commands (format: ext|command)
+FILE_PROCESSORS_FILE="${CORPUS_DIR}/.file_processors"
+
+# Initialize file processors if it doesn't exist
+if [ ! -f "$FILE_PROCESSORS_FILE" ]; then
+    cat > "$FILE_PROCESSORS_FILE" << 'EOF'
+txt|cat
+md|cat
+html|cat
+EOF
+fi
+
+# --- Search Tools ---
+# Commands used for searching different file types.
+# Note: GREP_CMD embeds escaped quotes, so callers must run it via eval
+# rather than plain variable expansion, or the quotes are passed literally.
+GREP_CMD="grep -r -i --include=\"*.txt\" --include=\"*.md\" --include=\"*.html\""
+SED_CMD="sed"
+AWK_CMD="awk"
+
+# --- RAG Behavior ---
+RAG_ENABLED=true
+RAG_CONFIDENCE_THRESHOLD=0.7  # Minimum confidence to trigger RAG
+RAG_MAX_CONTEXT_LENGTH=4000  # Maximum context to include in prompt
+RAG_CACHE_ENABLED=true
+
+# --- Debug and Logging ---
+RAG_DEBUG=false
+RAG_LOG_FILE="logs/rag_system.log"
+
+# --- Utility Functions ---
+
+# Check if RAG system is properly configured
+check_rag_system() {
+    local issues=()
+
+    # Check if corpus directory exists
+    if [ ! -d "$CORPUS_DIR" ]; then
+        issues+=("Corpus directory not found: $CORPUS_DIR")
+    fi
+
+    # Check if registry exists
+    if [ ! -f "$CORPUS_REGISTRY" ]; then
+        issues+=("Corpus registry not found: $CORPUS_REGISTRY")
+    fi
+
+    # Check if corpus manager exists
+    if [ ! -f "corpus_manager.sh" ]; then
+        issues+=("Corpus manager not found: corpus_manager.sh")
+    fi
+
+    # Report issues
+    if [ ${#issues[@]} -gt 0 ]; then
+        echo "❌ RAG System Issues Found:"
+        for issue in "${issues[@]}"; do
+            echo "   - $issue"
+        done
+        return 1
+    else
+        echo "✅ RAG System is properly configured"
+        return 0
+    fi
+}
+
+# Get corpus statistics
+get_corpus_stats() {
+    if [ -f "$CORPUS_REGISTRY" ]; then
+        local topic_count=$(grep -c "|" "$CORPUS_REGISTRY")
+        local file_count=$(find "$CORPUS_DIR" -type f \( -name "*.txt" -o -name "*.md" -o -name "*.html" \) 2>/dev/null | wc -l)
+        echo "📊 Corpus Statistics:"
+        echo "   Topics: $topic_count"
+        echo "   Files: $file_count"
+    else
+        echo "❌ No corpus registry found"
+    fi
+}
+
+# Export configuration for use by other scripts
+export CORPUS_DIR CORPUS_REGISTRY CORPUS_CACHE_FILE CORPUS_CACHE_TTL
+export MAX_SEARCH_RESULTS MIN_CONTENT_LENGTH MAX_CONTENT_LENGTH SEARCH_CONTEXT_LINES
+export RAG_ENABLED RAG_CONFIDENCE_THRESHOLD RAG_MAX_CONTEXT_LENGTH RAG_CACHE_ENABLED
+export RAG_DEBUG RAG_LOG_FILE
+export GREP_CMD SED_CMD AWK_CMD
+
+# Make utility functions available
+export -f check_rag_system get_corpus_stats
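The registry checks above assume a one-entry-per-line `topic|path` format; a minimal sketch of the same grep/cut lookup pattern the search utilities use against `corpus_registry.txt` (the entries below are illustrative):

```shell
#!/bin/bash
# Sketch of the topic|path registry lookup used by the corpus search tools.
registry=$(mktemp)
cat > "$registry" << 'EOF'
programming|corpus/programming
science|corpus/science
EOF

topic="science"
# Match the topic in the first (pre-pipe) field, take the path field
path=$(grep "^[^|]*${topic}|" "$registry" | head -1 | cut -d'|' -f2)
echo "$path"

rm -f "$registry"
```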
diff --git a/bash/talk-to-computer/rag_integration.sh b/bash/talk-to-computer/rag_integration.sh
new file mode 100644
index 0000000..6c974df
--- /dev/null
+++ b/bash/talk-to-computer/rag_integration.sh
@@ -0,0 +1,336 @@
+#!/bin/bash
+
+# RAG Integration Module
+# This module provides functions for thinking mechanisms to intelligently query the RAG corpus
+# and integrate relevant context into their prompts
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "${SCRIPT_DIR}/rag_config.sh"
+
+# --- RAG Query Functions ---
+
+# Main function for mechanisms to query RAG system
+query_rag_context() {
+    local prompt="$1"
+    local mechanism="$2"
+    local max_context="${3:-$RAG_MAX_CONTEXT_LENGTH}"
+
+    # Determine if RAG should be used
+    if should_use_rag "$prompt" "$mechanism"; then
+        local corpus_results
+        corpus_results=$(get_relevant_context "$prompt" "$mechanism" "$max_context")
+
+        if [ -n "$corpus_results" ]; then
+            echo "RAG_CONTEXT_AVAILABLE: $corpus_results"
+            return 0
+        fi
+    fi
+
+    echo "RAG_CONTEXT_AVAILABLE: NONE"
+    return 1
+}
+
+# Determine if RAG should be used for this prompt/mechanism combination
+should_use_rag() {
+    local prompt="$1"
+    local mechanism="$2"
+
+    # Skip RAG if disabled
+    if [ "$RAG_ENABLED" != "true" ]; then
+        return 1
+    fi
+
+    # Check mechanism-specific RAG usage
+    case "$mechanism" in
+        "puzzle")
+            # Always use RAG for puzzle mechanism (coding/programming)
+            return 0
+            ;;
+        "socratic")
+            # Use RAG for technical or complex topics
+            if echo "$prompt" | grep -qiE 'technical|complex|advanced|algorithm|programming|science'; then
+                return 0
+            fi
+            ;;
+        "exploration")
+            # Use RAG for specific technical domains
+            if echo "$prompt" | grep -qiE 'technology|framework|methodology|architecture'; then
+                return 0
+            fi
+            ;;
+        "critique")
+            # Use RAG for domain-specific improvement requests
+            if echo "$prompt" | grep -qiE 'improve|optimize|enhance|refactor'; then
+                return 0
+            fi
+            ;;
+    esac
+
+    # Default: don't use RAG unless explicitly triggered
+    return 1
+}
+
+# Get relevant context from corpus for the given prompt and mechanism
+get_relevant_context() {
+    local prompt="$1"
+    local mechanism="$2"
+    local max_context="$3"
+
+    # Extract key search terms from prompt
+    local search_terms
+    search_terms=$(extract_search_terms "$prompt" "$mechanism")
+
+    if [ -z "$search_terms" ]; then
+        return 1
+    fi
+
+    local context=""
+
+    # Try each search term
+    for term in $search_terms; do
+        local corpus_path
+        corpus_path=$(find_relevant_corpus "$term" "$mechanism")
+
+        if [ -n "$corpus_path" ] && [ -d "$corpus_path" ]; then
+            local term_context
+            term_context=$(search_corpus_term "$term" "$corpus_path" "$max_context")
+
+            if [ -n "$term_context" ]; then
+                context="${context}"$'\n'"=== Context for '$term' ==="$'\n'"${term_context}"$'\n'
+            fi
+        fi
+    done
+
+    # Trim context if too long
+    if [ ${#context} -gt "$max_context" ]; then
+        context=$(echo "$context" | head -c "$max_context")
+        context="${context}..."$'\n'"[Content truncated for length]"
+    fi
+
+    echo "$context"
+}
+
+# Extract search terms from prompt based on mechanism
+extract_search_terms() {
+    local prompt="$1"
+    local mechanism="$2"
+
+    case "$mechanism" in
+        "puzzle")
+            # Extract programming-related terms
+            echo "$prompt" | grep -oiE '\b(algorithm|function|variable|class|method|programming|code|implement|solve)\w*' | head -5
+            ;;
+        "socratic")
+            # Extract technical concepts
+            echo "$prompt" | grep -oiE '\b(concept|principle|theory|approach|methodology|framework)\w*' | head -3
+            ;;
+        "exploration")
+            # Extract comparison terms
+            echo "$prompt" | grep -oiE '\b(compare|versus|alternative|option|approach|strategy)\w*' | head -3
+            ;;
+        "critique")
+            # Extract improvement terms
+            echo "$prompt" | grep -oiE '\b(improve|optimize|enhance|fix|refactor|performance|quality)\w*' | head -3
+            ;;
+        *)
+            # Generic term extraction: words of five or more characters
+            echo "$prompt" | grep -oiE '\b\w{5,}\b' | head -3
+            ;;
+    esac
+}
+
+# Find relevant corpus directory for a search term
+find_relevant_corpus() {
+    local search_term="$1"
+    local mechanism="$2"
+
+    # Try mechanism-specific corpus mapping first
+    case "$mechanism" in
+        "puzzle")
+            if echo "$search_term" | grep -qiE 'lil|programming|algorithm'; then
+                echo "$CORPUS_DIR/programming"
+                return 0
+            fi
+            ;;
+        "socratic")
+            if echo "$search_term" | grep -qiE 'science|physics|chemistry|biology'; then
+                echo "$CORPUS_DIR/science"
+                return 0
+            fi
+            ;;
+    esac
+
+    # Try to find corpus based on term
+    if echo "$search_term" | grep -qiE 'programming|code|algorithm|function'; then
+        echo "$CORPUS_DIR/programming"
+    elif echo "$search_term" | grep -qiE 'science|physics|chemistry|biology'; then
+        echo "$CORPUS_DIR/science"
+    elif echo "$search_term" | grep -qiE 'literature|book|author|writing'; then
+        echo "$CORPUS_DIR/literature"
+    else
+        # Default to general corpus
+        echo "$CORPUS_DIR/general"
+    fi
+}
+
+# Search corpus for a specific term and return relevant content
+search_corpus_term() {
+    local search_term="$1"
+    local corpus_path="$2"
+    local max_context="$3"
+
+    # Use grep to find relevant content
+    local results
+    results=$(grep -r -i -A 5 -B 2 "$search_term" "$corpus_path" --include="*.txt" --include="*.md" --include="*.html" 2>/dev/null | head -20)
+
+    if [ -n "$results" ]; then
+        echo "$results"
+        return 0
+    fi
+
+    return 1
+}
+
+# --- Context Integration Functions ---
+
+# Integrate RAG context into a prompt
+integrate_rag_context() {
+    local original_prompt="$1"
+    local rag_context="$2"
+    local mechanism="$3"
+
+    if [ "$rag_context" = "RAG_CONTEXT_AVAILABLE: NONE" ] || [ -z "$rag_context" ]; then
+        echo "$original_prompt"
+        return 0
+    fi
+
+    # Extract actual context content
+    local context_content
+    context_content=$(echo "$rag_context" | sed 's/^RAG_CONTEXT_AVAILABLE: //')
+
+    # Create context-aware prompt based on mechanism
+    case "$mechanism" in
+        "puzzle")
+            cat << EOF
+I have access to relevant programming knowledge that may help answer this question:
+
+$context_content
+
+Original Question: $original_prompt
+
+Please use the above context to provide a more accurate and helpful response. If the context is relevant, incorporate it naturally into your answer. If it's not directly relevant, you can ignore it and answer based on your general knowledge.
+EOF
+            ;;
+        "socratic")
+            cat << EOF
+Relevant context from knowledge base:
+
+$context_content
+
+Question for analysis: $original_prompt
+
+Consider the above context when formulating your response. Use it to provide deeper insights and more accurate analysis if relevant.
+EOF
+            ;;
+        "exploration")
+            cat << EOF
+Additional context that may be relevant:
+
+$context_content
+
+Exploration topic: $original_prompt
+
+Use the provided context to enrich your analysis and provide more comprehensive alternatives if applicable.
+EOF
+            ;;
+        *)
+            cat << EOF
+Context from knowledge base:
+
+$context_content
+
+$original_prompt
+
+You may use the above context to enhance your response if it's relevant to the question.
+EOF
+            ;;
+    esac
+}
+
+# --- Utility Functions ---
+
+# Check if corpus is available and functional
+check_corpus_health() {
+    local issues=()
+
+    # Check if corpus directory exists
+    if [ ! -d "$CORPUS_DIR" ]; then
+        issues+=("Corpus directory not found: $CORPUS_DIR")
+    fi
+
+    # Check if registry exists
+    if [ ! -f "$CORPUS_REGISTRY" ]; then
+        issues+=("Corpus registry not found: $CORPUS_REGISTRY")
+    fi
+
+    # Check if registry has content
+    if [ -f "$CORPUS_REGISTRY" ] && [ "$(wc -l < "$CORPUS_REGISTRY")" -le 3 ]; then
+        issues+=("Corpus registry appears to be empty")
+    fi
+
+    # Report issues
+    if [ ${#issues[@]} -gt 0 ]; then
+        echo "❌ RAG Integration Issues Found:"
+        for issue in "${issues[@]}"; do
+            echo "   - $issue"
+        done
+        return 1
+    else
+        echo "✅ RAG Integration is healthy"
+        return 0
+    fi
+}
+
+# Get RAG statistics
+get_rag_stats() {
+    if [ ! -f "$CORPUS_REGISTRY" ]; then
+        echo "❌ No corpus registry found"
+        return 1
+    fi
+
+    local topic_count=$(grep -c "|" "$CORPUS_REGISTRY")
+    local file_count=$(find "$CORPUS_DIR" -type f \( -name "*.txt" -o -name "*.md" -o -name "*.html" \) 2>/dev/null | wc -l)
+
+    echo "📊 RAG System Statistics:"
+    echo "   Topics: $topic_count"
+    echo "   Files: $file_count"
+    echo "   Status: $(if [ "$RAG_ENABLED" = "true" ]; then echo "Enabled"; else echo "Disabled"; fi)"
+    echo "   Max Context: $RAG_MAX_CONTEXT_LENGTH characters"
+}
+
+# --- Integration Helper ---
+
+# Helper function for mechanisms to easily use RAG
+use_rag_if_available() {
+    local prompt="$1"
+    local mechanism="$2"
+
+    local rag_result
+    rag_result=$(query_rag_context "$prompt" "$mechanism")
+
+    if echo "$rag_result" | grep -q "^RAG_CONTEXT_AVAILABLE: " && ! echo "$rag_result" | grep -q "NONE$"; then
+        echo "RAG context found - integrating into prompt" >&2
+        integrate_rag_context "$prompt" "$rag_result" "$mechanism"
+        return 0
+    else
+        echo "No RAG context available - using original prompt" >&2
+        echo "$prompt"
+        return 1
+    fi
+}
+
+# Export functions for use by other scripts
+export -f query_rag_context should_use_rag get_relevant_context
+export -f extract_search_terms find_relevant_corpus search_corpus_term
+export -f integrate_rag_context check_corpus_health get_rag_stats
+export -f use_rag_if_available
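Callers interact with `query_rag_context` through the `RAG_CONTEXT_AVAILABLE:` sentinel; a minimal standalone sketch of the check `use_rag_if_available` performs (the sample value is illustrative):

```shell
#!/bin/bash
# Sketch of the sentinel check from use_rag_if_available: context is only
# integrated when the marker is present AND the payload is not "NONE".
rag_result="RAG_CONTEXT_AVAILABLE: NONE"

if echo "$rag_result" | grep -q "^RAG_CONTEXT_AVAILABLE: " && \
   ! echo "$rag_result" | grep -q "NONE$"; then
    decision="integrate"
else
    decision="original"
fi
echo "$decision"
```

Keeping the sentinel on stdout and all status chatter on stderr is what lets a mechanism capture the augmented prompt with plain command substitution.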
diff --git a/bash/talk-to-computer/rag_search.sh b/bash/talk-to-computer/rag_search.sh
new file mode 100755
index 0000000..dfcbc91
--- /dev/null
+++ b/bash/talk-to-computer/rag_search.sh
@@ -0,0 +1,187 @@
+#!/bin/bash
+
+# RAG Search Utility - Search the knowledge corpus
+# This script demonstrates how to search the corpus using efficient Unix tools
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+source "${SCRIPT_DIR}/rag_config.sh"
+
+# --- Utility Functions ---
+
+# Get corpus path for a topic (standalone version)
+get_corpus_path() {
+    local topic="$1"
+    if [ -f "$CORPUS_REGISTRY" ]; then
+        grep "^[^|]*${topic}|" "$CORPUS_REGISTRY" | head -1 | cut -d'|' -f2
+    fi
+}
+
+# Check if corpus exists for a topic
+corpus_exists() {
+    local topic="$1"
+    grep -q "^[^|]*${topic}|" "$CORPUS_REGISTRY" 2>/dev/null
+}
+
+# --- Search Functions ---
+
+# Search corpus for keywords
+search_corpus() {
+    local query="$1"
+    local topic="${2:-}"
+
+    echo "🔍 Searching corpus for: '$query'"
+    if [ -n "$topic" ]; then
+        echo "📂 Limited to topic: $topic"
+    fi
+    echo "----------------------------------------"
+
+    # Resolve the search root first so error handling stays outside the pipeline
+    local search_root
+    if [ -n "$topic" ]; then
+        search_root=$(get_corpus_path "$topic")
+        if [ -z "$search_root" ]; then
+            echo "❌ Topic not found: $topic"
+            return 1
+        fi
+    else
+        search_root="$CORPUS_DIR"
+    fi
+
+    # -n is required: the loop below parses grep's file:line:content output
+    grep -r -i -n "$query" "$search_root" --include="*.txt" --include="*.md" --include="*.html" \
+        | head -10 | while IFS=: read -r file line content; do
+        local filename=$(basename "$file")
+        local topic_name=$(basename "$(dirname "$file")")
+        echo "📄 $topic_name/$filename (line $line):"
+        echo "   $content"
+        echo ""
+    done
+}
+
+# Get context around search results
+get_context() {
+    local query="$1"
+    local topic="$2"
+    local context_lines="${3:-$SEARCH_CONTEXT_LINES}"
+
+    echo "📖 Getting context for: '$query'"
+    echo "----------------------------------------"
+
+    if [ -n "$topic" ]; then
+        local corpus_path=$(get_corpus_path "$topic")
+        if [ -n "$corpus_path" ]; then
+            grep -r -i -A "$context_lines" -B "$context_lines" "$query" "$corpus_path"
+        else
+            echo "❌ Topic not found: $topic"
+            return 1
+        fi
+    else
+        grep -r -i -A "$context_lines" -B "$context_lines" "$query" "$CORPUS_DIR"
+    fi
+}
+
+# Extract relevant sections from files
+extract_sections() {
+    local query="$1"
+    local topic="$2"
+
+    echo "📋 Extracting relevant sections for: '$query'"
+    echo "----------------------------------------"
+
+    # Find files containing the query
+    local files
+    if [ -n "$topic" ]; then
+        local corpus_path=$(get_corpus_path "$topic")
+        files=$(grep -r -l -i "$query" "$corpus_path" 2>/dev/null)
+    else
+        files=$(grep -r -l -i "$query" "$CORPUS_DIR" 2>/dev/null)
+    fi
+
+    if [ -z "$files" ]; then
+        echo "❌ No files found containing: $query"
+        return 1
+    fi
+
+    echo "$files" | while read -r file; do
+        local filename=$(basename "$file")
+        echo "📄 Processing: $filename"
+        echo "----------------------------------------"
+
+        # Extract relevant sections (headers and surrounding content)
+        awk -v query="$query" '
+        BEGIN { in_section = 0; header = "" }
+
+        # Remember the most recent markdown header seen outside a section
+        /^#/ && in_section == 0 { header = $0 }
+
+        # Start a section when a line matches the query (case insensitive),
+        # printing the preceding header so the match has context
+        tolower($0) ~ tolower(query) && in_section == 0 {
+            print "RELEVANT SECTION:"
+            if (header != "") print header
+            in_section = 1
+        }
+
+        # Print content until the next blank line ends the section
+        in_section == 1 {
+            print
+            if (length($0) == 0) {
+                in_section = 0
+                header = ""
+            }
+        }
+        ' "$file"
+
+        echo "----------------------------------------"
+    done
+}
+
+# --- Main Command Interface ---
+
+case "${1:-help}" in
+    "search")
+        if [ -n "$2" ]; then
+            search_corpus "$2" "$3"
+        else
+            echo "❌ Usage: $0 search <query> [topic]"
+        fi
+        ;;
+    "context")
+        if [ -n "$2" ]; then
+            get_context "$2" "$3" "$4"
+        else
+            echo "❌ Usage: $0 context <query> [topic] [lines]"
+        fi
+        ;;
+    "extract")
+        if [ -n "$2" ]; then
+            extract_sections "$2" "$3"
+        else
+            echo "❌ Usage: $0 extract <query> [topic]"
+        fi
+        ;;
+    "stats")
+        get_corpus_stats
+        ;;
+    "help"|*)
+        echo "🔍 RAG Search Utility"
+        echo "Search and extract information from the knowledge corpus"
+        echo ""
+        echo "Usage: $0 <command> [arguments]"
+        echo ""
+        echo "Commands:"
+        echo "  search <query> [topic]   Search for exact matches"
+        echo "  context <query> [topic]  Get context around matches"
+        echo "  extract <query> [topic]  Extract relevant sections"
+        echo "  stats                    Show corpus statistics"
+        echo "  help                     Show this help message"
+        echo ""
+        echo "Examples:"
+        echo "  $0 search 'quantum physics'"
+        echo "  $0 search 'lil programming' programming"
+        echo "  $0 context 'force' physics"
+        echo "  $0 extract 'variables' programming"
+        ;;
+esac
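The section-extraction idea in `extract_sections` can be exercised in isolation. Below is a minimal sketch against a throwaway corpus file — the sample content and query are invented for illustration, and the awk program is a simplified variant of the one above (it prints each matching line plus the remainder of its paragraph):

```shell
# Build a tiny Markdown "corpus" file and extract sections matching a query.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# Variables

Variables store values for later use.

# Functions

Functions group statements.
EOF

# Print a marker and the matching paragraph; a blank line ends the section.
result=$(awk -v query="variables" '
tolower($0) ~ tolower(query) && in_section == 0 {
    print "RELEVANT SECTION:"
    in_section = 1
}
in_section == 1 {
    print
    if (length($0) == 0) in_section = 0
}
' "$tmp")
rm -f "$tmp"

printf '%s\n' "$result"
```

Note that `tolower(query)` is used as a dynamic regex, matching the semantics of the case-insensitive `grep -i` calls elsewhere in the script; a query containing regex metacharacters is therefore interpreted as a pattern, not a literal string.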
diff --git a/bash/talk-to-computer/socratic b/bash/talk-to-computer/socratic
new file mode 100755
index 0000000..a685875
--- /dev/null
+++ b/bash/talk-to-computer/socratic
@@ -0,0 +1,229 @@
+#!/bin/bash
+
+# Socratic System
+# This script uses the Socratic method to refine responses through AI-generated questions and dialogue.
+#
+# APPLICATION LOGIC:
+# The Socratic process implements an iterative questioning system where AI models
+# engage in dialogue to explore, clarify, and refine responses. The system operates
+# through three distinct phases designed to deepen understanding and identify limitations:
+#
+# PHASE 1 - INITIAL RESPONSE GENERATION:
+#   - A response model generates the first answer to the user's prompt
+#   - The model provides a comprehensive initial response as the foundation
+#   - This creates the starting point for Socratic exploration
+#   - The response serves as the subject of subsequent questioning
+#
+# PHASE 2 - SOCRATIC QUESTIONING:
+#   - A question model analyzes the initial response and generates probing questions
+#   - Questions focus on clarifying assumptions, exploring implications, and considering alternatives
+#   - The question model identifies areas that need deeper examination
+#   - Questions are designed to reveal limitations, gaps, or unclear aspects
+#
+# PHASE 3 - RESPONSE REFINEMENT:
+#   - The original response model addresses the Socratic questions
+#   - The model may revise, expand, or clarify its initial response
+#   - This creates a dialogue that deepens the analysis
+#   - The process may reveal what cannot be determined or requires additional information
+#
+# SOCRATIC MODELING:
+# The system applies Socratic questioning principles to AI response refinement:
+#   - Separate models for questioning and responding may provide different perspectives
+#   - Probing questions help identify assumptions and limitations in the initial response
+#   - Iterative dialogue may reveal deeper insights or expose knowledge gaps
+#   - The process emphasizes intellectual honesty about what can and cannot be determined
+#   - Transparency through logging shows the evolution of understanding
+#   - The method may help catch overconfident claims or identify areas needing clarification
+#
+# The Socratic process continues for a configurable number of rounds,
+# with each iteration potentially revealing new insights or limitations.
+# The system emphasizes depth of analysis and intellectual honesty over definitive answers.
+
+# Initialize common functionality
+source "$(dirname "${BASH_SOURCE[0]}")/common.sh"
+init_thinking_mechanism "${BASH_SOURCE[0]}"
+
+# Use centralized model configuration
+RESPONSE_MODEL="$SOCRATIC_RESPONSE_MODEL"
+QUESTION_MODEL="$SOCRATIC_QUESTION_MODEL"
+
+# Validate and set models with standardized error handling
+if ! validate_and_set_model "RESPONSE_MODEL" "$RESPONSE_MODEL" "$FALLBACK_MODEL"; then
+    handle_model_error "$RESPONSE_MODEL" "$(basename "$0")"
+fi
+
+if ! validate_and_set_model "QUESTION_MODEL" "$QUESTION_MODEL" "$FALLBACK_MODEL"; then
+    handle_model_error "$QUESTION_MODEL" "$(basename "$0")"
+fi
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tSocratic"
+    echo -e "\tThis script uses the Socratic method to refine responses through AI-generated questions and dialogue."
+    echo -e "\n\tUsage: $0 [-f <file_path>] \"<your prompt>\" [number_of_questioning_rounds]"
+    echo -e "\n\tExample: $0 -f ./input.txt \"Please analyze this text\" 2"
+    echo -e "\n\tIf number_of_questioning_rounds is not provided, the program will default to $DEFAULT_ROUNDS rounds."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+while getopts "f:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ROUNDS=$DEFAULT_ROUNDS
+else
+    ROUNDS=$2
+fi
+
+# If file path is provided, append its contents to the prompt
+if [ -n "$FILE_PATH" ]; then
+    if [ ! -f "$FILE_PATH" ]; then
+        handle_file_error "$FILE_PATH" "find" "$(basename "$0")"
+    fi
+    if [ ! -r "$FILE_PATH" ]; then
+        handle_file_error "$FILE_PATH" "read" "$(basename "$0")"
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH" 2>/dev/null) || handle_file_error "$FILE_PATH" "read contents of" "$(basename "$0")"
+    PROMPT="${PROMPT}"$'\n'"[FILE CONTENTS]"$'\n'"${FILE_CONTENTS}"$'\n'"[END FILE]"
+fi
+
+# --- File Initialization ---
+# Create a temporary directory if it doesn't exist
+mkdir -p ~/tmp
+# Create a unique file for this session based on the timestamp
+SESSION_FILE=~/tmp/socratic_$(date +%Y%m%d_%H%M%S).txt
+
+echo "Socratic Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+echo "Processing Socratic dialogue with ${ROUNDS} questioning rounds..."
+
+# --- Initial Response Generation ---
+echo "Generating initial response..."
+echo "INITIAL RESPONSE GENERATION:" >> "${SESSION_FILE}"
+echo "============================" >> "${SESSION_FILE}"
+
+INITIAL_PROMPT="You are an expert assistant. Please provide a comprehensive response to the following prompt. Be thorough but also honest about any limitations in your knowledge or areas where you cannot provide definitive answers.
+
+PROMPT: ${PROMPT}"
+
+INITIAL_RESPONSE=$(ollama run "${RESPONSE_MODEL}" "${INITIAL_PROMPT}")
+INITIAL_RESPONSE=$(guard_output_quality "$INITIAL_RESPONSE" "$PROMPT" "$MECHANISM_NAME" "$RESPONSE_MODEL")
+
+echo "INITIAL RESPONSE (${RESPONSE_MODEL}):" >> "${SESSION_FILE}"
+echo "${INITIAL_RESPONSE}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Socratic Dialogue Rounds ---
+CURRENT_RESPONSE="${INITIAL_RESPONSE}"
+
+for round in $(seq 1 "${ROUNDS}"); do
+    echo "Starting Socratic round ${round} of ${ROUNDS}..."
+    echo "SOCRATIC ROUND ${round}:" >> "${SESSION_FILE}"
+    echo "=======================" >> "${SESSION_FILE}"
+    
+    # --- Step 1: Generate Socratic Questions ---
+    echo "Step 1: Generating Socratic questions..."
+    echo "STEP 1 - QUESTION GENERATION:" >> "${SESSION_FILE}"
+    
+    QUESTION_PROMPT="You are a Socratic questioner. Your task is to analyze the following response and generate 2-3 probing questions that will help clarify, refine, or explore the response more deeply.
+
+Focus on questions that:
+- Clarify assumptions or definitions
+- Explore implications or consequences
+- Consider alternative perspectives
+- Identify areas where the response may be incomplete or uncertain
+- Flag what cannot be determined with the given information
+
+RESPONSE TO QUESTION: ${CURRENT_RESPONSE}
+
+Generate your questions in a clear, numbered format. Be specific and avoid yes/no questions."
+
+    QUESTIONS=$(ollama run "${QUESTION_MODEL}" "${QUESTION_PROMPT}")
+    QUESTIONS=$(guard_output_quality "$QUESTIONS" "$PROMPT" "$MECHANISM_NAME" "$QUESTION_MODEL")
+    
+    echo "QUESTIONS (${QUESTION_MODEL}):" >> "${SESSION_FILE}"
+    echo "${QUESTIONS}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+    
+    # --- Step 2: Generate Refined Response ---
+    echo "Step 2: Generating refined response to questions..."
+    echo "STEP 2 - RESPONSE REFINEMENT:" >> "${SESSION_FILE}"
+    
+    REFINE_PROMPT="You are an expert assistant. Your previous response has been analyzed and the following Socratic questions have been raised. Please provide a refined, expanded, or clarified response that addresses these questions.
+
+ORIGINAL PROMPT: ${PROMPT}
+YOUR PREVIOUS RESPONSE: ${CURRENT_RESPONSE}
+SOCRATIC QUESTIONS: ${QUESTIONS}
+
+Please provide a comprehensive response that:
+- Addresses each question raised
+- Clarifies any assumptions or definitions
+- Explores implications and alternatives
+- Honestly acknowledges what cannot be determined
+- Refines or expands your original response based on the questioning"
+
+    REFINED_RESPONSE=$(ollama run "${RESPONSE_MODEL}" "${REFINE_PROMPT}")
+    REFINED_RESPONSE=$(guard_output_quality "$REFINED_RESPONSE" "$PROMPT" "$MECHANISM_NAME" "$RESPONSE_MODEL")
+    
+    echo "REFINED RESPONSE (${RESPONSE_MODEL}):" >> "${SESSION_FILE}"
+    echo "${REFINED_RESPONSE}" >> "${SESSION_FILE}"
+    echo "" >> "${SESSION_FILE}"
+    
+    # Update the current response for the next round
+    CURRENT_RESPONSE="${REFINED_RESPONSE}"
+    
+    echo "Socratic round ${round} complete."
+    echo "" >> "${SESSION_FILE}"
+done
+
+# --- Final Summary Generation ---
+echo "Generating final summary..."
+echo "FINAL SUMMARY GENERATION:" >> "${SESSION_FILE}"
+echo "========================" >> "${SESSION_FILE}"
+
+SUMMARY_PROMPT="You are an expert analyst. Based on the Socratic dialogue below, please provide a concise summary of the key insights, conclusions, and limitations that emerged from the questioning process.
+
+ORIGINAL PROMPT: ${PROMPT}
+FINAL REFINED RESPONSE: ${CURRENT_RESPONSE}
+
+Please provide a summary that:
+- Highlights the most important insights discovered
+- Identifies key conclusions that can be drawn
+- Notes any limitations or areas that cannot be determined
+- Captures the evolution of understanding through the dialogue
+- Is clear, concise, and well-organized"
+
+FINAL_SUMMARY=$(ollama run "${RESPONSE_MODEL}" "${SUMMARY_PROMPT}")
+FINAL_SUMMARY=$(guard_output_quality "$FINAL_SUMMARY" "$PROMPT" "$MECHANISM_NAME" "$RESPONSE_MODEL")
+
+echo "FINAL SUMMARY (${RESPONSE_MODEL}):" >> "${SESSION_FILE}"
+echo "${FINAL_SUMMARY}" >> "${SESSION_FILE}"
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Socratic process complete."
+echo "Final summary:"
+echo "---------------------------------"
+
+echo "${FINAL_SUMMARY}"
+echo ""
+echo "Full Socratic dialogue log: ${SESSION_FILE}" 
\ No newline at end of file
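The round loop in `socratic` reduces to repeatedly feeding the current response back through the response model. A minimal sketch of that control flow, with a stub in place of `ollama run` (the `fake_model` function is invented for illustration):

```shell
# Stub standing in for an `ollama run MODEL PROMPT` call.
fake_model() { printf '%s [refined]' "$1"; }

response="initial answer"
rounds=3
for round in $(seq 1 "$rounds"); do
    # Each round: generate questions about the response, then refine it.
    questions=$(fake_model "questions for round $round")
    response=$(fake_model "$response")
done

printf '%s\n' "$response"
```

The key property is that `response` threads through the loop: each round's refinement becomes the next round's input, exactly as `CURRENT_RESPONSE` does above.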
diff --git a/bash/talk-to-computer/synthesis b/bash/talk-to-computer/synthesis
new file mode 100755
index 0000000..b91c9b5
--- /dev/null
+++ b/bash/talk-to-computer/synthesis
@@ -0,0 +1,248 @@
+#!/bin/bash
+
+# Get the directory where this script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Synthesis System
+# This script combines outputs from multiple thinking mechanisms into a coherent final response.
+#
+# APPLICATION LOGIC:
+# The synthesis process implements a multi-mechanism integration system that combines
+# outputs from different thinking strategies into a unified, coherent response. The system
+# operates through three distinct phases designed to maximize comprehensiveness and clarity:
+#
+# PHASE 1 - MECHANISM EXECUTION:
+#   - Executes multiple thinking mechanisms on the same prompt
+#   - Collects diverse perspectives and approaches
+#   - Ensures comprehensive coverage of the problem space
+#   - Creates a foundation for synthesis
+#
+# PHASE 2 - CONFLICT RESOLUTION:
+#   - Identifies contradictions and conflicts between mechanism outputs
+#   - Resolves disagreements through logical analysis
+#   - Prioritizes information based on confidence and relevance
+#   - Maintains intellectual honesty about uncertainties
+#
+# PHASE 3 - SYNTHESIS GENERATION:
+#   - Combines the best elements from each mechanism
+#   - Creates a coherent narrative that addresses all aspects
+#   - Provides a unified response that leverages multiple perspectives
+#   - Ensures the final output is greater than the sum of its parts
+#
+# SYNTHESIS MODELING:
+# The system applies integrative thinking principles to AI response generation:
+#   - Multiple mechanisms provide diverse perspectives on the same problem
+#   - Conflict resolution ensures logical consistency in the final output
+#   - Synthesis leverages the strengths of each individual mechanism
+#   - The process may reveal insights that individual mechanisms miss
+#   - Transparency shows how different perspectives were integrated
+#   - The method may provide more comprehensive and balanced responses
+#
+# The synthesis process emphasizes comprehensiveness and coherence,
+# ensuring users get the benefits of multiple thinking approaches in a unified response.
+
+# Source the logging system using absolute path
+source "${SCRIPT_DIR}/logging.sh"
+
+# Source the quality guard for output quality protection
+source "${SCRIPT_DIR}/quality_guard.sh"
+
+# Get mechanism name automatically
+MECHANISM_NAME=$(get_mechanism_name "$0")
+
+# --- Model Configuration ---
+SYNTHESIS_MODEL="llama3:8b-instruct-q4_K_M"
+
+# --- Defaults ---
+DEFAULT_MECHANISMS=("consensus" "critique" "socratic")
+
+# --- Argument Validation ---
+if [ "$#" -lt 1 ]; then
+    echo -e "\n\tSynthesis"
+    echo -e "\tThis script combines outputs from multiple thinking mechanisms into a coherent final response."
+    echo -e "\n\tUsage: $0 [-f <file_path>] [-m <mechanism1,mechanism2,...>] \"<your prompt>\" [number_of_rounds]"
+    echo -e "\n\tExample: $0 -f ./input.txt -m consensus,critique \"Please analyze this text\" 2"
+    echo -e "\n\tIf number_of_rounds is not provided, the program will default to 2 rounds."
+    echo -e "\n\t-f <file_path> (optional): Append the contents of the file to the prompt."
+    echo -e "\n\t-m <mechanisms> (optional): Comma-separated list of mechanisms to use (default: consensus,critique,socratic)."
+    echo -e "\n"
+    exit 1
+fi
+
+# --- Argument Parsing ---
+FILE_PATH=""
+MECHANISMS_STR=""
+while getopts "f:m:" opt; do
+  case $opt in
+    f)
+      FILE_PATH="$OPTARG"
+      ;;
+    m)
+      MECHANISMS_STR="$OPTARG"
+      ;;
+    *)
+      echo "Invalid option: -$OPTARG" >&2
+      exit 1
+      ;;
+  esac
+done
+shift $((OPTIND -1))
+
+PROMPT="$1"
+if [ -z "$2" ]; then
+    ROUNDS=2
+else
+    ROUNDS=$2
+fi
+
+# Parse mechanisms
+if [ -n "$MECHANISMS_STR" ]; then
+    IFS=',' read -ra MECHANISMS <<< "$MECHANISMS_STR"
+else
+    MECHANISMS=("${DEFAULT_MECHANISMS[@]}")
+fi
+
+# If file path is provided, append its contents to the prompt
+if [ -n "$FILE_PATH" ]; then
+    if [ ! -f "$FILE_PATH" ]; then
+        echo "File not found: $FILE_PATH" >&2
+        exit 1
+    fi
+    FILE_CONTENTS=$(cat "$FILE_PATH")
+    PROMPT="${PROMPT}"$'\n'"[FILE CONTENTS]"$'\n'"${FILE_CONTENTS}"$'\n'"[END FILE]"
+fi
+
+# --- File Initialization ---
+mkdir -p ~/tmp
+SESSION_FILE=~/tmp/synthesis_$(date +%Y%m%d_%H%M%S).txt
+
+# Initialize timing
+SESSION_ID=$(generate_session_id)
+start_timer "$SESSION_ID" "synthesis"
+
+echo "Synthesis Session Log: ${SESSION_FILE}"
+echo "---------------------------------"
+
+# Store the initial user prompt in the session file
+echo "USER PROMPT: ${PROMPT}" >> "${SESSION_FILE}"
+echo "MECHANISMS: ${MECHANISMS[*]}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 1: Mechanism Execution ---
+echo "Phase 1: Executing thinking mechanisms..."
+echo "PHASE 1 - MECHANISM EXECUTION:" >> "${SESSION_FILE}"
+
+declare -a mechanism_outputs
+declare -a mechanism_names
+
+for i in "${!MECHANISMS[@]}"; do
+    mechanism="${MECHANISMS[$i]}"
+    echo "  Executing ${mechanism} mechanism..."
+    
+    # Execute the mechanism using absolute path
+    if [ -f "${SCRIPT_DIR}/${mechanism}" ]; then
+        output=$("${SCRIPT_DIR}/${mechanism}" "${PROMPT}" "${ROUNDS}" 2>&1)
+        mechanism_outputs[$i]="${output}"
+        mechanism_names[$i]="${mechanism}"
+        
+        echo "MECHANISM $((i + 1)) (${mechanism}):" >> "${SESSION_FILE}"
+        echo "${output}" >> "${SESSION_FILE}"
+        echo "" >> "${SESSION_FILE}"
+    else
+        echo "  WARNING: Mechanism ${mechanism} not found, skipping..." >&2
+    fi
+done
+
+# --- Phase 2: Conflict Resolution ---
+echo "Phase 2: Analyzing and resolving conflicts..."
+echo "PHASE 2 - CONFLICT RESOLUTION:" >> "${SESSION_FILE}"
+
+# Create conflict analysis prompt
+CONFLICT_PROMPT="You are a conflict resolution specialist. Analyze the following outputs from different thinking mechanisms and identify any contradictions, conflicts, or areas of disagreement.
+
+ORIGINAL PROMPT: ${PROMPT}
+
+MECHANISM OUTPUTS:"
+
+for i in "${!MECHANISMS[@]}"; do
+    if [ -n "${mechanism_outputs[$i]}" ]; then
+        CONFLICT_PROMPT="${CONFLICT_PROMPT}
+
+${mechanism_names[$i]} OUTPUT:
+${mechanism_outputs[$i]}"
+    fi
+done
+
+CONFLICT_PROMPT="${CONFLICT_PROMPT}
+
+Please identify:
+1. Any direct contradictions between the outputs
+2. Areas where the mechanisms disagree
+3. Information that appears to be conflicting
+4. How these conflicts might be resolved
+
+Provide a clear analysis of conflicts and potential resolutions."
+
+conflict_analysis=$(ollama run "${SYNTHESIS_MODEL}" "${CONFLICT_PROMPT}")
+conflict_analysis=$(guard_output_quality "$conflict_analysis" "$PROMPT" "$MECHANISM_NAME" "$SYNTHESIS_MODEL")
+
+echo "CONFLICT ANALYSIS:" >> "${SESSION_FILE}"
+echo "${conflict_analysis}" >> "${SESSION_FILE}"
+echo "" >> "${SESSION_FILE}"
+
+# --- Phase 3: Synthesis Generation ---
+echo "Phase 3: Generating unified synthesis..."
+echo "PHASE 3 - SYNTHESIS GENERATION:" >> "${SESSION_FILE}"
+
+# Create synthesis prompt
+SYNTHESIS_PROMPT="You are a synthesis specialist. Your task is to combine the outputs from multiple thinking mechanisms into a coherent, unified response that leverages the strengths of each approach.
+
+ORIGINAL PROMPT: ${PROMPT}
+
+MECHANISM OUTPUTS:"
+
+for i in "${!MECHANISMS[@]}"; do
+    if [ -n "${mechanism_outputs[$i]}" ]; then
+        SYNTHESIS_PROMPT="${SYNTHESIS_PROMPT}
+
+${mechanism_names[$i]} OUTPUT:
+${mechanism_outputs[$i]}"
+    fi
+done
+
+SYNTHESIS_PROMPT="${SYNTHESIS_PROMPT}
+
+CONFLICT ANALYSIS:
+${conflict_analysis}
+
+Please create a unified synthesis that:
+1. Combines the best insights from each mechanism
+2. Resolves any identified conflicts logically
+3. Provides a comprehensive response that addresses all aspects
+4. Maintains intellectual honesty about uncertainties
+5. Creates a coherent narrative that flows naturally
+6. Leverages the unique strengths of each thinking approach
+
+Your synthesis should be greater than the sum of its parts - it should provide insights that individual mechanisms might miss."
+
+final_synthesis=$(ollama run "${SYNTHESIS_MODEL}" "${SYNTHESIS_PROMPT}")
+final_synthesis=$(guard_output_quality "$final_synthesis" "$PROMPT" "$MECHANISM_NAME" "$SYNTHESIS_MODEL")
+
+echo "FINAL SYNTHESIS:" >> "${SESSION_FILE}"
+echo "${final_synthesis}" >> "${SESSION_FILE}"
+
+# End timing
+duration=$(end_timer "$SESSION_ID" "synthesis")
+
+# --- Final Output ---
+echo "---------------------------------"
+echo "Synthesis process complete."
+echo "Final unified response:"
+echo "---------------------------------"
+
+echo "${final_synthesis}"
+echo ""
+echo "Mechanisms used: ${MECHANISMS[*]}"
+echo "Execution time: ${duration} seconds"
+echo ""
+echo "Full synthesis log: ${SESSION_FILE}" 
\ No newline at end of file
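The prompt-building loops in `synthesis` share one pattern: accumulate each mechanism's labelled output into a single string, skipping mechanisms that produced nothing. A self-contained sketch (the array contents are invented for illustration):

```shell
# Parallel arrays, as in the script: names and their collected outputs.
mechanism_names=(consensus critique socratic)
mechanism_outputs=("output A" "output B" "")

prompt="MECHANISM OUTPUTS:"
for i in "${!mechanism_names[@]}"; do
    # Skip mechanisms that failed or were not found (empty output).
    if [ -n "${mechanism_outputs[$i]}" ]; then
        prompt="${prompt}

${mechanism_names[$i]} OUTPUT:
${mechanism_outputs[$i]}"
    fi
done

printf '%s\n' "$prompt"
```

Embedding literal newlines inside the double-quoted assignment, as the script does, keeps the labels on their own lines without needing `echo -e` or `printf` formatting.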
diff --git a/bash/talk-to-computer/test_framework.sh b/bash/talk-to-computer/test_framework.sh
new file mode 100755
index 0000000..c74ad56
--- /dev/null
+++ b/bash/talk-to-computer/test_framework.sh
@@ -0,0 +1,434 @@
+#!/bin/bash
+
+# Comprehensive Test Framework for AI Thinking Mechanisms
+# This script provides automated testing capabilities for all system components.
+
+# Source common functionality
+source "$(dirname "${BASH_SOURCE[0]}")/common.sh"
+source "$(dirname "${BASH_SOURCE[0]}")/config.sh"
+
+# --- Test Configuration ---
+
+# Test directories
+TEST_DIR="${LOG_DIR}/tests"
+RESULTS_DIR="${TEST_DIR}/results"
+COVERAGE_DIR="${TEST_DIR}/coverage"
+
+# Test counters
+TESTS_PASSED=0
+TESTS_FAILED=0
+TESTS_SKIPPED=0
+
+# --- Test Utilities ---
+
+# Initialize test framework
+init_test_framework() {
+    mkdir -p "$RESULTS_DIR" "$COVERAGE_DIR"
+    echo "🧪 AI Thinking Mechanisms Test Framework"
+    echo "========================================"
+    echo
+}
+
+# Test result functions
+test_pass() {
+    local test_name="$1"
+    echo "✅ PASS: $test_name"
+    TESTS_PASSED=$((TESTS_PASSED + 1))
+}
+
+test_fail() {
+    local test_name="$1"
+    local reason="$2"
+    echo "❌ FAIL: $test_name - $reason"
+    TESTS_FAILED=$((TESTS_FAILED + 1))
+}
+
+test_skip() {
+    local test_name="$1"
+    local reason="$2"
+    echo "⏭️  SKIP: $test_name - $reason"
+    TESTS_SKIPPED=$((TESTS_SKIPPED + 1))
+}
+
+# Assert functions
+assert_equals() {
+    local expected="$1"
+    local actual="$2"
+    local test_name="$3"
+
+    if [ "$expected" = "$actual" ]; then
+        test_pass "$test_name"
+    else
+        test_fail "$test_name" "Expected '$expected', got '$actual'"
+    fi
+}
+
+assert_not_empty() {
+    local value="$1"
+    local test_name="$2"
+
+    if [ -n "$value" ]; then
+        test_pass "$test_name"
+    else
+        test_fail "$test_name" "Value is empty"
+    fi
+}
+
+assert_file_exists() {
+    local file_path="$1"
+    local test_name="$2"
+
+    if [ -f "$file_path" ]; then
+        test_pass "$test_name"
+    else
+        test_fail "$test_name" "File does not exist: $file_path"
+    fi
+}
+
+# --- Component Tests ---
+
+test_common_functions() {
+    echo "Testing Common Functions..."
+
+    # Test script directory detection
+    local script_dir
+    script_dir=$(get_script_dir)
+    assert_not_empty "$script_dir" "get_script_dir"
+
+    # Test model validation (if ollama is available)
+    if command_exists ollama; then
+        if validate_model "gemma3n:e2b" "gemma3n:e2b" >/dev/null 2>&1; then
+            test_pass "validate_model_success"
+        else
+            test_skip "validate_model_success" "Model not available"
+        fi
+    else
+        test_skip "validate_model_success" "Ollama not available"
+    fi
+}
+
+test_config_loading() {
+    echo "Testing Configuration Loading..."
+
+    # Test that config variables are loaded
+    if [ -n "$DEFAULT_MODEL" ]; then
+        test_pass "config_default_model"
+    else
+        test_fail "config_default_model" "DEFAULT_MODEL not set"
+    fi
+
+    if [ -n "$FALLBACK_MODEL" ]; then
+        test_pass "config_fallback_model"
+    else
+        test_fail "config_fallback_model" "FALLBACK_MODEL not set"
+    fi
+
+    # Test model arrays
+    if [ ${#CONSENSUS_MODELS[@]} -gt 0 ]; then
+        test_pass "config_consensus_models"
+    else
+        test_fail "config_consensus_models" "CONSENSUS_MODELS array is empty"
+    fi
+}
+
+test_quality_guard() {
+    echo "Testing Quality Guard..."
+
+    source "./quality_guard.sh"
+
+    # Test quality assessment
+    local test_response="This is a comprehensive answer that should pass quality checks."
+    local quality_score
+    quality_score=$(assess_quality "$test_response" "test prompt" "socratic")
+    assert_not_empty "$quality_score" "assess_quality"
+
+    # Test degradation detection
+    local degradation_score
+    degradation_score=$(detect_degradation_patterns "$test_response")
+    assert_not_empty "$degradation_score" "detect_degradation_patterns"
+
+    # Test degraded response detection
+    local lorem_response="Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt"
+    local lorem_degradation
+    lorem_degradation=$(detect_degradation_patterns "$lorem_response")
+
+    if (( $(echo "$lorem_degradation > 0" | bc -l 2>/dev/null || echo "0") )); then
+        test_pass "lorem_ipsum_detection"
+    else
+        test_fail "lorem_ipsum_detection" "Failed to detect lorem ipsum pattern"
+    fi
+}
+
+test_logging_system() {
+    echo "Testing Logging System..."
+
+    source "./logging.sh"
+
+    # Test error logging
+    log_error "Test error message"
+    if [ -f "$ERROR_LOG" ]; then
+        test_pass "error_logging"
+    else
+        test_fail "error_logging" "Error log file not created"
+    fi
+
+    # Test validation functions
+    local temp_file
+    temp_file=$(create_managed_temp_file "test" "tmp")
+    echo "test content" > "$temp_file"
+
+    if validate_file_path "$temp_file"; then
+        test_pass "validate_file_path"
+    else
+        test_fail "validate_file_path" "Failed to validate existing file"
+    fi
+
+    # Test invalid file
+    if ! validate_file_path "/nonexistent/file.txt" 2>/dev/null; then
+        test_pass "validate_invalid_file"
+    else
+        test_fail "validate_invalid_file" "Should have failed for nonexistent file"
+    fi
+}
+
+test_resource_management() {
+    echo "Testing Resource Management..."
+
+    source "./common.sh"
+
+    # Test temporary directory creation
+    local temp_dir
+    temp_dir=$(create_managed_temp_dir "test")
+    if [ -d "$temp_dir" ]; then
+        test_pass "create_temp_dir"
+    else
+        test_fail "create_temp_dir" "Failed to create temp directory"
+    fi
+
+    # Test cleanup registration
+    register_cleanup_resource "$temp_dir"
+    if [ ${#CLEANUP_RESOURCES[@]} -gt 0 ]; then
+        test_pass "register_cleanup_resource"
+    else
+        test_fail "register_cleanup_resource" "Resource not registered for cleanup"
+    fi
+}
+
+# --- Integration Tests ---
+
+test_mechanism_integration() {
+    echo "Testing Mechanism Integration..."
+
+    # Test if mechanisms are executable
+    local mechanisms=("socratic" "exploration" "consensus" "critique" "synthesis" "peer-review" "puzzle")
+
+    for mechanism in "${mechanisms[@]}"; do
+        if [ -x "./$mechanism" ]; then
+            test_pass "mechanism_executable_$mechanism"
+        else
+            test_fail "mechanism_executable_$mechanism" "Mechanism not executable"
+        fi
+    done
+}
+
+test_classifier_integration() {
+    echo "Testing Classifier Integration..."
+
+    if [ -x "./classifier.sh" ]; then
+        test_pass "classifier_executable"
+
+        # Test basic classification (if possible without models)
+        local test_result
+        test_result=$(source "./classifier.sh" && analyze_intent_patterns "What are the different approaches to solving this problem?" 2>/dev/null)
+        if [ -n "$test_result" ]; then
+            test_pass "classifier_basic_functionality"
+        else
+            test_skip "classifier_basic_functionality" "Cannot test without models"
+        fi
+    else
+        test_fail "classifier_executable" "Classifier script not executable"
+    fi
+}
+
+# --- Performance Tests ---
+
+test_performance_metrics() {
+    echo "Testing Performance Metrics..."
+
+    source "./logging.sh"
+
+    # Test metrics functions exist
+    if command -v log_session_start >/dev/null 2>&1; then
+        test_pass "performance_functions_available"
+    else
+        test_fail "performance_functions_available" "Performance logging functions not available"
+    fi
+
+    # Test metrics file creation
+    if [ -f "$METRICS_FILE" ] || touch "$METRICS_FILE" 2>/dev/null; then
+        test_pass "metrics_file_accessible"
+    else
+        test_fail "metrics_file_accessible" "Cannot access metrics file"
+    fi
+}
+
+# --- Main Test Runner ---
+
+run_all_tests() {
+    init_test_framework
+
+    echo "Running Test Suite..."
+    echo "====================="
+    echo
+
+    # Unit Tests
+    test_common_functions
+    echo
+
+    test_config_loading
+    echo
+
+    test_quality_guard
+    echo
+
+    test_logging_system
+    echo
+
+    test_resource_management
+    echo
+
+    # Integration Tests
+    test_mechanism_integration
+    echo
+
+    test_classifier_integration
+    echo
+
+    # Performance Tests
+    test_performance_metrics
+    echo
+
+    # Test Summary
+    echo "Test Summary"
+    echo "============"
+    echo "✅ Passed: $TESTS_PASSED"
+    echo "❌ Failed: $TESTS_FAILED"
+    echo "⏭️  Skipped: $TESTS_SKIPPED"
+    echo
+
+    local total_tests=$((TESTS_PASSED + TESTS_FAILED))
+    if [ $total_tests -gt 0 ]; then
+        local pass_rate=$((TESTS_PASSED * 100 / total_tests))
+        echo "Pass Rate: $pass_rate%"
+
+        if [ $TESTS_FAILED -eq 0 ]; then
+            echo "🎉 All tests completed successfully!"
+            return 0
+        else
+            echo "⚠️  Some tests failed. Please review the results above."
+            return 1
+        fi
+    else
+        echo "No tests were run."
+        return 1
+    fi
+}
+
+# --- CLI Interface ---
+
+show_help() {
+    echo "AI Thinking Mechanisms Test Framework"
+    echo "Usage: $0 [OPTIONS]"
+    echo
+    echo "Options:"
+    echo "  -a, --all         Run all tests (default)"
+    echo "  -u, --unit        Run only unit tests"
+    echo "  -i, --integration Run only integration tests"
+    echo "  -p, --performance Run only performance tests"
+    echo "  -v, --verbose     Enable verbose output"
+    echo "  -h, --help        Show this help message"
+    echo
+    echo "Examples:"
+    echo "  $0 -a              # Run all tests"
+    echo "  $0 -u -v          # Run unit tests with verbose output"
+    echo "  $0 -p             # Run only performance tests"
+}
+
+# Parse command line arguments
+VERBOSE=false
+TEST_TYPE="all"
+
+while [[ $# -gt 0 ]]; do
+    case $1 in
+        -a|--all)
+            TEST_TYPE="all"
+            shift
+            ;;
+        -u|--unit)
+            TEST_TYPE="unit"
+            shift
+            ;;
+        -i|--integration)
+            TEST_TYPE="integration"
+            shift
+            ;;
+        -p|--performance)
+            TEST_TYPE="performance"
+            shift
+            ;;
+        -v|--verbose)
+            VERBOSE=true
+            shift
+            ;;
+        -h|--help)
+            show_help
+            exit 0
+            ;;
+        *)
+            echo "Unknown option: $1"
+            show_help
+            exit 1
+            ;;
+    esac
+done
+
+# Set verbose output
+if [ "$VERBOSE" = true ]; then
+    set -x
+fi
+
+# Run tests based on type
+case $TEST_TYPE in
+    "all")
+        run_all_tests
+        ;;
+    "unit")
+        init_test_framework
+        test_common_functions
+        echo
+        test_config_loading
+        echo
+        test_quality_guard
+        echo
+        test_logging_system
+        echo
+        test_resource_management
+        ;;
+    "integration")
+        init_test_framework
+        test_mechanism_integration
+        echo
+        test_classifier_integration
+        ;;
+    "performance")
+        init_test_framework
+        test_performance_metrics
+        ;;
+    *)
+        echo "Invalid test type: $TEST_TYPE" >&2
+        show_help
+        exit 1
+        ;;
+esac
diff --git a/bash/talk-to-computer/test_model_selector.sh b/bash/talk-to-computer/test_model_selector.sh
new file mode 100755
index 0000000..f727d42
--- /dev/null
+++ b/bash/talk-to-computer/test_model_selector.sh
@@ -0,0 +1,50 @@
+#!/bin/bash
+
+# Test script for the Dynamic Model Selector
+
+source "$(dirname "$0")/model_selector.sh"
+
+echo "=== Dynamic Model Selector Test ==="
+echo
+
+# Test 1: Show available models
+echo "Test 1: Available Models"
+echo "Available models:"
+get_available_models | nl
+echo
+
+# Test 2: Task type classification
+echo "Test 2: Task Type Classification"
+echo "Coding task: $(classify_task_type "How can I implement a binary search algorithm?" "puzzle")"
+echo "Reasoning task: $(classify_task_type "Why do you think this approach might fail?" "socratic")"
+echo "Creative task: $(classify_task_type "Write a story about a robot" "exploration")"
+echo
+
+# Test 3: Model selection for coding task
+echo "Test 3: Model Selection for Coding Task"
+selected_model=$(select_model_for_task "How can I implement a sorting algorithm?" "puzzle" "")
+echo "Selected model: $selected_model"
+if [ -n "$selected_model" ]; then
+    echo "Model info:"
+    get_model_info "$selected_model"
+fi
+echo
+
+# Test 4: Model selection for reasoning task
+echo "Test 4: Model Selection for Reasoning Task"
+selected_model=$(select_model_for_task "What are the implications of this decision?" "socratic" "")
+echo "Selected model: $selected_model"
+if [ -n "$selected_model" ]; then
+    echo "Model info:"
+    get_model_info "$selected_model"
+fi
+echo
+
+# Test 5: Model selection with preferred models
+echo "Test 5: Model Selection with Preferred Models"
+preferred="llama3:8b-instruct-q4_K_M phi3:3.8b-mini-4k-instruct-q4_K_M"
+selected_model=$(select_model_for_task "How can we improve this code?" "puzzle" "$preferred")
+echo "Selected model: $selected_model"
+echo
+
+echo "=== Test Complete ==="
diff --git a/bash/talk-to-computer/test_quality_guard.sh b/bash/talk-to-computer/test_quality_guard.sh
new file mode 100755
index 0000000..420211e
--- /dev/null
+++ b/bash/talk-to-computer/test_quality_guard.sh
@@ -0,0 +1,70 @@
+#!/bin/bash
+
+# Test script for the Quality Guard system
+source "$(dirname "$0")/quality_guard.sh"
+
+echo "=== Quality Guard System Test ==="
+echo
+
+# Test 1: Good quality response
+echo "Test 1: Good quality response"
+good_response="This is a comprehensive analysis of the problem. The algorithm has O(n) time complexity and requires careful consideration of edge cases. We should implement a robust solution that handles all scenarios effectively."
+quality=$(assess_quality "$good_response" "test prompt" "puzzle")
+degradation=$(detect_degradation_patterns "$good_response")
+echo "Quality Score: $quality"
+echo "Degradation Score: $degradation"
+echo
+
+# Test 2: Degraded response (lorem ipsum)
+echo "Test 2: Degraded response (lorem ipsum)"
+degraded_response="Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt ut labore et dolore magna aliqua ut enim ad minim veniam quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat"
+quality=$(assess_quality "$degraded_response" "test prompt" "puzzle")
+degradation=$(detect_degradation_patterns "$degraded_response")
+echo "Quality Score: $quality"
+echo "Degradation Score: $degradation"
+echo
+
+# Test 3: Repetitive response
+echo "Test 3: Repetitive response"
+repetitive_response="The solution is good. The solution is good. The solution is good. The solution is good. The solution is good. The solution is good. The solution is good. The solution is good. The solution is good. The solution is good."
+quality=$(assess_quality "$repetitive_response" "test prompt" "puzzle")
+degradation=$(detect_degradation_patterns "$repetitive_response")
+echo "Quality Score: $quality"
+echo "Degradation Score: $degradation"
+echo
+
+# Test 4: Very short response
+echo "Test 4: Very short response"
+short_response="Good."
+quality=$(assess_quality "$short_response" "test prompt" "puzzle")
+degradation=$(detect_degradation_patterns "$short_response")
+echo "Quality Score: $quality"
+echo "Degradation Score: $degradation"
+echo
+
+# Test 5: Gibberish response
+echo "Test 5: Gibberish response"
+gibberish_response="aaaaa bbbbb ccccc ddddd eeeee fffff ggggg hhhhh iiiii jjjjj kkkkk lllll mmmmm nnnnn ooooo ppppp"
+quality=$(assess_quality "$gibberish_response" "test prompt" "puzzle")
+degradation=$(detect_degradation_patterns "$gibberish_response")
+echo "Quality Score: $quality"
+echo "Degradation Score: $degradation"
+echo
+
+# Test 6: Mechanism-specific relevance
+echo "Test 6: Mechanism-specific relevance"
+echo "Puzzle mechanism relevance:"
+puzzle_response="This algorithm implementation shows good structure and follows best practices."
+puzzle_relevance=$(assess_relevance "$puzzle_response" "test prompt" "puzzle")
+echo "Puzzle relevance score: $puzzle_relevance"
+
+echo "Socratic mechanism relevance:"
+socratic_response="This analysis examines the underlying assumptions and questions the fundamental approach."
+socratic_relevance=$(assess_relevance "$socratic_response" "test prompt" "socratic")
+echo "Socratic relevance score: $socratic_relevance"
+echo
+
+echo "=== Quality Guard Test Complete ==="
+echo
+echo "To test the full correction system, run:"
+echo "echo 'lorem ipsum test' | ./quality_guard.sh"