diff --git a/bash/talk-to-computer/README.md b/bash/talk-to-computer/README.md
new file mode 100644
index 0000000..493a54c
--- /dev/null
+++ b/bash/talk-to-computer/README.md
@@ -0,0 +1,876 @@
+# AI Thinking Mechanisms
+
+A Bash-based system that routes user prompts through different AI thinking mechanisms. It uses pattern matching and basic analysis to select from several approaches: direct response, Socratic questioning, exploration, critique, consensus, peer review, synthesis, and puzzle solving.
+
+## Architecture Overview
+
+The system processes prompts through several stages:
+
+```
+User Prompt → Computer (Dispatcher) → Classifier → RAG System → Mechanism → LLM Models → Quality Guard → Response
+     ↓              ↓                      ↓              ↓              ↓              ↓              ↓
+  Validation    Pattern Analysis      Selection      Corpus Search  Processing     Model Calls    Quality Check
+     ↓              ↓                      ↓              ↓              ↓              ↓              ↓
+  Sanitization  Keyword Matching      Routing        Context Augment Execution      Fallbacks      Error Handling
+```
+
+### Core Components
+
+- **`computer`** - Main dispatcher script with manual mechanism selection
+- **`classifier.sh`** - Advanced prompt classification with Lil-specific routing
+- **`logging.sh`** - Logging and validation utilities
+- **`quality_guard.sh`** - System-wide response quality monitoring
+- **`corpus/`** - RAG knowledge corpus directory structure
+- **`corpus_manager.sh`** - Corpus management and auto-discovery
+- **`rag_search.sh`** - Efficient corpus searching with Unix tools
+- **Thinking Mechanisms** - Specialized AI interaction patterns
+- **Dynamic Model Selection** - Intelligent model routing based on task type
+
+## Getting Started
+
+### Prerequisites
+
+- **Bash 4.0+** (for advanced features)
+- **Ollama** installed and running
+- **jq** (optional, for enhanced JSON processing)
+- **bc** (optional, for precise timing calculations)
+
+### Installation
+
+1. Clone or download the scripts to your desired directory
+2. Ensure all scripts have execute permissions:
+   ```bash
+   chmod +x computer exploration consensus socratic critique peer-review synthesis puzzle
+   chmod +x logging.sh classifier.sh quality_guard.sh
+   ```
+3. Verify Ollama is running and accessible:
+   ```bash
+   ollama list
+   ```
+
+### Basic Usage
+
+```bash
+# Use the intelligent dispatcher (recommended)
+./computer "Your question or prompt here"
+
+# Force direct response (bypass thinking mechanisms)
+./computer -d "Simple question"
+
+# Include file context
+./computer -f input.txt "Analyze this file"
+
+# Specify number of rounds
+./computer "Complex question" 3
+
+# Manual mechanism selection
+./computer -m puzzle "How can I implement a sorting algorithm?"
+./computer -m socratic "Analyze this deeply"
+
+# Get help
+./computer --help              # Show all options and examples
+./computer --mechanisms        # List available thinking mechanisms
+```
+
+## Quality Guard System
+
+### Basic Response Quality Monitoring
+
+The Quality Guard provides simple monitoring and error handling for AI responses.
+
+#### **What It Does**
+
+- Monitors basic response characteristics
+- Detects obvious quality issues
+- Provides fallback responses when needed
+- Attempts to regenerate responses up to 2 times
+
+#### **How It Works**
+
+1. **Basic Checks** - Simple pattern matching for obvious issues
+2. **Scoring** - Basic heuristics for response quality
+3. **Retry Logic** - Up to 2 attempts to get better responses
+4. **Fallbacks** - Generic helpful responses when retries fail
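The retry-and-fallback flow above can be pictured with the sketch below. It is illustrative only: `check_quality` is a toy heuristic and the "regenerated" string stands in for a real LLM call; the actual `quality_guard.sh` internals may differ.

```bash
# Illustrative sketch only; not the real quality_guard.sh code.
MAX_CORRECTION_ATTEMPTS=2
FALLBACK_RESPONSE="I could not produce a reliable answer; please rephrase."

check_quality() {
    # Toy check: a response passes if it contains at least 5 words.
    [ "$(printf '%s\n' "$1" | wc -w)" -ge 5 ]
}

guard_response() {
    response=$1
    attempt=0
    while ! check_quality "$response" && [ "$attempt" -lt "$MAX_CORRECTION_ATTEMPTS" ]; do
        attempt=$((attempt + 1))
        # In the real system this step would re-prompt the model.
        response="$response (regenerated attempt $attempt)"
    done
    if check_quality "$response"; then
        printf '%s\n' "$response"
    else
        printf '%s\n' "$FALLBACK_RESPONSE"
    fi
}

guard_response "short reply"
```

Responses that pass the check on the first try are returned unchanged; only failing ones trigger the retry loop.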
+
+#### **Limitations**
+
+- Uses simple pattern matching, not advanced analysis
+- May not catch subtle quality issues
+- Fallback responses are generic
+- Not a substitute for careful prompt engineering
+
+#### **Configuration Options**
+
+```bash
+# Basic threshold adjustments
+export MIN_RESPONSE_LENGTH=30        # Minimum words required
+export MAX_REPETITION_RATIO=0.4      # Maximum repetition allowed
+export MAX_NONSENSE_SCORE=0.6        # Maximum nonsense score
+export DEGRADATION_THRESHOLD=0.65    # Quality threshold for correction
+export MAX_CORRECTION_ATTEMPTS=2     # Number of correction attempts
+export FALLBACK_ENABLED=true         # Enable fallback responses
+```
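As one concrete example of the "basic heuristics", a repetition-ratio check against `MAX_REPETITION_RATIO` could look like the sketch below; the actual `quality_guard.sh` implementation may compute it differently.

```bash
# Hypothetical repetition check; not the actual quality_guard.sh code.
MAX_REPETITION_RATIO=0.4

repetition_ratio() {
    # Fraction of words that are repeats of earlier words.
    total=$(printf '%s\n' "$1" | tr ' ' '\n' | grep -c .)
    unique=$(printf '%s\n' "$1" | tr ' ' '\n' | grep . | sort -u | wc -l)
    awk -v t="$total" -v u="$unique" \
        'BEGIN { printf "%.2f\n", ((t == 0) ? 0 : (t - u) / t) }'
}

too_repetitive() {
    ratio=$(repetition_ratio "$1")
    awk -v r="$ratio" -v max="$MAX_REPETITION_RATIO" 'BEGIN { exit !(r > max) }'
}

if too_repetitive "the the the the cat"; then echo "flagged"; else echo "ok"; fi
```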
+
+## RAG (Retrieval-Augmented Generation) System
+
+### Knowledge Corpus Architecture
+
+The RAG system provides intelligent knowledge augmentation by searching a structured corpus of documentation and returning relevant context to enhance AI responses.
+
+#### **Key Features**
+
+- **Extensible Corpus Structure** - Easy to add new topics and content
+- **Efficient Search** - Uses grep/sed/awk for sub-second lookups
+- **Auto-Discovery** - Automatically finds and indexes new content
+- **Topic-Based Routing** - Matches queries to relevant knowledge areas
+- **Context Injection** - Provides relevant information to AI models
+
+#### **Corpus Organization**
+
+```
+corpus/
+├── README.md              # Usage guide and templates
+├── corpus_registry.txt    # Auto-generated registry of topics
+├── corpus_manager.sh      # Management utilities
+├── .topic_keywords       # Topic keyword mappings
+├── .file_processors      # File type handlers
+│
+├── programming/          # Programming topics
+│   ├── lil/             # Lil programming language
+│   │   └── guide.md
+│   └── algorithms.txt
+│
+├── science/              # Scientific topics
+│   ├── physics.txt
+│   └── chemistry.md
+│
+└── [your_topics]/        # Add your own topics here
+```
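Auto-discovery over this layout amounts to walking the topic directories and recording what was found. The sketch below is hypothetical: the real `corpus_registry.txt` format produced by `corpus_manager.sh` is not specified here.

```bash
# Hypothetical registry auto-discovery; the real registry format may differ.
update_registry() {
    corpus_dir=$1; registry=$2
    : > "$registry"
    for topic in "$corpus_dir"/*/; do
        [ -d "$topic" ] || continue
        name=$(basename "$topic")
        count=$(find "$topic" -type f \
            \( -name '*.md' -o -name '*.txt' -o -name '*.html' \) | wc -l | tr -d ' ')
        printf '%s %s\n' "$name" "$count" >> "$registry"
    done
}

demo=$(mktemp -d)
mkdir -p "$demo/science"
echo "Newton's laws of motion" > "$demo/science/physics.txt"
update_registry "$demo" "$demo/corpus_registry.txt"
cat "$demo/corpus_registry.txt"   # → science 1
```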
+
+#### **Corpus Manager Usage**
+
+```bash
+# Update corpus registry after adding files
+./corpus_manager.sh update
+
+# List all available topics
+./corpus_manager.sh list
+
+# Check if topic exists
+./corpus_manager.sh exists programming
+
+# List files in a topic
+./corpus_manager.sh files science
+
+# Create template for new topic
+./corpus_manager.sh template "machine-learning"
+
+# Get corpus statistics
+./corpus_manager.sh count programming
+```
+
+#### **RAG Search Usage**
+
+```bash
+# Search entire corpus
+./rag_search.sh search "quantum physics"
+
+# Search specific topic
+./rag_search.sh search "lil programming" programming
+
+# Get context around matches
+./rag_search.sh context "variables" programming
+
+# Extract relevant sections
+./rag_search.sh extract "functions" programming
+
+# Show corpus statistics
+./rag_search.sh stats
+```
+
+#### **Adding New Content**
+
+1. **Create topic directory**:
+   ```bash
+   mkdir -p corpus/newtopic
+   ```
+
+2. **Add content files** (use .md, .txt, or .html):
+   ```bash
+   vim corpus/newtopic/guide.md
+   vim corpus/newtopic/examples.txt
+   ```
+
+3. **Update registry**:
+   ```bash
+   ./corpus_manager.sh update
+   ```
+
+4. **Test search**:
+   ```bash
+   ./rag_search.sh search "keyword" newtopic
+   ```
+
+#### **File Format Guidelines**
+
+- **Markdown (.md)** - Recommended for structured content
+- **Plain text (.txt)** - Simple notes and documentation
+- **HTML (.html)** - Rich content with formatting
+- **Descriptive names** - Use clear, descriptive filenames
+- **Consistent headers** - Use standard Markdown headers (# ## ###)
+- **Cross-references** - Link related topics when helpful
+
+#### **Search Behavior**
+
+- **Case-insensitive** matching across all text files
+- **Multi-word queries** supported
+- **Partial matching** within words
+- **Context extraction** with configurable line limits
+- **Topic filtering** for focused searches
+- **Relevance ranking** based on match proximity
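At its core, the behavior above reduces to recursive, case-insensitive grep with context lines, as sketched here; the real `rag_search.sh` layers relevance ranking and registry caching on top.

```bash
# Minimal sketch of the search core; rag_search.sh adds ranking/caching.
SEARCH_CONTEXT_LINES=3

corpus_search() {
    query=$1
    topic=${2:-}   # optional topic subdirectory
    grep -r -i -C "$SEARCH_CONTEXT_LINES" -- "$query" "$CORPUS_DIR/$topic" 2>/dev/null
}

CORPUS_DIR=$(mktemp -d)
mkdir -p "$CORPUS_DIR/programming"
printf 'Variables in Lil are dynamically typed.\n' > "$CORPUS_DIR/programming/guide.md"
corpus_search "variables" programming
```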
+
+#### **Integration with AI**
+
+The RAG system plugs into the thinking mechanisms in several ways:
+
+- **Automatic RAG detection** - Knows when to search corpus
+- **Topic classification** - Routes queries to relevant knowledge
+- **Context injection** - Provides relevant information to enhance responses
+- **Fallback handling** - Graceful degradation when no corpus available
+
+#### **Performance**
+
+- **Sub-second lookups** using cached registry
+- **Efficient Unix tools** (grep/sed/awk) for processing
+- **Memory efficient** with file-based storage
+- **Scalable architecture** supporting thousands of files
+- **Minimal latency** for AI response enhancement
+
+#### **Configuration**
+
+```bash
+# RAG system settings (in rag_config.sh)
+export CORPUS_DIR="corpus"                    # Corpus root directory
+export CORPUS_REGISTRY="corpus_registry.txt"  # Topic registry
+export MAX_SEARCH_RESULTS=5                  # Max results to return
+export MIN_CONTENT_LENGTH=50                 # Min content length
+export SEARCH_CONTEXT_LINES=3                # Context lines around matches
+```
+
+## Prompt Classification
+
+### Basic Pattern Matching
+
+The system uses keyword and pattern matching to route prompts to different mechanisms:
+
+#### **Pattern Matching Rules**
+- **Question type detection**: what/when/where → DIRECT, why/how → SOCRATIC
+- **Action-oriented patterns**: improve → CRITIQUE, compare → EXPLORATION
+- **Puzzle & coding patterns**: algorithm/implement → PUZZLE, challenge/problem → PUZZLE
+- **Lil-specific routing**: "using lil"/"in lil" → PUZZLE (highest priority)
+- **Context-aware scoring**: strategy/planning → EXPLORATION, analysis → SOCRATIC
+- **Enhanced scoring system** with multi-layer analysis
+
+#### **Basic Analysis**
+- **Word count analysis**: Short prompts → DIRECT, longer → complex mechanisms
+- **Keyword presence**: Simple keyword matching for routing decisions
+- **Basic confidence scoring**: Simple scoring mechanism
+
+#### **Limitations**
+- Relies on keyword matching, not deep understanding
+- May misclassify prompts without obvious keywords
+- Confidence scores are basic heuristics, not accurate measures
+- Not a substitute for manual routing when precision matters
+
+### Decision Making
+
+- **Basic confidence scoring** from pattern matching
+- **Keyword-based routing** with fallback to DIRECT
+- **Simple word count analysis** for complexity estimation
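A stripped-down version of this keyword routing might look like the sketch below; the real `classifier.sh` adds scoring layers on top, so treat this as illustrative only.

```bash
# Illustrative keyword routing only; classifier.sh's rules are richer.
classify_prompt() {
    prompt=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    case $prompt in
        *"using lil"*|*"in lil"*)           echo "PUZZLE" ;;    # highest priority
        *algorithm*|*implement*)            echo "PUZZLE" ;;
        *improve*)                          echo "CRITIQUE" ;;
        *compare*|*"different approaches"*) echo "EXPLORATION" ;;
        *why*|*how*|*analy*)                echo "SOCRATIC" ;;
        *)                                  echo "DIRECT" ;;    # fallback
    esac
}

classify_prompt "How can I implement a binary search algorithm?"   # → PUZZLE
classify_prompt "What is 2+2?"                                     # → DIRECT
```

Note that pattern order encodes priority: Lil-specific patterns win, and `DIRECT` is the fallback when nothing matches.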
+
+### Classification Examples
+
+```bash
+# Strategic Planning
+Input: "What are the different approaches to solve climate change?"
+Output: EXPLORATION:1.00 (matches "different approaches" pattern)
+
+# Improvement Request
+Input: "How can we improve our development workflow?"
+Output: CRITIQUE:1.00 (matches "improve" keyword)
+
+# Complex Analysis
+Input: "Why do you think this approach might fail and what are the underlying assumptions?"
+Output: SOCRATIC:0.85 (matches "why" and complexity indicators)
+
+# Simple Question
+Input: "What is 2+2?"
+Output: DIRECT:0.8 (simple, short question)
+
+# Algorithm Challenge
+Input: "How can I implement a binary search algorithm?"
+Output: PUZZLE:1.00 (matches "algorithm" and "implement" keywords)
+```
+
+## Thinking Mechanisms
+
+### 1. **Exploration** - Multiple Path Analysis
+**Purpose**: Generate multiple solution approaches and compare them
+
+```bash
+./exploration -p 4 "How can we improve our development process?"
+```
+
+**Process**:
+- **Phase 1**: Generate multiple solution paths
+- **Phase 2**: Basic analysis of each path
+- **Phase 3**: Simple comparison and recommendations
+
+**Notes**: Uses multiple LLM calls to generate different approaches
+
+### 2. **Consensus** - Multiple Model Responses
+**Purpose**: Get responses from multiple models and compare them
+
+```bash
+./consensus "What's the best approach to this problem?"
+```
+
+**Process**:
+- **Phase 1**: Get responses from multiple models
+- **Phase 2**: Basic comparison
+- **Phase 3**: Simple voting mechanism
+- **Phase 4**: Combine responses
+
+**Notes**: Limited by available models and simple comparison logic
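The "simple voting mechanism" of Phase 3 can be approximated with standard Unix tools, as in this sketch (ties resolve arbitrarily; the real consensus script also compares response content):

```bash
# Toy majority vote over model answers; ties resolve arbitrarily.
vote() {
    printf '%s\n' "$@" | sort | uniq -c | sort -rn | head -n 1 | sed 's/^ *[0-9]* //'
}

vote "A" "B" "A" "C"   # → A
```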
+
+### 3. **Socratic** - Question-Based Analysis
+**Purpose**: Use AI-generated questions to analyze prompts
+
+```bash
+./socratic "Explain the implications of this decision"
+```
+
+**Process**:
+- **Phase 1**: Generate initial response
+- **Phase 2**: Generate follow-up questions
+- **Phase 3**: Get responses to questions
+- **Phase 4**: Combine into final output
+
+**Notes**: Creates a back-and-forth conversation between AI models
+
+### 4. **Critique** - Improvement Analysis
+**Purpose**: Get improvement suggestions for code or text
+
+```bash
+./critique -f code.py "How can we improve this code?"
+```
+
+**Process**:
+- **Phase 1**: Initial assessment
+- **Phase 2**: Generate critique
+- **Phase 3**: Suggest improvements
+- **Phase 4**: Provide guidance
+
+**Notes**: Basic improvement suggestions based on AI analysis
+
+### 5. **Peer Review** - Multiple AI Reviewers
+**Purpose**: Get feedback from multiple AI perspectives
+
+```bash
+./peer-review "Review this proposal"
+```
+
+**Process**:
+- **Phase 1**: Generate multiple reviews
+- **Phase 2**: Basic consolidation
+- **Phase 3**: Combine feedback
+
+**Notes**: Simple multiple AI review approach
+
+### 6. **Synthesis** - Combine Approaches
+**Purpose**: Combine multiple approaches into one
+
+```bash
+./synthesis "How can we combine these different approaches?"
+```
+
+**Process**:
+- **Phase 1**: Identify approaches
+- **Phase 2**: Basic analysis
+- **Phase 3**: Simple combination
+
+**Notes**: Basic approach combination mechanism
+
+### 7. **Puzzle** - Coding Problem Solving
+**Purpose**: Help with coding problems and algorithms
+
+```bash
+./puzzle "How can I implement a sorting algorithm?"
+./puzzle -l python "What's the best way to solve this data structure problem?"
+```
+
+**Process**:
+- **Phase 1**: Basic problem analysis
+- **Phase 2**: Solution approach
+- **Phase 3**: Code examples
+- **Phase 4**: Basic validation
+- **Phase 5**: Optional Lil code testing if available
+
+**Features**:
+- **Enhanced Lil language knowledge** with comprehensive documentation
+- **Intelligent Lil routing** - automatically triggered by Lil-related keywords
+- **RAG integration** - searches Lil corpus for relevant context
+- **Code testing** with secure Lil script execution
+- **Multi-language support** with Lil as primary focus
+
+**Notes**: Includes extensive Lil documentation and testing capabilities
+
+## Computer Script Features
+
+### Intelligent Routing with Manual Override
+
+The main `computer` script provides both automatic routing and manual mechanism selection:
+
+#### **Automatic Routing**
+```bash
+# Automatically detects and routes based on content
+./computer "Using Lil, how can I implement a sorting algorithm?"
+# → PUZZLE (Lil-specific routing)
+
+./computer "How can we improve our development process?"
+# → CRITIQUE (matches the "improve" keyword)
+
+./computer "What is 2+2?"
+# → DIRECT (simple question)
+```
+
+#### **Manual Selection**
+```bash
+# Force specific mechanism
+./computer -m puzzle "Complex algorithm question"
+./computer -m socratic "Deep analysis needed"
+./computer -m exploration "Compare multiple approaches"
+./computer -m consensus "Get multiple perspectives"
+./computer -m critique "Review and improve"
+./computer -m synthesis "Combine different ideas"
+./computer -m peer-review "Get feedback"
+./computer -m direct "Simple factual question"
+```
+
+#### **Help System**
+```bash
+# Comprehensive help
+./computer --help
+
+# List all available mechanisms
+./computer --mechanisms
+```
+
+### Advanced Options
+
+```bash
+# File integration
+./computer -f document.txt "Analyze this content"
+
+# Multi-round processing
+./computer "Complex topic" 3
+
+# Force direct response (bypass mechanisms)
+./computer -d "Simple question"
+```
+
+### Routing Intelligence
+
+The computer script uses multi-layer classification:
+- **Pattern Analysis**: Keyword and pattern matching
+- **Semantic Analysis**: LLM-based content understanding
+- **Complexity Assessment**: Word count and structure analysis
+- **Lil-Specific Routing**: Automatic PUZZLE for Lil-related queries
+- **Confidence Scoring**: Ensures high-confidence routing decisions
+
+## Configuration
+
+### Dynamic Model Selection (NEW)
+
+The system now includes intelligent model selection based on task type and model capabilities:
+
+```bash
+# Enable dynamic selection
+source model_selector.sh
+selected_model=$(select_model_for_task "How can I implement a sorting algorithm?" "puzzle" "")
+echo "Selected: $selected_model"
+```
+
+**Features:**
+- **Task-aware selection**: Matches models to coding, reasoning, or creative tasks
+- **Capability scoring**: Rates models by performance in different areas (0.0-1.0)
+- **Real-time discovery**: Automatically finds available models via Ollama
+- **Performance weighting**: Considers speed, size, and capability scores
+- **Fallback handling**: Graceful degradation when preferred models unavailable
+
+**Model Capabilities Database:**
+- `llama3:8b-instruct-q4_K_M`: Excellent reasoning (0.9), good coding (0.8)
+- `phi3:3.8b-mini-4k-instruct-q4_K_M`: Fast (0.9 speed), good reasoning (0.8)
+- `gemma3n:e2b`: Balanced performer (0.8 across all categories)
+- `deepseek-r1:1.5b`: Excellent reasoning (0.9), fast (0.95 speed)
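The capability numbers above could drive selection roughly as follows. This is a hypothetical sketch: `model_score` hard-codes a few entries and is not the real `model_selector.sh` table or weighting logic.

```bash
# Hypothetical scoring table; model_selector.sh also weights speed and size.
model_score() {
    case "$1:$2" in
        "llama3:8b-instruct-q4_K_M:reasoning") echo 0.9 ;;
        "llama3:8b-instruct-q4_K_M:coding")    echo 0.8 ;;
        "deepseek-r1:1.5b:reasoning")          echo 0.9 ;;
        *)                                     echo 0.5 ;;   # unknown pairs
    esac
}

best_model_for() {
    task=$1; shift
    best=""; best_score=0
    for model in "$@"; do
        score=$(model_score "$model" "$task")
        if awk -v s="$score" -v b="$best_score" 'BEGIN { exit !(s > b) }'; then
            best=$model
            best_score=$score
        fi
    done
    printf '%s\n' "$best"
}

best_model_for reasoning "deepseek-r1:1.5b" "gemma3n:e2b"   # → deepseek-r1:1.5b
```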
+
+### Static Model Selection (Legacy)
+
+For simple setups, you can still use static model configuration:
+
+```bash
+# Models for different mechanisms
+EXPLORATION_MODEL="llama3:8b-instruct-q4_K_M"
+ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
+
+# Models for consensus mechanism
+MODELS=(
+    "llama3:8b-instruct-q4_K_M"
+    "phi3:3.8b-mini-4k-instruct-q4_K_M"
+    "deepseek-r1:1.5b"
+    "gemma3n:e2b"
+    "dolphin3:latest"
+)
+```
+
+### Model Management
+
+The system includes both basic and advanced model management:
+
+- **Availability checking**: Verifies models are available before use
+- **Fallback mechanisms**: Automatic fallback to alternative models
+- **Error handling**: Graceful handling of model unavailability
+- **Performance tracking**: Optional model performance history
+
+## Logging & Metrics
+
+### Session Logging
+
+All sessions are logged with comprehensive metadata:
+- **Timing information** (start/end/duration)
+- **Input validation** results
+- **Classification decisions** with confidence scores
+- **Model selection** decisions
+- **Full conversation** transcripts
+- **Error handling** details
+- **Quality monitoring** results and correction attempts
+
+### Performance Metrics
+
+```bash
+# View performance summary
+get_metrics_summary
+
+# Metrics stored in JSON format
+~/tmp/ai_thinking/performance_metrics.json
+```
+
+### Error Logging
+
+```bash
+# Error and warning log location (both are written to the same file)
+~/tmp/ai_thinking/errors.log
+
+# Classification logs
+~/tmp/ai_thinking/classification.log
+
+# Quality monitoring logs (integrated into session files)
+```
+
+## Security & Validation
+
+### Input Sanitization
+
+- **Prompt length** validation (max 10,000 characters)
+- **Special character** sanitization with warnings
+- **File path** validation and security checks
+- **Parameter** bounds checking (rounds: 1-5, paths: 1-10)
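Checks of this kind reduce to a few shell tests, roughly as sketched here; the real validators in `logging.sh` also emit warnings and log entries.

```bash
# Simplified sketches of the bounds checks above.
MAX_PROMPT_LENGTH=10000

validate_prompt() {
    [ -n "$1" ] || { echo "error: empty prompt" >&2; return 1; }
    [ "${#1}" -le "$MAX_PROMPT_LENGTH" ] || { echo "error: prompt too long" >&2; return 1; }
}

validate_rounds() {
    case $1 in
        [1-5]) return 0 ;;
        *) echo "error: rounds must be 1-5" >&2; return 1 ;;
    esac
}
```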
+
+### Error Handling
+
+- **Graceful degradation** when models unavailable
+- **Comprehensive error** logging and reporting
+- **User-friendly** error messages with actionable guidance
+- **Fallback mechanisms** for critical failures
+- **Input validation** with clear error reporting
+- **Quality degradation** protection with automatic correction
+
+## Advanced Features
+
+### File Integration
+
+```bash
+# Include file contents in prompts
+./computer -f document.txt "Analyze this document"
+```
+
+File validation and security checks:
+
+- Path existence checking
+- Read permission validation
+- Content sanitization
+- Graceful error handling
+
+### Multi-Round Processing
+
+```bash
+# Specify processing rounds (1-5)
+./computer "Complex question" 3
+```
+
+Each round builds on the previous round's insights:
+
+- Round 1: Initial analysis
+- Round 2: Deep exploration
+- Round 3: Synthesis and conclusions
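The round chaining amounts to feeding each round's output back in as context. In this sketch, `run_llm` is a placeholder for the real Ollama call:

```bash
# run_llm is a placeholder; the real system invokes an Ollama model.
run_llm() {
    printf 'insight(%s)\n' "$1"
}

multi_round() {
    prompt=$1; rounds=$2
    context=$prompt
    round=1
    while [ "$round" -le "$rounds" ]; do
        context=$(run_llm "$context")
        round=$((round + 1))
    done
    printf '%s\n' "$context"
}

multi_round "Complex question" 3   # → insight(insight(insight(Complex question)))
```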
+
+### Intelligent Routing
+
+The `computer` script automatically routes prompts based on:
+- **Advanced classification** with confidence scoring
+- **Multi-layer analysis** (pattern + semantic + complexity)
+- **Context-aware** mechanism selection
+- **Optimal mechanism** selection with fallbacks
+
+### Quality Monitoring Integration
+
+All thinking mechanisms now include:
+- **Automatic quality assessment** of every LLM response
+- **Degradation detection** with pattern recognition
+- **Automatic correction** attempts for poor quality outputs
+- **Intelligent fallbacks** when correction fails
+- **Mechanism-specific** quality relevance checking
+
+## Testing & Validation
+
+### Validation Functions
+
+```bash
+# Prompt validation
+validate_prompt "Your prompt here"
+
+# File validation
+validate_file_path "/path/to/file"
+
+# Model validation
+validate_model "model_name" "fallback_model"
+
+# Classification testing
+classify_prompt "Test prompt" false  # Pattern-only mode
+classify_prompt "Test prompt" true   # Full semantic mode
+
+# Quality monitoring testing
+assess_quality "response" "context" "mechanism"
+detect_degradation_patterns "response"
+guard_output_quality "response" "context" "mechanism" "model"
+```
+
+### Error Testing
+
+```bash
+# Test with invalid inputs
+./computer ""                    # Empty prompt
+./computer -f nonexistent.txt   # Missing file
+./computer -x                    # Invalid option
+./computer "test" 10            # Invalid rounds
+
+# Test quality monitoring
+./test_quality_guard.sh         # Quality guard system test
+```
+
+## Performance Considerations
+
+### Optimization Features
+
+- **Model availability** checking before execution
+- **Efficient file** handling and validation
+- **Minimal overhead** for simple queries
+- **Classification caching** for repeated patterns
+- **Parallel processing** where applicable
+- **Quality monitoring** with minimal performance impact
+
+### Resource Management
+
+- **Temporary file** cleanup
+- **Session management** with unique IDs
+- **Memory-efficient** processing
+- **Graceful timeout** handling
+- **Classification result** caching
+- **Quality assessment** optimization
+
+## Troubleshooting
+
+### Common Issues
+
+1. **"Ollama not found"**
+   - Ensure Ollama is installed and in PATH
+   - Check if Ollama service is running
+
+2. **"Model not available"**
+   - Verify model names with `ollama list`
+   - Check model download status
+   - System will automatically fall back to available models
+
+3. **"Permission denied"**
+   - Ensure scripts have execute permissions
+   - Check file ownership and permissions
+
+4. **"File not found"**
+   - Verify file path is correct
+   - Check file exists and is readable
+   - Ensure absolute or correct relative paths
+
+5. **"Low classification confidence"**
+   - Check error logs for classification details
+   - Consider using -d flag for direct responses
+   - Review prompt clarity and specificity
+
+6. **"Quality below threshold"**
+   - System automatically attempts correction
+   - Check quality monitoring logs for details
+   - Fallback responses ensure helpful output
+   - Consider rephrasing complex prompts
+
+### Debug Mode
+
+```bash
+# Enable verbose logging
+export AI_THINKING_DEBUG=1
+
+# Check logs
+tail -f ~/tmp/ai_thinking/errors.log
+
+# Test classification directly
+source classifier.sh
+classify_prompt "Your test prompt" true
+
+# Test quality monitoring
+source quality_guard.sh
+assess_quality "test response" "test context" "puzzle"
+```
+
+## Future Improvements
+
+### Possible Enhancements
+
+- Additional thinking mechanisms
+- More model integration options
+- Improved caching
+- Better error handling
+- Enhanced testing
+- Performance optimizations
+
+### Extensibility
+
+The basic modular structure allows for:
+- Adding new mechanisms
+- Integrating additional models
+- Modifying validation rules
+- Extending logging
+- Updating classification patterns
+
+## Examples
+
+### Strategic Planning
+
+```bash
+./exploration -p 5 "What are our options for scaling this application?"
+```
+
+### Code Review
+
+```bash
+./critique -f main.py "How can we improve this code's performance and maintainability?"
+```
+
+### Decision Making
+
+```bash
+./consensus "Should we migrate to a new database system?"
+```
+
+### Problem Analysis
+
+```bash
+./socratic "What are the root causes of our deployment failures?"
+```
+
+### Algorithm & Coding Challenges
+
+```bash
+# The system automatically routes to puzzle mechanism
+./computer "How can I implement a binary search algorithm?"
+./puzzle "What's the most efficient way to sort this data structure?"
+```
+
+### Complex Classification
+
+```bash
+# The system automatically detects this needs exploration
+./computer "Compare different approaches to implementing microservices"
+```
+
+### Quality Monitoring in Action
+
+```bash
+# All mechanisms now include quality protection
+./computer "Complex question requiring deep analysis" 3
+# Quality monitoring automatically protects against degradation
+# Fallback responses ensure helpful output even with poor LLM performance
+```
+
+### RAG-Enhanced Responses
+
+```bash
+# Lil-specific questions automatically use RAG
+./computer "Using Lil, how can I implement a recursive function?"
+# → PUZZLE with Lil knowledge corpus context
+
+# Manual corpus search
+./rag_search.sh search "function definition" programming
+
+# Corpus management
+./corpus_manager.sh template "data-structures"
+./corpus_manager.sh update
+```
+
+### Manual Mechanism Selection
+
+```bash
+# Force specific thinking style
+./computer -m socratic "What are the fundamental assumptions here?"
+./computer -m exploration "What are our strategic options?"
+./computer -m puzzle "How can I optimize this algorithm?"
+
+# Get help with available options
+./computer --mechanisms
+```
+
+## Contributing
+
+### Development Guidelines
+
+- **Maintain modularity** - Each mechanism should be self-contained
+- **Follow error handling** patterns established in logging.sh
+- **Add comprehensive** documentation for new features
+- **Include validation** for all inputs and parameters
+- **Test thoroughly** with various input types and edge cases
+- **Update classification** patterns when adding new mechanisms
+- **Integrate quality monitoring** for all new LLM interactions
+- **Follow quality guard** patterns for consistent protection
+- **Consider RAG integration** for domain-specific knowledge
+- **Add corpus documentation** when extending knowledge areas
+- **Update classification patterns** for new routing logic
+
+### Code Style
+
+- Consistent naming conventions
+- Clear comments explaining complex logic
+- Error handling for all external calls
+- Modular functions with single responsibilities
+- Validation functions for all inputs
+- Comprehensive logging for debugging
+- Quality monitoring integration for all AI responses
\ No newline at end of file