# AI Thinking Mechanisms
A Bash-based system that routes user prompts through different AI thinking mechanisms. It uses pattern matching and basic analysis to select from several approaches: direct response, Socratic questioning, exploration, critique, consensus, peer review, synthesis, and puzzle solving.
## Architecture Overview
The system processes prompts through several stages:
```
User Prompt → Computer (Dispatcher) → Classifier → RAG System → Mechanism → LLM Models → Quality Guard → Response

Computer (Dispatcher):  validation, sanitization
Classifier:             pattern analysis, keyword matching, mechanism selection/routing
RAG System:             corpus search, context augmentation
Mechanism:              processing, execution
LLM Models:             model calls, fallbacks
Quality Guard:          quality checks, error handling
```
### Core Components
- **`computer`** - Main dispatcher script with manual mechanism selection
- **`classifier.sh`** - Advanced prompt classification with Lil-specific routing
- **`logging.sh`** - Logging and validation utilities
- **`quality_guard.sh`** - System-wide response quality monitoring
- **`corpus/`** - RAG knowledge corpus directory structure
- **`corpus_manager.sh`** - Corpus management and auto-discovery
- **`rag_search.sh`** - Efficient corpus searching with Unix tools
- **Thinking Mechanisms** - Specialized AI interaction patterns
- **Dynamic Model Selection** - Intelligent model routing based on task type
## Getting Started
### Prerequisites
- **Bash 4.0+** (for advanced features)
- **Ollama** installed and running
- **jq** (optional, for enhanced JSON processing)
- **bc** (optional, for precise timing calculations)
### Installation
1. Clone or download the scripts to your desired directory
2. Ensure all scripts have execute permissions:
```bash
chmod +x computer exploration consensus socratic critique peer-review synthesis puzzle
chmod +x logging.sh classifier.sh quality_guard.sh
```
3. Verify Ollama is running and accessible:
```bash
ollama list
```
### Basic Usage
```bash
# Use the intelligent dispatcher (recommended)
./computer "Your question or prompt here"
# Force direct response (bypass thinking mechanisms)
./computer -d "Simple question"
# Include file context
./computer -f input.txt "Analyze this file"
# Specify number of rounds
./computer "Complex question" 3
# Manual mechanism selection
./computer -m puzzle "How can I implement a sorting algorithm?"
./computer -m socratic "Analyze this deeply"
# Get help
./computer --help # Show all options and examples
./computer --mechanisms # List available thinking mechanisms
```
## Quality Guard System
### Basic Response Quality Monitoring
The Quality Guard provides simple monitoring and error handling for AI responses.
#### **What It Does**
- Monitors basic response characteristics
- Detects obvious quality issues
- Provides fallback responses when needed
- Attempts to regenerate responses up to 2 times
#### **How It Works**
1. **Basic Checks** - Simple pattern matching for obvious issues
2. **Scoring** - Basic heuristics for response quality
3. **Retry Logic** - Up to 2 attempts to get better responses
4. **Fallbacks** - Generic helpful responses when retries fail
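To make the loop above concrete, here is a minimal sketch of a check-and-regenerate cycle in Bash. The helper names and heuristics are illustrative, not the actual internals of `quality_guard.sh`; only the environment variables match the Configuration Options below, and the model name is a placeholder.
```bash
# Illustrative sketch only -- not the actual internals of quality_guard.sh.
check_quality() {
  local response="$1"
  local words unique ratio
  words=$(wc -w <<< "$response")
  (( words >= ${MIN_RESPONSE_LENGTH:-30} )) || return 1   # too short

  # Crude repetition heuristic: 1 - unique_words/total_words
  unique=$(tr -s '[:space:]' '\n' <<< "$response" | sort -u | wc -l)
  ratio=$(awk -v u="$unique" -v t="$words" 'BEGIN { printf "%.3f", (t > 0 ? 1 - u / t : 1) }')
  awk -v r="$ratio" -v m="${MAX_REPETITION_RATIO:-0.4}" 'BEGIN { exit !(r <= m) }'
}

guarded_generate() {
  local prompt="$1" response
  for (( i = 0; i <= ${MAX_CORRECTION_ATTEMPTS:-2}; i++ )); do
    response=$(ollama run llama3 "$prompt")          # placeholder model name
    check_quality "$response" && { printf '%s\n' "$response"; return 0; }
  done
  # All attempts failed: fall back to a generic helpful response
  [[ "${FALLBACK_ENABLED:-true}" == true ]] &&
    echo "I could not produce a reliable answer; please try rephrasing the question."
  return 1
}
```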
#### **Limitations**
- Uses simple pattern matching, not advanced analysis
- May not catch subtle quality issues
- Fallback responses are generic
- Not a substitute for careful prompt engineering
#### **Configuration Options**
```bash
# Basic threshold adjustments
export MIN_RESPONSE_LENGTH=30 # Minimum words required
export MAX_REPETITION_RATIO=0.4 # Maximum repetition allowed
export MAX_NONSENSE_SCORE=0.6 # Maximum nonsense score
export DEGRADATION_THRESHOLD=0.65 # Quality threshold for correction
export MAX_CORRECTION_ATTEMPTS=2 # Number of correction attempts
export FALLBACK_ENABLED=true # Enable fallback responses
```
## RAG (Retrieval-Augmented Generation) System
### Knowledge Corpus Architecture
The RAG system provides intelligent knowledge augmentation by searching a structured corpus of documentation and returning relevant context to enhance AI responses.
#### **Key Features**
- **Extensible Corpus Structure** - Easy to add new topics and content
- **Efficient Search** - Uses grep/sed/awk for sub-second lookups
- **Auto-Discovery** - Automatically finds and indexes new content
- **Topic-Based Routing** - Matches queries to relevant knowledge areas
- **Context Injection** - Provides relevant information to AI models
#### **Corpus Organization**
```
corpus/
├── README.md # Usage guide and templates
├── corpus_registry.txt # Auto-generated registry of topics
├── corpus_manager.sh # Management utilities
├── .topic_keywords # Topic keyword mappings
├── .file_processors # File type handlers
│
├── programming/ # Programming topics
│ ├── lil/ # Lil programming language
│ │ └── guide.md
│ └── algorithms.txt
│
├── science/ # Scientific topics
│ ├── physics.txt
│ └── chemistry.md
│
└── [your_topics]/ # Add your own topics here
```
#### **Corpus Manager Usage**
```bash
# Update corpus registry after adding files
./corpus_manager.sh update
# List all available topics
./corpus_manager.sh list
# Check if topic exists
./corpus_manager.sh exists programming
# List files in a topic
./corpus_manager.sh files science
# Create template for new topic
./corpus_manager.sh template "machine-learning"
# Get corpus statistics
./corpus_manager.sh count programming
```
#### **RAG Search Usage**
```bash
# Search entire corpus
./rag_search.sh search "quantum physics"
# Search specific topic
./rag_search.sh search "lil programming" programming
# Get context around matches
./rag_search.sh context "variables" programming
# Extract relevant sections
./rag_search.sh extract "functions" programming
# Show corpus statistics
./rag_search.sh stats
```
#### **Adding New Content**
1. **Create topic directory**:
```bash
mkdir -p corpus/newtopic
```
2. **Add content files** (use .md, .txt, or .html):
```bash
vim corpus/newtopic/guide.md
vim corpus/newtopic/examples.txt
```
3. **Update registry**:
```bash
./corpus_manager.sh update
```
4. **Test search**:
```bash
./rag_search.sh search "keyword" newtopic
```
#### **File Format Guidelines**
- **Markdown (.md)** - Recommended for structured content
- **Plain text (.txt)** - Simple notes and documentation
- **HTML (.html)** - Rich content with formatting
- **Descriptive names** - Use clear, descriptive filenames
- **Consistent headers** - Use standard Markdown headers (# ## ###)
- **Cross-references** - Link related topics when helpful
#### **Search Behavior**
- **Case-insensitive** matching across all text files
- **Multi-word queries** supported
- **Partial matching** within words
- **Context extraction** with configurable line limits
- **Topic filtering** for focused searches
- **Relevance ranking** based on match proximity
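Most of the behavior above can be approximated with a single `grep` invocation. The sketch below illustrates the search strategy, not the actual `rag_search.sh` implementation; it reuses the variables from the Configuration block further down.
```bash
# Illustrative approximation of the search behavior -- not rag_search.sh itself.
corpus_search() {
  local query="$1" topic="${2:-}"
  local root="${CORPUS_DIR:-corpus}${topic:+/$topic}"   # optional topic filter

  # -r recurse, -i case-insensitive, -n line numbers,
  # -C context lines around each match, -m caps matches per file
  grep -r -i -n \
       -C "${SEARCH_CONTEXT_LINES:-3}" \
       -m "${MAX_SEARCH_RESULTS:-5}" \
       --include='*.md' --include='*.txt' --include='*.html' \
       -- "$query" "$root"
}

corpus_search "quantum physics"          # search the whole corpus
corpus_search "variables" programming    # search a single topic
```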
#### **Integration with AI**
The RAG system integrates seamlessly with thinking mechanisms:
- **Automatic RAG detection** - Knows when to search the corpus
- **Topic classification** - Routes queries to relevant knowledge
- **Context injection** - Provides relevant information to enhance responses
- **Fallback handling** - Graceful degradation when no corpus is available
#### **Performance**
- **Sub-second lookups** using cached registry
- **Efficient Unix tools** (grep/sed/awk) for processing
- **Memory efficient** with file-based storage
- **Scalable architecture** supporting thousands of files
- **Minimal latency** for AI response enhancement
#### **Configuration**
```bash
# RAG system settings (in rag_config.sh)
export CORPUS_DIR="corpus" # Corpus root directory
export CORPUS_REGISTRY="corpus_registry.txt" # Topic registry
export MAX_SEARCH_RESULTS=5 # Max results to return
export MIN_CONTENT_LENGTH=50 # Min content length
export SEARCH_CONTEXT_LINES=3 # Context lines around matches
```
## Prompt Classification
### Basic Pattern Matching
The system uses keyword and pattern matching to route prompts to different mechanisms:
#### **Pattern Matching Rules**
- **Question type detection**: what/when/where → DIRECT, why/how → SOCRATIC
- **Action-oriented patterns**: improve → CRITIQUE, compare → EXPLORATION
- **Puzzle & coding patterns**: algorithm/implement → PUZZLE, challenge/problem → PUZZLE
- **Lil-specific routing**: "using lil"/"in lil" → PUZZLE (highest priority)
- **Context-aware scoring**: strategy/planning → EXPLORATION, analysis → SOCRATIC
- **Enhanced scoring system** with multi-layer analysis
#### **Basic Analysis**
- **Word count analysis**: Short prompts → DIRECT, longer → complex mechanisms
- **Keyword presence**: Simple keyword matching for routing decisions
- **Basic confidence scoring**: Simple scoring mechanism
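As a rough illustration of how these rules combine (a sketch, not the actual `classifier.sh` logic), the first pass can be pictured as a keyword `case` statement with a word-count fallback:
```bash
# Illustrative first-pass routing -- not the actual classifier.sh logic.
classify_sketch() {
  local prompt="${1,,}"                     # lowercase for matching (Bash 4+)
  local words; words=$(wc -w <<< "$prompt")

  case "$prompt" in
    *"using lil"*|*"in lil"*)           echo "PUZZLE" ;;   # Lil routing wins
    *algorithm*|*implement*)            echo "PUZZLE" ;;
    *improve*)                          echo "CRITIQUE" ;;
    *compare*|*"different approaches"*) echo "EXPLORATION" ;;
    why*|how*)                          echo "SOCRATIC" ;;
    what*|when*|where*)                 echo "DIRECT" ;;
    *) (( words < 8 )) && echo "DIRECT" || echo "SOCRATIC" ;;  # length fallback
  esac
}

classify_sketch "Using Lil, how can I implement a sort?"   # -> PUZZLE
classify_sketch "What is 2+2?"                             # -> DIRECT
```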
#### **Limitations**
- Relies on keyword matching, not deep understanding
- May misclassify prompts without obvious keywords
- Confidence scores are basic heuristics, not accurate measures
- Not a substitute for manual routing when precision matters
### Decision Making
- **Basic confidence scoring** from pattern matching
- **Keyword-based routing** with fallback to DIRECT
- **Simple word count analysis** for complexity estimation
### Classification Examples
```bash
# Strategic Planning
Input: "What are the different approaches to solve climate change?"
Output: EXPLORATION:1.00 (matches "different approaches" pattern)
# Improvement Request
Input: "How can we improve our development workflow?"
Output: CRITIQUE:1.00 (matches "improve" keyword)
# Complex Analysis
Input: "Why do you think this approach might fail and what are the underlying assumptions?"
Output: SOCRATIC:0.85 (matches "why" and complexity indicators)
# Simple Question
Input: "What is 2+2?"
Output: DIRECT:0.8 (simple, short question)
# Algorithm Challenge
Input: "How can I implement a binary search algorithm?"
Output: PUZZLE:1.00 (matches "algorithm" and "implement" keywords)
```
## Thinking Mechanisms
### 1. **Exploration** - Multiple Path Analysis
**Purpose**: Generate multiple solution approaches and compare them
```bash
./exploration -p 4 "How can we improve our development process?"
```
**Process**:
- **Phase 1**: Generate multiple solution paths
- **Phase 2**: Basic analysis of each path
- **Phase 3**: Simple comparison and recommendations
**Notes**: Uses multiple LLM calls to generate different approaches
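A skeletal version of this multi-path pattern, assuming nothing beyond the `ollama` CLI (an illustration, not the shipped `exploration` script):
```bash
# Illustrative multi-path generation -- not the shipped exploration script.
MODEL="${EXPLORATION_MODEL:-llama3:8b-instruct-q4_K_M}"
prompt="How can we improve our development process?"
all_paths=""

# Phase 1: generate several distinct solution paths
for i in 1 2 3 4; do
  path=$(ollama run "$MODEL" "Give one distinct approach (#$i) to: $prompt")
  all_paths+="--- Path $i ---"$'\n'"$path"$'\n'
done

# Phases 2-3: analyze and compare the collected paths in a follow-up call
ollama run "$MODEL" "Compare these approaches and recommend one: $all_paths"
```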
### 2. **Consensus** - Multiple Model Responses
**Purpose**: Get responses from multiple models and compare them
```bash
./consensus "What's the best approach to this problem?"
```
**Process**:
- **Phase 1**: Get responses from multiple models
- **Phase 2**: Basic comparison
- **Phase 3**: Simple voting mechanism
- **Phase 4**: Combine responses
**Notes**: Limited by available models and simple comparison logic
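The same pattern with the loop turned inside out: one prompt, many models (again an illustration, not the shipped `consensus` script):
```bash
# Illustrative consensus loop -- not the shipped consensus script.
MODELS=("llama3:8b-instruct-q4_K_M" "phi3:3.8b-mini-4k-instruct-q4_K_M" "deepseek-r1:1.5b")
prompt="What's the best approach to this problem?"
answers=""

# Phase 1: ask every model the same question
for m in "${MODELS[@]}"; do
  answers+="--- $m ---"$'\n'"$(ollama run "$m" "$prompt")"$'\n'
done

# Phases 2-4: one model compares, tallies agreement, and merges the answers
ollama run "${MODELS[0]}" "Compare these answers, note where they agree, and combine them: $answers"
```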
### 3. **Socratic** - Question-Based Analysis
**Purpose**: Use AI-generated questions to analyze prompts
```bash
./socratic "Explain the implications of this decision"
```
**Process**:
- **Phase 1**: Generate initial response
- **Phase 2**: Generate follow-up questions
- **Phase 3**: Get responses to questions
- **Phase 4**: Combine into final output
**Notes**: Creates a back-and-forth conversation between AI models
### 4. **Critique** - Improvement Analysis
**Purpose**: Get improvement suggestions for code or text
```bash
./critique -f code.py "How can we improve this code?"
```
**Process**:
- **Phase 1**: Initial assessment
- **Phase 2**: Generate critique
- **Phase 3**: Suggest improvements
- **Phase 4**: Provide guidance
**Notes**: Basic improvement suggestions based on AI analysis
### 5. **Peer Review** - Multiple AI Reviewers
**Purpose**: Get feedback from multiple AI perspectives
```bash
./peer-review "Review this proposal"
```
**Process**:
- **Phase 1**: Generate multiple reviews
- **Phase 2**: Basic consolidation
- **Phase 3**: Combine feedback
**Notes**: Simple multiple AI review approach
### 6. **Synthesis** - Combine Approaches
**Purpose**: Combine multiple approaches into one
```bash
./synthesis "How can we combine these different approaches?"
```
**Process**:
- **Phase 1**: Identify approaches
- **Phase 2**: Basic analysis
- **Phase 3**: Simple combination
**Notes**: Basic approach combination mechanism
### 7. **Puzzle** - Coding Problem Solving
**Purpose**: Help with coding problems and algorithms
```bash
./puzzle "How can I implement a sorting algorithm?"
./puzzle -l python "What's the best way to solve this data structure problem?"
```
**Process**:
- **Phase 1**: Basic problem analysis
- **Phase 2**: Solution approach
- **Phase 3**: Code examples
- **Phase 4**: Basic validation
- **Phase 5**: Optional Lil code testing if available
**Features**:
- **Enhanced Lil language knowledge** with comprehensive documentation
- **Intelligent Lil routing** - automatically triggered by Lil-related keywords
- **RAG integration** - searches Lil corpus for relevant context
- **Code testing** with secure Lil script execution
- **Multi-language support** with Lil as primary focus
**Notes**: Includes extensive Lil documentation and testing capabilities
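The optional Lil test phase could work along these lines, assuming a command-line Lil interpreter (such as `lilt`) is on PATH; the actual `puzzle` sandboxing may differ:
```bash
# Illustrative Lil smoke test -- assumes a `lilt` interpreter is installed;
# the actual puzzle script's sandboxing may differ.
snippet=$(ollama run llama3 "Write a Lil function that reverses a list. Code only.")

tmp=$(mktemp /tmp/puzzle.XXXXXX)
printf '%s\n' "$snippet" > "$tmp"

if timeout 5 lilt "$tmp"; then      # time-boxed execution
  echo "Lil snippet ran without errors"
else
  echo "Lil snippet failed; a fix could be requested from the model" >&2
fi
rm -f "$tmp"
```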
## Computer Script Features
### Intelligent Routing with Manual Override
The main `computer` script provides both automatic routing and manual mechanism selection:
#### **Automatic Routing**
```bash
# Automatically detects and routes based on content
./computer "Using Lil, how can I implement a sorting algorithm?"
# → PUZZLE (Lil-specific routing)
./computer "How can we improve our development process?"
# → EXPLORATION (improvement keywords)
./computer "What is 2+2?"
# → DIRECT (simple question)
```
#### **Manual Selection**
```bash
# Force specific mechanism
./computer -m puzzle "Complex algorithm question"
./computer -m socratic "Deep analysis needed"
./computer -m exploration "Compare multiple approaches"
./computer -m consensus "Get multiple perspectives"
./computer -m critique "Review and improve"
./computer -m synthesis "Combine different ideas"
./computer -m peer-review "Get feedback"
./computer -m direct "Simple factual question"
```
#### **Help System**
```bash
# Comprehensive help
./computer --help
# List all available mechanisms
./computer --mechanisms
```
### Advanced Options
```bash
# File integration
./computer -f document.txt "Analyze this content"
# Multi-round processing
./computer "Complex topic" 3
# Force direct response (bypass mechanisms)
./computer -d "Simple question"
```
### Routing Intelligence
The computer script uses multi-layer classification:
- **Pattern Analysis**: Keyword and pattern matching
- **Semantic Analysis**: LLM-based content understanding
- **Complexity Assessment**: Word count and structure analysis
- **Lil-Specific Routing**: Automatic PUZZLE for Lil-related queries
- **Confidence Scoring**: Ensures high-confidence routing decisions
## Configuration
### Dynamic Model Selection (NEW)
The system now includes intelligent model selection based on task type and model capabilities:
```bash
# Enable dynamic selection
source model_selector.sh
selected_model=$(select_model_for_task "How can I implement a sorting algorithm?" "puzzle" "")
echo "Selected: $selected_model"
```
**Features:**
- **Task-aware selection**: Matches models to coding, reasoning, or creative tasks
- **Capability scoring**: Rates models by performance in different areas (0.0-1.0)
- **Real-time discovery**: Automatically finds available models via Ollama
- **Performance weighting**: Considers speed, size, and capability scores
- **Fallback handling**: Graceful degradation when preferred models unavailable
**Model Capabilities Database:**
- `llama3:8b-instruct-q4_K_M`: Excellent reasoning (0.9), good coding (0.8)
- `phi3:3.8b-mini-4k-instruct-q4_K_M`: Fast (0.9 speed), good reasoning (0.8)
- `gemma3n:e2b`: Balanced performer (0.8 across all categories)
- `deepseek-r1:1.5b`: Excellent reasoning (0.9), fast (0.95 speed)
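One way to picture the database is as per-category score tables keyed by model name. The layout below is assumed for illustration (the real `model_selector.sh` format may differ); the scores are the documented ones:
```bash
# Assumed layout for illustration -- the real model_selector.sh format may differ.
declare -A REASONING=(
  ["llama3:8b-instruct-q4_K_M"]=0.9
  ["phi3:3.8b-mini-4k-instruct-q4_K_M"]=0.8
  ["gemma3n:e2b"]=0.8
  ["deepseek-r1:1.5b"]=0.9
)
declare -A SPEED=(
  ["phi3:3.8b-mini-4k-instruct-q4_K_M"]=0.9
  ["gemma3n:e2b"]=0.8
  ["deepseek-r1:1.5b"]=0.95
)

best_for() {                       # usage: best_for REASONING|SPEED
  local -n scores="$1"             # nameref to the chosen table (Bash 4.3+)
  local best="" top=0 model
  for model in "${!scores[@]}"; do
    if awk -v a="${scores[$model]}" -v b="$top" 'BEGIN { exit !(a > b) }'; then
      best="$model"; top="${scores[$model]}"
    fi
  done
  printf '%s\n' "$best"
}

best_for SPEED        # -> deepseek-r1:1.5b (0.95)
```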
### Static Model Selection (Legacy)
For simple setups, you can still use static model configuration:
```bash
# Models for different mechanisms
EXPLORATION_MODEL="llama3:8b-instruct-q4_K_M"
ANALYSIS_MODEL="phi3:3.8b-mini-4k-instruct-q4_K_M"
# Models for consensus mechanism
MODELS=(
    "llama3:8b-instruct-q4_K_M"
    "phi3:3.8b-mini-4k-instruct-q4_K_M"
    "deepseek-r1:1.5b"
    "gemma3n:e2b"
    "dolphin3:latest"
)
```
### Model Management
The system includes both basic and advanced model management:
- **Availability checking**: Verifies models are available before use
- **Fallback mechanisms**: Automatic fallback to alternative models
- **Error handling**: Graceful handling of model unavailability
- **Performance tracking**: Optional model performance history
## Logging & Metrics
### Session Logging
All sessions are logged with comprehensive metadata:
- **Timing information** (start/end/duration)
- **Input validation** results
- **Classification decisions** with confidence scores
- **Model selection** decisions
- **Full conversation** transcripts
- **Error handling** details
- **Quality monitoring** results and correction attempts
### Performance Metrics
```bash
# View performance summary
get_metrics_summary
# Metrics stored in JSON format
~/tmp/ai_thinking/performance_metrics.json
```
### Error Logging
```bash
# Error and warning log location
~/tmp/ai_thinking/errors.log
# Classification logs
~/tmp/ai_thinking/classification.log
# Quality monitoring logs (integrated into session files)
```
## Security & Validation
### Input Sanitization
- **Prompt length** validation (max 10,000 characters)
- **Special character** sanitization with warnings
- **File path** validation and security checks
- **Parameter** bounds checking (rounds: 1-5, paths: 1-10)
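A minimal sketch of checks matching the documented limits (illustrative; the real validators live in `logging.sh`):
```bash
# Illustrative checks mirroring the documented limits -- the real
# validators live in logging.sh.
validate_prompt_sketch() {
  local prompt="$1"
  [[ -n "$prompt" ]] || { echo "Error: empty prompt" >&2; return 1; }
  if (( ${#prompt} > 10000 )); then
    echo "Error: prompt exceeds 10,000 characters" >&2; return 1
  fi
  # Warn about characters that commonly break quoting
  if grep -q '[`$;<>]' <<< "$prompt"; then
    echo "Warning: special characters found; they may be sanitized" >&2
  fi
}

validate_rounds_sketch() {
  [[ "$1" =~ ^[1-5]$ ]] || { echo "Error: rounds must be 1-5" >&2; return 1; }
}
```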
### Error Handling
- **Graceful degradation** when models unavailable
- **Comprehensive error** logging and reporting
- **User-friendly** error messages with actionable guidance
- **Fallback mechanisms** for critical failures
- **Input validation** with clear error reporting
- **Quality degradation** protection with automatic correction
## Advanced Features
### File Integration
```bash
# Include file contents in prompts
./computer -f document.txt "Analyze this document"
```
File validation and security:
- Path existence checking
- Read permission validation
- Content sanitization
- Graceful error handling
### Multi-Round Processing
```bash
# Specify processing rounds (1-5)
./computer "Complex question" 3
```
Each round builds on previous insights:
- Round 1: Initial analysis
- Round 2: Deep exploration
- Round 3: Synthesis and conclusions
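Conceptually, each round feeds the previous answers back into the next prompt. A minimal sketch of that chaining (illustrative, not the actual `computer` implementation):
```bash
# Illustrative round chaining -- not the actual computer implementation.
prompt="Complex question"
rounds=3
context=""

for (( r = 1; r <= rounds; r++ )); do
  answer=$(ollama run llama3 "Round $r of $rounds. Build on the findings so far.
Findings:
$context
Question: $prompt")
  context+="Round $r: $answer"$'\n'
done

printf '%s\n' "$answer"   # the final round carries the synthesis
```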
### Intelligent Routing
The `computer` script automatically routes prompts based on:
- **Advanced classification** with confidence scoring
- **Multi-layer analysis** (pattern + semantic + complexity)
- **Context-aware** mechanism selection
- **Optimal mechanism** selection with fallbacks
### Quality Monitoring Integration
All thinking mechanisms now include:
- **Automatic quality assessment** of every LLM response
- **Degradation detection** with pattern recognition
- **Automatic correction** attempts for poor quality outputs
- **Intelligent fallbacks** when correction fails
- **Mechanism-specific** quality relevance checking
## Testing & Validation
### Validation Functions
```bash
# Prompt validation
validate_prompt "Your prompt here"
# File validation
validate_file_path "/path/to/file"
# Model validation
validate_model "model_name" "fallback_model"
# Classification testing
classify_prompt "Test prompt" false # Pattern-only mode
classify_prompt "Test prompt" true # Full semantic mode
# Quality monitoring testing
assess_quality "response" "context" "mechanism"
detect_degradation_patterns "response"
guard_output_quality "response" "context" "mechanism" "model"
```
### Error Testing
```bash
# Test with invalid inputs
./computer "" # Empty prompt
./computer -f nonexistent.txt # Missing file
./computer -x # Invalid option
./computer "test" 10 # Invalid rounds
# Test quality monitoring
./test_quality_guard.sh # Quality guard system test
```
## Performance Considerations
### Optimization Features
- **Model availability** checking before execution
- **Efficient file** handling and validation
- **Minimal overhead** for simple queries
- **Classification caching** for repeated patterns
- **Parallel processing** where applicable
- **Quality monitoring** with minimal performance impact
### Resource Management
- **Temporary file** cleanup
- **Session management** with unique IDs
- **Memory-efficient** processing
- **Graceful timeout** handling
- **Classification result** caching
- **Quality assessment** optimization
## Troubleshooting
### Common Issues
1. **"Ollama not found"**
- Ensure Ollama is installed and in PATH
- Check if Ollama service is running
2. **"Model not available"**
- Verify model names with `ollama list`
- Check model download status
- System will automatically fall back to available models
3. **"Permission denied"**
- Ensure scripts have execute permissions
- Check file ownership and permissions
4. **"File not found"**
- Verify file path is correct
- Check file exists and is readable
- Ensure absolute or correct relative paths
5. **"Low classification confidence"**
- Check error logs for classification details
- Consider using -d flag for direct responses
- Review prompt clarity and specificity
6. **"Quality below threshold"**
- System automatically attempts correction
- Check quality monitoring logs for details
- Fallback responses ensure helpful output
- Consider rephrasing complex prompts
### Debug Mode
```bash
# Enable verbose logging
export AI_THINKING_DEBUG=1
# Check logs
tail -f ~/tmp/ai_thinking/errors.log
# Test classification directly
source classifier.sh
classify_prompt "Your test prompt" true
# Test quality monitoring
source quality_guard.sh
assess_quality "test response" "test context" "puzzle"
```
## Future Improvements
### Possible Enhancements
- Additional thinking mechanisms
- More model integration options
- Improved caching
- Better error handling
- Enhanced testing
- Performance optimizations
### Extensibility
The basic modular structure allows for:
- Adding new mechanisms
- Integrating additional models
- Modifying validation rules
- Extending logging
- Updating classification patterns
## Examples
### Strategic Planning
```bash
./exploration -p 5 "What are our options for scaling this application?"
```
### Code Review
```bash
./critique -f main.py "How can we improve this code's performance and maintainability?"
```
### Decision Making
```bash
./consensus "Should we migrate to a new database system?"
```
### Problem Analysis
```bash
./socratic "What are the root causes of our deployment failures?"
```
### Algorithm & Coding Challenges
```bash
# The system automatically routes to puzzle mechanism
./computer "How can I implement a binary search algorithm?"
./puzzle "What's the most efficient way to sort this data structure?"
```
### Complex Classification
```bash
# The system automatically detects this needs exploration
./computer "Compare different approaches to implementing microservices"
```
### Quality Monitoring in Action
```bash
# All mechanisms now include quality protection
./computer "Complex question requiring deep analysis" 3
# Quality monitoring automatically protects against degradation
# Fallback responses ensure helpful output even with poor LLM performance
```
### RAG-Enhanced Responses
```bash
# Lil-specific questions automatically use RAG
./computer "Using Lil, how can I implement a recursive function?"
# → PUZZLE with Lil knowledge corpus context
# Manual corpus search
./rag_search.sh search "function definition" programming
# Corpus management
./corpus_manager.sh template "data-structures"
./corpus_manager.sh update
```
### Manual Mechanism Selection
```bash
# Force specific thinking style
./computer -m socratic "What are the fundamental assumptions here?"
./computer -m exploration "What are our strategic options?"
./computer -m puzzle "How can I optimize this algorithm?"
# Get help with available options
./computer --mechanisms
```
## Contributing
### Development Guidelines
- **Maintain modularity** - Each mechanism should be self-contained
- **Follow error handling** patterns established in logging.sh
- **Add comprehensive** documentation for new features
- **Include validation** for all inputs and parameters
- **Test thoroughly** with various input types and edge cases
- **Update classification** patterns when adding new mechanisms or routing logic
- **Integrate quality monitoring** for all new LLM interactions
- **Follow quality guard** patterns for consistent protection
- **Consider RAG integration** for domain-specific knowledge
- **Add corpus documentation** when extending knowledge areas
### Code Style
- Consistent naming conventions
- Clear comments explaining complex logic
- Error handling for all external calls
- Modular functions with single responsibilities
- Validation functions for all inputs
- Comprehensive logging for debugging
- Quality monitoring integration for all AI responses