Neogenesis

https://dev.to/answeryt/the-demo-spell-and-production-dilemma-of-ai-agents-how-i-built-a-self-learning-agent-system-4okk

Stars: 994


README:

🧠 Neogenesis System - Metacognitive AI Decision Framework

🌟 Making AI Think Like Experts - Metacognitive Decision Intelligence

Quick Start · Core Features · Installation · Usage


🎯 Project Overview

Neogenesis System is an advanced AI decision-making framework that enables agents to "think about how to think". Unlike traditional question-answer systems, it implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments.

🌟 Key Features

  • 🧠 Metacognitive Intelligence: AI that thinks about "how to think"
  • 🔧 Tool-Enhanced Decisions: Dynamic tool integration during decision-making
  • 🔬 Real-time Learning: Learns during thinking phase, not just after execution
  • 💡 Aha-Moment Breakthroughs: Creative problem-solving when stuck
  • 🏆 Experience Accumulation: Builds reusable decision templates from success
  • 🤖 Multi-LLM Support: OpenAI, Anthropic, DeepSeek, Ollama with auto-failover

🚀 Core Innovations

1. 🔬 Five-Stage Decision Process

Traditional AI: Think → Execute → Learn
Neogenesis: Think → Verify → Learn → Optimize → Decide (all during thinking phase)

graph LR
    A[Seed Generation] --> B[Verification] 
    B --> C[Path Generation] 
    C --> D[Learning & Optimization]
    D --> E[Final Decision]
    D --> C
    style D fill:#fff9c4

Value: AI learns and optimizes before execution, avoiding costly mistakes and improving decision quality.
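
For intuition, here is a self-contained toy rendering of that five-stage loop in plain Python. Every function in it is a stand-in invented for this illustration (generate_seed, verify, estimate_quality, and so on are not the real Neogenesis API):

import random

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, not a real config value

def generate_seed(query):
    # Stage 1 stand-in: produce a "thinking seed" from the query
    return f"core problem: {query}"

def verify(seed):
    # Stage 2 stand-in: tool-based verification (e.g. web search) of the seed
    return {"evidence_quality": 0.9}

def generate_paths(seed, evidence, avoid=()):
    # Stage 3 stand-in: divergent generation of candidate thinking paths
    candidates = ["conservative plan", "aggressive plan", "hybrid plan"]
    fresh = [p for p in candidates if p not in avoid]
    return fresh or candidates

def estimate_quality(path, evidence):
    # Stand-in for MAB/LLM scoring of a thinking path
    return random.uniform(0.5, 1.0) * evidence["evidence_quality"]

def five_stage_decision(query, max_rounds=3):
    seed = generate_seed(query)                 # 1. seed generation
    evidence = verify(seed)                     # 2. verification
    paths = generate_paths(seed, evidence)      # 3. path generation
    best, best_score = None, 0.0
    for _ in range(max_rounds):                 # 4. learn & optimize before acting
        for path in paths:
            score = estimate_quality(path, evidence)
            if score > best_score:
                best, best_score = path, score
        if best_score >= CONFIDENCE_THRESHOLD:
            break
        paths = generate_paths(seed, evidence, avoid=paths)
    return best                                 # 5. final decision, prior to execution

print(five_stage_decision("design a scalable microservices architecture"))

The point the sketch makes is the same one the diagram makes: evaluation and re-generation loop before anything is executed.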

2. 🎰 Multi-Armed Bandit Learning

  • Experience Accumulation: Learns which decision strategies work best in different contexts
  • Golden Templates: Automatically identifies and reuses successful reasoning patterns
  • Exploration vs Exploitation: Balances trying new approaches vs using proven methods (see the sketch below)
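
To make the exploration-vs-exploitation idea concrete, here is a minimal, self-contained Thompson Sampling sketch over decision strategies (one of the MAB algorithms listed in the tech stack below). It illustrates the principle only; it is not the internal MABConverger code:

import random

class ThompsonSelector:
    """Beta-Bernoulli Thompson Sampling over candidate decision strategies."""

    def __init__(self, strategies):
        # One [successes+1, failures+1] pair per strategy: a flat Beta(1, 1) prior
        self.stats = {s: [1, 1] for s in strategies}

    def select(self):
        # Sample a plausible success rate per strategy and pick the best draw.
        # Little-tried strategies have wide posteriors and occasionally win the
        # draw (exploration); proven ones usually win (exploitation).
        draws = {s: random.betavariate(a, b) for s, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, strategy, success):
        self.stats[strategy][0 if success else 1] += 1

selector = ThompsonSelector(["analytical", "creative", "systematic"])
for _ in range(100):
    strategy = selector.select()
    selector.update(strategy, success=random.random() < 0.7)  # simulated feedback
print(selector.stats)  # counts concentrate on whatever worked best

Replace the simulated feedback with real execution outcomes and you have the core of experience accumulation; a strategy that keeps winning becomes, in effect, a golden template.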

3. 💡 Aha-Moment Breakthrough

When conventional approaches fail, the system automatically:

  • Activates creative problem-solving mode
  • Generates unconventional thinking paths
  • Breaks through decision deadlocks with innovative solutions

4. 🔧 Tool-Enhanced Intelligence

  • Real-time Information: Integrates web search and verification tools during thinking
  • Dynamic Tool Selection: Hybrid MAB+LLM approach for optimal tool choice (see the sketch below)
  • Unified Tool Interface: LangChain-inspired tool abstraction for extensibility
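
A rough, hypothetical sketch of what such a hybrid MAB+LLM tool-selection policy can look like; ask_llm_for_tool, MIN_TRIALS, and the statistics format are placeholders invented for this illustration:

import random

MIN_TRIALS = 3  # below this many uses, a tool counts as "cold" (unfamiliar)

def ask_llm_for_tool(task, tools):
    # Placeholder for an LLM call that proposes a tool for an unfamiliar task
    return random.choice(tools)

def select_tool(task, tool_stats):
    cold = [name for name, s in tool_stats.items() if s["uses"] < MIN_TRIALS]
    if cold:
        # Exploration mode: LLM-guided discovery of unfamiliar tools
        return ask_llm_for_tool(task, cold)
    # Experience mode: exploit accumulated statistics (greedy MAB for brevity)
    return max(tool_stats, key=lambda n: tool_stats[n]["successes"] / tool_stats[n]["uses"])

stats = {
    "web_search": {"uses": 5, "successes": 4},
    "idea_verification": {"uses": 1, "successes": 1},
}
print(select_tool("check market trends", stats))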

🚀 Installation

Requirements

  • Python 3.8 or higher
  • pip package manager

Setup

# Clone repository
git clone https://github.com/your-repo/neogenesis-system.git
cd neogenesis-system

# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Configuration

Create a .env file in the project root:

# Configure one or more LLM providers (system auto-detects available ones)
DEEPSEEK_API_KEY="your_deepseek_api_key"
OPENAI_API_KEY="your_openai_api_key"  
ANTHROPIC_API_KEY="your_anthropic_api_key"

🎯 Usage Examples

Quick Demo

# Launch demo menu
python start_demo.py

# Quick simulation demo (no API key needed)
python quick_demo.py

# Full interactive demo  
python run_demo.py

Basic Usage

from neogenesis_system.core.neogenesis_planner import NeogenesisPlanner
from neogenesis_system.cognitive_engine.reasoner import PriorReasoner
from neogenesis_system.cognitive_engine.path_generator import PathGenerator  
from neogenesis_system.cognitive_engine.mab_converger import MABConverger

# Initialize components
planner = NeogenesisPlanner(
    prior_reasoner=PriorReasoner(),
    path_generator=PathGenerator(),
    mab_converger=MABConverger()
)

# Create a decision plan
plan = planner.create_plan(
    query="Design a scalable microservices architecture",
    memory=None,
    context={"domain": "system_design", "complexity": "high"}
)

print(f"Plan: {plan.thought}")
print(f"Actions: {len(plan.actions)}")

🔗 Distributed State Coordination

distributed_state.py - Multi-node state synchronization and consensus:

Distributed Features

  • 🔗 Node Coordination: Synchronize state across multiple Neogenesis instances
  • 📡 Event Broadcasting: Real-time state change notifications
  • ⚖️ Conflict Resolution: Intelligent merging of concurrent state modifications
  • 🔄 Consensus Protocols: Ensure state consistency in distributed environments

import time

from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager

# Configure distributed coordination
distributed_state = DistributedStateManager(
    node_id="neogenesis_node_1",
    cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
    consensus_protocol="raft"
)

# Distribute decision state across cluster (await requires an async context)
await distributed_state.broadcast_decision_update({
    "session_id": "global_decision_001",
    "chosen_path": {"id": 5, "confidence": 0.93},
    "timestamp": time.time()
})

โ›“๏ธ Advanced Chain Composition & Workflows

advanced_chains.py & chains.py - Sophisticated workflow orchestration:

Chain Types

  • 🔄 Sequential Chains: Linear execution with state passing
  • 🌟 Parallel Chains: Concurrent execution with result aggregation
  • 🔀 Conditional Chains: Dynamic routing based on intermediate results
  • 🔁 Loop Chains: Iterative processing with convergence criteria
  • 🌳 Tree Chains: Hierarchical decision trees with pruning strategies

Advanced Features

  • 📊 Chain Analytics: Performance monitoring and bottleneck identification
  • 🎯 Dynamic Routing: Intelligent path selection based on context
  • ⚡ Parallel Execution: Multi-threaded chain processing
  • 🛡️ Error Recovery: Graceful handling of chain failures with retry mechanisms

from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer

# Create sophisticated decision workflow
composer = AdvancedChainComposer()

# Define parallel analysis chains
technical_analysis = composer.create_parallel_chain([
    "architecture_evaluation",
    "performance_analysis", 
    "security_assessment"
])

# Define sequential decision chain
decision_workflow = composer.create_sequential_chain([
    "problem_analysis",
    technical_analysis,  # Parallel sub-chain
    "cost_benefit_analysis",
    "risk_assessment",
    "final_recommendation"
])

# Execute with state persistence (await requires an async context)
result = await composer.execute_chain(
    chain=decision_workflow,
    input_data={"project": "cloud_migration", "scale": "enterprise"},
    persist_state=True,
    session_id="migration_decision_001"
)

🚀 Parallel Execution Framework

execution_engines.py - High-performance parallel processing:

Execution Capabilities

  • 🎯 Task Scheduling: Intelligent workload distribution
  • ⚡ Parallel Processing: Multi-core and distributed execution
  • 📊 Resource Management: CPU, memory, and network optimization
  • 🔄 Fault Tolerance: Automatic retry and failure recovery

from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine

# Configure high-performance execution
engine = ParallelExecutionEngine(
    max_workers=8,
    execution_timeout=300,
    retry_strategy="exponential_backoff"
)

# Execute multiple decision paths in parallel
paths_to_evaluate = [
    {"path_id": 1, "strategy": "microservices_approach"},
    {"path_id": 2, "strategy": "monolithic_approach"},
    {"path_id": 3, "strategy": "hybrid_approach"}
]

results = await engine.execute_parallel(
    tasks=paths_to_evaluate,
    evaluation_function="evaluate_architecture_path"
)

🔧 Extended Tool Ecosystem

tools.py - Comprehensive LangChain-compatible tool library:

Enhanced Tool Categories

  • ๐Ÿ” Research Tools: Advanced web search, academic paper retrieval, market analysis
  • ๐Ÿ’พ Data Tools: Database queries, file processing, API integrations
  • ๐Ÿงฎ Analysis Tools: Statistical analysis, ML model inference, data visualization
  • ๐Ÿ”„ Workflow Tools: Task automation, notification systems, reporting generators
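
To suggest how a custom tool could join these categories, here is a minimal registry sketch in the same spirit. The real BaseTool/ToolRegistry interfaces in neogenesis_system may differ; this only shows the pattern of putting callables behind one uniform, error-wrapped interface:

from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ToolResult:
    success: bool
    data: Any = None
    error_message: str = ""

class SimpleToolRegistry:
    """Toy stand-in for a ToolRegistry: callables behind one interface."""

    def __init__(self):
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def execute(self, name: str, **kwargs) -> ToolResult:
        # Uniform error wrapping, so callers never have to try/except a tool
        if name not in self._tools:
            return ToolResult(False, error_message=f"unknown tool: {name}")
        try:
            return ToolResult(True, data=self._tools[name](**kwargs))
        except Exception as exc:
            return ToolResult(False, error_message=str(exc))

registry = SimpleToolRegistry()
registry.register("word_count", lambda text: len(text.split()))
print(registry.execute("word_count", text="count these words"))  # success=True, data=3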

📋 Installation & Configuration

To use the LangChain integration features:

# Install core LangChain integration dependencies
pip install langchain langchain-community

# Install storage backend dependencies
pip install lmdb                    # For LMDB high-performance storage
pip install redis                   # For Redis distributed storage
pip install sqlalchemy              # For enhanced SQL operations

# Install distributed coordination dependencies  
pip install aioredis                # For async Redis operations
pip install consul                  # For service discovery (optional)

🎯 LangChain Integration Usage Examples

Basic LangChain-Compatible Workflow

from neogenesis_system.langchain_integration import (
    create_neogenesis_chain, 
    PersistentStateManager,
    AdvancedChainComposer
)

# Create LangChain-compatible Neogenesis chain
neogenesis_chain = create_neogenesis_chain(
    storage_backend="lmdb",
    enable_distributed_state=True,
    session_persistence=True
)

# Use as standard LangChain component
from langchain.chains import SequentialChain

# Integrate with existing LangChain workflows
full_workflow = SequentialChain(chains=[
    preprocessing_chain,       # Standard LangChain chain
    neogenesis_chain,         # Our intelligent decision engine
    postprocessing_chain      # Standard LangChain chain
])

# Execute with persistent state
result = full_workflow.run({
    "input": "Design scalable microservices architecture",
    "context": {"team_size": 15, "timeline": "6_months"}
})

Enterprise Decision Workflow

from neogenesis_system.langchain_integration.coordinators import EnterpriseCoordinator

# Configure enterprise-grade decision workflow
coordinator = EnterpriseCoordinator(
    storage_config={
        "backend": "lmdb",
        "encryption": True,
        "backup_enabled": True
    },
    distributed_config={
        "cluster_size": 3,
        "consensus_protocol": "raft"
    }
)

# Execute complex business decision
decision_result = await coordinator.execute_enterprise_decision(
    query="Should we acquire startup company TechCorp for $50M?",
    context={
        "industry": "fintech",
        "company_stage": "series_b",
        "financial_position": "strong",
        "strategic_goals": ["market_expansion", "talent_acquisition"]
    },
    analysis_depth="comprehensive",
    stakeholder_perspectives=["ceo", "cto", "cfo", "head_of_strategy"]
)

# Access persistent decision history
decision_history = coordinator.get_decision_history(
    filters={"domain": "mergers_acquisitions", "timeframe": "last_year"}
)

๐Ÿ—๏ธ System Architecture & Tech Stack

Neogenesis System adopts a highly modular and extensible architectural design where components have clear responsibilities and work together through dependency injection.

Core Component Overview

graph TD
    subgraph "Launch & Demo Layer"
        UI[start_demo.py / interactive_demo.py]
    end

    subgraph "Core Control Layer"
        MC["MainController<br/><b>(controller.py)</b><br/>Five-stage Process Coordination"]
    end

    subgraph "LangChain Integration Layer"
        LC_AD["LangChain Adapters<br/><b>(adapters.py)</b><br/>LangChain Compatibility"]
        LC_PS["PersistentStorage<br/><b>(persistent_storage.py)</b><br/>Multi-Backend Storage"]
        LC_SM["StateManagement<br/><b>(state_management.py)</b><br/>ACID Transactions"]
        LC_DS["DistributedState<br/><b>(distributed_state.py)</b><br/>Multi-Node Sync"]
        LC_AC["AdvancedChains<br/><b>(advanced_chains.py)</b><br/>Chain Workflows"]
        LC_EE["ExecutionEngines<br/><b>(execution_engines.py)</b><br/>Parallel Processing"]
        LC_CO["Coordinators<br/><b>(coordinators.py)</b><br/>Chain Coordination"]
        LC_TO["LangChain Tools<br/><b>(tools.py)</b><br/>Extended Tool Library"]
    end

    subgraph "Decision Logic Layer"
        PR["PriorReasoner<br/><b>(reasoner.py)</b><br/>Quick Heuristic Analysis"]
        RAG["RAGSeedGenerator<br/><b>(rag_seed_generator.py)</b><br/>RAG-Enhanced Seed Generation"]
        PG["PathGenerator<br/><b>(path_generator.py)</b><br/>Multi-path Thinking Generation"]
        MAB["MABConverger<br/><b>(mab_converger.py)</b><br/>Meta-MAB & Learning"]
    end

    subgraph "Tool Abstraction Layer"
        TR["ToolRegistry<br/><b>(tool_abstraction.py)</b><br/>Unified Tool Management"]
        WST["WebSearchTool<br/><b>(search_tools.py)</b><br/>Web Search Tool"]
        IVT["IdeaVerificationTool<br/><b>(search_tools.py)</b><br/>Idea Verification Tool"]
    end

    subgraph "Tools & Services Layer"
        LLM["LLMManager<br/><b>(llm_manager.py)</b><br/>Multi-LLM Provider Management"]
        SC["SearchClient<br/><b>(search_client.py)</b><br/>Web Search & Verification"]
        PO["PerformanceOptimizer<br/><b>(performance_optimizer.py)</b><br/>Parallelization & Caching"]
        CFG["config.py<br/><b>(Main/Demo Configuration)</b>"]
    end
    end

    subgraph "Storage Backends"
        FS[FileSystem<br/>Versioned Storage]
        SQL[SQLite<br/>ACID Database]
        LMDB[LMDB<br/>High-Performance KV]
        MEM[Memory<br/>In-Memory Cache]
        REDIS[Redis<br/>Distributed Cache]
    end

    subgraph "LLM Providers Layer"
        OAI[OpenAI<br/>GPT-3.5/4/4o]
        ANT[Anthropic<br/>Claude-3 Series]
        DS[DeepSeek<br/>deepseek-chat/coder]
        OLL[Ollama<br/>Local Models]
        AZ[Azure OpenAI<br/>Enterprise Models]
    end

    UI --> MC
    MC --> LC_AD
    LC_AD --> LC_CO
    LC_CO --> LC_AC & LC_EE
    LC_AC --> LC_SM
    LC_SM --> LC_PS
    LC_DS --> LC_SM
    LC_PS --> FS & SQL & LMDB & MEM & REDIS
    
    MC --> PR & RAG
    MC --> PG
    MC --> MAB
    MC --> TR
    
    MAB --> LC_SM
    RAG --> TR
    RAG --> LLM
    PG --> LLM
    MAB --> PG
    MC -- "Uses" --> PO
    TR --> WST & IVT
    TR --> LC_TO
    WST --> SC
    IVT --> SC
    LLM --> OAI & ANT & DS & OLL & AZ

    style LC_AD fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style LC_PS fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style LC_SM fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style LC_DS fill:#e8f5e8,stroke:#388e3c,stroke-width:2px

Component Description:

Core System Components

  • MainController: System commander, responsible for orchestrating the complete five-stage decision process with tool-enhanced verification capabilities
  • RAGSeedGenerator / PriorReasoner: Decision starting point, responsible for generating high-quality "thinking seeds"
  • PathGenerator: System's "divergent thinking" module, generating diverse solutions based on seeds
  • MABConverger: System's "convergent thinking" and "learning" module, responsible for evaluation, selection, and learning from experience

LangChain Integration Components

  • LangChain Adapters: Compatibility layer enabling seamless integration with existing LangChain workflows and components
  • PersistentStorage: Multi-backend storage engine supporting FileSystem, SQLite, LMDB, Memory, and Redis with enterprise features
  • StateManagement: Professional state management with ACID transactions, checkpointing, and branch management
  • DistributedState: Multi-node state coordination with consensus protocols for enterprise deployment
  • AdvancedChains: Sophisticated chain composition supporting sequential, parallel, conditional, and tree-based workflows
  • ExecutionEngines: High-performance parallel processing framework with intelligent task scheduling and fault tolerance
  • Coordinators: Multi-chain coordination system managing complex workflow orchestration and resource allocation
  • LangChain Tools: Extended tool ecosystem with advanced research, data processing, analysis, and workflow capabilities

Tool & Service Components

  • ToolRegistry: LangChain-inspired unified tool management system, providing centralized registration, discovery, and execution of tools
  • WebSearchTool / IdeaVerificationTool: Specialized tools implementing the BaseTool interface for web search and idea verification capabilities
  • LLMManager: Universal LLM interface manager, providing unified access to multiple AI providers with intelligent routing and fallback
  • Tool Layer: Provides reusable underlying capabilities such as multi-LLM management, search engines, performance optimizers

Storage Backend Layer

  • FileSystem: Hierarchical storage with versioning, backup, and metadata management
  • SQLite: ACID-compliant relational database for complex queries and structured data
  • LMDB: Lightning-fast memory-mapped database optimized for high-performance scenarios
  • Memory: In-memory storage for caching and testing scenarios
  • Redis: Distributed caching and session storage for enterprise scalability
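
What lets five very different engines sit behind one storage layer is a small shared contract. A minimal sketch of that idea (not the actual persistent_storage.py code):

from abc import ABC, abstractmethod
from typing import Optional

class StorageBackend(ABC):
    """Minimal key-value contract that FileSystem, SQLite, LMDB, Memory,
    and Redis backends would each implement in some form."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

class MemoryBackend(StorageBackend):
    """In-memory implementation, the simplest of the five."""

    def __init__(self):
        self._data = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

backend: StorageBackend = MemoryBackend()  # swap in an LMDB/Redis-backed class here
backend.put("session:1", b"decision state")
print(backend.get("session:1"))

Because callers only see the contract, switching from Memory to LMDB or Redis is a configuration change rather than a code change.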

🔧 Tech Stack

Core Technologies:

  • Core Language: Python 3.8+
  • AI Engines: Multi-LLM Support (OpenAI, Anthropic, DeepSeek, Ollama, Azure OpenAI)
  • LangChain Integration: Full LangChain compatibility with custom adapters, chains, and tools
  • Tool Architecture: LangChain-inspired unified tool abstraction with BaseTool interface, ToolRegistry management, and dynamic tool discovery
  • Core Algorithms: Meta Multi-Armed Bandit (Thompson Sampling, UCB, Epsilon-Greedy), Retrieval-Augmented Generation (RAG), Tool-Enhanced Decision Making
  • Storage Backends: Multi-backend support (LMDB, SQLite, FileSystem, Memory, Redis) with enterprise features
  • State Management: ACID transactions, distributed state coordination, and persistent workflows
  • External Services: DuckDuckGo Search, Multi-provider LLM APIs, Tool-enhanced web verification

LangChain Integration Stack:

  • Framework: LangChain, LangChain-Community for ecosystem compatibility
  • Storage Engines: LMDB (high-performance), SQLite (ACID compliance), Redis (distributed caching)
  • State Systems: Custom transaction management, distributed consensus protocols
  • Chain Types: Sequential, Parallel, Conditional, Loop, and Tree-based chain execution
  • Execution: Multi-threaded parallel processing with intelligent resource management

Key Libraries:

  • Core: requests, numpy, typing, dataclasses, abc, asyncio
  • AI/LLM: openai, anthropic, langchain, langchain-community
  • Storage: lmdb, sqlite3, redis, sqlalchemy
  • Search: duckduckgo-search, web scraping utilities
  • Performance: threading, multiprocessing, caching mechanisms
  • Distributed: aioredis, consul (optional), network coordination

🚀 Quick Start

Environment Requirements

  • Python 3.8 or higher
  • pip package manager

Installation & Configuration

  1. Clone Repository

    git clone https://github.com/your-repo/neogenesis-system.git
    cd neogenesis-system
  2. Install Dependencies

    # (Recommended) Create and activate virtual environment
    python -m venv venv
    source venv/bin/activate  # on Windows: venv\Scripts\activate
    
    # Install core dependencies
    pip install -r requirements.txt
    
    # (Optional) Install additional LLM provider libraries for enhanced functionality
    pip install openai        # For OpenAI GPT models
    pip install anthropic     # For Anthropic Claude models
    # Note: DeepSeek support is included in core dependencies
    
    # (Optional) Install LangChain integration dependencies for advanced features
    pip install langchain langchain-community  # Core LangChain integration
    pip install lmdb                           # High-performance LMDB storage
    pip install redis                          # Distributed caching and state
    pip install sqlalchemy                     # Enhanced SQL operations
    pip install aioredis                       # Async Redis for distributed coordination
  3. Configure API Keys (Optional but Recommended)

    Create a .env file in the project root directory and configure your preferred LLM provider API keys:

    # Configure one or more LLM providers (the system will auto-detect available ones)
    DEEPSEEK_API_KEY="your_deepseek_api_key"
    OPENAI_API_KEY="your_openai_api_key"
    ANTHROPIC_API_KEY="your_anthropic_api_key"
    
    # For Azure OpenAI (optional)
    AZURE_OPENAI_API_KEY="your_azure_openai_key"
    AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"

    Note: Only one provider is required (you may configure several). The system automatically:

    • Detects available providers based on configured API keys
    • Selects the best available provider automatically
    • Falls back to other providers if the primary one fails

    Without any keys, the system will run in limited simulation mode.
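
Conceptually, the detection-and-failover behavior works like the sketch below (same environment variable names as above; the priority order and the call_provider helper are assumptions made for illustration, not the actual LLMManager code):

import os

PROVIDER_PRIORITY = ["deepseek", "openai", "anthropic"]  # assumed order
ENV_KEYS = {
    "deepseek": "DEEPSEEK_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def available_providers():
    # A provider counts as configured if its API key is present in the environment
    return [p for p in PROVIDER_PRIORITY if os.getenv(ENV_KEYS[p])]

def ask_with_failover(prompt, call_provider):
    # call_provider(name, prompt) is a placeholder for the real LLM call
    providers = available_providers()
    if not providers:
        return "simulation mode: no API keys configured"
    for provider in providers:
        try:
            return call_provider(provider, prompt)  # try the best provider first
        except Exception:
            continue                                # fall back to the next one
    raise RuntimeError("all configured providers failed")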

🎭 Demo Experience

We provide multiple demo modes to let you intuitively experience AI's thinking process.

# Launch menu to select experience mode
python start_demo.py

# (Recommended) Run quick simulation demo directly, no configuration needed
python quick_demo.py

# Run complete interactive demo connected to real system
python run_demo.py

Basic Usage Example

import os
from dotenv import load_dotenv
from meta_mab.controller import MainController

# Load environment variables
load_dotenv()

# Initialize controller (auto-detects available LLM providers)
controller = MainController()

# The system automatically selects the best available LLM provider
# You can check which providers are available
status = controller.get_llm_provider_status()
print(f"Available providers: {status['healthy_providers']}/{status['total_providers']}")

# Pose a complex question
query = "Design a scalable, low-cost cloud-native tech stack for a startup tech company"
context = {"domain": "cloud_native_architecture", "company_stage": "seed"}

# Get AI's decision (automatically uses the best available provider)
decision_result = controller.make_decision(user_query=query, execution_context=context)

# View the final chosen thinking path
chosen_path = decision_result.get('chosen_path')
if chosen_path:
    print(f"🚀 AI's chosen thinking path: {chosen_path.path_type}")
    print(f"📝 Core approach: {chosen_path.description}")

# (Optional) Switch to a specific provider
controller.switch_llm_provider("openai")  # or "anthropic", "deepseek", etc.

# (Optional) Provide execution result feedback to help AI learn
controller.update_performance_feedback(
    decision_result=decision_result,
    execution_success=True,
    execution_time=12.5,
    user_satisfaction=0.9,
    rl_reward=0.85
)
print("\n✅ AI has received feedback and completed learning!")

# Tool Integration Examples
print("\n" + "="*50)
print("🔧 Tool-Enhanced Decision Making Examples")
print("="*50)

# Check available tools
from meta_mab.utils.tool_abstraction import list_available_tools, get_registry_stats
tools = list_available_tools()
stats = get_registry_stats()
print(f"📊 Available tools: {len(tools)} ({', '.join(tools)})")
print(f"📈 Tool registry stats: {stats['total_tools']} tools, {stats['success_rate']:.1%} success rate")

# Direct tool usage example
from meta_mab.utils.tool_abstraction import execute_tool
search_result = execute_tool("web_search", query="latest trends in cloud computing 2024", max_results=3)
if search_result and search_result.success:
    print(f"🔍 Web search successful: Found {len(search_result.data.get('results', []))} results")
else:
    print(f"❌ Web search failed: {search_result.error_message if search_result else 'No result'}")

# Tool-enhanced verification example
verification_result = execute_tool("idea_verification", 
                                 idea="Implement blockchain-based supply chain tracking for food safety",
                                 context={"industry": "food_tech", "scale": "enterprise"})
if verification_result and verification_result.success:
    analysis = verification_result.data.get('analysis', {})
    print(f"💡 Idea verification: Feasibility score {analysis.get('feasibility_score', 0):.2f}")
else:
    print(f"❌ Idea verification failed: {verification_result.error_message if verification_result else 'No result'}")

📊 Performance Metrics

Core System Performance

| Metric | Performance | Description |
|--------|-------------|-------------|
| 🎯 Decision Accuracy | 85%+ | Based on historical validation data |
| ⚡ Average Response Time | 2-5 seconds | Including complete five-stage processing |
| 🧠 Path Generation Success Rate | 95%+ | Diverse thinking path generation |
| 🏆 Golden Template Hit Rate | 60%+ | Successful experience reuse efficiency |
| 💡 Aha-Moment Trigger Rate | 15%+ | Innovation breakthrough scenario percentage |
| 🔧 Tool Integration Success Rate | 92%+ | Tool-enhanced verification reliability |
| 🔍 Tool Discovery Accuracy | 88%+ | Correct tool selection for context |
| 🚀 Tool-Enhanced Decision Quality | +25% | Improvement over non-tool decisions |
| 🎯 Hybrid Selection Accuracy | 94%+ | MAB+LLM fusion mode precision |
| 🧠 Cold-Start Detection Rate | 96%+ | Accurate unfamiliar tool identification |
| ⚡ Experience Mode Efficiency | +40% | Performance boost for familiar tools |
| 🔍 Exploration Mode Success | 89%+ | LLM-guided tool discovery effectiveness |
| 📈 Learning Convergence Speed | 3-5 uses | MAB optimization learning curve |
| 🤖 Provider Availability | 99%+ | Multi-LLM fallback reliability |
| 🔄 Automatic Fallback Success | 98%+ | Seamless provider switching rate |

LangChain Integration Performance

| LangChain Integration Metric | Performance | Description |
|------------------------------|-------------|-------------|
| 🏪 Storage Backend Latency | <2ms | LMDB read/write operations |
| 🔄 State Transaction Speed | <5ms | ACID transaction completion |
| 📡 Distributed Sync Latency | <50ms | Cross-node state synchronization |
| ⚡ Parallel Chain Execution | 4x faster | Compared to sequential execution |
| 💾 Storage Compression Ratio | 60-80% | Space savings with GZIP compression |
| 🛡️ State Consistency Rate | 99.9%+ | Distributed state accuracy |
| 🔧 Tool Integration Success | 95%+ | LangChain tool compatibility |
| 🌐 Chain Composition Success | 98%+ | Complex workflow execution reliability |
| 📊 Workflow Persistence Rate | 99.5%+ | State recovery after failures |
| ⚖️ Load Balancing Efficiency | 92%+ | Distributed workload optimization |

🧪 Testing & Verification

Run Tests

# Run all tests
python -m pytest tests/

# Run unit test examples
python tests/examples/simple_test_example.py

# Run performance tests
python tests/unit/test_performance.py

Verify Core Functions

# Verify MAB algorithm convergence
python tests/unit/test_mab_converger.py

# Verify path generation robustness
python tests/unit/test_path_creation_robustness.py

# Verify RAG seed generation
python tests/unit/test_rag_seed_generator.py

💡 Use Cases

🎯 Product Decision Scenarios

# Product strategy decisions
result = controller.make_decision(
    "How to prioritize features for our SaaS product for next quarter?",
    execution_context={
        "industry": "software",
        "stage": "growth",
        "constraints": ["budget_limited", "team_capacity"]
    }
)

🔧 Technical Solutions

# Architecture design decisions
result = controller.make_decision(
    "Design a real-time recommendation system supporting tens of millions of concurrent users",
    execution_context={
        "domain": "system_architecture", 
        "scale": "large",
        "requirements": ["real_time", "high_availability"]
    }
)

📊 Business Analysis

# Market analysis decisions
result = controller.make_decision(
    "Analyze competitive landscape and opportunities in the AI tools market",
    execution_context={
        "analysis_type": "market_research",
        "time_horizon": "6_months",
        "focus": ["opportunities", "threats"]
    }
)

🔧 Tool-Enhanced Decision Making

# Tool-enhanced technical decisions with real-time information gathering
result = controller.make_decision(
    "Should we adopt Kubernetes for our microservices architecture?",
    execution_context={
        "domain": "system_architecture",
        "team_size": "10_engineers", 
        "current_stack": ["docker", "aws"],
        "constraints": ["learning_curve", "migration_complexity"]
    }
)

# The system automatically:
# 1. Uses WebSearchTool to gather latest Kubernetes trends and best practices
# 2. Applies IdeaVerificationTool to validate feasibility based on team constraints
# 3. Integrates real-time information into decision-making process
# 4. Provides evidence-based recommendations with source citations

print(f"Tool-enhanced decision: {result.get('chosen_path', {}).get('description', 'N/A')}")
print(f"Tools used: {result.get('tools_used', [])}")
print(f"Information sources: {result.get('verification_sources', [])}")

🤖 Multi-LLM Provider Management

# Check available providers and their status
status = controller.get_llm_provider_status()
print(f"Healthy providers: {status['healthy_providers']}")

# Switch to a specific provider for particular tasks
controller.switch_llm_provider("anthropic")  # Use Claude for complex reasoning
result_reasoning = controller.make_decision("Complex philosophical analysis...")

controller.switch_llm_provider("deepseek")   # Use DeepSeek for coding tasks
result_coding = controller.make_decision("Optimize this Python algorithm...")

controller.switch_llm_provider("openai")     # Use GPT for general tasks
result_general = controller.make_decision("Business strategy planning...")

# Get cost and usage statistics
cost_summary = controller.get_llm_cost_summary()
print(f"Total cost: ${cost_summary['total_cost_usd']:.4f}")
print(f"Requests by provider: {cost_summary['cost_by_provider']}")

# Run health check on all providers
health_status = controller.run_llm_health_check()
print(f"Provider health: {health_status}")

🔗 LangChain Integration Use Cases

Enterprise Workflow with Persistent State

from neogenesis_system.langchain_integration import (
    create_neogenesis_chain,
    StateManager,
    DistributedStateManager
)

# Create enterprise-grade persistent workflow
state_manager = StateManager(storage_backend="lmdb", enable_encryption=True)
neogenesis_chain = create_neogenesis_chain(
    state_manager=state_manager,
    enable_persistence=True,
    session_id="enterprise_decision_2024"
)

# Execute long-running decision process with state persistence
result = neogenesis_chain.execute({
    "query": "Develop comprehensive digital transformation strategy",
    "context": {
        "industry": "manufacturing",
        "company_size": "enterprise",
        "timeline": "3_years",
        "budget": "10M_USD",
        "current_state": "legacy_systems"
    }
})

# Access persistent decision history
decision_timeline = state_manager.get_decision_timeline("enterprise_decision_2024")
print(f"Decision milestones: {len(decision_timeline)} checkpoints")

Multi-Chain Parallel Analysis

from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer
from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine

# Configure parallel analysis workflow
composer = AdvancedChainComposer()
execution_engine = ParallelExecutionEngine(max_workers=6)

# Create specialized analysis chains
market_analysis_chain = composer.create_analysis_chain("market_research")
technical_analysis_chain = composer.create_analysis_chain("technical_feasibility")  
financial_analysis_chain = composer.create_analysis_chain("financial_modeling")
risk_analysis_chain = composer.create_analysis_chain("risk_assessment")

# Execute parallel comprehensive analysis
parallel_analysis = composer.create_parallel_chain([
    market_analysis_chain,
    technical_analysis_chain,
    financial_analysis_chain,
    risk_analysis_chain
])

# Run analysis with persistent state and error recovery
result = await execution_engine.execute_chain(
    chain=parallel_analysis,
    input_data={
        "project": "AI-powered customer service platform",
        "market": "enterprise_software",
        "timeline": "18_months"
    },
    persist_state=True,
    enable_recovery=True
)

print(f"Analysis completed: {result['analysis_summary']}")
print(f"Execution time: {result['execution_time']:.2f}s")

Distributed Decision-Making Cluster

from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager
from neogenesis_system.langchain_integration.coordinators import ClusterCoordinator

# Configure distributed decision cluster
distributed_state = DistributedStateManager(
    node_id="decision_node_1",
    cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
    consensus_protocol="raft"
)

cluster_coordinator = ClusterCoordinator(
    distributed_state=distributed_state,
    load_balancing="intelligent"
)

# Execute distributed decision with consensus
decision_result = await cluster_coordinator.execute_distributed_decision(
    query="Should we enter the European market with our fintech product?",
    context={
        "industry": "fintech",
        "target_markets": ["germany", "france", "uk"],
        "regulatory_complexity": "high",
        "competition_level": "intense"
    },
    require_consensus=True,
    min_node_agreement=2
)

# Access cluster decision metrics
cluster_stats = cluster_coordinator.get_cluster_stats()
print(f"Nodes participated: {cluster_stats['active_nodes']}")
print(f"Consensus achieved: {cluster_stats['consensus_reached']}")
print(f"Decision confidence: {decision_result['confidence']:.2f}")

LangChain Ecosystem Compatibility

from langchain.chains import SequentialChain, LLMChain
from langchain.prompts import PromptTemplate
from neogenesis_system.langchain_integration.adapters import NeogenesisLangChainAdapter

# Create standard LangChain components
prompt = PromptTemplate(template="Analyze the market for {product}", input_variables=["product"])
market_chain = LLMChain(llm=llm, prompt=prompt)  # assumes 'llm' is an existing LangChain-compatible LLM instance

# Create Neogenesis intelligent decision chain
neogenesis_adapter = NeogenesisLangChainAdapter(
    storage_backend="lmdb",
    enable_advanced_reasoning=True
)
neogenesis_chain = neogenesis_adapter.create_decision_chain()

# Combine in standard LangChain workflow
complete_workflow = SequentialChain(
    chains=[
        market_chain,           # Standard LangChain analysis
        neogenesis_chain,       # Intelligent Neogenesis decision-making
        market_chain            # Follow-up LangChain processing
    ],
    input_variables=["product"],
    output_variables=["final_recommendation"]
)

# Execute integrated workflow
result = complete_workflow.run({
    "product": "AI-powered legal document analyzer"
})

print(f"Integrated analysis result: {result}")

๐Ÿค Contributing Guide

We warmly welcome community contributions! Whether bug fixes, feature suggestions, or code submissions, all help make Neogenesis System better.

Ways to Contribute

  1. ๐Ÿ› Bug Reports: Submit issues when you find problems
  2. โœจ Feature Suggestions: Propose new feature ideas
  3. ๐Ÿ“ Documentation Improvements: Enhance documentation and examples
  4. ๐Ÿ”ง Code Contributions: Submit Pull Requests
  5. ๐Ÿ”จ Tool Development: Create new tools implementing the BaseTool interface
  6. ๐Ÿงช Tool Testing: Help test and validate tool integrations

Development Guide

# 1. Fork and clone project
git clone https://github.com/your-username/neogenesis-system.git

# 2. Create development branch
git checkout -b feature/your-feature-name

# 3. Install development dependencies
pip install -r requirements-dev.txt

# 4. Run tests to ensure baseline functionality
python -m pytest tests/

# 5. Develop new features...

# 6. Submit Pull Request

Please refer to CONTRIBUTING.md for detailed guidelines.


📄 License

This project is open-sourced under the MIT License. See LICENSE file for details.


๐Ÿ™ Acknowledgments

Core Technology Acknowledgments

  • LangChain: Revolutionary framework for building LLM-powered applications that inspired our comprehensive integration architecture
  • OpenAI: Pioneering GPT models and API standards that inspired our universal interface design
  • Anthropic: Advanced Claude models with superior reasoning capabilities
  • DeepSeek AI: Cost-effective models with excellent coding and multilingual support
  • Ollama: Enabling local and privacy-focused AI deployments
  • LMDB: Lightning-fast memory-mapped database enabling high-performance persistent storage
  • Redis: Distributed caching and state management for enterprise scalability
  • Multi-Armed Bandit Theory: Providing algorithmic foundation for intelligent decision-making
  • RAG Technology: Enabling knowledge-enhanced thinking generation
  • Metacognitive Theory: Inspiring the overall system architecture design

Development Team

Neogenesis System is independently developed by the author.


📞 Support & Feedback

Getting Help

  • 📧 Email Contact: This project is still in development. If you're interested in the project or in commercial use, please contact: [email protected]

Roadmap

LangChain Integration Roadmap

  • v1.1: Enhanced LangChain tool ecosystem with database, API, and file operation tools; improved chain discovery algorithms
  • v1.2: Advanced chain composition and parallel execution capabilities; LangChain performance analytics and monitoring
  • v1.3: Visual chain execution flows and distributed state management Web interface; LangChain marketplace integration
  • v1.4: Multi-language LangChain support, internationalization deployment, and enterprise LangChain connectors
  • v2.0: Distributed chain execution, enterprise-level LangChain integration, and custom chain marketplace

Core System Roadmap

  • v1.1: Enhanced tool ecosystem with database, API, and file operation tools; improved tool discovery algorithms
  • v1.2: Advanced tool composition and chaining capabilities; tool performance analytics
  • v1.3: Visual tool execution flows and decision-making process Web interface
  • v1.4: Multi-language support, internationalization deployment
  • v2.0: Distributed tool execution, enterprise-level integration, and custom tool marketplace

🌟 If this project helps you, please give us a Star!

Making AI Think Like Experts, Decide More Wisely

🚀 Get Started | 📖 View Documentation | 💡 Suggest Ideas
