
Neosgenesis
https://dev.to/answeryt/the-demo-spell-and-production-dilemma-of-ai-agents-how-i-built-a-self-learning-agent-system-4okk
Stars: 994

Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.
README:
Quick Start · Core Features · Installation · Usage
Neogenesis System is an advanced AI decision-making framework that enables agents to "think about how to think". Unlike traditional question-answer systems, it implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments.
- Metacognitive Intelligence: AI that thinks about "how to think"
- Tool-Enhanced Decisions: Dynamic tool integration during decision-making
- Real-time Learning: Learns during the thinking phase, not just after execution
- Aha-Moment Breakthroughs: Creative problem-solving when stuck
- Experience Accumulation: Builds reusable decision templates from success
- Multi-LLM Support: OpenAI, Anthropic, DeepSeek, Ollama with auto-failover
Traditional AI: Think → Execute → Learn
Neogenesis: Think → Verify → Learn → Optimize → Decide (all during the thinking phase)
graph LR
A[Seed Generation] --> B[Verification]
B --> C[Path Generation]
C --> D[Learning & Optimization]
D --> E[Final Decision]
D --> C
style D fill:#fff9c4
Value: AI learns and optimizes before execution, avoiding costly mistakes and improving decision quality.
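To make the flow concrete, here is a minimal, self-contained sketch of this verify-then-decide loop in plain Python. The stage functions are hypothetical stand-ins, not the library's API; they only illustrate how Stage 4 can feed back into Stage 3 before a final decision is committed.
import random

# Hypothetical stand-ins for the real components:
def generate_seed(query):                      # Stage 1: Seed Generation
    return f"seed({query})"

def verify(seed):                              # Stage 2: Verification (e.g., web search)
    return seed

def generate_paths(seed, feedback=None):       # Stage 3: Path Generation
    return [f"{seed}/path{i}" for i in range(3)]

def evaluate(path):                            # scoring step inside Stage 4
    return random.random()

def decide(query, max_rounds=3, threshold=0.8):
    seed = verify(generate_seed(query))
    paths = generate_paths(seed)
    best, best_score = None, -1.0
    for _ in range(max_rounds):                # Stage 4: Learning & Optimization
        scores = {p: evaluate(p) for p in paths}
        best, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score >= threshold:            # good enough: stop refining
            break
        paths = generate_paths(seed, feedback=scores)  # loop back to Stage 3
    return best                                # Stage 5: Final Decision

print(decide("Design a scalable microservices architecture"))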
- Experience Accumulation: Learns which decision strategies work best in different contexts
- Golden Templates: Automatically identifies and reuses successful reasoning patterns
- Exploration vs Exploitation: Balances trying new approaches vs using proven methods
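As an illustration of the exploration-vs-exploitation trade-off described above, the following sketch combines a golden-template shortcut with epsilon-greedy selection. The class and thresholds are illustrative assumptions, not the MABConverger's actual implementation.
import random

class StrategyStats:
    """Per-strategy success statistics (illustrative)."""
    def __init__(self):
        self.successes = 0
        self.trials = 0

    @property
    def success_rate(self):
        return self.successes / self.trials if self.trials else 0.0

def select_strategy(stats, epsilon=0.1, golden_threshold=0.9, min_trials=5):
    # Golden template: reuse a strategy with a proven track record directly.
    golden = [s for s, st in stats.items()
              if st.trials >= min_trials and st.success_rate >= golden_threshold]
    if golden:
        return max(golden, key=lambda s: stats[s].success_rate)
    # Otherwise trade off trying something new vs using the current best.
    if random.random() < epsilon:
        return random.choice(list(stats))                    # explore
    return max(stats, key=lambda s: stats[s].success_rate)   # exploit

stats = {"analytical": StrategyStats(), "creative": StrategyStats()}
stats["analytical"].trials, stats["analytical"].successes = 10, 9
print(select_strategy(stats))  # "analytical" wins as a golden template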
When conventional approaches fail, the system automatically:
- Activates creative problem-solving mode
- Generates unconventional thinking paths
- Breaks through decision deadlocks with innovative solutions
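One plausible, purely illustrative reading of this trigger: when every conventional path scores below a deadlock threshold, switch to generating unconventional alternatives. Nothing here reflects the system's internal code; it only sketches the control flow.
import random

# Hypothetical stand-in: the real system would prompt an LLM for
# unconventional thinking paths instead.
def generate_unconventional_paths(paths):
    return [f"inverted({p})" for p in paths]

def choose_path(paths, scores, deadlock_threshold=0.4):
    best = max(paths, key=lambda p: scores.get(p, 0.0))
    if scores.get(best, 0.0) >= deadlock_threshold:
        return best                     # a conventional path is good enough
    # Deadlock: every conventional path scores poorly -> creative mode.
    return random.choice(generate_unconventional_paths(paths))

print(choose_path(["scale_up", "scale_out"], {"scale_up": 0.2, "scale_out": 0.3}))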
- Real-time Information: Integrates web search and verification tools during thinking
- Dynamic Tool Selection: Hybrid MAB+LLM approach for optimal tool choice
- Unified Tool Interface: LangChain-inspired tool abstraction for extensibility
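The hybrid MAB+LLM idea can be sketched as follows: tools with too little usage history are routed to LLM-guided exploration, while familiar tools are picked by observed success rate. The function names and the `llm_rank` callable are hypothetical, not the library's API.
def select_tool(candidates, stats, llm_rank, min_trials=3):
    # Cold start: too little usage data, defer to the LLM's judgment.
    unfamiliar = [t for t in candidates if stats.get(t, (0, 0))[1] < min_trials]
    if unfamiliar:
        return llm_rank(unfamiliar)     # exploration mode (LLM-guided)
    # Experience mode: pick the tool with the best observed success rate.
    return max(candidates, key=lambda t: stats[t][0] / stats[t][1])

# stats maps tool name -> (successes, trials); llm_rank is a hypothetical callable.
stats = {"web_search": (18, 20), "idea_verification": (9, 12)}
print(select_tool(["web_search", "idea_verification"], stats,
                  llm_rank=lambda tools: tools[0]))  # -> web_search (0.90 > 0.75)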
- Python 3.8 or higher
- pip package manager
# Clone repository
git clone https://github.com/your-repo/neogenesis-system.git
cd neogenesis-system
# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
Create a .env file in the project root:
# Configure one or more LLM providers (system auto-detects available ones)
DEEPSEEK_API_KEY="your_deepseek_api_key"
OPENAI_API_KEY="your_openai_api_key"
ANTHROPIC_API_KEY="your_anthropic_api_key"
# Launch demo menu
python start_demo.py
# Quick simulation demo (no API key needed)
python quick_demo.py
# Full interactive demo
python run_demo.py
from neogenesis_system.core.neogenesis_planner import NeogenesisPlanner
from neogenesis_system.cognitive_engine.reasoner import PriorReasoner
from neogenesis_system.cognitive_engine.path_generator import PathGenerator
from neogenesis_system.cognitive_engine.mab_converger import MABConverger
# Initialize components
planner = NeogenesisPlanner(
    prior_reasoner=PriorReasoner(),
    path_generator=PathGenerator(),
    mab_converger=MABConverger()
)

# Create a decision plan
plan = planner.create_plan(
    query="Design a scalable microservices architecture",
    memory=None,
    context={"domain": "system_design", "complexity": "high"}
)

print(f"Plan: {plan.thought}")
print(f"Actions: {len(plan.actions)}")
| Metric | Performance | Description |
|---|---|---|
| Decision Accuracy | 85%+ | Based on validation data |
| Response Time | 2-5 sec | Full five-stage process |
| Path Generation | 95%+ | Success rate |
| Innovation Rate | 15%+ | Aha-moment breakthroughs |
| Tool Integration | 92%+ | Success rate |
| Multi-LLM Reliability | 99%+ | Provider failover |
MIT License - see LICENSE file.
- OpenAI, Anthropic, DeepSeek: LLM providers
- LangChain: Tool ecosystem inspiration
- Multi-Armed Bandit Theory: Algorithmic foundation
- Metacognitive Theory: Architecture inspiration
Email: [email protected]
If this project helps you, please give us a Star!
- Node Coordination: Synchronize state across multiple Neogenesis instances
- Event Broadcasting: Real-time state change notifications
- Conflict Resolution: Intelligent merging of concurrent state modifications
- Consensus Protocols: Ensure state consistency in distributed environments
from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager
import time

# Configure distributed coordination
distributed_state = DistributedStateManager(
    node_id="neogenesis_node_1",
    cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
    consensus_protocol="raft"
)

# Distribute decision state across cluster (inside an async context)
await distributed_state.broadcast_decision_update({
    "session_id": "global_decision_001",
    "chosen_path": {"id": 5, "confidence": 0.93},
    "timestamp": time.time()
})
advanced_chains.py & chains.py - Sophisticated workflow orchestration:
- Sequential Chains: Linear execution with state passing
- Parallel Chains: Concurrent execution with result aggregation
- Conditional Chains: Dynamic routing based on intermediate results
- Loop Chains: Iterative processing with convergence criteria
- Tree Chains: Hierarchical decision trees with pruning strategies
- Chain Analytics: Performance monitoring and bottleneck identification
- Dynamic Routing: Intelligent path selection based on context
- Parallel Execution: Multi-threaded chain processing
- Error Recovery: Graceful handling of chain failures with retry mechanisms
from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer

# Create sophisticated decision workflow
composer = AdvancedChainComposer()

# Define parallel analysis chains
technical_analysis = composer.create_parallel_chain([
    "architecture_evaluation",
    "performance_analysis",
    "security_assessment"
])

# Define sequential decision chain
decision_workflow = composer.create_sequential_chain([
    "problem_analysis",
    technical_analysis,  # Parallel sub-chain
    "cost_benefit_analysis",
    "risk_assessment",
    "final_recommendation"
])

# Execute with state persistence (inside an async context)
result = await composer.execute_chain(
    chain=decision_workflow,
    input_data={"project": "cloud_migration", "scale": "enterprise"},
    persist_state=True,
    session_id="migration_decision_001"
)
execution_engines.py - High-performance parallel processing:
- Task Scheduling: Intelligent workload distribution
- Parallel Processing: Multi-core and distributed execution
- Resource Management: CPU, memory, and network optimization
- Fault Tolerance: Automatic retry and failure recovery
from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine

# Configure high-performance execution
engine = ParallelExecutionEngine(
    max_workers=8,
    execution_timeout=300,
    retry_strategy="exponential_backoff"
)

# Execute multiple decision paths in parallel (inside an async context)
paths_to_evaluate = [
    {"path_id": 1, "strategy": "microservices_approach"},
    {"path_id": 2, "strategy": "monolithic_approach"},
    {"path_id": 3, "strategy": "hybrid_approach"}
]

results = await engine.execute_parallel(
    tasks=paths_to_evaluate,
    evaluation_function="evaluate_architecture_path"
)
tools.py - Comprehensive LangChain-compatible tool library:
- Research Tools: Advanced web search, academic paper retrieval, market analysis
- Data Tools: Database queries, file processing, API integrations
- Analysis Tools: Statistical analysis, ML model inference, data visualization
- Workflow Tools: Task automation, notification systems, report generators
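For contributors who want to extend this library, a custom tool might look roughly like the sketch below. The exact BaseTool interface and registration API may differ; treat the class shape and the `registry.register` call as assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class ToolResult:
    success: bool
    data: dict = field(default_factory=dict)
    error_message: str = ""

class WordCountTool:
    """Toy custom tool; the real BaseTool interface may differ in detail."""
    name = "word_count"
    description = "Counts words in a piece of text."

    def execute(self, text: str) -> ToolResult:
        try:
            return ToolResult(success=True, data={"words": len(text.split())})
        except Exception as exc:
            return ToolResult(success=False, error_message=str(exc))

# Registration would then go through the ToolRegistry (hypothetical call):
# registry.register(WordCountTool())
print(WordCountTool().execute("hello tool world").data)  # {'words': 3}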
To use the LangChain integration features:
# Install core LangChain integration dependencies
pip install langchain langchain-community
# Install storage backend dependencies
pip install lmdb # For LMDB high-performance storage
pip install redis # For Redis distributed storage
pip install sqlalchemy # For enhanced SQL operations
# Install distributed coordination dependencies
pip install aioredis # For async Redis operations
pip install consul # For service discovery (optional)
from neogenesis_system.langchain_integration import (
    create_neogenesis_chain,
    PersistentStateManager,
    AdvancedChainComposer
)

# Create LangChain-compatible Neogenesis chain
neogenesis_chain = create_neogenesis_chain(
    storage_backend="lmdb",
    enable_distributed_state=True,
    session_persistence=True
)

# Use as standard LangChain component
from langchain.chains import SequentialChain

# Integrate with existing LangChain workflows
# (preprocessing_chain and postprocessing_chain are your own chains)
full_workflow = SequentialChain(chains=[
    preprocessing_chain,   # Standard LangChain chain
    neogenesis_chain,      # Our intelligent decision engine
    postprocessing_chain   # Standard LangChain chain
])

# Execute with persistent state
result = full_workflow.run({
    "input": "Design scalable microservices architecture",
    "context": {"team_size": 15, "timeline": "6_months"}
})
from neogenesis_system.langchain_integration.coordinators import EnterpriseCoordinator

# Configure enterprise-grade decision workflow
coordinator = EnterpriseCoordinator(
    storage_config={
        "backend": "lmdb",
        "encryption": True,
        "backup_enabled": True
    },
    distributed_config={
        "cluster_size": 3,
        "consensus_protocol": "raft"
    }
)

# Execute complex business decision (inside an async context)
decision_result = await coordinator.execute_enterprise_decision(
    query="Should we acquire startup company TechCorp for $50M?",
    context={
        "industry": "fintech",
        "company_stage": "series_b",
        "financial_position": "strong",
        "strategic_goals": ["market_expansion", "talent_acquisition"]
    },
    analysis_depth="comprehensive",
    stakeholder_perspectives=["ceo", "cto", "cfo", "head_of_strategy"]
)

# Access persistent decision history
decision_history = coordinator.get_decision_history(
    filters={"domain": "mergers_acquisitions", "timeframe": "last_year"}
)
| LangChain Integration Metric | Performance | Description |
|---|---|---|
| Storage Backend Latency | <2ms | LMDB read/write operations |
| State Transaction Speed | <5ms | ACID transaction completion |
| Distributed Sync Latency | <50ms | Cross-node state synchronization |
| Parallel Chain Execution | 4x faster | Compared to sequential execution |
| Storage Compression Ratio | 60-80% | Space savings with GZIP compression |
| State Consistency Rate | 99.9%+ | Distributed state accuracy |
| Tool Integration Success | 95%+ | LangChain tool compatibility |
Neogenesis System adopts a highly modular and extensible architectural design where components have clear responsibilities and work together through dependency injection.
graph TD
subgraph "Launch & Demo Layer"
UI[start_demo.py / interactive_demo.py]
end
subgraph "Core Control Layer"
MC[MainController<br/><b>(controller.py)</b><br/>Five-stage Process Coordination]
end
subgraph "LangChain Integration Layer"
LC_AD[LangChain Adapters<br/><b>(adapters.py)</b><br/>LangChain Compatibility]
LC_PS[PersistentStorage<br/><b>(persistent_storage.py)</b><br/>Multi-Backend Storage]
LC_SM[StateManagement<br/><b>(state_management.py)</b><br/>ACID Transactions]
LC_DS[DistributedState<br/><b>(distributed_state.py)</b><br/>Multi-Node Sync]
LC_AC[AdvancedChains<br/><b>(advanced_chains.py)</b><br/>Chain Workflows]
LC_EE[ExecutionEngines<br/><b>(execution_engines.py)</b><br/>Parallel Processing]
LC_CO[Coordinators<br/><b>(coordinators.py)</b><br/>Chain Coordination]
LC_TO[LangChain Tools<br/><b>(tools.py)</b><br/>Extended Tool Library]
end
subgraph "Decision Logic Layer"
PR[PriorReasoner<br/><b>(reasoner.py)</b><br/>Quick Heuristic Analysis]
RAG[RAGSeedGenerator<br/><b>(rag_seed_generator.py)</b><br/>RAG-Enhanced Seed Generation]
PG[PathGenerator<br/><b>(path_generator.py)</b><br/>Multi-path Thinking Generation]
MAB[MABConverger<br/><b>(mab_converger.py)</b><br/>Meta-MAB & Learning]
end
subgraph "Tool Abstraction Layer"
TR[ToolRegistry<br/><b>(tool_abstraction.py)</b><br/>Unified Tool Management]
WST[WebSearchTool<br/><b>(search_tools.py)</b><br/>Web Search Tool]
IVT[IdeaVerificationTool<br/><b>(search_tools.py)</b><br/>Idea Verification Tool]
end
subgraph "Tools & Services Layer"
LLM[LLMManager<br/><b>(llm_manager.py)</b><br/>Multi-LLM Provider Management]
SC[SearchClient<br/><b>(search_client.py)</b><br/>Web Search & Verification]
PO[PerformanceOptimizer<br/><b>(performance_optimizer.py)</b><br/>Parallelization & Caching]
CFG[config.py<br/><b>(Main/Demo Configuration)</b>]
end
subgraph "Storage Backends"
FS[FileSystem<br/>Versioned Storage]
SQL[SQLite<br/>ACID Database]
LMDB[LMDB<br/>High-Performance KV]
MEM[Memory<br/>In-Memory Cache]
REDIS[Redis<br/>Distributed Cache]
end
subgraph "LLM Providers Layer"
OAI[OpenAI<br/>GPT-3.5/4/4o]
ANT[Anthropic<br/>Claude-3 Series]
DS[DeepSeek<br/>deepseek-chat/coder]
OLL[Ollama<br/>Local Models]
AZ[Azure OpenAI<br/>Enterprise Models]
end
UI --> MC
MC --> LC_AD
LC_AD --> LC_CO
LC_CO --> LC_AC & LC_EE
LC_AC --> LC_SM
LC_SM --> LC_PS
LC_DS --> LC_SM
LC_PS --> FS & SQL & LMDB & MEM & REDIS
MC --> PR & RAG
MC --> PG
MC --> MAB
MC --> TR
MAB --> LC_SM
RAG --> TR
RAG --> LLM
PG --> LLM
MAB --> PG
MC -- "Uses" --> PO
TR --> WST & IVT
TR --> LC_TO
WST --> SC
IVT --> SC
LLM --> OAI & ANT & DS & OLL & AZ
style LC_AD fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
style LC_PS fill:#fff3e0,stroke:#f57c00,stroke-width:2px
style LC_SM fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style LC_DS fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
Component Description:
- MainController: System commander, responsible for orchestrating the complete five-stage decision process with tool-enhanced verification capabilities
- RAGSeedGenerator / PriorReasoner: Decision starting point, responsible for generating high-quality "thinking seeds"
- PathGenerator: System's "divergent thinking" module, generating diverse solutions based on seeds
- MABConverger: System's "convergent thinking" and "learning" module, responsible for evaluation, selection, and learning from experience
- LangChain Adapters: Compatibility layer enabling seamless integration with existing LangChain workflows and components
- PersistentStorage: Multi-backend storage engine supporting FileSystem, SQLite, LMDB, Memory, and Redis with enterprise features
- StateManagement: Professional state management with ACID transactions, checkpointing, and branch management
- DistributedState: Multi-node state coordination with consensus protocols for enterprise deployment
- AdvancedChains: Sophisticated chain composition supporting sequential, parallel, conditional, and tree-based workflows
- ExecutionEngines: High-performance parallel processing framework with intelligent task scheduling and fault tolerance
- Coordinators: Multi-chain coordination system managing complex workflow orchestration and resource allocation
- LangChain Tools: Extended tool ecosystem with advanced research, data processing, analysis, and workflow capabilities
- ToolRegistry: LangChain-inspired unified tool management system, providing centralized registration, discovery, and execution of tools
- WebSearchTool / IdeaVerificationTool: Specialized tools implementing the BaseTool interface for web search and idea verification capabilities
- LLMManager: Universal LLM interface manager, providing unified access to multiple AI providers with intelligent routing and fallback
- Tool Layer: Provides reusable underlying capabilities such as multi-LLM management, search engines, performance optimizers
- FileSystem: Hierarchical storage with versioning, backup, and metadata management
- SQLite: ACID-compliant relational database for complex queries and structured data
- LMDB: Lightning-fast memory-mapped database optimized for high-performance scenarios
- Memory: In-memory storage for caching and testing scenarios
- Redis: Distributed caching and session storage for enterprise scalability
Core Technologies:
- Core Language: Python 3.8+
- AI Engines: Multi-LLM Support (OpenAI, Anthropic, DeepSeek, Ollama, Azure OpenAI)
- LangChain Integration: Full LangChain compatibility with custom adapters, chains, and tools
- Tool Architecture: LangChain-inspired unified tool abstraction with BaseTool interface, ToolRegistry management, and dynamic tool discovery
- Core Algorithms: Meta Multi-Armed Bandit (Thompson Sampling, UCB, Epsilon-Greedy; see the sketch after this list), Retrieval-Augmented Generation (RAG), Tool-Enhanced Decision Making
- Storage Backends: Multi-backend support (LMDB, SQLite, FileSystem, Memory, Redis) with enterprise features
- State Management: ACID transactions, distributed state coordination, and persistent workflows
- External Services: DuckDuckGo Search, Multi-provider LLM APIs, Tool-enhanced web verification
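For reference, here is a minimal Thompson Sampling implementation for the Bernoulli case, the simplest of the MAB strategies listed above. This is a textbook sketch under a Beta(1, 1) prior, not the MABConverger's code.
import random

def thompson_sample(arms):
    """Pick the arm whose sampled success probability is highest.
    arms maps name -> (successes, failures) observed so far."""
    draws = {name: random.betavariate(s + 1, f + 1)   # Beta(1, 1) prior
             for name, (s, f) in arms.items()}
    return max(draws, key=draws.get)

arms = {"path_a": (8, 2), "path_b": (3, 7), "path_c": (0, 0)}
picks = [thompson_sample(arms) for _ in range(1000)]
print({name: picks.count(name) for name in arms})  # path_a dominates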
LangChain Integration Stack:
- Framework: LangChain, LangChain-Community for ecosystem compatibility
- Storage Engines: LMDB (high-performance), SQLite (ACID compliance), Redis (distributed caching)
- State Systems: Custom transaction management, distributed consensus protocols
- Chain Types: Sequential, Parallel, Conditional, Loop, and Tree-based chain execution
- Execution: Multi-threaded parallel processing with intelligent resource management
Key Libraries:
- Core: requests, numpy, typing, dataclasses, abc, asyncio
- AI/LLM: openai, anthropic, langchain, langchain-community
- Storage: lmdb, sqlite3, redis, sqlalchemy
- Search: duckduckgo-search, web scraping utilities
- Performance: threading, multiprocessing, caching mechanisms
- Distributed: aioredis, consul (optional), network coordination
- Python 3.8 or higher
- pip package manager
- Clone Repository
git clone https://github.com/your-repo/neogenesis-system.git
cd neogenesis-system
- Install Dependencies
# (Recommended) Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
# Install core dependencies
pip install -r requirements.txt
# (Optional) Install additional LLM provider libraries for enhanced functionality
pip install openai     # For OpenAI GPT models
pip install anthropic  # For Anthropic Claude models
# Note: DeepSeek support is included in core dependencies
# (Optional) Install LangChain integration dependencies for advanced features
pip install langchain langchain-community  # Core LangChain integration
pip install lmdb        # High-performance LMDB storage
pip install redis       # Distributed caching and state
pip install sqlalchemy  # Enhanced SQL operations
pip install aioredis    # Async Redis for distributed coordination
- Configure API Keys (Optional but Recommended)
Create a .env file in the project root directory and configure your preferred LLM provider API keys:
# Configure one or more LLM providers (the system will auto-detect available ones)
DEEPSEEK_API_KEY="your_deepseek_api_key"
OPENAI_API_KEY="your_openai_api_key"
ANTHROPIC_API_KEY="your_anthropic_api_key"
# For Azure OpenAI (optional)
AZURE_OPENAI_API_KEY="your_azure_openai_key"
AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
Note: You only need to configure at least one provider. The system automatically:
- Detects available providers based on configured API keys
- Selects the best available provider automatically
- Falls back to other providers if the primary one fails
Without any keys, the system will run in limited simulation mode.
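Conceptually, this failover behavior amounts to trying providers in preference order and falling through on errors, as in the hedged sketch below. The helper names are hypothetical; the real LLMManager handles this internally.
import os

PROVIDERS = ["deepseek", "openai", "anthropic"]   # preference order

def available_providers():
    return [p for p in PROVIDERS if os.getenv(f"{p.upper()}_API_KEY")]

def call_with_failover(prompt, call_provider):
    for provider in available_providers():
        try:
            return call_provider(provider, prompt)   # first healthy provider wins
        except Exception:
            continue                                 # fall back to the next one
    return "(simulation mode: no providers configured)"

print(call_with_failover("hello", lambda p, q: f"{p}: ok"))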
We provide multiple demo modes to let you intuitively experience AI's thinking process.
# Launch menu to select experience mode
python start_demo.py
# (Recommended) Run quick simulation demo directly, no configuration needed
python quick_demo.py
# Run complete interactive demo connected to real system
python run_demo.py
import os
from dotenv import load_dotenv
from meta_mab.controller import MainController
# Load environment variables
load_dotenv()
# Initialize controller (auto-detects available LLM providers)
controller = MainController()
# The system automatically selects the best available LLM provider
# You can check which providers are available
status = controller.get_llm_provider_status()
print(f"Available providers: {status['healthy_providers']}/{status['total_providers']}")
# Pose a complex question
query = "Design a scalable, low-cost cloud-native tech stack for a startup tech company"
context = {"domain": "cloud_native_architecture", "company_stage": "seed"}
# Get AI's decision (automatically uses the best available provider)
decision_result = controller.make_decision(user_query=query, execution_context=context)
# View the final chosen thinking path
chosen_path = decision_result.get('chosen_path')
if chosen_path:
    print(f"AI's chosen thinking path: {chosen_path.path_type}")
    print(f"Core approach: {chosen_path.description}")

# (Optional) Switch to a specific provider
controller.switch_llm_provider("openai")  # or "anthropic", "deepseek", etc.

# (Optional) Provide execution result feedback to help AI learn
controller.update_performance_feedback(
    decision_result=decision_result,
    execution_success=True,
    execution_time=12.5,
    user_satisfaction=0.9,
    rl_reward=0.85
)
print("\nAI has received feedback and completed learning!")
# Tool Integration Examples
print("\n" + "="*50)
print("Tool-Enhanced Decision Making Examples")
print("="*50)

# Check available tools
from meta_mab.utils.tool_abstraction import list_available_tools, get_registry_stats

tools = list_available_tools()
stats = get_registry_stats()
print(f"Available tools: {len(tools)} ({', '.join(tools)})")
print(f"Tool registry stats: {stats['total_tools']} tools, {stats['success_rate']:.1%} success rate")

# Direct tool usage example
from meta_mab.utils.tool_abstraction import execute_tool

search_result = execute_tool("web_search", query="latest trends in cloud computing 2024", max_results=3)
if search_result and search_result.success:
    print(f"Web search successful: found {len(search_result.data.get('results', []))} results")
else:
    print(f"Web search failed: {search_result.error_message if search_result else 'No result'}")

# Tool-enhanced verification example
verification_result = execute_tool(
    "idea_verification",
    idea="Implement blockchain-based supply chain tracking for food safety",
    context={"industry": "food_tech", "scale": "enterprise"}
)
if verification_result and verification_result.success:
    analysis = verification_result.data.get('analysis', {})
    print(f"Idea verification: feasibility score {analysis.get('feasibility_score', 0):.2f}")
else:
    print(f"Idea verification failed: {verification_result.error_message if verification_result else 'No result'}")
| Metric | Performance | Description |
|---|---|---|
| Decision Accuracy | 85%+ | Based on historical validation data |
| Average Response Time | 2-5 seconds | Including complete five-stage processing |
| Path Generation Success Rate | 95%+ | Diverse thinking path generation |
| Golden Template Hit Rate | 60%+ | Successful experience reuse efficiency |
| Aha-Moment Trigger Rate | 15%+ | Innovation breakthrough scenario percentage |
| Tool Integration Success Rate | 92%+ | Tool-enhanced verification reliability |
| Tool Discovery Accuracy | 88%+ | Correct tool selection for context |
| Tool-Enhanced Decision Quality | +25% | Improvement over non-tool decisions |
| Hybrid Selection Accuracy | 94%+ | MAB+LLM fusion mode precision |
| Cold-Start Detection Rate | 96%+ | Accurate unfamiliar tool identification |
| Experience Mode Efficiency | +40% | Performance boost for familiar tools |
| Exploration Mode Success | 89%+ | LLM-guided tool discovery effectiveness |
| Learning Convergence Speed | 3-5 uses | MAB optimization learning curve |
| Provider Availability | 99%+ | Multi-LLM fallback reliability |
| Automatic Fallback Success | 98%+ | Seamless provider switching rate |
| LangChain Integration Metric | Performance | Description |
|---|---|---|
| Storage Backend Latency | <2ms | LMDB read/write operations |
| State Transaction Speed | <5ms | ACID transaction completion |
| Distributed Sync Latency | <50ms | Cross-node state synchronization |
| Parallel Chain Execution | 4x faster | Compared to sequential execution |
| Storage Compression Ratio | 60-80% | Space savings with GZIP compression |
| State Consistency Rate | 99.9%+ | Distributed state accuracy |
| Tool Integration Success | 95%+ | LangChain tool compatibility |
| Chain Composition Success | 98%+ | Complex workflow execution reliability |
| Workflow Persistence Rate | 99.5%+ | State recovery after failures |
| Load Balancing Efficiency | 92%+ | Distributed workload optimization |
# Run all tests
python -m pytest tests/
# Run unit test examples
python tests/examples/simple_test_example.py
# Run performance tests
python tests/unit/test_performance.py
# Verify MAB algorithm convergence
python tests/unit/test_mab_converger.py
# Verify path generation robustness
python tests/unit/test_path_creation_robustness.py
# Verify RAG seed generation
python tests/unit/test_rag_seed_generator.py
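If you write your own convergence checks, a minimal test in the same spirit might look like the sketch below. It is illustrative only: it exercises a toy epsilon-greedy loop rather than the repo's MABConverger, and the assertion simply verifies that the better arm ends up pulled most often.
import random

def test_epsilon_greedy_converges():
    true_rates = {"a": 0.9, "b": 0.3}
    stats = {arm: [0, 0] for arm in true_rates}          # [successes, trials]
    for _ in range(2000):
        untried = [a for a, (_, t) in stats.items() if t == 0]
        if untried or random.random() < 0.1:
            arm = random.choice(untried or list(stats))  # explore
        else:
            arm = max(stats, key=lambda a: stats[a][0] / stats[a][1])  # exploit
        stats[arm][1] += 1
        stats[arm][0] += random.random() < true_rates[arm]
    assert stats["a"][1] > stats["b"][1]                 # best arm pulled most

test_epsilon_greedy_converges()
print("converged on the better arm")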
# Product strategy decisions
result = controller.make_decision(
    "How to prioritize features for our SaaS product for next quarter?",
    execution_context={
        "industry": "software",
        "stage": "growth",
        "constraints": ["budget_limited", "team_capacity"]
    }
)

# Architecture design decisions
result = controller.make_decision(
    "Design a real-time recommendation system supporting tens of millions of concurrent users",
    execution_context={
        "domain": "system_architecture",
        "scale": "large",
        "requirements": ["real_time", "high_availability"]
    }
)

# Market analysis decisions
result = controller.make_decision(
    "Analyze competitive landscape and opportunities in the AI tools market",
    execution_context={
        "analysis_type": "market_research",
        "time_horizon": "6_months",
        "focus": ["opportunities", "threats"]
    }
)

# Tool-enhanced technical decisions with real-time information gathering
result = controller.make_decision(
    "Should we adopt Kubernetes for our microservices architecture?",
    execution_context={
        "domain": "system_architecture",
        "team_size": "10_engineers",
        "current_stack": ["docker", "aws"],
        "constraints": ["learning_curve", "migration_complexity"]
    }
)
# The system automatically:
# 1. Uses WebSearchTool to gather latest Kubernetes trends and best practices
# 2. Applies IdeaVerificationTool to validate feasibility based on team constraints
# 3. Integrates real-time information into decision-making process
# 4. Provides evidence-based recommendations with source citations
print(f"Tool-enhanced decision: {result.get('chosen_path', {}).get('description', 'N/A')}")
print(f"Tools used: {result.get('tools_used', [])}")
print(f"Information sources: {result.get('verification_sources', [])}")
# Check available providers and their status
status = controller.get_llm_provider_status()
print(f"Healthy providers: {status['healthy_providers']}")
# Switch to a specific provider for particular tasks
controller.switch_llm_provider("anthropic") # Use Claude for complex reasoning
result_reasoning = controller.make_decision("Complex philosophical analysis...")
controller.switch_llm_provider("deepseek") # Use DeepSeek for coding tasks
result_coding = controller.make_decision("Optimize this Python algorithm...")
controller.switch_llm_provider("openai") # Use GPT for general tasks
result_general = controller.make_decision("Business strategy planning...")
# Get cost and usage statistics
cost_summary = controller.get_llm_cost_summary()
print(f"Total cost: ${cost_summary['total_cost_usd']:.4f}")
print(f"Requests by provider: {cost_summary['cost_by_provider']}")
# Run health check on all providers
health_status = controller.run_llm_health_check()
print(f"Provider health: {health_status}")
from neogenesis_system.langchain_integration import (
    create_neogenesis_chain,
    StateManager,
    DistributedStateManager
)

# Create enterprise-grade persistent workflow
state_manager = StateManager(storage_backend="lmdb", enable_encryption=True)

neogenesis_chain = create_neogenesis_chain(
    state_manager=state_manager,
    enable_persistence=True,
    session_id="enterprise_decision_2024"
)

# Execute long-running decision process with state persistence
result = neogenesis_chain.execute({
    "query": "Develop comprehensive digital transformation strategy",
    "context": {
        "industry": "manufacturing",
        "company_size": "enterprise",
        "timeline": "3_years",
        "budget": "10M_USD",
        "current_state": "legacy_systems"
    }
})

# Access persistent decision history
decision_timeline = state_manager.get_decision_timeline("enterprise_decision_2024")
print(f"Decision milestones: {len(decision_timeline)} checkpoints")
from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer
from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine

# Configure parallel analysis workflow
composer = AdvancedChainComposer()
execution_engine = ParallelExecutionEngine(max_workers=6)

# Create specialized analysis chains
market_analysis_chain = composer.create_analysis_chain("market_research")
technical_analysis_chain = composer.create_analysis_chain("technical_feasibility")
financial_analysis_chain = composer.create_analysis_chain("financial_modeling")
risk_analysis_chain = composer.create_analysis_chain("risk_assessment")

# Execute parallel comprehensive analysis
parallel_analysis = composer.create_parallel_chain([
    market_analysis_chain,
    technical_analysis_chain,
    financial_analysis_chain,
    risk_analysis_chain
])

# Run analysis with persistent state and error recovery (inside an async context)
result = await execution_engine.execute_chain(
    chain=parallel_analysis,
    input_data={
        "project": "AI-powered customer service platform",
        "market": "enterprise_software",
        "timeline": "18_months"
    },
    persist_state=True,
    enable_recovery=True
)

print(f"Analysis completed: {result['analysis_summary']}")
print(f"Execution time: {result['execution_time']:.2f}s")
from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager
from neogenesis_system.langchain_integration.coordinators import ClusterCoordinator

# Configure distributed decision cluster
distributed_state = DistributedStateManager(
    node_id="decision_node_1",
    cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
    consensus_protocol="raft"
)

cluster_coordinator = ClusterCoordinator(
    distributed_state=distributed_state,
    load_balancing="intelligent"
)

# Execute distributed decision with consensus (inside an async context)
decision_result = await cluster_coordinator.execute_distributed_decision(
    query="Should we enter the European market with our fintech product?",
    context={
        "industry": "fintech",
        "target_markets": ["germany", "france", "uk"],
        "regulatory_complexity": "high",
        "competition_level": "intense"
    },
    require_consensus=True,
    min_node_agreement=2
)

# Access cluster decision metrics
cluster_stats = cluster_coordinator.get_cluster_stats()
print(f"Nodes participated: {cluster_stats['active_nodes']}")
print(f"Consensus achieved: {cluster_stats['consensus_reached']}")
print(f"Decision confidence: {decision_result['confidence']:.2f}")
from langchain.chains import SequentialChain, LLMChain
from langchain.prompts import PromptTemplate
from neogenesis_system.langchain_integration.adapters import NeogenesisLangChainAdapter

# Create standard LangChain components
# (llm is any LangChain-compatible LLM instance initialized elsewhere)
prompt = PromptTemplate(template="Analyze the market for {product}", input_variables=["product"])
market_chain = LLMChain(llm=llm, prompt=prompt)

# Create Neogenesis intelligent decision chain
neogenesis_adapter = NeogenesisLangChainAdapter(
    storage_backend="lmdb",
    enable_advanced_reasoning=True
)
neogenesis_chain = neogenesis_adapter.create_decision_chain()

# Combine in standard LangChain workflow
complete_workflow = SequentialChain(
    chains=[
        market_chain,      # Standard LangChain analysis
        neogenesis_chain,  # Intelligent Neogenesis decision-making
        market_chain       # Follow-up LangChain processing
    ],
    input_variables=["product"],
    output_variables=["final_recommendation"]
)

# Execute integrated workflow
result = complete_workflow.run({
    "product": "AI-powered legal document analyzer"
})
print(f"Integrated analysis result: {result}")
We warmly welcome community contributions! Whether bug fixes, feature suggestions, or code submissions, all help make Neogenesis System better.
- Bug Reports: Submit issues when you find problems
- Feature Suggestions: Propose new feature ideas
- Documentation Improvements: Enhance documentation and examples
- Code Contributions: Submit Pull Requests
- Tool Development: Create new tools implementing the BaseTool interface
- Tool Testing: Help test and validate tool integrations
# 1. Fork and clone project
git clone https://github.com/your-username/neogenesis-system.git
# 2. Create development branch
git checkout -b feature/your-feature-name
# 3. Install development dependencies
pip install -r requirements-dev.txt
# 4. Run tests to ensure baseline functionality
python -m pytest tests/
# 5. Develop new features...
# 6. Submit Pull Request
Please refer to CONTRIBUTING.md for detailed guidelines.
This project is open-sourced under the MIT License. See LICENSE file for details.
- LangChain: Revolutionary framework for building LLM-powered applications that inspired our comprehensive integration architecture
- OpenAI: Pioneering GPT models and API standards that inspired our universal interface design
- Anthropic: Advanced Claude models with superior reasoning capabilities
- DeepSeek AI: Cost-effective models with excellent coding and multilingual support
- Ollama: Enabling local and privacy-focused AI deployments
- LMDB: Lightning-fast memory-mapped database enabling high-performance persistent storage
- Redis: Distributed caching and state management for enterprise scalability
- Multi-Armed Bandit Theory: Providing algorithmic foundation for intelligent decision-making
- RAG Technology: Enabling knowledge-enhanced thinking generation
- Metacognitive Theory: Inspiring the overall system architecture design
Neogenesis System is independently developed by the author.
- Email Contact: This project is still in development. If you're interested in the project or want to discuss commercial use, please contact: [email protected]
- v1.1: Enhanced LangChain tool ecosystem with database, API, and file operation tools; improved chain discovery algorithms
- v1.2: Advanced chain composition and parallel execution capabilities; LangChain performance analytics and monitoring
- v1.3: Visual chain execution flows and distributed state management Web interface; LangChain marketplace integration
- v1.4: Multi-language LangChain support, internationalization deployment, and enterprise LangChain connectors
- v2.0: Distributed chain execution, enterprise-level LangChain integration, and custom chain marketplace
- v1.1: Enhanced tool ecosystem with database, API, and file operation tools; improved tool discovery algorithms
- v1.2: Advanced tool composition and chaining capabilities; tool performance analytics
- v1.3: Visual tool execution flows and decision-making process Web interface
- v1.4: Multi-language support, internationalization deployment
- v2.0: Distributed tool execution, enterprise-level integration, and custom tool marketplace
If this project helps you, please give us a Star!
Get Started | View Documentation | Suggest Ideas
Alternative AI tools for Neosgenesis
Similar Open Source Tools

Neosgenesis
Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.

sgr-deep-research
This repository contains a deep learning research project focused on natural language processing tasks. It includes implementations of various state-of-the-art models and algorithms for text classification, sentiment analysis, named entity recognition, and more. The project aims to provide a comprehensive resource for researchers and developers interested in exploring deep learning techniques for NLP applications.

quantalogic
QuantaLogic is a ReAct framework for building advanced AI agents that seamlessly integrates large language models with a robust tool system. It aims to bridge the gap between advanced AI models and practical implementation in business processes by enabling agents to understand, reason about, and execute complex tasks through natural language interaction. The framework includes features such as ReAct Framework, Universal LLM Support, Secure Tool System, Real-time Monitoring, Memory Management, and Enterprise Ready components.

cua
Cua is a tool for creating and running high-performance macOS and Linux virtual machines on Apple Silicon, with built-in support for AI agents. It provides libraries like Lume for running VMs with near-native performance, Computer for interacting with sandboxes, and Agent for running agentic workflows. Users can refer to the documentation for onboarding, explore demos showcasing AI-Gradio and GitHub issue fixing, and utilize accessory libraries like Core, PyLume, Computer Server, and SOM. Contributions are welcome, and the tool is open-sourced under the MIT License.

open-responses
OpenResponses API provides enterprise-grade AI capabilities through a powerful API, simplifying development and deployment while ensuring complete data control. It offers automated tracing, integrated RAG for contextual information retrieval, pre-built tool integrations, self-hosted architecture, and an OpenAI-compatible interface. The toolkit addresses development challenges like feature gaps and integration complexity, as well as operational concerns such as data privacy and operational control. Engineering teams can benefit from improved productivity, production readiness, compliance confidence, and simplified architecture by choosing OpenResponses.

dingo
Dingo is a data quality evaluation tool that automatically detects data quality issues in datasets. It provides built-in rules and model evaluation methods, supports text and multimodal datasets, and offers local CLI and SDK usage. Dingo is designed for easy integration into evaluation platforms like OpenCompass.

ChatGLM3
ChatGLM3 is a conversational pretrained model jointly released by Zhipu AI and THU's KEG Lab. ChatGLM3-6B is the open-sourced model in the ChatGLM3 series. It inherits the advantages of its predecessors, such as fluent conversation and low deployment threshold. In addition, ChatGLM3-6B introduces the following features: 1. A stronger foundation model: ChatGLM3-6B's foundation model ChatGLM3-6B-Base employs more diverse training data, more sufficient training steps, and more reasonable training strategies. Evaluation on datasets from different perspectives, such as semantics, mathematics, reasoning, code, and knowledge, shows that ChatGLM3-6B-Base has the strongest performance among foundation models below 10B parameters. 2. More complete functional support: ChatGLM3-6B adopts a newly designed prompt format, which supports not only normal multi-turn dialogue, but also complex scenarios such as tool invocation (Function Call), code execution (Code Interpreter), and Agent tasks. 3. A more comprehensive open-source sequence: In addition to the dialogue model ChatGLM3-6B, the foundation model ChatGLM3-6B-Base, the long-text dialogue model ChatGLM3-6B-32K, and ChatGLM3-6B-128K, which further enhances the long-text comprehension ability, are also open-sourced. All the above weights are completely open to academic research and are also allowed for free commercial use after filling out a questionnaire.

osaurus
Osaurus is a native, Apple Silicon-only local LLM server built on Apple's MLX for maximum performance on M-series chips. It is a SwiftUI app + SwiftNIO server with OpenAI-compatible and Ollama-compatible endpoints. The tool supports native MLX text generation, model management, streaming and non-streaming chat completions, OpenAI-compatible function calling, real-time system resource monitoring, and path normalization for API compatibility. Osaurus is designed for macOS 15.5+ and Apple Silicon (M1 or newer) with Xcode 16.4+ required for building from source.

acte
Acte is a framework designed to build GUI-like tools for AI Agents. It aims to address the issues of cognitive load and freedom degrees when interacting with multiple APIs in complex scenarios. By providing a graphical user interface (GUI) for Agents, Acte helps reduce cognitive load and constraints interaction, similar to how humans interact with computers through GUIs. The tool offers APIs for starting new sessions, executing actions, and displaying screens, accessible via HTTP requests or the SessionManager class.

EduChat
EduChat is a large-scale language model-based chatbot system designed for intelligent education by the EduNLP team at East China Normal University. The project focuses on developing a dialogue-based language model for the education vertical domain, integrating diverse education vertical domain data, and providing functions such as automatic question generation, homework correction, emotional support, course guidance, and college entrance examination consultation. The tool aims to serve teachers, students, and parents to achieve personalized, fair, and warm intelligent education.

nexus
Nexus is a tool that acts as a unified gateway for multiple LLM providers and MCP servers. It allows users to aggregate, govern, and control their AI stack by connecting multiple servers and providers through a single endpoint. Nexus provides features like MCP Server Aggregation, LLM Provider Routing, Context-Aware Tool Search, Protocol Support, Flexible Configuration, Security features, Rate Limiting, and Docker readiness. It supports tool calling, tool discovery, and error handling for STDIO servers. Nexus also integrates with AI assistants, Cursor, Claude Code, and LangChain for seamless usage.

sparrow
Sparrow is an innovative open-source solution for efficient data extraction and processing from various documents and images. It seamlessly handles forms, invoices, receipts, and other unstructured data sources. Sparrow stands out with its modular architecture, offering independent services and pipelines all optimized for robust performance. One of the critical functionalities of Sparrow - pluggable architecture. You can easily integrate and run data extraction pipelines using tools and frameworks like LlamaIndex, Haystack, or Unstructured. Sparrow enables local LLM data extraction pipelines through Ollama or Apple MLX. With Sparrow solution you get API, which helps to process and transform your data into structured output, ready to be integrated with custom workflows. Sparrow Agents - with Sparrow you can build independent LLM agents, and use API to invoke them from your system. **List of available agents:** * **llamaindex** - RAG pipeline with LlamaIndex for PDF processing * **vllamaindex** - RAG pipeline with LLamaIndex multimodal for image processing * **vprocessor** - RAG pipeline with OCR and LlamaIndex for image processing * **haystack** - RAG pipeline with Haystack for PDF processing * **fcall** - Function call pipeline * **unstructured-light** - RAG pipeline with Unstructured and LangChain, supports PDF and image processing * **unstructured** - RAG pipeline with Weaviate vector DB query, Unstructured and LangChain, supports PDF and image processing * **instructor** - RAG pipeline with Unstructured and Instructor libraries, supports PDF and image processing. Works great for JSON response generation

freeGPT
freeGPT provides free access to text and image generation models. It supports various models, including gpt3, gpt4, alpaca_7b, falcon_40b, prodia, and pollinations. The tool offers both asynchronous and non-asynchronous interfaces for text completion and image generation. It also features an interactive Discord bot that provides access to all the models in the repository. The tool is easy to use and can be integrated into various applications.

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.

arxiv-mcp-server
The ArXiv MCP Server acts as a bridge between AI assistants and arXiv's research repository, enabling AI models to search for and access papers programmatically through the Model Context Protocol (MCP). It offers features like paper search, access, listing, local storage, and research prompts. Users can install it via Smithery or manually for Claude Desktop. The server provides tools for paper search, download, listing, and reading, along with specialized prompts for paper analysis. Configuration can be done through environment variables, and testing is supported with a test suite. The tool is released under the MIT License and is developed by the Pearl Labs Team.

meet-libai
The 'meet-libai' project aims to promote and popularize the cultural heritage of the Chinese poet Li Bai by constructing a knowledge graph of Li Bai and training a professional AI intelligent body using large models. The project includes features such as data preprocessing, knowledge graph construction, question-answering system development, and visualization exploration of the graph structure. It also provides code implementations for large models and RAG retrieval enhancement.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.