
Neosgenesis
https://dev.to/answeryt/the-demo-spell-and-production-dilemma-of-ai-agents-how-i-built-a-self-learning-agent-system-4okk
Stars: 1251

Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.
README:
Quick Start ยท Core Features ยท Installation ยท Usage
Neogenesis System is an advanced AI decision-making framework that enables agents to "think about how to think". Unlike traditional question-answer systems, it implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments.
- ๐ง Metacognitive Intelligence: AI that thinks about "how to think"
- ๐ง Tool-Enhanced Decisions: Dynamic tool integration during decision-making
- ๐ฌ Real-time Learning: Learns during thinking phase, not just after execution
- ๐ก Aha-Moment Breakthroughs: Creative problem-solving when stuck
- ๐ Experience Accumulation: Builds reusable decision templates from success
- ๐ค Multi-LLM Support: OpenAI, Anthropic, DeepSeek, Ollama with auto-failover
Traditional AI: Think โ Execute โ Learn
Neogenesis: Think โ Verify โ Learn โ Optimize โ Decide (all during thinking phase)
graph LR
A[Seed Generation] --> B[Verification]
B --> C[Path Generation]
C --> D[Learning & Optimization]
D --> E[Final Decision]
D --> C
style D fill:#fff9c4
Value: AI learns and optimizes before execution, avoiding costly mistakes and improving decision quality.
- Experience Accumulation: Learns which decision strategies work best in different contexts
- Golden Templates: Automatically identifies and reuses successful reasoning patterns
- Exploration vs Exploitation: Balances trying new approaches vs using proven methods
When conventional approaches fail, the system automatically:
- Activates creative problem-solving mode
- Generates unconventional thinking paths
- Breaks through decision deadlocks with innovative solutions
- Real-time Information: Integrates web search and verification tools during thinking
- Dynamic Tool Selection: Hybrid MAB+LLM approach for optimal tool choice
- Unified Tool Interface: LangChain-inspired tool abstraction for extensibility
- Python 3.8 or higher
- pip package manager
# Clone repository
git clone https://github.com/your-repo/neogenesis-system.git
cd neogenesis-system
# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
Create a .env
file in the project root:
# Configure one or more LLM providers (system auto-detects available ones)
DEEPSEEK_API_KEY="your_deepseek_api_key"
OPENAI_API_KEY="your_openai_api_key"
ANTHROPIC_API_KEY="your_anthropic_api_key"
# Launch demo menu
python start_demo.py
# Quick simulation demo (no API key needed)
python quick_demo.py
# Full interactive demo
python run_demo.py
from neogenesis_system.core.neogenesis_planner import NeogenesisPlanner
from neogenesis_system.cognitive_engine.reasoner import PriorReasoner
from neogenesis_system.cognitive_engine.path_generator import PathGenerator
from neogenesis_system.cognitive_engine.mab_converger import MABConverger
# Initialize components
planner = NeogenesisPlanner(
prior_reasoner=PriorReasoner(),
path_generator=PathGenerator(),
mab_converger=MABConverger()
)
# Create a decision plan
plan = planner.create_plan(
query="Design a scalable microservices architecture",
memory=None,
context={"domain": "system_design", "complexity": "high"}
)
print(f"Plan: {plan.thought}")
print(f"Actions: {len(plan.actions)}")
Metric | Performance | Description |
---|---|---|
๐ฏ Decision Accuracy | 85%+ | Based on validation data |
โก Response Time | 2-5 sec | Full five-stage process |
๐ง Path Generation | 95%+ | Success rate |
๐ก Innovation Rate | 15%+ | Aha-moment breakthroughs |
๐ง Tool Integration | 92%+ | Success rate |
๐ค Multi-LLM Reliability | 99%+ | Provider failover |
MIT License - see LICENSE file.
- OpenAI, Anthropic, DeepSeek: LLM providers
- LangChain: Tool ecosystem inspiration
- Multi-Armed Bandit Theory: Algorithmic foundation
- Metacognitive Theory: Architecture inspiration
Email: [email protected]
๐ If this project helps you, please give us a Star!
- ๐ Node Coordination: Synchronize state across multiple Neogenesis instances
- ๐ก Event Broadcasting: Real-time state change notifications
- โ๏ธ Conflict Resolution: Intelligent merging of concurrent state modifications
- ๐ Consensus Protocols: Ensure state consistency in distributed environments
from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager
# Configure distributed coordination
distributed_state = DistributedStateManager(
node_id="neogenesis_node_1",
cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
consensus_protocol="raft"
)
# Distribute decision state across cluster
await distributed_state.broadcast_decision_update({
"session_id": "global_decision_001",
"chosen_path": {"id": 5, "confidence": 0.93},
"timestamp": time.time()
})
advanced_chains.py
& chains.py
- Sophisticated workflow orchestration:
- ๐ Sequential Chains: Linear execution with state passing
- ๐ Parallel Chains: Concurrent execution with result aggregation
- ๐ Conditional Chains: Dynamic routing based on intermediate results
- ๐ Loop Chains: Iterative processing with convergence criteria
- ๐ณ Tree Chains: Hierarchical decision trees with pruning strategies
- ๐ Chain Analytics: Performance monitoring and bottleneck identification
- ๐ฏ Dynamic Routing: Intelligent path selection based on context
- โก Parallel Execution: Multi-threaded chain processing
- ๐ก๏ธ Error Recovery: Graceful handling of chain failures with retry mechanisms
from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer
# Create sophisticated decision workflow
composer = AdvancedChainComposer()
# Define parallel analysis chains
technical_analysis = composer.create_parallel_chain([
"architecture_evaluation",
"performance_analysis",
"security_assessment"
])
# Define sequential decision chain
decision_workflow = composer.create_sequential_chain([
"problem_analysis",
technical_analysis, # Parallel sub-chain
"cost_benefit_analysis",
"risk_assessment",
"final_recommendation"
])
# Execute with state persistence
result = await composer.execute_chain(
chain=decision_workflow,
input_data={"project": "cloud_migration", "scale": "enterprise"},
persist_state=True,
session_id="migration_decision_001"
)
execution_engines.py
- High-performance parallel processing:
- ๐ฏ Task Scheduling: Intelligent workload distribution
- โก Parallel Processing: Multi-core and distributed execution
- ๐ Resource Management: CPU, memory, and network optimization
- ๐ Fault Tolerance: Automatic retry and failure recovery
from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine
# Configure high-performance execution
engine = ParallelExecutionEngine(
max_workers=8,
execution_timeout=300,
retry_strategy="exponential_backoff"
)
# Execute multiple decision paths in parallel
paths_to_evaluate = [
{"path_id": 1, "strategy": "microservices_approach"},
{"path_id": 2, "strategy": "monolithic_approach"},
{"path_id": 3, "strategy": "hybrid_approach"}
]
results = await engine.execute_parallel(
tasks=paths_to_evaluate,
evaluation_function="evaluate_architecture_path"
)
tools.py
- Comprehensive LangChain-compatible tool library:
- ๐ Research Tools: Advanced web search, academic paper retrieval, market analysis
- ๐พ Data Tools: Database queries, file processing, API integrations
- ๐งฎ Analysis Tools: Statistical analysis, ML model inference, data visualization
- ๐ Workflow Tools: Task automation, notification systems, reporting generators
To use the LangChain integration features:
# Install core LangChain integration dependencies
pip install langchain langchain-community
# Install storage backend dependencies
pip install lmdb # For LMDB high-performance storage
pip install redis # For Redis distributed storage
pip install sqlalchemy # For enhanced SQL operations
# Install distributed coordination dependencies
pip install aioredis # For async Redis operations
pip install consul # For service discovery (optional)
from neogenesis_system.langchain_integration import (
create_neogenesis_chain,
PersistentStateManager,
AdvancedChainComposer
)
# Create LangChain-compatible Neogenesis chain
neogenesis_chain = create_neogenesis_chain(
storage_backend="lmdb",
enable_distributed_state=True,
session_persistence=True
)
# Use as standard LangChain component
from langchain.chains import SequentialChain
# Integrate with existing LangChain workflows
full_workflow = SequentialChain(chains=[
preprocessing_chain, # Standard LangChain chain
neogenesis_chain, # Our intelligent decision engine
postprocessing_chain # Standard LangChain chain
])
# Execute with persistent state
result = full_workflow.run({
"input": "Design scalable microservices architecture",
"context": {"team_size": 15, "timeline": "6_months"}
})
from neogenesis_system.langchain_integration.coordinators import EnterpriseCoordinator
# Configure enterprise-grade decision workflow
coordinator = EnterpriseCoordinator(
storage_config={
"backend": "lmdb",
"encryption": True,
"backup_enabled": True
},
distributed_config={
"cluster_size": 3,
"consensus_protocol": "raft"
}
)
# Execute complex business decision
decision_result = await coordinator.execute_enterprise_decision(
query="Should we acquire startup company TechCorp for $50M?",
context={
"industry": "fintech",
"company_stage": "series_b",
"financial_position": "strong",
"strategic_goals": ["market_expansion", "talent_acquisition"]
},
analysis_depth="comprehensive",
stakeholder_perspectives=["ceo", "cto", "cfo", "head_of_strategy"]
)
# Access persistent decision history
decision_history = coordinator.get_decision_history(
filters={"domain": "mergers_acquisitions", "timeframe": "last_year"}
)
LangChain Integration Metric | Performance | Description |
---|---|---|
๐ช Storage Backend Latency | <2ms | LMDB read/write operations |
๐ State Transaction Speed | <5ms | ACID transaction completion |
๐ก Distributed Sync Latency | <50ms | Cross-node state synchronization |
โก Parallel Chain Execution | 4x faster | Compared to sequential execution |
๐พ Storage Compression Ratio | 60-80% | Space savings with GZIP compression |
๐ก๏ธ State Consistency Rate | 99.9%+ | Distributed state accuracy |
๐ง Tool Integration Success | 95%+ | LangChain tool compatibility |
Neogenesis System adopts a highly modular and extensible architectural design where components have clear responsibilities and work together through dependency injection.
graph TD
subgraph "Launch & Demo Layer"
UI["Demo & Interactive Interface"]
end
subgraph "Core Control Layer"
MC["MainController - Five-stage Process Coordination"]
end
subgraph "LangChain Integration Layer"
LC_AD["LangChain Adapters - LangChain Compatibility"]
LC_PS["PersistentStorage - Multi-Backend Storage"]
LC_SM["StateManagement - ACID Transactions"]
LC_DS["DistributedState - Multi-Node Sync"]
LC_AC["AdvancedChains - Chain Workflows"]
LC_EE["ExecutionEngines - Parallel Processing"]
LC_CO["Coordinators - Chain Coordination"]
LC_TO["LangChain Tools - Extended Tool Library"]
end
subgraph "Decision Logic Layer"
PR["PriorReasoner - Quick Heuristic Analysis"]
RAG["RAGSeedGenerator - RAG-Enhanced Seed Generation"]
PG["PathGenerator - Multi-path Thinking Generation"]
MAB["MABConverger - Meta-MAB & Learning"]
end
subgraph "Tool Abstraction Layer"
TR["ToolRegistry - Unified Tool Management"]
WST["WebSearchTool - Web Search Tool"]
IVT["IdeaVerificationTool - Idea Verification Tool"]
end
subgraph "Tools & Services Layer"
LLM["LLMManager - Multi-LLM Provider Management"]
SC["SearchClient - Web Search & Verification"]
PO["PerformanceOptimizer - Parallelization & Caching"]
CFG["Configuration - Main/Demo Configuration"]
end
subgraph "Storage Backends"
FS["FileSystem - Versioned Storage"]
SQL["SQLite - ACID Database"]
LMDB["LMDB - High-Performance KV"]
MEM["Memory - In-Memory Cache"]
REDIS["Redis - Distributed Cache"]
end
subgraph "LLM Providers Layer"
OAI["OpenAI - GPT-3.5/4/4o"]
ANT["Anthropic - Claude-3 Series"]
DS["DeepSeek - deepseek-chat/coder"]
OLL["Ollama - Local Models"]
AZ["Azure OpenAI - Enterprise Models"]
end
UI --> MC
MC --> LC_AD
LC_AD --> LC_CO
LC_CO --> LC_AC
LC_CO --> LC_EE
LC_AC --> LC_SM
LC_SM --> LC_PS
LC_DS --> LC_SM
LC_PS --> FS
LC_PS --> SQL
LC_PS --> LMDB
LC_PS --> MEM
LC_PS --> REDIS
MC --> PR
MC --> RAG
MC --> PG
MC --> MAB
MC --> TR
MAB --> LC_SM
RAG --> TR
RAG --> LLM
PG --> LLM
MAB --> PG
MC --> PO
TR --> WST
TR --> IVT
TR --> LC_TO
WST --> SC
IVT --> SC
LLM --> OAI
LLM --> ANT
LLM --> DS
LLM --> OLL
LLM --> AZ
style LC_AD fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
style LC_PS fill:#fff3e0,stroke:#f57c00,stroke-width:2px
style LC_SM fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style LC_DS fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
Component Description:
- MainController: System commander, responsible for orchestrating the complete five-stage decision process with tool-enhanced verification capabilities
- RAGSeedGenerator / PriorReasoner: Decision starting point, responsible for generating high-quality "thinking seeds"
- PathGenerator: System's "divergent thinking" module, generating diverse solutions based on seeds
- MABConverger: System's "convergent thinking" and "learning" module, responsible for evaluation, selection, and learning from experience
- LangChain Adapters: Compatibility layer enabling seamless integration with existing LangChain workflows and components
- PersistentStorage: Multi-backend storage engine supporting FileSystem, SQLite, LMDB, Memory, and Redis with enterprise features
- StateManagement: Professional state management with ACID transactions, checkpointing, and branch management
- DistributedState: Multi-node state coordination with consensus protocols for enterprise deployment
- AdvancedChains: Sophisticated chain composition supporting sequential, parallel, conditional, and tree-based workflows
- ExecutionEngines: High-performance parallel processing framework with intelligent task scheduling and fault tolerance
- Coordinators: Multi-chain coordination system managing complex workflow orchestration and resource allocation
- LangChain Tools: Extended tool ecosystem with advanced research, data processing, analysis, and workflow capabilities
- ToolRegistry: LangChain-inspired unified tool management system, providing centralized registration, discovery, and execution of tools
- WebSearchTool / IdeaVerificationTool: Specialized tools implementing the BaseTool interface for web search and idea verification capabilities
- LLMManager: Universal LLM interface manager, providing unified access to multiple AI providers with intelligent routing and fallback
- Tool Layer: Provides reusable underlying capabilities such as multi-LLM management, search engines, performance optimizers
- FileSystem: Hierarchical storage with versioning, backup, and metadata management
- SQLite: ACID-compliant relational database for complex queries and structured data
- LMDB: Lightning-fast memory-mapped database optimized for high-performance scenarios
- Memory: In-memory storage for caching and testing scenarios
- Redis: Distributed caching and session storage for enterprise scalability
Core Technologies:
- Core Language: Python 3.8+
- AI Engines: Multi-LLM Support (OpenAI, Anthropic, DeepSeek, Ollama, Azure OpenAI)
- LangChain Integration: Full LangChain compatibility with custom adapters, chains, and tools
- Tool Architecture: LangChain-inspired unified tool abstraction with BaseTool interface, ToolRegistry management, and dynamic tool discovery
- Core Algorithms: Meta Multi-Armed Bandit (Thompson Sampling, UCB, Epsilon-Greedy), Retrieval-Augmented Generation (RAG), Tool-Enhanced Decision Making
- Storage Backends: Multi-backend support (LMDB, SQLite, FileSystem, Memory, Redis) with enterprise features
- State Management: ACID transactions, distributed state coordination, and persistent workflows
- External Services: DuckDuckGo Search, Multi-provider LLM APIs, Tool-enhanced web verification
LangChain Integration Stack:
- Framework: LangChain, LangChain-Community for ecosystem compatibility
- Storage Engines: LMDB (high-performance), SQLite (ACID compliance), Redis (distributed caching)
- State Systems: Custom transaction management, distributed consensus protocols
- Chain Types: Sequential, Parallel, Conditional, Loop, and Tree-based chain execution
- Execution: Multi-threaded parallel processing with intelligent resource management
Key Libraries:
- Core: requests, numpy, typing, dataclasses, abc, asyncio
- AI/LLM: openai, anthropic, langchain, langchain-community
- Storage: lmdb, sqlite3, redis, sqlalchemy
- Search: duckduckgo-search, web scraping utilities
- Performance: threading, multiprocessing, caching mechanisms
- Distributed: aioredis, consul (optional), network coordination
- Python 3.8 or higher
- pip package manager
-
Clone Repository
git clone https://github.com/your-repo/neogenesis-system.git cd neogenesis-system
-
Install Dependencies
# (Recommended) Create and activate virtual environment python -m venv venv source venv/bin/activate # on Windows: venv\Scripts\activate # Install core dependencies pip install -r requirements.txt # (Optional) Install additional LLM provider libraries for enhanced functionality pip install openai # For OpenAI GPT models pip install anthropic # For Anthropic Claude models # Note: DeepSeek support is included in core dependencies # (Optional) Install LangChain integration dependencies for advanced features pip install langchain langchain-community # Core LangChain integration pip install lmdb # High-performance LMDB storage pip install redis # Distributed caching and state pip install sqlalchemy # Enhanced SQL operations pip install aioredis # Async Redis for distributed coordination
-
Configure API Keys (Optional but Recommended)
Create a
.env
file in the project root directory and configure your preferred LLM provider API keys:# Configure one or more LLM providers (the system will auto-detect available ones) DEEPSEEK_API_KEY="your_deepseek_api_key" OPENAI_API_KEY="your_openai_api_key" ANTHROPIC_API_KEY="your_anthropic_api_key" # For Azure OpenAI (optional) AZURE_OPENAI_API_KEY="your_azure_openai_key" AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
Note: You only need to configure at least one provider. The system automatically:
- Detects available providers based on configured API keys
- Selects the best available provider automatically
- Falls back to other providers if the primary one fails
Without any keys, the system will run in limited simulation mode.
We provide multiple demo modes to let you intuitively experience AI's thinking process.
# Launch menu to select experience mode
python start_demo.py
# (Recommended) Run quick simulation demo directly, no configuration needed
python quick_demo.py
# Run complete interactive demo connected to real system
python run_demo.py
import os
from dotenv import load_dotenv
from meta_mab.controller import MainController
# Load environment variables
load_dotenv()
# Initialize controller (auto-detects available LLM providers)
controller = MainController()
# The system automatically selects the best available LLM provider
# You can check which providers are available
status = controller.get_llm_provider_status()
print(f"Available providers: {status['healthy_providers']}/{status['total_providers']}")
# Pose a complex question
query = "Design a scalable, low-cost cloud-native tech stack for a startup tech company"
context = {"domain": "cloud_native_architecture", "company_stage": "seed"}
# Get AI's decision (automatically uses the best available provider)
decision_result = controller.make_decision(user_query=query, execution_context=context)
# View the final chosen thinking path
chosen_path = decision_result.get('chosen_path')
if chosen_path:
print(f"๐ AI's chosen thinking path: {chosen_path.path_type}")
print(f"๐ Core approach: {chosen_path.description}")
# (Optional) Switch to a specific provider
controller.switch_llm_provider("openai") # or "anthropic", "deepseek", etc.
# (Optional) Provide execution result feedback to help AI learn
controller.update_performance_feedback(
decision_result=decision_result,
execution_success=True,
execution_time=12.5,
user_satisfaction=0.9,
rl_reward=0.85
)
print("\nโ
AI has received feedback and completed learning!")
# Tool Integration Examples
print("\n" + "="*50)
print("๐ง Tool-Enhanced Decision Making Examples")
print("="*50)
# Check available tools
from meta_mab.utils.tool_abstraction import list_available_tools, get_registry_stats
tools = list_available_tools()
stats = get_registry_stats()
print(f"๐ Available tools: {len(tools)} ({', '.join(tools)})")
print(f"๐ Tool registry stats: {stats['total_tools']} tools, {stats['success_rate']:.1%} success rate")
# Direct tool usage example
from meta_mab.utils.tool_abstraction import execute_tool
search_result = execute_tool("web_search", query="latest trends in cloud computing 2024", max_results=3)
if search_result and search_result.success:
print(f"๐ Web search successful: Found {len(search_result.data.get('results', []))} results")
else:
print(f"โ Web search failed: {search_result.error_message if search_result else 'No result'}")
# Tool-enhanced verification example
verification_result = execute_tool("idea_verification",
idea="Implement blockchain-based supply chain tracking for food safety",
context={"industry": "food_tech", "scale": "enterprise"})
if verification_result and verification_result.success:
analysis = verification_result.data.get('analysis', {})
print(f"๐ก Idea verification: Feasibility score {analysis.get('feasibility_score', 0):.2f}")
else:
print(f"โ Idea verification failed: {verification_result.error_message if verification_result else 'No result'}")
Metric | Performance | Description |
---|---|---|
๐ฏ Decision Accuracy | 85%+ | Based on historical validation data |
โก Average Response Time | 2-5 seconds | Including complete five-stage processing |
๐ง Path Generation Success Rate | 95%+ | Diverse thinking path generation |
๐ Golden Template Hit Rate | 60%+ | Successful experience reuse efficiency |
๐ก Aha-Moment Trigger Rate | 15%+ | Innovation breakthrough scenario percentage |
๐ง Tool Integration Success Rate | 92%+ | Tool-enhanced verification reliability |
๐ Tool Discovery Accuracy | 88%+ | Correct tool selection for context |
๐ Tool-Enhanced Decision Quality | +25% | Improvement over non-tool decisions |
๐ฏ Hybrid Selection Accuracy | 94%+ | MAB+LLM fusion mode precision |
๐ง Cold-Start Detection Rate | 96%+ | Accurate unfamiliar tool identification |
โก Experience Mode Efficiency | +40% | Performance boost for familiar tools |
๐ Exploration Mode Success | 89%+ | LLM-guided tool discovery effectiveness |
๐ Learning Convergence Speed | 3-5 uses | MAB optimization learning curve |
๐ค Provider Availability | 99%+ | Multi-LLM fallback reliability |
๐ Automatic Fallback Success | 98%+ | Seamless provider switching rate |
LangChain Integration Metric | Performance | Description |
---|---|---|
๐ช Storage Backend Latency | <2ms | LMDB read/write operations |
๐ State Transaction Speed | <5ms | ACID transaction completion |
๐ก Distributed Sync Latency | <50ms | Cross-node state synchronization |
โก Parallel Chain Execution | 4x faster | Compared to sequential execution |
๐พ Storage Compression Ratio | 60-80% | Space savings with GZIP compression |
๐ก๏ธ State Consistency Rate | 99.9%+ | Distributed state accuracy |
๐ง Tool Integration Success | 95%+ | LangChain tool compatibility |
๐ Chain Composition Success | 98%+ | Complex workflow execution reliability |
๐ Workflow Persistence Rate | 99.5%+ | State recovery after failures |
โ๏ธ Load Balancing Efficiency | 92%+ | Distributed workload optimization |
# Run all tests
python -m pytest tests/
# Run unit test examples
python tests/examples/simple_test_example.py
# Run performance tests
python tests/unit/test_performance.py
# Verify MAB algorithm convergence
python tests/unit/test_mab_converger.py
# Verify path generation robustness
python tests/unit/test_path_creation_robustness.py
# Verify RAG seed generation
python tests/unit/test_rag_seed_generator.py
# Product strategy decisions
result = controller.make_decision(
"How to prioritize features for our SaaS product for next quarter?",
execution_context={
"industry": "software",
"stage": "growth",
"constraints": ["budget_limited", "team_capacity"]
}
)
# Architecture design decisions
result = controller.make_decision(
"Design a real-time recommendation system supporting tens of millions of concurrent users",
execution_context={
"domain": "system_architecture",
"scale": "large",
"requirements": ["real_time", "high_availability"]
}
)
# Market analysis decisions
result = controller.make_decision(
"Analyze competitive landscape and opportunities in the AI tools market",
execution_context={
"analysis_type": "market_research",
"time_horizon": "6_months",
"focus": ["opportunities", "threats"]
}
)
# Tool-enhanced technical decisions with real-time information gathering
result = controller.make_decision(
"Should we adopt Kubernetes for our microservices architecture?",
execution_context={
"domain": "system_architecture",
"team_size": "10_engineers",
"current_stack": ["docker", "aws"],
"constraints": ["learning_curve", "migration_complexity"]
}
)
# The system automatically:
# 1. Uses WebSearchTool to gather latest Kubernetes trends and best practices
# 2. Applies IdeaVerificationTool to validate feasibility based on team constraints
# 3. Integrates real-time information into decision-making process
# 4. Provides evidence-based recommendations with source citations
print(f"Tool-enhanced decision: {result.get('chosen_path', {}).get('description', 'N/A')}")
print(f"Tools used: {result.get('tools_used', [])}")
print(f"Information sources: {result.get('verification_sources', [])}")
# Check available providers and their status
status = controller.get_llm_provider_status()
print(f"Healthy providers: {status['healthy_providers']}")
# Switch to a specific provider for particular tasks
controller.switch_llm_provider("anthropic") # Use Claude for complex reasoning
result_reasoning = controller.make_decision("Complex philosophical analysis...")
controller.switch_llm_provider("deepseek") # Use DeepSeek for coding tasks
result_coding = controller.make_decision("Optimize this Python algorithm...")
controller.switch_llm_provider("openai") # Use GPT for general tasks
result_general = controller.make_decision("Business strategy planning...")
# Get cost and usage statistics
cost_summary = controller.get_llm_cost_summary()
print(f"Total cost: ${cost_summary['total_cost_usd']:.4f}")
print(f"Requests by provider: {cost_summary['cost_by_provider']}")
# Run health check on all providers
health_status = controller.run_llm_health_check()
print(f"Provider health: {health_status}")
from neogenesis_system.langchain_integration import (
create_neogenesis_chain,
StateManager,
DistributedStateManager
)
# Create enterprise-grade persistent workflow
state_manager = StateManager(storage_backend="lmdb", enable_encryption=True)
neogenesis_chain = create_neogenesis_chain(
state_manager=state_manager,
enable_persistence=True,
session_id="enterprise_decision_2024"
)
# Execute long-running decision process with state persistence
result = neogenesis_chain.execute({
"query": "Develop comprehensive digital transformation strategy",
"context": {
"industry": "manufacturing",
"company_size": "enterprise",
"timeline": "3_years",
"budget": "10M_USD",
"current_state": "legacy_systems"
}
})
# Access persistent decision history
decision_timeline = state_manager.get_decision_timeline("enterprise_decision_2024")
print(f"Decision milestones: {len(decision_timeline)} checkpoints")
from neogenesis_system.langchain_integration.advanced_chains import AdvancedChainComposer
from neogenesis_system.langchain_integration.execution_engines import ParallelExecutionEngine
# Configure parallel analysis workflow
composer = AdvancedChainComposer()
execution_engine = ParallelExecutionEngine(max_workers=6)
# Create specialized analysis chains
market_analysis_chain = composer.create_analysis_chain("market_research")
technical_analysis_chain = composer.create_analysis_chain("technical_feasibility")
financial_analysis_chain = composer.create_analysis_chain("financial_modeling")
risk_analysis_chain = composer.create_analysis_chain("risk_assessment")
# Execute parallel comprehensive analysis
parallel_analysis = composer.create_parallel_chain([
market_analysis_chain,
technical_analysis_chain,
financial_analysis_chain,
risk_analysis_chain
])
# Run analysis with persistent state and error recovery
result = await execution_engine.execute_chain(
chain=parallel_analysis,
input_data={
"project": "AI-powered customer service platform",
"market": "enterprise_software",
"timeline": "18_months"
},
persist_state=True,
enable_recovery=True
)
print(f"Analysis completed: {result['analysis_summary']}")
print(f"Execution time: {result['execution_time']:.2f}s")
from neogenesis_system.langchain_integration.distributed_state import DistributedStateManager
from neogenesis_system.langchain_integration.coordinators import ClusterCoordinator
# Configure distributed decision cluster
distributed_state = DistributedStateManager(
node_id="decision_node_1",
cluster_nodes=["node_1:8001", "node_2:8002", "node_3:8003"],
consensus_protocol="raft"
)
cluster_coordinator = ClusterCoordinator(
distributed_state=distributed_state,
load_balancing="intelligent"
)
# Execute distributed decision with consensus
decision_result = await cluster_coordinator.execute_distributed_decision(
query="Should we enter the European market with our fintech product?",
context={
"industry": "fintech",
"target_markets": ["germany", "france", "uk"],
"regulatory_complexity": "high",
"competition_level": "intense"
},
require_consensus=True,
min_node_agreement=2
)
# Access cluster decision metrics
cluster_stats = cluster_coordinator.get_cluster_stats()
print(f"Nodes participated: {cluster_stats['active_nodes']}")
print(f"Consensus achieved: {cluster_stats['consensus_reached']}")
print(f"Decision confidence: {decision_result['confidence']:.2f}")
from langchain.chains import SequentialChain, LLMChain
from langchain.prompts import PromptTemplate
from neogenesis_system.langchain_integration.adapters import NeogenesisLangChainAdapter
# Create standard LangChain components
prompt = PromptTemplate(template="Analyze the market for {product}", input_variables=["product"])
market_chain = LLMChain(llm=llm, prompt=prompt)
# Create Neogenesis intelligent decision chain
neogenesis_adapter = NeogenesisLangChainAdapter(
storage_backend="lmdb",
enable_advanced_reasoning=True
)
neogenesis_chain = neogenesis_adapter.create_decision_chain()
# Combine in standard LangChain workflow
complete_workflow = SequentialChain(
chains=[
market_chain, # Standard LangChain analysis
neogenesis_chain, # Intelligent Neogenesis decision-making
market_chain # Follow-up LangChain processing
],
input_variables=["product"],
output_variables=["final_recommendation"]
)
# Execute integrated workflow
result = complete_workflow.run({
"product": "AI-powered legal document analyzer"
})
print(f"Integrated analysis result: {result}")
We warmly welcome community contributions! Whether bug fixes, feature suggestions, or code submissions, all help make Neogenesis System better.
- ๐ Bug Reports: Submit issues when you find problems
- โจ Feature Suggestions: Propose new feature ideas
- ๐ Documentation Improvements: Enhance documentation and examples
- ๐ง Code Contributions: Submit Pull Requests
- ๐จ Tool Development: Create new tools implementing the BaseTool interface
- ๐งช Tool Testing: Help test and validate tool integrations
# 1. Fork and clone project
git clone https://github.com/your-username/neogenesis-system.git
# 2. Create development branch
git checkout -b feature/your-feature-name
# 3. Install development dependencies
pip install -r requirements-dev.txt
# 4. Run tests to ensure baseline functionality
python -m pytest tests/
# 5. Develop new features...
# 6. Submit Pull Request
Please refer to CONTRIBUTING.md for detailed guidelines.
This project is open-sourced under the MIT License. See LICENSE file for details.
Neogenesis System is independently developed by the author.
- ๐ง Email Contact: This project is still in development. If you're interested in the project or need commercial use, please contact: [email protected]
๐ If this project helps you, please give us a Star!
๐ Get Started | ๐ View Documentation | ๐ก Suggest Ideas
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for Neosgenesis
Similar Open Source Tools

Neosgenesis
Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.

sgr-deep-research
This repository contains a deep learning research project focused on natural language processing tasks. It includes implementations of various state-of-the-art models and algorithms for text classification, sentiment analysis, named entity recognition, and more. The project aims to provide a comprehensive resource for researchers and developers interested in exploring deep learning techniques for NLP applications.

quantalogic
QuantaLogic is a ReAct framework for building advanced AI agents that seamlessly integrates large language models with a robust tool system. It aims to bridge the gap between advanced AI models and practical implementation in business processes by enabling agents to understand, reason about, and execute complex tasks through natural language interaction. The framework includes features such as ReAct Framework, Universal LLM Support, Secure Tool System, Real-time Monitoring, Memory Management, and Enterprise Ready components.

cua
Cua is a tool for creating and running high-performance macOS and Linux virtual machines on Apple Silicon, with built-in support for AI agents. It provides libraries like Lume for running VMs with near-native performance, Computer for interacting with sandboxes, and Agent for running agentic workflows. Users can refer to the documentation for onboarding, explore demos showcasing AI-Gradio and GitHub issue fixing, and utilize accessory libraries like Core, PyLume, Computer Server, and SOM. Contributions are welcome, and the tool is open-sourced under the MIT License.

dingo
Dingo is a data quality evaluation tool that automatically detects data quality issues in datasets. It provides built-in rules and model evaluation methods, supports text and multimodal datasets, and offers local CLI and SDK usage. Dingo is designed for easy integration into evaluation platforms like OpenCompass.

simba
Simba is an open source, portable Knowledge Management System (KMS) designed to seamlessly integrate with any Retrieval-Augmented Generation (RAG) system. It features a modern UI and modular architecture, allowing developers to focus on building advanced AI solutions without the complexities of knowledge management. Simba offers a user-friendly interface to visualize and modify document chunks, supports various vector stores and embedding models, and simplifies knowledge management for developers. It is community-driven, extensible, and aims to enhance AI functionality by providing a seamless integration with RAG-based systems.

flo-ai
Flo AI is a Python framework that enables users to build production-ready AI agents and teams with minimal code. It allows users to compose complex AI architectures using pre-built components while maintaining the flexibility to create custom components. The framework supports composable, production-ready, YAML-first, and flexible AI systems. Users can easily create AI agents and teams, manage teams of AI agents working together, and utilize built-in support for Retrieval-Augmented Generation (RAG) and compatibility with Langchain tools. Flo AI also provides tools for output parsing and formatting, tool logging, data collection, and JSON output collection. It is MIT Licensed and offers detailed documentation, tutorials, and examples for AI engineers and teams to accelerate development, maintainability, scalability, and testability of AI systems.

open-responses
OpenResponses API provides enterprise-grade AI capabilities through a powerful API, simplifying development and deployment while ensuring complete data control. It offers automated tracing, integrated RAG for contextual information retrieval, pre-built tool integrations, self-hosted architecture, and an OpenAI-compatible interface. The toolkit addresses development challenges like feature gaps and integration complexity, as well as operational concerns such as data privacy and operational control. Engineering teams can benefit from improved productivity, production readiness, compliance confidence, and simplified architecture by choosing OpenResponses.

rag-security-scanner
RAG/LLM Security Scanner is a professional security testing tool designed for Retrieval-Augmented Generation (RAG) systems and LLM applications. It identifies critical vulnerabilities in AI-powered applications such as chatbots, virtual assistants, and knowledge retrieval systems. The tool offers features like prompt injection detection, data leakage assessment, function abuse testing, context manipulation identification, professional reporting with JSON/HTML formats, and easy integration with OpenAI, HuggingFace, and custom RAG systems.

LocalAGI
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It provides a complete drop-in replacement for OpenAI's Responses APIs with advanced agentic capabilities. With LocalAGI, you can create customizable AI assistants, automations, chat bots, and agents that run 100% locally, without the need for cloud services or API keys. The platform offers features like no-code agents, web-based interface, advanced agent teaming, connectors for various platforms, comprehensive REST API, short & long-term memory capabilities, planning & reasoning, periodic tasks scheduling, memory management, multimodal support, extensible custom actions, fully customizable models, observability, and more.

MassGen
MassGen is a cutting-edge multi-agent system that leverages the power of collaborative AI to solve complex tasks. It assigns a task to multiple AI agents who work in parallel, observe each other's progress, and refine their approaches to converge on the best solution to deliver a comprehensive and high-quality result. The system operates through an architecture designed for seamless multi-agent collaboration, with key features including cross-model/agent synergy, parallel processing, intelligence sharing, consensus building, and live visualization. Users can install the system, configure API settings, and run MassGen for various tasks such as question answering, creative writing, research, development & coding tasks, and web automation & browser tasks. The roadmap includes plans for advanced agent collaboration, expanded model, tool & agent integration, improved performance & scalability, enhanced developer experience, and a web interface.

arxiv-mcp-server
The ArXiv MCP Server acts as a bridge between AI assistants and arXiv's research repository, enabling AI models to search for and access papers programmatically through the Message Control Protocol (MCP). It offers features like paper search, access, listing, local storage, and research prompts. Users can install it via Smithery or manually for Claude Desktop. The server provides tools for paper search, download, listing, and reading, along with specialized prompts for paper analysis. Configuration can be done through environment variables, and testing is supported with a test suite. The tool is released under the MIT License and is developed by the Pearl Labs Team.

sparrow
Sparrow is an innovative open-source solution for efficient data extraction and processing from various documents and images. It seamlessly handles forms, invoices, receipts, and other unstructured data sources. Sparrow stands out with its modular architecture, offering independent services and pipelines all optimized for robust performance. One of the critical functionalities of Sparrow - pluggable architecture. You can easily integrate and run data extraction pipelines using tools and frameworks like LlamaIndex, Haystack, or Unstructured. Sparrow enables local LLM data extraction pipelines through Ollama or Apple MLX. With Sparrow solution you get API, which helps to process and transform your data into structured output, ready to be integrated with custom workflows. Sparrow Agents - with Sparrow you can build independent LLM agents, and use API to invoke them from your system. **List of available agents:** * **llamaindex** - RAG pipeline with LlamaIndex for PDF processing * **vllamaindex** - RAG pipeline with LLamaIndex multimodal for image processing * **vprocessor** - RAG pipeline with OCR and LlamaIndex for image processing * **haystack** - RAG pipeline with Haystack for PDF processing * **fcall** - Function call pipeline * **unstructured-light** - RAG pipeline with Unstructured and LangChain, supports PDF and image processing * **unstructured** - RAG pipeline with Weaviate vector DB query, Unstructured and LangChain, supports PDF and image processing * **instructor** - RAG pipeline with Unstructured and Instructor libraries, supports PDF and image processing. Works great for JSON response generation

MCPSpy
MCPSpy is a command-line tool leveraging eBPF technology to monitor Model Context Protocol (MCP) communication at the kernel level. It provides real-time visibility into JSON-RPC 2.0 messages exchanged between MCP clients and servers, supporting Stdio and HTTP transports. MCPSpy offers security analysis, debugging, performance monitoring, compliance assurance, and learning opportunities for understanding MCP communications. The tool consists of eBPF programs, an eBPF loader, an HTTP session manager, an MCP protocol parser, and output handlers for console display and JSONL output.

UMbreLLa
UMbreLLa is a tool designed for deploying Large Language Models (LLMs) for personal agents. It combines offloading, speculative decoding, and quantization to optimize single-user LLM deployment scenarios. With UMbreLLa, 70B-level models can achieve performance comparable to human reading speed on an RTX 4070Ti, delivering exceptional efficiency and responsiveness, especially for coding tasks. The tool supports deploying models on various GPUs and offers features like code completion and CLI/Gradio chatbots. Users can configure the LLM engine for optimal performance based on their hardware setup.
For similar tasks

Neosgenesis
Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.