
lyraios
LYRAI is a Model Context Protocol (MCP) operating system for multi-AI agents, designed to extend the functionality of AI applications by enabling them to interact with financial networks and blockchain public chains. The server offers a range of advanced AI assistants, including blockchain public chain operations (SOLANA, ETH, BSC, etc.)
Stars: 202

LYRAIOS (LLM-based Your Reliable AI Operating System) is an advanced AI assistant platform built with FastAPI and Streamlit, designed to serve as an operating system for AI applications. It offers core features such as AI process management, memory system, and I/O system. The platform includes built-in tools like Calculator, Web Search, Financial Analysis, File Management, and Research Tools. It also provides specialized assistant teams for Python and research tasks. LYRAIOS is built on a technical architecture comprising FastAPI backend, Streamlit frontend, Vector Database, PostgreSQL storage, and Docker support. It offers features like knowledge management, process control, and security & access control. The roadmap includes enhancements in core platform, AI process management, memory system, tools & integrations, security & access control, open protocol architecture, multi-agent collaboration, and cross-platform support.
README:
LYRAI is a Model Context Protocol (MCP) operating system for multi-AI agents, designed to extend the functionality of AI applications (such as Claude Desktop and Cursor) by enabling them to interact with financial networks and blockchain public chains. The server offers a range of advanced AI assistants, including blockchain public chain operations (SOLANA, ETH, etc.: retrieving wallet addresses, listing wallet balances, transferring funds, deploying smart contracts, on-chain lending, calling contract functions, and managing tokens), fintech market analysis and summary reports, and learning and training systems for the education sector.
In future LYRAIOS releases, advanced VIP features will exclusively support payment in LYRAI on Solana. LYRAI's contract address (CA):
A6MTWuHbXqjH3vYEfbs3mzvGThQtk5S12FjmdpVkpump
Check out the demo of our LYRA MCP-OS:
https://github.com/user-attachments/assets/479cad58-ce4b-4901-93ff-e60a98c477d4
LYRAIOS aims to create the next generation AI Agent operating system with technological breakthroughs in three dimensions:
- Open Protocol Architecture: a pioneering modular integration protocol supporting plug-and-play third-party tools/services, compatible with multi-modal interaction interfaces (API/plugins/smart hardware), with 80%+ better extensibility than traditional frameworks
- Multi-Agent Collaboration Engine: breaks through single-agent capability boundaries with a distributed task orchestration system that enables dynamic multi-agent collaboration, supporting enterprise-grade complex workflow automation and conflict resolution
- Cross-Platform Runtime Environment: a cross-terminal AI runtime environment enabling smooth migration from personal intelligent assistants to enterprise digital employees, suitable for validating multi-scenario solutions in finance, healthcare, intelligent manufacturing, and other fields
For detailed architecture information, see the Architecture Documentation.
LYRAIOS adopts a layered architecture comprising, from top to bottom, the user interface layer, the core OS layer, the MCP integration layer, and the external services layer.
The user interface layer provides multiple interaction modes, allowing users to interact with the AI OS.
- Web UI: Based on Streamlit, providing an intuitive user interface
- Mobile UI: Mobile adaptation interface, supporting mobile device access
- CLI: Command line interface, suitable for developers and advanced users
- API Clients: Provide API interfaces, supporting third-party application integration
The core OS layer implements the basic functions of the AI operating system, including process management, memory system, I/O system, and security control.
- Process Management
  - Task Scheduling: Dynamic allocation and scheduling of AI tasks
  - Resource Allocation: Optimize AI resource usage
  - State Management: Maintain AI process state
- Memory System
  - Short-term Memory: Session context maintenance
  - Long-term Storage: Persistent knowledge storage
  - Knowledge Base: Structured knowledge management
- I/O System
  - Multi-modal Input: Handle text, files, APIs, etc.
  - Structured Output: Generate formatted output results
  - Event Handling: Respond to system events
- Security & Access Control
  - Authentication: User authentication
  - Authorization: Permission management
  - Rate Limiting: Prevent abuse
MCP Integration Layer is the core innovation of the system, achieving seamless integration with external services through the Model Context Protocol.
- MCP Client
  - Protocol Handler: Process MCP protocol messages
  - Connection Management: Manage connections to MCP servers
  - Message Routing: Route messages to the appropriate processors
- Tool Registry
  - Tool Registration: Register external tools and services
  - Capability Discovery: Discover tool capabilities
  - Manifest Validation: Validate tool manifests
- Tool Executor
  - Execution Environment: Provide the runtime environment in which tools execute
  - Resource Management: Manage the resources used during tool execution
  - Error Handling: Handle errors during tool execution
- Adapters
  - REST API Adapter: Connect to REST API services
  - Python Plugin Adapter: Integrate Python plugins
  - Custom Adapter: Support other types of integration
The external services layer includes various services integrated through the MCP protocol, which act as MCP servers providing capabilities.
- File System: Provide file read and write capabilities
- Database: Provide data storage and query capabilities
- Web Search: Provide internet search capabilities
- Code Editor: Provide code editing and execution capabilities
- Browser: Provide web browsing and interaction capabilities
- Custom Services: Support other custom services integration
The Tool Integration Protocol is a key component of LYRAIOS's Open Protocol Architecture. It provides a standardized way to integrate third-party tools and services into the LYRAIOS ecosystem.
- Standardized Tool Manifest: Define tools using a JSON schema that describes capabilities, parameters, and requirements
- Pluggable Adapter System: Support for different tool types (REST API, Python plugins, etc.)
- Secure Execution Environment: Tools run in a controlled environment with resource limits and permission checks
- Versioning and Dependency Management: Track tool versions and dependencies
- Monitoring and Logging: Comprehensive logging of tool execution
To integrate a tool:
1. Define Tool Manifest: Create a JSON file describing your tool's capabilities
2. Implement Tool: Develop the tool functionality according to the protocol
3. Register Tool: Use the API to register your tool with LYRAIOS
4. Use Tool: Your tool is now available to LYRAIOS agents
For examples and detailed documentation, see the Tool Integration Guide.
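To make the manifest format concrete, here is a minimal sketch of defining and registering a hypothetical tool. The manifest fields and the registration endpoint below are illustrative assumptions, not the documented schema; consult the Tool Integration Guide for the actual contract.

```python
import requests

# Hypothetical manifest: field names are assumptions about the JSON schema.
manifest = {
    "name": "currency_converter",
    "version": "1.0.0",
    "description": "Convert an amount between two currencies",
    "type": "rest_api",
    "capabilities": [
        {
            "name": "convert",
            "parameters": {
                "amount": {"type": "number", "required": True},
                "from_currency": {"type": "string", "required": True},
                "to_currency": {"type": "string", "required": True},
            },
        }
    ],
    "requirements": {"timeout_seconds": 10, "permissions": ["network"]},
}

# Register the tool with a running LYRAIOS instance (endpoint path assumed).
response = requests.post(
    "http://localhost:8000/api/v1/tools/register",
    json=manifest,
    timeout=10,
)
response.raise_for_status()
print(response.json())
```

Once registered, the tool registry can discover the declared capabilities and the tool executor can invoke them on behalf of agents.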
Model Context Protocol (MCP) is a client-server architecture protocol for connecting LLM applications and integrations. In MCP:
- Hosts are LLM applications (such as Claude Desktop or an IDE) that initiate connections
- Clients maintain a 1:1 connection with servers in host applications
- Servers provide context, tools, and prompts to clients
LYRAIOS supports the following MCP functions:
- Resources: Allow attaching local files and data
- Prompts: Support prompt templates
- Tools: Integrations that execute commands and scripts
- Sampling: Support sampling functions (planned)
- Roots: Support root directory functions (planned)
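As a concrete illustration, MCP messages are JSON-RPC 2.0 objects. The sketch below shows the approximate shape of a tool invocation; the tool name and arguments are made up for the example.

```python
import json

# Approximate shape of an MCP tool call (JSON-RPC 2.0); illustrative values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search",
        "arguments": {"query": "latest SOL price", "max_results": 5},
    },
}

# A successful response returns the tool output in the result field.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "SOL is trading at ..."}]},
}

print(json.dumps(request, indent=2))
```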
A typical request flows through the system as follows:
1. The user sends a request through the interface layer
2. The core OS layer receives and processes the request
3. If external tool support is needed, the request is forwarded to the MCP integration layer
4. The MCP client connects to the corresponding MCP server
5. The external service executes the request and returns the result
6. The result is passed back up through each layer to the user
Tool invocation follows a similar path (a minimal sketch follows these steps):
1. The AI agent determines that a specific tool is needed
2. The tool registry looks up the tool definition and capabilities
3. The tool executor prepares the execution environment
4. The adapter converts the request into a format the tool understands
5. The tool executes and returns the result
6. The result is returned to the AI agent for processing
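Here is that registry-and-executor pattern as a minimal sketch, with illustrative class and method names rather than the actual LYRAIOS API:

```python
from typing import Any, Callable, Dict

class ToolRegistry:
    """Stores registered tools by name (illustrative, not the real API)."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def lookup(self, name: str) -> Callable[..., Any]:
        return self._tools[name]

class ToolExecutor:
    """Looks up a tool, runs it, and wraps errors instead of raising."""

    def __init__(self, registry: ToolRegistry) -> None:
        self.registry = registry

    def execute(self, name: str, **kwargs: Any) -> Any:
        tool = self.registry.lookup(name)   # registry lookup
        try:
            return tool(**kwargs)           # tool execution
        except Exception as exc:            # error handling
            return {"error": str(exc)}

registry = ToolRegistry()
registry.register("add", lambda a, b: a + b)
executor = ToolExecutor(registry)
print(executor.execute("add", a=2, b=3))  # -> 5
```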
LYRAIOS (LLM-based Your Reliable AI Operating System) is an advanced AI assistant platform built with FastAPI and Streamlit, designed to serve as an operating system for AI applications.
- AI Process Management:
  - Dynamic task allocation and scheduling
  - Multi-assistant coordination and communication
  - Resource optimization and load balancing
  - State management and persistence
- AI Memory System:
  - Short-term conversation memory
  - Long-term vector database storage
  - Cross-session context preservation
  - Knowledge base integration
- AI I/O System:
  - Multi-modal input processing (text, files, APIs)
  - Structured output formatting
  - Stream processing capabilities
  - Event-driven architecture
- Calculator: Advanced mathematical operations including factorial and prime number checking
- Web Search: Integrated DuckDuckGo search with customizable result limits
- Financial Analysis:
  - Real-time stock price tracking
  - Company information retrieval
  - Analyst recommendations
  - Financial news aggregation
- File Management: Read, write, and list files in the workspace
- Research Tools: Integration with Exa for comprehensive research capabilities
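Since LYRAIOS builds on phidata (see the installation steps below), a standalone sketch of wiring one of these built-in tools into an assistant might look like this. The imports assume phidata's 2.x API and may differ across versions.

```python
# Sketch: an assistant with DuckDuckGo web search, assuming phidata 2.x APIs.
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(
    tools=[DuckDuckGo()],   # built-in web search, as listed above
    show_tool_calls=True,   # surface which tools the assistant invoked
)

assistant.print_response("Summarize today's top AI news in three bullets.")
```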
- Python Assistant:
  - Live Python code execution
  - Streamlit charting capabilities
  - Package management with pip
- Research Assistant:
  - NYT-style report generation
  - Automated web research
  - Structured output formatting
  - Source citation and reference management
- FastAPI Backend: RESTful API with automatic documentation
- Streamlit Frontend: Interactive web interface
- Vector Database: PGVector for efficient knowledge storage and retrieval
- PostgreSQL Storage: Persistent storage for conversations and assistant states
- Docker Support: Containerized deployment for development and production
- Knowledge Management:
  - PDF document processing
  - Website content integration
  - Vector-based semantic search
  - Knowledge graph construction
- Process Control:
  - Task scheduling and prioritization
  - Resource allocation
  - Error handling and recovery
  - Performance monitoring
- Security & Access Control:
  - API key management
  - Authentication and authorization
  - Rate limiting and quota management
  - Secure data storage
When connecting LYRAIOS to MCP servers, follow these security practices (a message-validation sketch follows the list):
- Transport security:
  - Use TLS for remote connections
  - Verify connection sources
  - Implement authentication when needed
- Message validation:
  - Verify all incoming messages
  - Sanitize inputs
  - Check message size limits
  - Verify JSON-RPC format
- Resource protection:
  - Implement access control
  - Verify resource paths
  - Monitor resource usage
  - Limit request rates
- Error handling:
  - Do not leak sensitive information
  - Record security-related errors
  - Implement appropriate cleanup
  - Handle DoS scenarios
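A minimal sketch of the message-validation checks above (size limit, well-formed JSON, JSON-RPC 2.0 shape); the size limit and error strategy are deployment-specific assumptions:

```python
import json

MAX_MESSAGE_BYTES = 64 * 1024  # assumed limit; tune per deployment

def validate_incoming(raw: bytes) -> dict:
    """Reject oversized, malformed, or non-JSON-RPC messages."""
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size limit")
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("malformed JSON") from exc
    if msg.get("jsonrpc") != "2.0" or "method" not in msg:
        raise ValueError("not a valid JSON-RPC 2.0 request")
    return msg

msg = validate_incoming(b'{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
print(msg["method"])  # tools/list
```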
Roadmap:
Core Platform:
- ✅ Basic AI Assistant Framework
- ✅ Streamlit Web Interface
- ✅ FastAPI Backend
- ✅ Database Integration (SQLite/PostgreSQL)
- ✅ OpenAI Integration
- ✅ Docker Containerization
- ✅ Environment Configuration System
- 🔄 Multi-modal Input Processing (Partial)
- 🚧 Advanced Error Handling & Recovery
- 🚧 Performance Monitoring Dashboard
- 📅 Distributed Task Queue
- 📅 Horizontal Scaling Support
- 📅 Custom Plugin Architecture
AI Process Management:
- ✅ Basic Task Allocation
- ✅ Multi-assistant Team Structure
- ✅ State Management & Persistence
- 🔄 Dynamic Task Scheduling (Partial)
- 🚧 Resource Optimization
- 🚧 Load Balancing
- 📅 Process Visualization
- 📅 Workflow Designer
- 📅 Advanced Process Analytics
Memory System:
- ✅ Short-term Conversation Memory
- ✅ Basic Vector Database Integration
- ✅ Session Context Preservation
- 🔄 Knowledge Base Integration (Partial)
- 🚧 Memory Optimization Algorithms
- 🚧 Cross-session Learning
- 📅 Hierarchical Memory Architecture
- 📅 Forgetting Mechanisms
- 📅 Memory Compression
Tools & Integrations:
- ✅ Calculator
- ✅ Web Search (DuckDuckGo)
- ✅ Financial Analysis Tools
- ✅ File Management
- ✅ Research Tools (Exa)
- ✅ PDF Document Processing
- ✅ Website Content Integration
- 🔄 Python Code Execution (Partial)
- 🚧 Advanced Data Visualization
- 🚧 External API Integration Framework
- 📅 Image Generation & Processing
- 📅 Audio Processing
- 📅 Video Analysis
Security & Access Control:
- ✅ Basic API Key Management
- ✅ Simple Authentication
- 🔄 Authorization System (Partial)
- 🚧 Rate Limiting
- 🚧 Quota Management
- 📅 Role-based Access Control
- 📅 Audit Logging
- 📅 Compliance Reporting
Open Protocol Architecture:
- 🔄 Module Interface Standards (Partial)
- 🚧 Third-party Tool Integration Protocol
- 🚧 Service Discovery Mechanism
- 📅 Universal Connector Framework
- 📅 Protocol Validation System
- 📅 Compatibility Layer for Legacy Systems
Multi-Agent Collaboration:
- ✅ Basic Team Structure
- 🔄 Inter-agent Communication (Partial)
- 🚧 Task Decomposition Engine
- 🚧 Conflict Resolution System
- 📅 Collaborative Planning
- 📅 Emergent Behavior Analysis
- 📅 Agent Specialization Framework
Cross-Platform Support:
- ✅ Web Interface
- 🔄 API Access (Partial)
- 🚧 Mobile Responsiveness
- 📅 Desktop Application
- 📅 CLI Interface
- 📅 IoT Device Integration
- 📅 Voice Assistant Integration
Legend:
- ✅ Completed
- 🔄 Partially Implemented
- 🚧 In Development
- 📅 Planned
# Clone the repo
git clone https://github.com/GalaxyLLMCI/lyraios
cd lyraios
# Create + activate a virtual env
python3 -m venv aienv
source aienv/bin/activate
# Install phidata
pip install 'phidata[aws]'
# Setup workspace
phi ws setup
# Copy example secrets
cp workspace/example_secrets workspace/secrets
# Create .env file
cp example.env .env
# Run Lyraios locally
phi ws up
# Open [localhost:8501](http://localhost:8501) to view the Streamlit App.
# Stop Lyraios locally
phi ws down
- Install Docker Desktop
- Export credentials
We use gpt-4o as the LLM, so export your OpenAI API Key
export OPENAI_API_KEY=sk-***
# To use Exa for research, export your EXA_API_KEY (get it from [here](https://dashboard.exa.ai/api-keys))
export EXA_API_KEY=xxx
# To use Gemini for research, export your GOOGLE_API_KEY (get it from [here](https://console.cloud.google.com/apis/api/generativelanguage.googleapis.com/overview?project=lyraios))
export GOOGLE_API_KEY=xxx
# OR set them in the `.env` file
OPENAI_API_KEY=xxx
EXA_API_KEY=xxx
GOOGLE_API_KEY=xxx
# Start the workspace using:
phi ws up
# Open [localhost:8501](http://localhost:8501) to view the Streamlit App.
# Stop the workspace using:
phi ws down
- POST /api/v1/assistant/chat
  - Process chat messages with the AI assistant
  - Supports context-aware conversations
  - Returns structured responses with tool usage information
- GET /api/v1/health
  - Monitor system health status
  - Returns version and status information
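A quick sketch of calling both endpoints from Python; the chat request body field is an assumption about the payload shape, not a documented contract (see /docs for the authoritative schema):

```python
import requests

BASE_URL = "http://localhost:8000"

# Chat with the assistant; the "message" field is an assumed payload shape.
chat = requests.post(
    f"{BASE_URL}/api/v1/assistant/chat",
    json={"message": "What is the factorial of 12?"},
    timeout=30,
)
chat.raise_for_status()
print(chat.json())

# Health check: returns version and status information.
health = requests.get(f"{BASE_URL}/api/v1/health", timeout=5)
print(health.json())
```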
- Interactive API documentation available at /docs
- ReDoc alternative documentation at /redoc
- OpenAPI specification at /openapi.json
lyraios/
├── ai/ # AI core functionality
│ ├── assistants.py # Assistant implementations
│ ├── llm/ # LLM integration
│ └── tools/ # AI tools implementations
├── app/ # Main application
│ ├── components/ # UI components
│ ├── config/ # Application configuration
│ ├── db/ # Database models and storage
│ ├── styles/ # UI styling
│ ├── utils/ # Utility functions
│ └── main.py # Main application entry point
├── assets/ # Static assets like images
├── data/ # Data storage
├── tests/ # Test suite
├── workspace/ # Workspace configuration
│ ├── dev_resources/ # Development resources
│ ├── settings.py # Workspace settings
│ └── secrets/ # Secret configuration (gitignored)
├── docker/ # Docker configuration
├── scripts/ # Utility scripts
├── .env # Environment variables
├── requirements.txt # Python dependencies
└── README.md # Project documentation
- Environment Variables Setup
# Copy the example .env file
cp example.env .env
# Required environment variables
EXA_API_KEY=your_exa_api_key_here # Get from https://dashboard.exa.ai/api-keys
OPENAI_API_KEY=your_openai_api_key_here # Get from OpenAI dashboard
OPENAI_BASE_URL=your_openai_base_url # Optional: Custom OpenAI API endpoint
# OpenAI Model Configuration
OPENAI_CHAT_MODEL=gpt-4-turbo-preview # Default chat model
OPENAI_VISION_MODEL=gpt-4-vision-preview # Model for vision tasks
OPENAI_EMBEDDING_MODEL=text-embedding-3-small # Model for embeddings
# Optional configuration
STREAMLIT_SERVER_PORT=8501 # Default Streamlit port
API_SERVER_PORT=8000 # Default FastAPI port
- OpenAI Configuration Examples
# Standard OpenAI API
OPENAI_API_KEY=sk-***
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_CHAT_MODEL=gpt-4-turbo-preview
# Azure OpenAI
OPENAI_API_KEY=your_azure_api_key
OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment
OPENAI_CHAT_MODEL=gpt-4
# Other OpenAI API providers
OPENAI_API_KEY=your_api_key
OPENAI_BASE_URL=https://your-api-endpoint.com/v1
OPENAI_CHAT_MODEL=your-model-name
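For reference, here is how a custom OPENAI_BASE_URL is typically consumed with the official openai Python client (v1+); this is a generic sketch, and Azure deployments often use the dedicated AzureOpenAI client instead:

```python
import os

from openai import OpenAI

# base_url falls back to the public API when OPENAI_BASE_URL is unset.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
)

reply = client.chat.completions.create(
    model=os.environ.get("OPENAI_CHAT_MODEL", "gpt-4-turbo-preview"),
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```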
- Streamlit Configuration
# Create Streamlit config directory
mkdir -p ~/.streamlit
# Create config.toml to disable usage statistics (optional)
cat > ~/.streamlit/config.toml << EOL
[browser]
gatherUsageStats = false
EOL
The project includes convenient development scripts to manage the application:
- Using dev.py Script
# Run both frontend and backend
python -m scripts.dev run
# Run only frontend
python -m scripts.dev run --no-backend
# Run only backend
python -m scripts.dev run --no-frontend
# Run with custom ports
python -m scripts.dev run --frontend-port 8502 --backend-port 8001
- Manual Service Start
# Start Streamlit frontend
streamlit run app/app.py
# Start FastAPI backend
uvicorn api.main:app --reload
- Core Dependencies
# Install production dependencies
pip install -r requirements.txt
# Install development dependencies
pip install -r requirements-dev.txt
# Install the project in editable mode
pip install -e .
- Additional Tools
# Install python-dotenv for environment management
pip install python-dotenv
# Install development tools
pip install black isort mypy pytest
- Code Style
- Follow PEP 8 guidelines
- Use type hints
- Write docstrings for functions and classes
- Use black for code formatting
- Use isort for import sorting
- Testing
# Run tests
pytest
# Run tests with coverage
pytest --cov=app tests/
- Pre-commit Hooks
# Install pre-commit hooks
pre-commit install
# Run manually
pre-commit run --all-files
- Development Environment
# Build development image
docker build -f docker/Dockerfile.dev -t lyraios:dev .
# Run development container
docker-compose -f docker-compose.dev.yml up
- Production Environment
# Build production image
docker build -f docker/Dockerfile.prod -t lyraios:prod .
# Run production container
docker-compose -f docker-compose.prod.yml up -d
- Environment Variables
# Application Settings
DEBUG=false
LOG_LEVEL=INFO
ALLOWED_HOSTS=example.com,api.example.com
# AI Settings
AI_MODEL=gpt-4
AI_TEMPERATURE=0.7
AI_MAX_TOKENS=1000
# Database Settings
DATABASE_URL=postgresql://user:pass@localhost:5432/dbname
- Scaling Options
  - Configure worker processes via GUNICORN_WORKERS
  - Adjust memory limits via MEMORY_LIMIT
  - Set concurrency via MAX_CONCURRENT_REQUESTS
- Health Checks
  - Monitor the /health endpoint
  - Check system metrics via Prometheus endpoints
  - Review logs in /var/log/lyraios/
- Backup and Recovery
# Backup database
python scripts/backup_db.py
# Restore from backup
python scripts/restore_db.py --backup-file backup.sql
- Troubleshooting
- Check application logs
- Verify environment variables
- Ensure database connectivity
- Monitor system resources
The system supports both SQLite and PostgreSQL databases:
- SQLite (Default)
# SQLite Configuration
DATABASE_TYPE=sqlite
DATABASE_PATH=data/lyraios.db
- PostgreSQL
# PostgreSQL Configuration
DATABASE_TYPE=postgres
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=lyraios
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_password
The system will automatically use SQLite if no PostgreSQL configuration is provided.
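A sketch of how that fallback could be implemented; the variable names follow the configuration keys above, but the actual selection logic in the codebase may differ:

```python
import os

def database_url() -> str:
    """Prefer PostgreSQL when configured; otherwise fall back to SQLite."""
    if os.environ.get("DATABASE_TYPE") == "postgres" and os.environ.get("POSTGRES_HOST"):
        return (
            f"postgresql://{os.environ['POSTGRES_USER']}:{os.environ['POSTGRES_PASSWORD']}"
            f"@{os.environ['POSTGRES_HOST']}:{os.environ.get('POSTGRES_PORT', '5432')}"
            f"/{os.environ['POSTGRES_DB']}"
        )
    return f"sqlite:///{os.environ.get('DATABASE_PATH', 'data/lyraios.db')}"

print(database_url())
```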
We welcome contributions! Please see the CONTRIBUTING.md file for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Similar Open Source Tools


SynthLang
SynthLang is a tool designed to optimize AI prompts by reducing costs and improving processing speed. It brings academic rigor to prompt engineering, creating precise and powerful AI interactions. The tool includes core components like a Translator Engine, Performance Optimization, Testing Framework, and Technical Architecture. It offers mathematical precision, academic rigor, enhanced security, a modern interface, and instant testing. Users can integrate mathematical frameworks, model complex relationships, and apply structured prompts to various domains. Security features include API key management and data privacy. The tool also provides a CLI for prompt engineering and optimization capabilities.

DeepSeekAI
DeepSeekAI is a browser extension plugin that allows users to interact with AI by selecting text on web pages and invoking the DeepSeek large model to provide AI responses. The extension enhances browsing experience by enabling users to get summaries or answers for selected text directly on the webpage. It features context text selection, API key integration, draggable and resizable window, AI streaming replies, Markdown rendering, one-click copy, re-answer option, code copy functionality, language switching, and multi-turn dialogue support. Users can install the extension from Chrome Web Store or Edge Add-ons, or manually clone the repository, install dependencies, and build the extension. Configuration involves entering the DeepSeek API key in the extension popup window to start using the AI-driven responses.

Hacx-GPT
Hacx GPT is a cutting-edge AI tool developed by BlackTechX, inspired by WormGPT, designed to push the boundaries of natural language processing. It is an advanced broken AI model that facilitates seamless and powerful interactions, allowing users to ask questions and perform various tasks. The tool has been rigorously tested on platforms like Kali Linux, Termux, and Ubuntu, offering powerful AI conversations and the ability to do anything the user wants. Users can easily install and run Hacx GPT on their preferred platform to explore its vast capabilities.

swift-ocr-llm-powered-pdf-to-markdown
Swift OCR is a powerful tool for extracting text from PDF files using OpenAI's GPT-4 Turbo with Vision model. It offers flexible input options, advanced OCR processing, performance optimizations, structured output, robust error handling, and scalable architecture. The tool ensures accurate text extraction, resilience against failures, and efficient handling of multiple requests.

rkllama
RKLLama is a server and client tool designed for running and interacting with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. It allows models to run on the NPU, with features such as partial Ollama API compatibility, pulling models from Huggingface, a documented REST API, dynamic loading/unloading of models, inference requests with streaming modes, simplified model naming, CPU model auto-detection, and optional debug mode. The tool supports Python 3.8 to 3.12 and has been tested on Orange Pi 5 Pro and Orange Pi 5 Plus with specific OS versions.

agentneo
AgentNeo is a Python package that provides functionalities for project, trace, dataset, experiment management. It allows users to authenticate, create projects, trace agents and LangGraph graphs, manage datasets, and run experiments with metrics. The tool aims to streamline AI project management and analysis by offering a comprehensive set of features.

WatermarkRemover-AI
WatermarkRemover-AI is an advanced application that utilizes AI models for precise watermark detection and seamless removal. It leverages Florence-2 for watermark identification and LaMA for inpainting. The tool offers both a command-line interface (CLI) and a PyQt6-based graphical user interface (GUI), making it accessible to users of all levels. It supports dual modes for processing images, advanced watermark detection, seamless inpainting, customizable output settings, real-time progress tracking, dark mode support, and efficient GPU acceleration using CUDA.

CrewAI-Studio
CrewAI Studio is an application with a user-friendly interface for interacting with CrewAI, offering support for multiple platforms and various backend providers. It allows users to run crews in the background, export single-page apps, and use custom tools for APIs and file writing. The roadmap includes features like better import/export, human input, chat functionality, automatic crew creation, and multiuser environment support.

R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. **R2R is to LangChain/LlamaIndex what NextJS is to React**. A JavaScript client for R2R deployments can be found here. ### Key Features * **🚀 Deploy** : Instantly launch production-ready RAG pipelines with streaming capabilities. * **🧩 Customize** : Tailor your pipeline with intuitive configuration files. * **🔌 Extend** : Enhance your pipeline with custom code integrations. * **⚖️ Autoscale** : Scale your pipeline effortlessly in the cloud using SciPhi. * **🤖 OSS** : Benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

kweaver
KWeaver is an open-source cognitive intelligence development framework that gives data scientists, application developers, and domain experts a rapid-development, fully open, high-performance platform for knowledge network generation and cognitive-intelligence large models. It offers features such as automated and visual knowledge graph construction, visualization and analysis of knowledge graph data, knowledge graph integration, knowledge graph resource management, large model prompt engineering and debugging, and visual configuration for large model access.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe is fully local, keeping code on the user's machine without relying on external APIs. It supports multiple languages, offers various search options, and can be used in CLI mode, MCP server mode, AI chat mode, and web interface. The tool is designed to be flexible, fast, and accurate, providing developers and AI models with full context and relevant code blocks for efficient code exploration and understanding.

ComfyUI-fal-API
ComfyUI-fal-API is a repository containing custom nodes for using Flux models with fal API in ComfyUI. It provides nodes for image generation, video generation, language models, and vision language models. Users can easily install and configure the repository to access various nodes for different tasks such as generating images, creating videos, processing text, and understanding images. The repository also includes troubleshooting steps and is licensed under the Apache License 2.0.

trendFinder
Trend Finder is a tool designed to help users stay updated on trending topics on social media by collecting and analyzing posts from key influencers. It sends Slack notifications when new trends or product launches are detected, saving time, keeping users informed, and enabling quick responses to emerging opportunities. The tool features AI-powered trend analysis, social media and website monitoring, instant Slack notifications, and scheduled monitoring using cron jobs. Built with Node.js and Express.js, Trend Finder integrates with Together AI, Twitter/X API, Firecrawl, and Slack Webhooks for notifications.

AgentNeo
AgentNeo is an advanced, open-source Agentic AI Application Observability, Monitoring, and Evaluation Framework designed to provide deep insights into AI agents, Large Language Model (LLM) calls, and tool interactions. It offers robust logging, visualization, and evaluation capabilities to help debug and optimize AI applications with ease. With features like tracing LLM calls, monitoring agents and tools, tracking interactions, detailed metrics collection, flexible data storage, simple instrumentation, interactive dashboard, project management, execution graph visualization, and evaluation tools, AgentNeo empowers users to build efficient, cost-effective, and high-quality AI-driven solutions.

curiso
Curiso AI is an infinite canvas platform that connects nodes and AI services to explore ideas without repetition. It empowers advanced users to unlock richer AI interactions. Features include multi OS support, infinite canvas, multiple AI provider integration, local AI inference provider integration, custom model support, model metrics, RAG support, local Transformers.js embedding models, inference parameters customization, multiple boards, vision model support, customizable interface, node-based conversations, and secure local encrypted storage. Curiso also offers a Solana token for exclusive access to premium features and enhanced AI capabilities.
For similar tasks

financial-datasets
Financial Datasets is an open-source Python library that allows users to create question and answer financial datasets using Large Language Models (LLMs). With this library, users can easily generate realistic financial datasets from 10-K, 10-Q, PDF, and other financial texts. The library provides three main methods for generating datasets: from any text, from a 10-K filing, or from a PDF URL. Financial Datasets can be used for a variety of tasks, including financial analysis, research, and education.

zillionare
This repository contains a collection of articles and tutorials on quantitative finance, including topics such as machine learning, statistical arbitrage, and risk management. The articles are written in a clear and concise style, and they are suitable for both beginners and experienced practitioners. The repository also includes a number of Jupyter notebooks that demonstrate how to use Python for quantitative finance.

finagg
finagg is a Python package that provides implementations of popular and free financial APIs, tools for aggregating historical data from those APIs into SQL databases, and tools for transforming aggregated data into features useful for analysis and AI/ML. It offers documentation, installation instructions, and basic usage examples for exploring various financial APIs and features. Users can install recommended datasets from 3rd party APIs into a local SQL database, access Bureau of Economic Analysis (BEA) data, Federal Reserve Economic Data (FRED), Securities and Exchange Commission (SEC) filings, and more. The package also allows users to explore raw data features, install refined data features, and perform refined aggregations of raw data. Configuration options for API keys, user agents, and data locations are provided, along with information on dependencies and related projects.


llm-random
This repository contains code for research conducted by the LLM-Random research group at IDEAS NCBR in Warsaw, Poland. The group focuses on developing and using this repository to conduct research. For more information about the group and its research, refer to their blog, llm-random.github.io.

InternLM-XComposer
InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) based on InternLM2-7B excelling in free-form text-image composition and comprehension. It boasts several amazing capabilities and applications: * **Free-form Interleaved Text-Image Composition** : InternLM-XComposer2 can effortlessly generate coherent and contextual articles with interleaved images following diverse inputs like outlines, detailed text requirements and reference images, enabling highly customizable content creation. * **Accurate Vision-language Problem-solving** : InternLM-XComposer2 accurately handles diverse and challenging vision-language Q&A tasks based on free-form instructions, excelling in recognition, perception, detailed captioning, visual reasoning, and more. * **Awesome performance** : InternLM-XComposer2 based on InternLM2-7B not only significantly outperforms existing open-source multimodal models in 13 benchmarks but also **matches or even surpasses GPT-4V and Gemini Pro in 6 benchmarks** We release InternLM-XComposer2 series in three versions: * **InternLM-XComposer2-4KHD-7B** 🤗: The high-resolution multi-task trained VLLM model with InternLM-7B as the initialization of the LLM for _High-resolution understanding_ , _VL benchmarks_ and _AI assistant_. * **InternLM-XComposer2-VL-7B** 🤗 : The multi-task trained VLLM model with InternLM-7B as the initialization of the LLM for _VL benchmarks_ and _AI assistant_. **It ranks as the most powerful vision-language model based on 7B-parameter level LLMs, leading across 13 benchmarks.** * **InternLM-XComposer2-VL-1.8B** 🤗 : A lightweight version of InternLM-XComposer2-VL based on InternLM-1.8B. * **InternLM-XComposer2-7B** 🤗: The further instruction tuned VLLM for _Interleaved Text-Image Composition_ with free-form inputs. Please refer to Technical Report and 4KHD Technical Reportfor more details.

awesome-llm
Awesome LLM is a curated list of resources related to Large Language Models (LLMs), including models, projects, datasets, benchmarks, materials, papers, posts, GitHub repositories, HuggingFace repositories, and reading materials. It provides detailed information on various LLMs, their parameter sizes, announcement dates, and contributors. The repository covers a wide range of LLM-related topics and serves as a valuable resource for researchers, developers, and enthusiasts interested in the field of natural language processing and artificial intelligence.

LLM-Agent-Survey
Autonomous agents are designed to achieve specific objectives through self-guided instructions. With the emergence and growth of large language models (LLMs), there is a growing trend in utilizing LLMs as fundamental controllers for these autonomous agents. This repository conducts a comprehensive survey study on the construction, application, and evaluation of LLM-based autonomous agents. It explores essential components of AI agents, application domains in natural sciences, social sciences, and engineering, and evaluation strategies. The survey aims to be a resource for researchers and practitioners in this rapidly evolving field.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.