LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents

AI Agent Development Platform - Supports multiple models (OpenAI/DeepSeek/Wenxin/Tongyi), knowledge base management, workflow automation, and enterprise-grade security. Built with Flask + Vue3 + LangChain, featuring one-click Docker deployment.

LMForge is an end-to-end LLMOps platform for building and operating multi-model agents. It lets teams connect multiple LLM providers (OpenAI, DeepSeek, Wenxin, Tongyi), manage knowledge bases, automate workflows, and deploy agents with enterprise-grade security. The platform also includes monitoring and team-collaboration features, making it a practical tool for developers shipping LLM-backed applications.

README:

🚀 LLMOps - Large Language Model Operations Platform

English | Chinese · Online demo: http://82.157.66.198/

🔐 Critical Configuration

Before deployment, you MUST configure:

  1. Copy the environment template:

    cp .env.example .env
  2. Edit .env with your actual credentials:

# ===== REQUIRED =====
# PostgreSQL Database
SQLALCHEMY_DATABASE_URI=postgresql://postgres:your_strong_password@db:5432/llmops

# Redis Configuration
REDIS_PASSWORD=your_redis_password

# JWT Secret (Generate with: openssl rand -hex 32)
JWT_SECRET_KEY=your_jwt_secret_key_here

# ===== AI PROVIDERS =====
# Configure at least one LLM provider
MOONSHOT_API_KEY=sk-your-moonshot-key
DEEPSEEK_API_KEY=sk-your-deepseek-key
OPENAI_API_KEY=sk-your-openai-key
DASHSCOPE_API_KEY=sk-your-dashscope-key


# ===== OPTIONAL SERVICES =====
# Vector DB (Choose one)
PINECONE_API_KEY=your-pinecone-key
WEAVIATE_API_KEY=your-weaviate-key

# Third-party Services
GAODE_API_KEY=your-gaode-map-key
GITHUB_CLIENT_ID=your-github-oauth-id
GITHUB_CLIENT_SECRET=your-github-oauth-secret
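Before launching, it can help to verify that the required variables above are actually set. The following is a minimal, hypothetical sketch (not part of LMForge; the function name `check_env` is an assumption) that checks the required entries and confirms at least one provider key is present:

```python
import os

# Names mirror the .env template above.
REQUIRED_VARS = [
    "SQLALCHEMY_DATABASE_URI",
    "REDIS_PASSWORD",
    "JWT_SECRET_KEY",
]

# At least one LLM provider key must be configured.
PROVIDER_VARS = [
    "MOONSHOT_API_KEY",
    "DEEPSEEK_API_KEY",
    "OPENAI_API_KEY",
    "DASHSCOPE_API_KEY",
]

def check_env(env: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = [f"missing {v}" for v in REQUIRED_VARS if not env.get(v)]
    if not any(env.get(v) for v in PROVIDER_VARS):
        problems.append("configure at least one LLM provider key")
    return problems

if __name__ == "__main__":
    for issue in check_env(dict(os.environ)):
        print(f"[env] {issue}")
```

Running this against a fully populated environment prints nothing; each missing entry is reported on its own line.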

🚀 Quick Deployment

Prerequisites

  • Docker 20.10+
  • Docker Compose 2.0+
  • Minimum 8GB RAM

One-Command Setup

# Clone repository
git clone https://github.com/Haohao-end/LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents.git
cd LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents/docker

# Configure environment
cp .env.example .env
nano .env  # Fill with your actual credentials

# Launch services
docker compose up -d --build

Service Endpoints

Service        Access URL
Web UI         http://localhost:3000
API Gateway    http://localhost:80
Swagger Docs   http://localhost:80/docs

🛠️ Configuration Guide

1. Database Setup

Ensure persistence in docker-compose.yaml:

services:
  db:
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:

2. Multi-Provider Setup

Comment out unused providers in .env:

# Enable OpenAI
OPENAI_API_KEY=sk-xxx
# OPENAI_API_BASE=https://your-proxy.com/v1

# Disable Wenxin
# WENXIN_YIYAN_API_KEY=sk-xxx
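Commenting a key out of .env leaves the corresponding variable unset, which is how the backend can decide which providers are active. A hypothetical sketch of that logic (the `enabled_providers` helper and the variable-to-provider mapping are assumptions, not LMForge's actual code):

```python
# Map of env-var name -> provider label; variable names match the .env template.
PROVIDER_KEYS = {
    "OPENAI_API_KEY": "openai",
    "DEEPSEEK_API_KEY": "deepseek",
    "MOONSHOT_API_KEY": "moonshot",
    "DASHSCOPE_API_KEY": "tongyi",
    "WENXIN_YIYAN_API_KEY": "wenxin",
}

def enabled_providers(env: dict) -> list[str]:
    """Providers whose API key is set (i.e. not commented out in .env)."""
    return [name for var, name in PROVIDER_KEYS.items() if env.get(var)]
```

With only OPENAI_API_KEY set, `enabled_providers` returns `["openai"]`; a commented-out Wenxin key simply never appears in the environment.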

3. Security Best Practices

  • Always change default passwords in production

  • Enable CSRF protection:

    WTF_CSRF_ENABLED=True
    WTF_CSRF_SECRET_KEY=your_csrf_secret
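If openssl is not available, the same secrets can be generated from Python's standard library. This small sketch mirrors the `openssl rand -hex 32` command mentioned above (the `make_secret` helper name is an assumption):

```python
import secrets

def make_secret(n_bytes: int = 32) -> str:
    """Generate a cryptographically secure hex secret.

    Equivalent to `openssl rand -hex 32`: 32 random bytes as 64 hex chars,
    suitable for JWT_SECRET_KEY or WTF_CSRF_SECRET_KEY.
    """
    return secrets.token_hex(n_bytes)

if __name__ == "__main__":
    print(f"JWT_SECRET_KEY={make_secret()}")
```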

📊 System Architecture

graph TD
    A[Client] --> B[Nginx 80]
    B --> C[Frontend Vue.js 3000]
    B --> D[Backend Flask 5001]
    D --> E[(PostgreSQL)]
    D --> F[(Redis)]
    D --> G[Celery Worker]
    G --> H[AI Providers]

🔧 Troubleshooting

Q: How to check service logs?

docker compose logs -f

Q: How to update environment variables?

docker compose down
nano .env  # Modify configurations
docker compose up -d

Q: Port conflicts? Modify port mappings in docker-compose.yaml:

ports:
  - "8080:80"  # Change host port to 8080

📜 License

MIT License | Copyright © 2025 Open-Coze Team
