llmgateway

Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.

Stars: 485


llmgateway is an open-source gateway that sits between your applications and LLM providers. Instead of integrating each provider's SDK separately, you send requests to a single OpenAI-compatible endpoint and the gateway routes them to the configured provider, manages API keys in one place, and records token usage, costs, and performance metrics for every request.

README:

LLM Gateway

LLM Gateway is an open-source API gateway for Large Language Models (LLMs). It acts as a middleware between your applications and various LLM providers, allowing you to:

  • Route requests to multiple LLM providers (OpenAI, Anthropic, Google Vertex AI, and others)
  • Manage API keys for different providers in one place
  • Track token usage and costs across all your LLM interactions
  • Analyze performance metrics to optimize your LLM usage

Features

  • Unified API Interface: Compatible with the OpenAI API format for seamless migration (see the sketch after this list)
  • Usage Analytics: Track requests, tokens used, response times, and costs
  • Multi-provider Support: Connect to various LLM providers through a single gateway
  • Performance Monitoring: Compare different models' performance and cost-effectiveness
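
Because the gateway speaks the OpenAI chat-completions format, an existing OpenAI client can usually be repointed at it. Below is a minimal TypeScript sketch, assuming the official openai npm package and that the gateway accepts the same base path used in the curl example further down (https://api.llmgateway.io/v1); both are assumptions, not confirmed configuration.

import OpenAI from "openai";

// Point the OpenAI SDK at the gateway instead of api.openai.com.
// LLM_GATEWAY_API_KEY is the key issued by LLM Gateway, not a provider key.
const client = new OpenAI({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
  baseURL: "https://api.llmgateway.io/v1", // assumed gateway base URL
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello, how are you?" }],
});

console.log(completion.choices[0].message.content);

Switching providers would then come down to changing the model string in the request, while usage for every model shows up in the same analytics.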

Getting Started

You can use LLM Gateway in two ways:

  • Hosted Version: For immediate use without setup, visit llmgateway.io to create an account and get an API key.
  • Self-Hosted: Deploy LLM Gateway on your own infrastructure for complete control over your data and configuration.

Using LLM Gateway API

curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Hello, how are you?"}
  ]
}'
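
The same request can be made without an SDK as a plain HTTP call. A TypeScript sketch using built-in fetch (Node 18+); the response handling assumes the gateway mirrors the OpenAI chat-completions response schema, as the unified-API feature suggests.

const response = await fetch("https://api.llmgateway.io/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello, how are you?" }],
  }),
});

// Assumed OpenAI-style response body: { choices: [{ message: { content } }], ... }
const data = await response.json();
console.log(data.choices[0].message.content);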

Development Setup

  1. Install dependencies and set up the development environment:

    pnpm i && pnpm run setup

    This will install all dependencies, start Docker services, sync the database schema, and seed initial data.

    Note for WSL2 users: Ensure Docker Desktop is running with WSL integration enabled.

  2. Start development servers:

    pnpm dev

  3. Build for production:

    pnpm build

Folder Structure

  • apps/ui: Vite + React frontend
  • apps/api: Hono backend
  • apps/gateway: API gateway for routing LLM requests
  • apps/docs: Documentation site
  • packages/db: Drizzle ORM schema and migrations
  • packages/models: Model and provider definitions
  • packages/shared: Shared types and utilities

License

LLMGateway is available under a dual license:

  • Open Source: Core functionality is licensed under AGPLv3 - see the LICENSE file for details.
  • Enterprise: Commercial features in the ee/ directory require an Enterprise license - see ee/LICENSE for details.

Enterprise features include:

  • Advanced billing and subscription management
  • Extended data retention (90 days vs 3 days)
  • Provider API key management (Pro plan)
  • Team and organization management
  • Priority support
  • And more to be defined

For enterprise licensing, please contact us at [email protected]
