bifrost

The fastest LLM gateway with built-in OTel observability and an MCP gateway


Bifrost is a high-performance AI gateway that unifies access to multiple providers through a single OpenAI-compatible API. It offers automatic failover, load balancing, semantic caching, and enterprise-grade features such as governance and budget management, and it can be deployed in seconds with zero configuration. Its modular repository structure lets it run as a standalone HTTP gateway, be embedded as a Go SDK, or act as a drop-in replacement for existing provider SDKs.

README:

Bifrost

The fastest way to build AI applications that never go down

Bifrost is a high-performance AI gateway that unifies access to 12+ providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and more) through a single OpenAI-compatible API. Deploy in seconds with zero configuration and get automatic failover, load balancing, semantic caching, and enterprise-grade features.

Quick Start

Get started

Go from zero to a production-ready AI gateway in under a minute.

Step 1: Start Bifrost Gateway

# Install and run locally
npx -y @maximhq/bifrost

# Or use Docker
docker run -p 8080:8080 maximhq/bifrost

Step 2: Configure via Web UI

# Open the built-in web interface
open http://localhost:8080

Step 3: Make your first API call

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Bifrost!"}]
  }'

That's it! Your AI gateway is running with a web interface for visual configuration, real-time monitoring, and analytics.
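
The same request can be sent from any HTTP client. Below is a minimal Python sketch mirroring the curl call above; it assumes the gateway is running locally on port 8080, that a provider key has already been configured via the web UI, and that the response follows the standard OpenAI chat-completion shape.

import requests

# Same request as the curl example above, routed through the local Bifrost gateway.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello, Bifrost!"}],
    },
    timeout=30,
)
response.raise_for_status()

# OpenAI-compatible response shape (assumption based on the compatible API).
print(response.json()["choices"][0]["message"]["content"])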

Complete Setup Guides:


Key Features

Core Infrastructure

  • Unified Interface - Single OpenAI-compatible API for all providers
  • Multi-Provider Support - OpenAI, Anthropic, AWS Bedrock, Google Vertex, Azure, Cohere, Mistral, Ollama, Groq, and more (see the sketch after this list)
  • Automatic Fallbacks - Seamless failover between providers and models with zero downtime
  • Load Balancing - Intelligent request distribution across multiple API keys and providers
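
Because every provider sits behind the same OpenAI-compatible endpoint, switching providers is just a change to the model string. A rough sketch, assuming the gateway from the quick start is running locally and that both providers are configured (the Anthropic model id below is a hypothetical placeholder):

import requests

def ask(model: str, prompt: str) -> str:
    # One endpoint for every provider; the provider is selected by the
    # "provider/model" prefix in the model field.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("openai/gpt-4o-mini", "Hello via OpenAI"))
print(ask("anthropic/claude-3-5-sonnet", "Hello via Anthropic"))  # hypothetical model id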

Advanced Features

  • Model Context Protocol (MCP) - Enable AI models to use external tools (filesystem, web search, databases)
  • Semantic Caching - Intelligent response caching based on semantic similarity to reduce costs and latency (a toy sketch follows this list)
  • Multimodal Support - Text, images, audio, and streaming, all behind a common interface
  • Custom Plugins - Extensible middleware architecture for analytics, monitoring, and custom logic
  • Governance - Usage tracking, rate limiting, and fine-grained access control
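
As a rough illustration of the semantic-caching idea (a toy sketch, not Bifrost's implementation), a cache can store an embedding next to each response and serve the cached answer when a new prompt's embedding is similar enough; producing the embedding is left as a placeholder step:

import math

class SemanticCache:
    # Toy semantic cache: stores (embedding, response) pairs and returns a cached
    # response when a new prompt's embedding is close enough to a stored one.
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def get(self, embedding):
        for cached_embedding, response in self.entries:
            if self._cosine(embedding, cached_embedding) >= self.threshold:
                return response  # semantically similar prompt seen before
        return None

    def put(self, embedding, response):
        self.entries.append((embedding, response))

In Bifrost itself this logic lives behind the semanticcache plugin and the configured vector store (see the repository structure below).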

Enterprise & Security

  • Budget Management - Hierarchical cost control with virtual keys, teams, and customer budgets
  • SSO Integration - Google and GitHub authentication support
  • Observability - Native Prometheus metrics, distributed tracing, and comprehensive logging
  • Vault Support - Secure API key management with HashiCorp Vault integration

Developer Experience


Repository Structure

Bifrost uses a modular architecture for maximum flexibility:

bifrost/
├── npx/                 # NPX script for easy installation
├── core/                # Core functionality and shared components
│   ├── providers/       # Provider-specific implementations (OpenAI, Anthropic, etc.)
│   ├── schemas/         # Interfaces and structs used throughout Bifrost
│   └── bifrost.go       # Main Bifrost implementation
├── framework/           # Framework components for data persistence
│   ├── configstore/     # Configuration storage backends
│   ├── logstore/        # Request log storage backends
│   └── vectorstore/     # Vector storage backends
├── transports/          # HTTP gateway and other interface layers
│   └── bifrost-http/    # HTTP transport implementation
├── ui/                  # Web interface for HTTP gateway
├── plugins/             # Extensible plugin system
│   ├── governance/      # Budget management and access control
│   ├── jsonparser/      # JSON parsing and manipulation utilities
│   ├── logging/         # Request logging and analytics
│   ├── maxim/           # Maxim's observability integration
│   ├── mocker/          # Mock responses for testing and development
│   ├── semanticcache/   # Intelligent response caching
│   └── telemetry/       # Monitoring and observability
├── docs/                # Documentation and guides
└── tests/               # Comprehensive test suites

Getting Started Options

Choose the deployment method that fits your needs:

1. Gateway (HTTP API)

Best for: Language-agnostic integration, microservices, and production deployments

# NPX - Get started in 30 seconds
npx -y @maximhq/bifrost

# Docker - Production ready
docker run -p 8080:8080 -v $(pwd)/data:/app/data maximhq/bifrost

Features: Web UI, real-time monitoring, multi-provider management, zero-config startup

Learn More: Gateway Setup Guide

2. Go SDK

Best for: Direct Go integration with maximum performance and control

go get github.com/maximhq/bifrost/core

Features: Native Go APIs, embedded deployment, custom middleware integration

Learn More: Go SDK Guide

3. Drop-in Replacement

Best for: Migrating existing applications with zero code changes

# OpenAI SDK
- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"

# Anthropic SDK  
- base_url = "https://api.anthropic.com"
+ base_url = "http://localhost:8080/anthropic"

# Google GenAI SDK
- api_endpoint = "https://generativelanguage.googleapis.com"
+ api_endpoint = "http://localhost:8080/genai"
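
For example, with the OpenAI Python SDK the only change is the base_url; the api_key below is a placeholder (provider credentials are managed by the gateway) and the model name is illustrative:

from openai import OpenAI

# Point the existing OpenAI SDK client at Bifrost instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8080/openai",  # Bifrost's OpenAI-compatible route
    api_key="placeholder",                    # credentials are handled by the gateway
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, Bifrost!"}],
)
print(response.choices[0].message.content)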

Learn More: Integration Guides


Performance

Bifrost adds virtually zero overhead to your AI requests. In sustained 5,000 RPS benchmarks, the gateway added only 11 µs of overhead per request on a t3.xlarge instance (59 µs on a t3.medium).

Metric                                  t3.medium   t3.xlarge   Improvement
Added latency (Bifrost overhead)        59 µs       11 µs       -81%
Success rate @ 5k RPS                   100%        100%        No failed requests
Avg. queue wait time                    47 µs       1.67 µs     -96%
Avg. request latency (incl. provider)   2.12 s      1.61 s      -24%

Key Performance Highlights:

  • Perfect Success Rate - 100% request success rate even at 5k RPS
  • Minimal Overhead - Less than 15 µs of added latency per request on a t3.xlarge
  • Efficient Queuing - Average queue wait under 2 µs on a t3.xlarge
  • Fast Key Selection - ~10 ns to pick a weighted API key (a conceptual sketch follows this list)
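
Weighted key selection itself is conceptually simple. The sketch below illustrates the idea only; the key names and weights are made up and this is not Bifrost's implementation:

import random

# Toy weighted selection over API keys: higher-weight keys are picked
# proportionally more often, spreading load across them.
keys = {"key-a": 0.6, "key-b": 0.3, "key-c": 0.1}  # hypothetical keys and weights

def pick_key() -> str:
    names = list(keys)
    weights = list(keys.values())
    return random.choices(names, weights=weights, k=1)[0]

print(pick_key())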

Complete Benchmarks: Performance Analysis


Documentation

Complete Documentation: https://docs.getbifrost.ai

Quick Start

Features

Integrations

Enterprise


Need Help?

Join our Discord for community support and discussions.

Get help with:

  • Quick setup assistance and troubleshooting
  • Best practices and configuration tips
  • Community discussions and support
  • Real-time help with integrations

Contributing

We welcome contributions of all kinds! See our Contributing Guide for:

  • Setting up the development environment
  • Code conventions and best practices
  • How to submit pull requests
  • Building and testing locally

For development requirements and build instructions, see our Development Setup Guide.


License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Built with ❤️ by Maxim
