any-llm

Communicate with an LLM provider using a single interface

The `any-llm` repository provides a unified API to access different LLM (Large Language Model) providers. It offers a simple and developer-friendly interface, leveraging official provider SDKs for compatibility and maintenance. The tool is framework-agnostic, actively maintained, and does not require a proxy or gateway server. It addresses challenges in API standardization and aims to provide a consistent interface for various LLM providers, overcoming limitations of existing solutions like LiteLLM, AISuite, and framework-specific integrations.

README:


any-llm

Read the Blog Post



A single interface for using different LLM providers.

Key Features

any-llm offers:

  • Simple, unified interface - one function for all providers, switch models with just a string change
  • Developer friendly - full type hints for better IDE support and clear, actionable error messages
  • Leverages official provider SDKs when available, reducing maintenance burden and ensuring compatibility
  • Stays framework-agnostic so it can be used across different projects and use cases
  • Actively maintained - we use this in our own product (any-agent) ensuring continued support
  • No proxy or gateway server required, so you don't have to set up or operate an extra service between your code and whichever LLM provider you need.

Motivation

The landscape of LLM provider interfaces presents a fragmented ecosystem with several challenges that any-llm aims to address:

The Challenge with API Standardization:

While the OpenAI API has become the de facto standard for LLM provider interfaces, providers implement slight variations. Some providers are fully OpenAI-compatible, while others may have different parameter names, response formats, or feature sets. This creates a need for light wrappers that can gracefully handle these differences while maintaining a consistent interface.

Existing Solutions and Their Limitations:

  • LiteLLM: While popular, it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior changes.
  • AISuite: Offers a clean, modular approach but lacks active maintenance, comprehensive testing, and modern Python typing standards.
  • Framework-specific solutions: Some agent frameworks either depend on LiteLLM or implement their own provider integrations, creating fragmentation.
  • Proxy-only solutions: Services like OpenRouter and Portkey require a hosted proxy to sit between your code and the LLM provider.

Demos

Try any-llm in action with our interactive demos:

💬 Chat Demo

📂 Run the Chat Demo

An interactive chat interface showcasing streaming completions and provider switching:

  • Real-time streaming responses with character-by-character display
  • Support for multiple LLM providers with easy switching
  • Collapsible "thinking" content display for supported models
  • Clean chat interface with auto-scrolling

🔍 Model Finder Demo

📂 Run the Model Finder Demo

A model discovery tool that helps you find AI models across different providers:

  • Search and filter models across all your configured providers
  • Provider status dashboard showing which APIs you have configured

Quickstart

Requirements

  • Python 3.11 or newer
  • An API key for whichever LLM provider you choose to use.

Installation

When installing, include the extras for the providers you plan to use, or use the all extra to install support for every provider that any-llm supports.

pip install 'any-llm-sdk[mistral,ollama]'

Make sure you have the appropriate API key environment variable set for your provider. Alternatively, you could use the api_key parameter when making a completion call instead of setting an environment variable.

export MISTRAL_API_KEY="YOUR_KEY_HERE"  # or OPENAI_API_KEY, etc

Basic Usage

any-llm offers two main approaches for interacting with LLM providers:

Option 1: Direct API Functions (Recommended for Bootstrapping and Experimentation)

Recommended approach: Use separate provider and model parameters:

from any_llm import completion
import os

# Make sure you have the appropriate environment variable set
assert os.environ.get('MISTRAL_API_KEY')

response = completion(
    model="mistral-small-latest",
    provider="mistral",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Alternative syntax: You can also use the combined provider:model format:

response = completion(
    model="mistral:mistral-small-latest", # <provider_id>:<model_id>
    messages=[{"role": "user", "content": "Hello!"}]
)
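Conceptually, the combined form is just a provider id and a model id joined by a colon. A minimal sketch of that convention (the helper name is ours, not part of any-llm's API; we assume the split happens on the first colon so model ids containing colons stay intact):

```python
def split_model_spec(spec: str) -> tuple[str, str]:
    """Split "<provider_id>:<model_id>" on the first colon.

    Splitting on the first colon only keeps model ids that themselves
    contain colons (e.g. some ollama tags) intact.
    """
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected '<provider_id>:<model_id>', got {spec!r}")
    return provider, model

print(split_model_spec("mistral:mistral-small-latest"))
# ('mistral', 'mistral-small-latest')
```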

Option 2: AnyLLM Class (Recommended for Production Use Cases)

For applications that need to reuse providers, perform multiple operations, or require more control:

from any_llm import AnyLLM

llm = AnyLLM.create("mistral", api_key="your-mistral-api-key")

response = llm.completion({
    "model_id": "mistral-small-latest",
    "messages": [{"role": "user", "content": "Hello!"}]
})

When to Use Which Approach

Use Direct API Functions when:

  • Making simple, one-off requests
  • Prototyping or quick scripts
  • You want the simplest possible interface

Use Provider Class when:

  • Building applications that make multiple requests with the same provider
  • You want to avoid repeated provider instantiation overhead
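The instantiation-overhead point can be made concrete with a toy stand-in (FakeProvider is hypothetical, not part of any-llm): the function style constructs a provider on every call, while the class style constructs it once and reuses it.

```python
class FakeProvider:
    """Toy stand-in for a provider client; counts constructions."""
    instances = 0

    def __init__(self):
        FakeProvider.instances += 1  # pretend this is expensive setup

    def completion(self, prompt: str) -> str:
        return f"echo: {prompt}"

def one_off_completion(prompt: str) -> str:
    # Direct-function style: a fresh provider for every call
    return FakeProvider().completion(prompt)

one_off_completion("hi")
one_off_completion("again")
print(FakeProvider.instances)  # 2 providers constructed for 2 calls

llm = FakeProvider()  # class style: construct once, reuse
llm.completion("hi")
llm.completion("again")
print(FakeProvider.instances)  # only 1 more provider for 2 calls
```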

The provider_id should be one of the provider ids supported by any-llm. The model_id portion is passed directly to the provider; to see which model ids a provider offers, refer to that provider's documentation, or use our list_models API if the provider supports it.

Responses API

For providers that implement the OpenAI-style Responses API, use responses or aresponses:

from any_llm import responses

result = responses(
    model="gpt-4o-mini",
    provider="openai",
    input_data=[
        {"role": "user", "content": [
            {"type": "text", "text": "Summarize this in one sentence."}
        ]}
    ],
)

# Non-streaming returns an OpenAI-compatible Responses object alias
print(result.output_text)
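The input_data payload above is a list of role/content dictionaries. A hypothetical convenience for building a single-text user message in that shape (the helper is ours for illustration, not part of any-llm's API):

```python
def user_text(text: str) -> dict:
    # OpenAI-style Responses input: one user message with one text part
    return {"role": "user", "content": [{"type": "text", "text": text}]}

print(user_text("Summarize this in one sentence."))
```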
