celeste-python

Open source, type-safe primitives for multi-modal AI. All modalities, all providers, one interface 🌟

Stars: 204


Celeste AI is a type-safe, modality- and provider-agnostic tool that offers a unified interface for providers such as OpenAI, Anthropic, Gemini, Mistral, and more. It supports multiple modalities, including text, image, audio, video, and embeddings, with full Pydantic validation and IDE autocomplete. Users can switch providers instantly, with zero lock-in and a lightweight architecture. The tool provides primitives, not frameworks, for clean I/O operations.

README:

Celeste AI


The primitive layer for multi-modal AI

All modalities. All providers. One interface.

Primitives, not frameworks.


Follow @withceleste on LinkedIn

Quick Start • Request Provider

🚀 This is the v1 Beta release. We're validating the new architecture before the stable v1.0 release. Feedback welcome!


Type-safe, modality/provider-agnostic primitives.

  • Unified Interface: One API for OpenAI, Anthropic, Gemini, Mistral, and 14+ others.
  • True Multi-Modal: Text, Image, Audio, Video, Embeddings, Search – all first-class citizens.
  • Type-Safe by Design: Full Pydantic validation and IDE autocomplete.
  • Zero Lock-In: Switch providers instantly by changing a single config string.
  • Primitives, Not Frameworks: No agents, no chains, no magic. Just clean I/O.
  • Lightweight Architecture: No vendor SDKs. Pure, fast HTTP.

🚀 Quick Start

import celeste

# One SDK. Every modality. Any provider.
text   = await celeste.text.generate("Explain quantum computing", model="claude-opus-4-5")
image  = await celeste.images.generate("A serene mountain lake at dawn", model="flux-2-pro")
speech = await celeste.audio.speak("Welcome to the future", model="eleven_v3")
video  = await celeste.videos.analyze(video_file, prompt="Summarize this clip", model="gemini-3-pro")
embeddings = await celeste.text.embed(["lorem ipsum", "dolor sit amet"], model="gemini-embedding-001")
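
Each of these calls is a coroutine, so it needs a running event loop. Outside a notebook or async REPL, wrap the call in asyncio.run; a minimal sketch:

import asyncio
import celeste

async def main() -> None:
    # Same call as above, driven by an explicit event loop.
    text = await celeste.text.generate(
        "Explain quantum computing",
        model="claude-opus-4-5",
    )
    print(text.content)

asyncio.run(main())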

15+ providers. Zero lock-in.

Google OpenAI Mistral Anthropic Cohere xAI DeepSeek Ollama Groq ElevenLabs BytePlus Black Forest Labs

and many more

Missing a provider? Request it – ⚡ we ship fast.


Operations by Domain

Action       Text   Images   Audio   Video
Generate     ✓      ✓        ○       ✓
Edit         —      ✓        —       —
Analyze      —      ✓        ✓       ✓
Upscale      —      ○        —       ○
Speak        —      —        ✓       —
Transcribe   —      —        ✓       —
Embed        ✓      ○        —       ○

✓ Available · ○ Planned · — Not available
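
The operations above follow the same celeste.<modality>.<operation> naming used in the Quick Start. A sketch of two of them; the transcribe and edit call shapes below are inferred from the table, so treat the exact signatures as assumptions:

import celeste

# Hypothetical calls inferred from the table above; parameter
# names may differ in the released API. audio_file and image_file
# are placeholder inputs, like video_file in the Quick Start.
transcript = await celeste.audio.transcribe(audio_file, model="scribe_v1")
edited = await celeste.images.edit(image_file, prompt="Remove the background", model="flux-2-pro")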

🔄 Switch providers in one line

from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Model IDs
anthropic_model_id = "claude-sonnet-4-5"
google_model_id = "gemini-2.5-flash"

# ❌ Anthropic Way
from anthropic import Anthropic
import json

client = Anthropic()
response = client.messages.create(
    model=anthropic_model_id,
    max_tokens=1024,  # required by the Messages API
    messages=[
        {"role": "user",
         "content": "Extract user info: John is 30"}
    ],
    output_format={
        "type": "json_schema",
        "schema": User.model_json_schema()
    }
)
user_data = json.loads(response.content[0].text)

# ❌ Google Gemini Way
from google import genai
from google.genai import types

client = genai.Client()
response = await client.aio.models.generate_content(
    model=google_model_id,
    contents="Extract user info: John is 30",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=User
    )
)
user = response.parsed

# ✅ Celeste Way
import celeste

response = await celeste.text.generate(
    "Extract user info: John is 30",
    model=google_model_id,  # <--- Choose any model from any provider
    output_schema=User,  # <--- Unified parameter working across all providers
)
user = response.content  # Already parsed as User instance
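
Swapping providers on the same call is just the model string; a sketch reusing the IDs defined above:

response = await celeste.text.generate(
    "Extract user info: John is 30",
    model=anthropic_model_id,  # Same call, different provider
    output_schema=User,
)
user = response.content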

βš™οΈ Advanced: Create Client

For explicit configuration or client reuse, use create_client with modality + operation. This is modality-first: you choose the output type and operation explicitly.

from celeste import create_client, Modality, Operation, Provider

client = create_client(
    modality=Modality.TEXT,
    operation=Operation.GENERATE,
    provider=Provider.OLLAMA,
    model="llama3.2",
)
response = await client.generate("Extract user info: John is 30", output_schema=User)

The capability parameter is still supported but deprecated; prefer modality + operation.
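
Holding a client also makes reuse across many requests straightforward; a minimal sketch, assuming client.generate can be awaited concurrently:

import asyncio

async def summarize_all(texts: list[str]) -> list[str]:
    # Fan several generate calls out over the same client.
    responses = await asyncio.gather(
        *(client.generate(f"Summarize: {text}") for text in texts)
    )
    return [r.content for r in responses]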


🪶 Install

uv add celeste-ai
# or
pip install celeste-ai

🔧 Type-Safe by Design

# Full IDE autocomplete
import celeste

response = await celeste.text.generate(
    "Explain AI",
    model="gpt-4o-mini",
    temperature=0.7,    # ✅ Validated (0.0-2.0)
    max_tokens=100,     # ✅ Validated (int)
)

# Typed response
print(response.content)              # str (IDE knows the type)
print(response.usage.input_tokens)   # int
print(response.metadata["model"])     # str

Catch errors before production.
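
For example, an out-of-range temperature should fail locally, before any network call. A sketch; treating Pydantic's ValidationError as the raised type is an assumption:

from pydantic import ValidationError

try:
    await celeste.text.generate("Explain AI", model="gpt-4o-mini", temperature=5.0)
except ValidationError as err:
    # Rejected client-side: temperature must be within 0.0-2.0.
    print(err)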


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md.

Request a provider: GitHub Issues
Report bugs: GitHub Issues


📄 License

MIT license – see LICENSE for details.

Get Started • Documentation • GitHub

Made with ❤️ by developers tired of framework lock-in
