Celeste AI

Celeste Logo

The primitive layer for multi-modal AI

All capabilities. All providers. One interface.

Primitives, not frameworks.


Follow @withceleste on LinkedIn

Quick Start • Request Provider


Type-safe, provider-agnostic primitives for every AI capability.

  • Unified Interface: One API for OpenAI, Anthropic, Gemini, Mistral, and 14+ others.
  • True Multi-Modal: Text, Image, Audio, Video, Embeddings, Search – all first-class citizens.
  • Type-Safe by Design: Full Pydantic validation and IDE autocomplete.
  • Zero Lock-In: Switch providers instantly by changing a single config string.
  • Primitives, Not Frameworks: No agents, no chains, no magic. Just clean I/O.
  • Lightweight Architecture: No vendor SDKs. Pure, fast HTTP.

🚀 Quick Start

from celeste import create_client


# "We need a catchy slogan for our new eco-friendly sneaker."
client = create_client(
    capability="text-generation",
    model="gpt-5"
)
slogan = await client.generate("Write a slogan for an eco-friendly sneaker.")
print(slogan.content)
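
The snippet above awaits at the top level, which works in notebooks and async REPLs. In a plain script it needs an event loop; a minimal sketch using only the standard library and the same call:

import asyncio

from celeste import create_client


async def main() -> None:
    client = create_client(
        capability="text-generation",
        model="gpt-5"
    )
    slogan = await client.generate("Write a slogan for an eco-friendly sneaker.")
    print(slogan.content)


asyncio.run(main())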

🎨 Multimodal example

from celeste import create_client, Capability
from pydantic import BaseModel

# Continues from the Quick Start above and reuses `slogan`.

class ProductCampaign(BaseModel):
    visual_prompt: str
    audio_script: str

# 2. Extract Campaign Assets (Anthropic)
# -----------------------------------------------------
extract_client = create_client(Capability.TEXT_GENERATION, model="claude-opus-4-1")
campaign_output = await extract_client.generate(
    f"Create campaign assets for slogan: {slogan.content}",
    output_schema=ProductCampaign
)
campaign = campaign_output.content

# 3. Generate Ad Visual (Flux)
# -----------------------------------------------------
image_client = create_client(Capability.IMAGE_GENERATION, model="flux-2-flex")
image_output = await image_client.generate(
    campaign.visual_prompt,
    aspect_ratio="1:1"
)
image = image_output.content

# 4. Generate Radio Spot (ElevenLabs)
# -----------------------------------------------------
speech_client = create_client(Capability.SPEECH_GENERATION, model="eleven_v3")
speech_output = await speech_client.generate(
    campaign.audio_script,
    voice="adam"
)
speech = speech_output.content

No special cases. No separate libraries. One consistent interface.
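
Since every modality goes through the same two calls, the pipeline above collapses into one small helper. generate_asset below is a hypothetical convenience function, not part of Celeste; it only rearranges the create_client and generate calls shown in this README:

from celeste import create_client, Capability


async def generate_asset(capability: Capability, model: str, prompt: str, **options):
    # Same shape for text, images, and speech: build a client, then generate.
    client = create_client(capability, model=model)
    output = await client.generate(prompt, **options)
    return output.content


# Usage (hypothetical): await generate_asset(Capability.IMAGE_GENERATION, "flux-2-flex",
#                                            campaign.visual_prompt, aspect_ratio="1:1")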


15+ providers. Zero lock-in.

Google Anthropic OpenAI Mistral Cohere xAI DeepSeek Groq Perplexity Ollama Hugging Face Replicate Stability AI Runway ElevenLabs

and many more

Missing a provider? Request it – ⚡ we ship fast.


🔄 Switch providers in one line

from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Model IDs
anthropic_model_id = "claude-sonnet-4-5"
google_model_id = "gemini-2.5-flash"
# ❌ Anthropic Way
from anthropic import Anthropic
import json

client = Anthropic()
response = client.messages.create(
    model=anthropic_model_id,
    max_tokens=1024,
    messages=[
        {"role": "user",
         "content": "Extract user info: John is 30"}
    ],
    output_format={
        "type": "json_schema",
        "schema": User.model_json_schema()
    }
)
user_data = json.loads(response.content[0].text)
# ❌ Google Gemini Way
from google import genai
from google.genai import types

client = genai.Client()
response = await client.aio.models.generate_content(
    model=google_model_id,
    contents="Extract user info: John is 30",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=User
    )
)
user = response.parsed
# ✅ Celeste Way
from celeste import create_client, Capability


client = create_client(
    Capability.TEXT_GENERATION,
    model=google_model_id  # <--- Choose any model from any provider
)

response = await client.generate(
    prompt="Extract user info: John is 30",
    output_schema=User  # <--- Unified parameter working across all providers
)
user = response.content  # Already parsed as User instance
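
And the switch itself, as advertised: keep the prompt, schema, and parsing exactly as above and change only the model string. This assumes output_schema behaves the same across providers, which is the guarantee the README is making:

client = create_client(
    Capability.TEXT_GENERATION,
    model=anthropic_model_id  # <--- the only line that changes
)
response = await client.generate(
    prompt="Extract user info: John is 30",
    output_schema=User
)
user = response.content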

🪶 Install what you need

uv add "celeste-ai[text-generation]"  # Text only
uv add "celeste-ai[image-generation]" # Image generation
uv add "celeste-ai[all]"              # Everything

🔧 Type-Safe by Design

# Full IDE autocomplete
response = await client.generate(
    prompt="Explain AI",
    temperature=0.7,    # ✅ Validated (0.0-2.0)
    max_tokens=100,     # ✅ Validated (int)
)

# Typed response
print(response.content)              # str (IDE knows the type)
print(response.usage.input_tokens)   # int
print(response.metadata["model"])     # str

Catch errors before production.
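
Concretely, "catch errors before production" means invalid parameters should fail locally rather than at the provider. A sketch of that behaviour follows; the ValidationError type is an assumption based on the Pydantic validation mentioned above, not documented Celeste API:

from pydantic import ValidationError

try:
    await client.generate(
        prompt="Explain AI",
        temperature=7.0,  # outside the 0.0-2.0 range shown above
    )
except ValidationError as err:
    # Assumption: the request is rejected before any HTTP call to the provider.
    print(err)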


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md.

Request a provider: GitHub Issues • Report bugs: GitHub Issues


📄 License

MIT license – see LICENSE for details.


Get Started • Documentation • GitHub

Made with ❤️ by developers tired of framework lock-in
