Commit e7eb953

feat: Add multi-turn conversation support and OpenResponses/Ollama providers (withceleste#111)
* feat: add multi-turn conversation support with Message and Role types

  Adds first-class conversation primitives for multi-turn workflows:

  - New types: `Message` (role + content) and `Role` enum (user/assistant/system/developer)
  - All text entrypoints now accept a `messages=` parameter:
    * Namespace: `celeste.text.generate(...)`, `celeste.text.stream.generate(...)`, `celeste.text.sync.generate(...)`
    * Client: `create_client(modality="text", ...).generate(...)` and `.analyze(...)`
  - When `messages` are provided, they take precedence over `prompt` (which becomes optional)
  - Provider normalization handles different API formats:
    * Anthropic: system/developer messages lifted to top-level `system`
    * Google: system/developer messages lifted to `system_instruction`; assistant role becomes `model`
    * Chat-style providers: `messages=[...]` arrays (Cohere/Mistral/Groq/DeepSeek/Moonshot)
    * Responses-style providers: `input=[...]` arrays (OpenAI/xAI)
  - Foundation for agents and multi-step workflows requiring conversation state persistence

  Files changed:
  - Core types: `src/celeste/types.py`
  - Text IO: `src/celeste/modalities/text/io.py`
  - Text client: `src/celeste/modalities/text/client.py`
  - All 9 text provider clients updated
  - Namespace API: `src/celeste/namespaces/domains.py`
  - Public exports: `src/celeste/__init__.py`

* feat: add OpenResponses provider and Ollama local support

  Adds support for OpenAI Responses-compatible APIs and local model hosting:

  OpenResponses Provider:
  - New `Provider.OPENRESPONSES` implementing the OpenAI Responses API surface
  - Compatible with the `POST /v1/responses` endpoint + SSE parsing
  - Supports structured outputs via `output_schema` → `text.format = json_schema`
  - Normalizes usage fields to Celeste's unified usage model
  - Supports a `base_url=` parameter for custom API gateways

  Ollama Provider:
  - New `Provider.OLLAMA` as a local wrapper over the OpenResponses protocol
  - Default base URL: `http://localhost:11434`
  - Uses `NoAuth` authentication (no headers required)
  - Supports unregistered local models with parameter validation warnings

  Infrastructure:
  - Added a `NoAuth` class for local providers that don't require authentication
  - Added `base_url=` plumbing on text APIs for targeting local gateways
  - Supports rapid local iteration with unregistered models

  Files added:
  - `src/celeste/providers/openresponses/` (full provider implementation)
  - `src/celeste/modalities/text/providers/openresponses/` (text modality client)
  - `src/celeste/providers/ollama/` (Ollama wrapper)
  - `src/celeste/modalities/text/providers/ollama/` (Ollama text client)

  Files modified:
  - `src/celeste/core.py` (Provider enums)
  - `src/celeste/auth.py` (NoAuth)
  - `src/celeste/client.py` (base_url support)
  - `src/celeste/modalities/text/models.py` (OLLAMA_MODELS)
  - Provider exports updated

* fix: resolve mypy error and apply formatting fixes for messages feature

  - Fix variable name collision in the anthropic client (content -> prompt_content)
  - Apply ruff formatting fixes to provider clients and the namespace API

* fix: add class definition to ollama client for template contract

  - Convert OllamaClient from an alias to a proper class inheriting OpenResponsesClient
  - Apply ruff formatting fixes to openresponses provider files

* fix: exclude wrapper providers from template contract test

  - Revert OllamaClient to a simple alias (it's just a wrapper)
  - Update the test to skip wrapper providers like ollama that re-export another provider's client
  - Wrapper providers don't need to match the template contract since they delegate to the wrapped provider

* chore: bump version to 0.9.1
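The provider normalization rules listed in the commit message can be sketched in standalone form. This is a hypothetical helper for illustration (the actual Celeste normalization lives in each provider client), working on plain role/content dicts:

```python
def normalize_messages(provider: str, messages: list[dict]) -> dict:
    """Normalize role/content messages into a provider-specific request shape.

    Illustrative sketch of the rules from the commit message; not the
    actual Celeste implementation.
    """
    system_parts = [
        m["content"] for m in messages if m["role"] in ("system", "developer")
    ]
    chat = [m for m in messages if m["role"] not in ("system", "developer")]

    if provider == "anthropic":
        # System/developer messages are lifted to a top-level `system` field.
        body: dict = {"messages": chat}
        if system_parts:
            body["system"] = "\n".join(system_parts)
        return body

    if provider == "google":
        # System/developer lifted to `system_instruction`;
        # the assistant role is renamed to `model`.
        contents = [
            {
                "role": "model" if m["role"] == "assistant" else m["role"],
                "parts": [{"text": m["content"]}],
            }
            for m in chat
        ]
        body = {"contents": contents}
        if system_parts:
            body["system_instruction"] = {
                "parts": [{"text": p} for p in system_parts]
            }
        return body

    if provider in ("openai", "xai"):
        # Responses-style providers take an `input=[...]` array.
        return {"input": messages}

    # Chat-style providers (cohere/mistral/groq/deepseek/moonshot)
    # take a `messages=[...]` array as-is.
    return {"messages": messages}
```

The key design point is that system/developer turns are extracted once and then routed differently per provider, while the remaining conversation turns pass through largely unchanged.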
1 parent b849276 commit e7eb953

File tree

41 files changed: +1152 −128 lines

CHANGELOG_V1.md

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ Date: 2026-01-15
 - File updated: `README.md`.

 ## Release Prep
-- Set package version to `0.9.0` for the public v1 beta.
+- Set package version to `0.9.1` for the public v1 beta.
 - Updated development status classifier to Beta.
 - Removed notebook/scraping-only runtime deps from install requirements (ipykernel, matplotlib, beautifulsoup4).
 - File updated: `pyproject.toml`.

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "celeste-ai"
-version = "0.9.0"
+version = "0.9.1"
 description = "Open source, type-safe primitives for multi-modal AI. All capabilities, all providers, one interface"
 authors = [{name = "Kamilbenkirane", email = "kamil@withceleste.ai"}]
 readme = "README.md"

src/celeste/__init__.py

Lines changed: 4 additions & 1 deletion

@@ -50,7 +50,7 @@
     StrictJsonSchemaGenerator,
     StrictRefResolvingJsonSchemaGenerator,
 )
-from celeste.types import JsonValue
+from celeste.types import Content, JsonValue, Message, Role
 from celeste.websocket import WebSocketClient, WebSocketConnection, close_all_ws_clients

 logger = logging.getLogger(__name__)
@@ -244,10 +244,12 @@ def create_client(
     "Capability",
     "ClientNotFoundError",
     "ConstraintViolationError",
+    "Content",
     "Error",
     "HTTPClient",
     "Input",
     "JsonValue",
+    "Message",
     "MissingCredentialsError",
     "Modality",
     "ModalityClient",
@@ -259,6 +261,7 @@ def create_client(
     "Parameters",
     "Provider",
     "RefResolvingJsonSchemaGenerator",
+    "Role",
     "StreamEmptyError",
     "StreamNotExhaustedError",
     "StreamingNotSupportedError",

src/celeste/auth.py

Lines changed: 16 additions & 1 deletion

@@ -45,6 +45,14 @@ def get_headers(self) -> dict[str, str]:
         return {self.header: f"{self.prefix}{self.secret.get_secret_value()}"}


+class NoAuth(Authentication):
+    """Authentication that returns no headers (local providers)."""
+
+    def get_headers(self) -> dict[str, str]:
+        """Return empty headers for unauthenticated requests."""
+        return {}
+
+
 # Backwards compatibility alias
 APIKey = AuthHeader

@@ -78,4 +86,11 @@ def get_auth_class(auth_type: str) -> type[Authentication]:
     return _auth_classes[auth_type]


-__all__ = ["APIKey", "AuthHeader", "Authentication", "get_auth_class", "register_auth"]
+__all__ = [
+    "APIKey",
+    "AuthHeader",
+    "Authentication",
+    "NoAuth",
+    "get_auth_class",
+    "register_auth",
+]
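The `NoAuth` pattern above slots into the auth hierarchy as a null object: every scheme answers `get_headers()`, and local providers simply answer with nothing. A simplified standalone sketch (the real `AuthHeader` stores its secret as a pydantic `SecretStr`; plain strings are used here for illustration):

```python
from abc import ABC, abstractmethod


class Authentication(ABC):
    """Base class: every auth scheme yields request headers."""

    @abstractmethod
    def get_headers(self) -> dict[str, str]: ...


class AuthHeader(Authentication):
    """Header-based auth, e.g. Authorization: Bearer <key>."""

    def __init__(self, header: str, secret: str, prefix: str = "") -> None:
        self.header = header
        self.secret = secret  # simplified; the real class wraps a SecretStr
        self.prefix = prefix

    def get_headers(self) -> dict[str, str]:
        return {self.header: f"{self.prefix}{self.secret}"}


class NoAuth(Authentication):
    """Local providers (e.g. Ollama) need no credentials at all."""

    def get_headers(self) -> dict[str, str]:
        return {}
```

Because callers only ever ask for headers, no request-building code needs a special case for unauthenticated local endpoints.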

src/celeste/client.py

Lines changed: 6 additions & 1 deletion

@@ -180,6 +180,9 @@ def _stream(
         self,
         inputs: In,
         stream_class: type[Stream[Out, Params, Chunk]],
+        *,
+        endpoint: str | None = None,
+        base_url: str | None = None,
         extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[Params],  # type: ignore[misc]
     ) -> Stream[Out, Params, Chunk]:
@@ -207,7 +210,9 @@ def _stream(
         request_body = self._build_request(
             inputs, extra_body=extra_body, streaming=True, **parameters
         )
-        sse_iterator = self._make_stream_request(request_body, **parameters)
+        sse_iterator = self._make_stream_request(
+            request_body, endpoint=endpoint, base_url=base_url, **parameters
+        )
         return stream_class(
             sse_iterator,
             transform_output=self._transform_output,

src/celeste/core.py

Lines changed: 2 additions & 0 deletions

@@ -25,6 +25,8 @@ class Provider(StrEnum):
     ELEVENLABS = "elevenlabs"
     GROQ = "groq"
     GRADIUM = "gradium"
+    OPENRESPONSES = "openresponses"
+    OLLAMA = "ollama"


 class Modality(StrEnum):

src/celeste/modalities/text/client.py

Lines changed: 66 additions & 18 deletions

@@ -1,12 +1,12 @@
 """Text modality client."""

-from typing import Unpack
+from typing import Any, Unpack

 from asgiref.sync import async_to_sync

 from celeste.client import ModalityClient
 from celeste.core import InputType, Modality
-from celeste.types import AudioContent, ImageContent, TextContent, VideoContent
+from celeste.types import AudioContent, ImageContent, Message, TextContent, VideoContent

 from .io import TextInput, TextOutput
 from .parameters import TextParameters
@@ -69,7 +69,11 @@ def __init__(self, client: TextClient) -> None:

     def generate(
         self,
-        prompt: str,
+        prompt: str | None = None,
+        *,
+        messages: list[Message] | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextStream:
         """Stream text generation.
@@ -78,20 +82,25 @@ def generate(
         async for chunk in client.stream.generate("Hello"):
             print(chunk.content)
         """
-        inputs = TextInput(prompt=prompt)
+        inputs = TextInput(prompt=prompt, messages=messages)
         return self._client._stream(
             inputs,
             stream_class=self._client._stream_class(),
+            base_url=base_url,
+            extra_body=extra_body,
             **parameters,
         )

     def analyze(
         self,
-        prompt: str,
+        prompt: str | None = None,
         *,
+        messages: list[Message] | None = None,
         image: ImageContent | None = None,
         video: VideoContent | None = None,
         audio: AudioContent | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextStream:
         """Stream media analysis (image, video, or audio).
@@ -106,11 +115,16 @@ def analyze(
         async for chunk in client.stream.analyze("Transcribe", audio=aud):
             print(chunk.content)
         """
-        self._client._check_media_support(image=image, video=video, audio=audio)
-        inputs = TextInput(prompt=prompt, image=image, video=video, audio=audio)
+        if messages is None:
+            self._client._check_media_support(image=image, video=video, audio=audio)
+        inputs = TextInput(
+            prompt=prompt, messages=messages, image=image, video=video, audio=audio
+        )
         return self._client._stream(
             inputs,
             stream_class=self._client._stream_class(),
+            base_url=base_url,
+            extra_body=extra_body,
             **parameters,
         )

@@ -126,7 +140,11 @@ def __init__(self, client: TextClient) -> None:

     def generate(
         self,
-        prompt: str,
+        prompt: str | None = None,
+        *,
+        messages: list[Message] | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextOutput:
         """Blocking text generation.
@@ -135,16 +153,21 @@ def generate(
         result = client.sync.generate("Hello")
         print(result.content)
         """
-        inputs = TextInput(prompt=prompt)
-        return async_to_sync(self._client._predict)(inputs, **parameters)
+        inputs = TextInput(prompt=prompt, messages=messages)
+        return async_to_sync(self._client._predict)(
+            inputs, base_url=base_url, extra_body=extra_body, **parameters
+        )

     def analyze(
         self,
-        prompt: str,
+        prompt: str | None = None,
         *,
+        messages: list[Message] | None = None,
         image: ImageContent | None = None,
         video: VideoContent | None = None,
         audio: AudioContent | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextOutput:
         """Blocking media analysis (image, video, or audio).
@@ -159,9 +182,14 @@ def analyze(
         result = client.sync.analyze("Transcribe", audio=aud)
         print(result.content)
         """
-        self._client._check_media_support(image=image, video=video, audio=audio)
-        inputs = TextInput(prompt=prompt, image=image, video=video, audio=audio)
-        return async_to_sync(self._client._predict)(inputs, **parameters)
+        if messages is None:
+            self._client._check_media_support(image=image, video=video, audio=audio)
+        inputs = TextInput(
+            prompt=prompt, messages=messages, image=image, video=video, audio=audio
+        )
+        return async_to_sync(self._client._predict)(
+            inputs, base_url=base_url, extra_body=extra_body, **parameters
+        )

     @property
     def stream(self) -> "TextSyncStreamNamespace":
@@ -177,7 +205,11 @@ def __init__(self, client: TextClient) -> None:

     def generate(
         self,
-        prompt: str,
+        prompt: str | None = None,
+        *,
+        messages: list[Message] | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextStream:
         """Sync streaming text generation.
@@ -191,15 +223,24 @@ def generate(
         print(stream.output.usage)
         """
         # Return same stream as async version - __iter__/__next__ handle sync iteration
-        return self._client.stream.generate(prompt, **parameters)
+        return self._client.stream.generate(
+            prompt,
+            messages=messages,
+            base_url=base_url,
+            extra_body=extra_body,
+            **parameters,
+        )

     def analyze(
         self,
-        prompt: str,
+        prompt: str | None = None,
         *,
+        messages: list[Message] | None = None,
         image: ImageContent | None = None,
         video: VideoContent | None = None,
         audio: AudioContent | None = None,
+        base_url: str | None = None,
+        extra_body: dict[str, Any] | None = None,
         **parameters: Unpack[TextParameters],
     ) -> TextStream:
         """Sync streaming media analysis (image, video, or audio).
@@ -224,7 +265,14 @@ def analyze(
         """
         # Return same stream as async version - __iter__/__next__ handle sync iteration
         return self._client.stream.analyze(
-            prompt, image=image, video=video, audio=audio, **parameters
+            prompt,
+            messages=messages,
+            image=image,
+            video=video,
+            audio=audio,
+            base_url=base_url,
+            extra_body=extra_body,
+            **parameters,
         )
src/celeste/modalities/text/io.py

Lines changed: 3 additions & 2 deletions

@@ -10,13 +10,14 @@
 from pydantic import Field

 from celeste.io import Chunk, FinishReason, Input, Output, Usage
-from celeste.types import AudioContent, ImageContent, TextContent, VideoContent
+from celeste.types import AudioContent, ImageContent, Message, TextContent, VideoContent


 class TextInput(Input):
     """Input for text operations."""

-    prompt: str
+    prompt: str | None = None
+    messages: list[Message] | None = None
     text: str | list[str] | None = None
     image: ImageContent | None = None
     video: VideoContent | None = None

src/celeste/modalities/text/models.py

Lines changed: 2 additions & 0 deletions

@@ -9,6 +9,7 @@
 from .providers.groq.models import MODELS as GROQ_MODELS
 from .providers.mistral.models import MODELS as MISTRAL_MODELS
 from .providers.moonshot.models import MODELS as MOONSHOT_MODELS
+from .providers.ollama.models import MODELS as OLLAMA_MODELS
 from .providers.openai.models import MODELS as OPENAI_MODELS
 from .providers.xai.models import MODELS as XAI_MODELS

@@ -18,6 +19,7 @@
     *DEEPSEEK_MODELS,
     *GOOGLE_MODELS,
     *GROQ_MODELS,
+    *OLLAMA_MODELS,
     *MISTRAL_MODELS,
     *MOONSHOT_MODELS,
     *OPENAI_MODELS,

src/celeste/modalities/text/providers/__init__.py

Lines changed: 4 additions & 0 deletions

@@ -10,7 +10,9 @@
 from .groq import GroqTextClient
 from .mistral import MistralTextClient
 from .moonshot import MoonshotTextClient
+from .ollama import OllamaTextClient
 from .openai import OpenAITextClient
+from .openresponses import OpenResponsesTextClient
 from .xai import XAITextClient

 PROVIDERS: dict[Provider, type[TextClient]] = {
@@ -19,6 +21,8 @@
     Provider.DEEPSEEK: DeepSeekTextClient,
     Provider.GOOGLE: GoogleTextClient,
     Provider.GROQ: GroqTextClient,
+    Provider.OLLAMA: OllamaTextClient,
+    Provider.OPENRESPONSES: OpenResponsesTextClient,
     Provider.MISTRAL: MistralTextClient,
     Provider.MOONSHOT: MoonshotTextClient,
     Provider.OPENAI: OpenAITextClient,
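The commit message notes that `OllamaClient` was ultimately kept as a simple alias for the OpenResponses client rather than a subclass, which is why the wrapper is excluded from the template contract test. A stripped-down sketch of that registry shape (class bodies and enum members reduced to the minimum for illustration):

```python
from enum import Enum


class Provider(str, Enum):
    OPENAI = "openai"
    OLLAMA = "ollama"
    OPENRESPONSES = "openresponses"


class OpenResponsesTextClient:
    """Speaks the OpenAI Responses protocol (POST /v1/responses)."""


# Ollama is a thin wrapper: it re-exports the OpenResponses client,
# differing only in defaults (local base URL, NoAuth).
OllamaTextClient = OpenResponsesTextClient

PROVIDERS: dict[Provider, type] = {
    Provider.OPENRESPONSES: OpenResponsesTextClient,
    Provider.OLLAMA: OllamaTextClient,
}
```

Since both registry entries point at the same class object, contract tests that iterate `PROVIDERS` must skip wrapper providers or they would check the wrapped client twice under different names.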
