Commit 17d7525

LittleMouse committed: [upload] Add Docs

1 parent 6dd2b78 commit 17d7525

4 files changed: 142 additions & 0 deletions

docs/Chat_Completions.md

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
# Chat Completions

The Chat Completions API endpoint generates a model response from a list of messages comprising a conversation.

# Create chat completion

`post http://192.168.20.186:8000/v1/chat/completions`

```python
from openai import OpenAI

# Point the OpenAI client at the local StackFlow server.
client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

completion = client.chat.completions.create(
    model="qwen2.5-0.5b-p256-ax630c",
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)
```

## Request body

### messages `array` <span style="color: red;">Required</span>

A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio.

### model `string` <span style="color: red;">Required</span>

Model ID used to generate the response, like `qwen2.5-0.5b-p256-ax630c` or `deepseek-r1-1.5b-p256-ax630c`. StackFlow offers a wide range of models with different capabilities and performance characteristics. Refer to the Models docs to browse and compare the available models.

### audio

`Audio output is not currently supported`

### function_call

`function_call is not currently supported`

### max_tokens `integer` Optional

The maximum number of tokens that can be generated in the chat completion.

### response_format `object` Optional

An object specifying the format that the model must output.

`Currently only supported format is json_object.`
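
As a minimal sketch of the optional parameters (assuming the server accepts the same argument shapes as the OpenAI Python SDK; the prompt and token limit below are illustrative), a request can cap the output length and ask for a JSON object:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

completion = client.chat.completions.create(
    model="qwen2.5-0.5b-p256-ax630c",
    messages=[
        {"role": "user", "content": "List three colors as a JSON object."}
    ],
    max_tokens=128,                           # optional: cap the generated tokens
    response_format={"type": "json_object"},  # only json_object is currently supported
)

print(completion.choices[0].message.content)
```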

docs/Models.md

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
# Models

List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.

# List models

`get http://192.168.20.186:8000/v1/models`

Lists the currently available models and provides basic information about each one, such as the owner and availability.

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

client.models.list()
```

## Returns

A list of model objects.
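
For example, a rough sketch of iterating the returned list (assuming the response follows the OpenAI SDK's model object schema, with `id` and `owned_by` fields):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

# Each entry is a model object; `id` is the name passed to the other endpoints.
for model in client.models.list().data:
    print(model.id, model.owned_by)
```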

docs/Speech_to_text.md

Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
# Audio

Learn how to turn audio into text or text into audio.

# Create speech

`post http://192.168.20.186:8000/v1/audio/speech`

Generates audio from the input text.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

speech_file_path = Path(__file__).parent / "speech.mp3"
with client.audio.speech.with_streaming_response.create(
    model="melotts-en-us",
    voice="alloy",  # voice selection is not currently supported
    input="The quick brown fox jumped over the lazy dog."
) as response:
    response.stream_to_file(speech_file_path)
```

## Request body

### input `string` <span style="color: red;">Required</span>

The text to generate audio for. The maximum length is `1024` characters.

### model `string` <span style="color: red;">Required</span>

One of the available TTS models: `melotts-zh-cn`, `melotts-en-us`.

### voice

`Voice selection is not currently supported`

### response_format `string` Optional Defaults to mp3

The format to return the audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.

### speed `number` Optional Defaults to 1

The speed of the generated audio. Select a value from `0.25` to `2.0`. `1.0` is the default.

## Returns

The audio file content.
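
As an illustrative sketch of the optional parameters (assuming the server honours the same `response_format` and `speed` arguments as the OpenAI Python SDK; the file name and values below are arbitrary):

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

# Request WAV output at 1.5x speed and stream it straight to disk.
out_path = Path(__file__).parent / "speech_fast.wav"
with client.audio.speech.with_streaming_response.create(
    model="melotts-en-us",
    voice="alloy",            # voice selection is not currently supported
    input="The quick brown fox jumped over the lazy dog.",
    response_format="wav",    # mp3 (default), opus, aac, flac, wav, or pcm
    speed=1.5                 # 0.25 to 2.0, default 1.0
) as response:
    response.stream_to_file(out_path)
```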

docs/Text_to_speech.md

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
# Create transcription

`post http://192.168.20.186:8000/v1/audio/transcriptions`

Transcribes audio into the input language.

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-tiny",
        language="en",
        file=audio_file
    )
```

## Request body

### file `file` <span style="color: red;">Required</span>

The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

### model `string` <span style="color: red;">Required</span>

ID of the model to use. The options are `whisper-tiny`, `whisper-base`, and `whisper-small`.

### language `string` <span style="color: red;">Required</span>

The language of the input audio. Supplying the input language in ISO-639-1 format (e.g. `en`) will improve accuracy and latency.

### response_format `string` Optional Defaults to json

`Currently only supported format is json.`
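
For reference, a minimal sketch that passes the optional `response_format` explicitly and prints the recognised text (assuming the response follows the OpenAI SDK's transcription object, which exposes a `text` field):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-",
    base_url="http://192.168.20.186:8000/v1"
)

# json is currently the only supported response format.
with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-base",
        language="en",              # ISO-639-1 code of the spoken language
        file=audio_file,
        response_format="json"
    )

print(transcript.text)
```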
