const result1 = await predictEmoji("Back to the drawing board");
const result2 = await predictEmoji("This code is so good you should get promoted");
```
(Note that merely creating a session does not cause any new responses from the language model. We need to call `prompt()` or `promptStreaming()` to get a response.)
Some details on error cases:
* Using both `systemPrompt` and a `{ role: "system" }` prompt in `initialPrompts`, or using multiple `{ role: "system" }` prompts, or placing the `{ role: "system" }` prompt anywhere besides at the 0th position in `initialPrompts`, will reject with a `TypeError`.
* If the combined token length of all the initial prompts (including the separate `systemPrompt`, if provided) is too large, then the promise will be rejected with a `"QuotaExceededError"` `DOMException`.
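For intuition, the first rule can be sketched as a standalone validation check. This is a hypothetical helper for illustration only, not part of the proposed API; the `validateInitialPrompts` name and its signature are assumptions:

```javascript
// Hypothetical sketch of the rule described above: at most one
// { role: "system" } prompt, only at index 0 of initialPrompts, and not
// combined with a separate systemPrompt option.
function validateInitialPrompts(initialPrompts, systemPrompt) {
  const systemIndices = initialPrompts
    .map((p, i) => (p.role === "system" ? i : -1))
    .filter(i => i !== -1);

  if (systemIndices.length > 1) {
    throw new TypeError('Only one { role: "system" } prompt is allowed.');
  }
  if (systemIndices.length === 1) {
    if (systemIndices[0] !== 0) {
      throw new TypeError('A { role: "system" } prompt must be at position 0.');
    }
    if (systemPrompt !== undefined) {
      throw new TypeError('Cannot use both systemPrompt and a { role: "system" } prompt.');
    }
  }
}
```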
### Customizing the role per prompt
Our examples so far have provided `prompt()` and `promptStreaming()` with a single string. Such cases assume messages will come from the user role. These methods can also take in objects in the `{ role, content }` format, or arrays of such objects, in case you want to provide multiple user or assistant messages before getting another assistant message:
```js
const multiUserSession = await ai.assistant.create({
  systemPrompt: "You are a mediator in a discussion between two departments."
});

const result = await multiUserSession.prompt([
  { role: "user", content: "Marketing: We need more budget for advertising campaigns." },
  { role: "user", content: "Finance: We need to cut costs and advertising is on the list." },
  { role: "assistant", content: "Let's explore a compromise that satisfies both departments." }
]);

// `result` will contain a compromise proposal from the assistant.
```
Because of their special behavior of being preserved on context window overflow, system prompts cannot be provided this way.
### Emulating tool use or function-calling via assistant-role prompts
A special case of the above is using the assistant role to emulate tool use or function-calling, by marking a response as coming from the assistant side of the conversation:
```js
const session = await ai.assistant.create({
  systemPrompt: `
    You are a helpful assistant. You have access to the following tools:

    - calculator: A calculator. To use it, write "CALCULATOR: <expression>" where <expression> is a valid mathematical expression.
  `
});

async function promptWithCalculator(prompt) {
  const result = await session.prompt(prompt);

  // Check if the assistant wants to use the calculator tool.
  const prefix = "CALCULATOR: ";
  if (result.startsWith(prefix)) {
    const expression = result.slice(prefix.length);

    // (evaluateMathExpression is assumed to be defined elsewhere.)
    const mathResult = evaluateMathExpression(expression);

    // Return it as if that's what the assistant said to the user.
    return mathResult;
  }

  // The assistant didn't want to use the calculator. Just return its response.
  return result;
}

console.log(await promptWithCalculator("What is 2 + 2?"));
```
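The prefix check at the heart of this pattern can be factored into a small, testable helper. This is a sketch with an illustrative name (`parseCalculatorCall` is not part of the proposed API):

```javascript
// Sketch: extract the expression from a "CALCULATOR: <expression>" response.
// Returns null when the response is a plain answer rather than a tool call.
function parseCalculatorCall(response) {
  const prefix = "CALCULATOR: ";
  return response.startsWith(prefix) ? response.slice(prefix.length) : null;
}
```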
We'll likely explore more specific APIs for tool- and function-calling in the future; follow along in [issue #7](https://github.com/explainers-by-googlers/prompt-api/issues/7).
### Configuration of per-session options
In addition to the `systemPrompt` and `initialPrompts` options shown above, the currently-configurable options are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). More information about the values for these parameters can be found using the `capabilities()` API explained [below](#capabilities-detection).
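For intuition about what top-K does, a sampling step restricted to the K most-probable tokens can be sketched as follows. This is illustrative only and has nothing to do with the API surface; the `topKFilter` helper and its token/probability shape are assumptions for the sketch:

```javascript
// Illustrative top-K filter: keep only the k highest-probability entries,
// then renormalize so the remaining probabilities sum to 1.
function topKFilter(probs, k) {
  const kept = [...probs]
    .sort((a, b) => b.p - a.p)
    .slice(0, k);
  const total = kept.reduce((sum, e) => sum + e.p, 0);
  return kept.map(e => ({ token: e.token, p: e.p / total }));
}
```

Lower K makes output more predictable by discarding the long tail of unlikely tokens; temperature instead reshapes the whole distribution.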