
Commit adf7435

Committed Oct 9, 2024
Allow assistant-role prompts and multiple prompts
Closes #28. See also #7.
Parent commit: 1289135

1 file changed: README.md (+74, -7)
@@ -109,11 +109,70 @@

```js
const result1 = await predictEmoji("Back to the drawing board");
const result2 = await predictEmoji("This code is so good you should get promoted");
```

(Note that merely creating a session does not cause any new responses from the language model. We need to call `prompt()` or `promptStreaming()` to get a response.)

Some details on error cases:

* Using both `systemPrompt` and a `{ role: "system" }` prompt in `initialPrompts`, using multiple `{ role: "system" }` prompts, or placing a `{ role: "system" }` prompt anywhere other than the 0th position in `initialPrompts`, will cause the promise to be rejected with a `TypeError`.
* If the combined token length of all the initial prompts (including the separate `systemPrompt`, if provided) is too large, then the promise will be rejected with a `"QuotaExceededError"` `DOMException`.
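The first rule above can be made concrete with a standalone sketch. The helper below is hypothetical (not part of the API); it only illustrates the rejection conditions described in the bullet:

```js
// Hypothetical illustration of the initialPrompts validation rules above:
// at most one { role: "system" } prompt, only at index 0, and never
// combined with the separate systemPrompt option.
function validateInitialPrompts(initialPrompts, { hasSystemPromptOption = false } = {}) {
  initialPrompts.forEach((prompt, index) => {
    if (prompt.role === "system") {
      if (index !== 0) {
        throw new TypeError("A system prompt must be the first entry in initialPrompts.");
      }
      if (hasSystemPromptOption) {
        throw new TypeError("Cannot use both systemPrompt and a system-role initial prompt.");
      }
    }
  });
}
```

For example, a system prompt followed by a user prompt passes, while reversing those two entries (or duplicating the system prompt, which puts the second copy past index 0) throws a `TypeError`.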

### Customizing the role per prompt

Our examples so far have provided `prompt()` and `promptStreaming()` with a single string. Such cases assume the message comes from the user role. These methods can also take objects in the `{ role, content }` format, or arrays of such objects, in case you want to provide multiple user or assistant messages before getting another assistant response:

```js
const multiUserSession = await ai.languageModel.create({
  systemPrompt: "You are a mediator in a discussion between two departments."
});

const result = await multiUserSession.prompt([
  { role: "user", content: "Marketing: We need more budget for advertising campaigns." },
  { role: "user", content: "Finance: We need to cut costs and advertising is on the list." },
  { role: "assistant", content: "Let's explore a compromise that satisfies both departments." }
]);

// `result` will contain a compromise proposal from the assistant.
```

Because of their special behavior of being preserved on context window overflow, system prompts cannot be provided this way.

### Emulating tool use or function-calling via assistant-role prompts

A special case of the above is using the assistant role to emulate tool use or function-calling, by marking a response as coming from the assistant side of the conversation:

```js
const session = await ai.languageModel.create({
  systemPrompt: `
    You are a helpful assistant. You have access to the following tools:
    - calculator: A calculator. To use it, write "CALCULATOR: <expression>" where <expression> is a valid mathematical expression.
  `
});

async function promptWithCalculator(prompt) {
  const result = await session.prompt(prompt);

  // Check if the assistant wants to use the calculator tool.
  const match = /^CALCULATOR: (.*)$/.exec(result);
  if (match) {
    const expression = match[1];
    const mathResult = evaluateMathExpression(expression); // assumes a helper defined elsewhere

    // Add the result to the session so it's in context going forward.
    await session.prompt({ role: "assistant", content: mathResult });

    // Return it as if that's what the assistant said to the user.
    return mathResult;
  }

  // The assistant didn't want to use the calculator. Just return its response.
  return result;
}

console.log(await promptWithCalculator("What is 2 + 2?"));
```

We'll likely explore more specific APIs for tool- and function-calling in the future; follow along in [issue #7](https://github.com/explainers-by-googlers/prompt-api/issues/7).
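The `"CALCULATOR:"` detection step in the example above can be exercised on its own. Here is a minimal, runnable sketch, with a toy evaluator standing in for the unspecified `evaluateMathExpression()` helper:

```js
// Toy stand-in for the unspecified evaluateMathExpression() helper;
// it only handles "a + b", which is enough to illustrate the flow.
function evaluateMathExpression(expression) {
  const m = /^(\d+)\s*\+\s*(\d+)$/.exec(expression.trim());
  if (!m) throw new Error(`Unsupported expression: ${expression}`);
  return String(Number(m[1]) + Number(m[2]));
}

// The same detection logic the example applies to the model's response:
// returns the tool result if the response is a tool invocation, else null.
function maybeRunCalculator(modelResponse) {
  const match = /^CALCULATOR: (.*)$/.exec(modelResponse);
  return match ? evaluateMathExpression(match[1]) : null;
}

maybeRunCalculator("CALCULATOR: 2 + 2"); // → "4"
maybeRunCalculator("The answer is 4.");  // → null
```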
### Configuration of per-session options

In addition to the `systemPrompt` and `initialPrompts` options shown above, the currently-configurable options are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). More information about the values for these parameters can be found using the `capabilities()` API explained [below](#capabilities-detection).
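For intuition about the top-K parameter linked above, here is an illustrative sketch of what top-K filtering does during sampling. It is not part of this API; the helper name and data shape are invented for the example:

```js
// Keep only the k most probable tokens, then renormalize so the
// remaining probabilities sum to 1. Sampling then happens only
// among these survivors.
function topKFilter(distribution, k) {
  const kept = [...distribution].sort((a, b) => b.p - a.p).slice(0, k);
  const total = kept.reduce((sum, entry) => sum + entry.p, 0);
  return kept.map(({ token, p }) => ({ token, p: p / total }));
}

topKFilter([
  { token: "the", p: 0.5 },
  { token: "a", p: 0.3 },
  { token: "zebra", p: 0.2 },
], 2);
// → [{ token: "the", p: 0.625 }, { token: "a", p: 0.375 }]
```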
```diff
@@ -345,10 +404,10 @@ interface AILanguageModelFactory {

 [Exposed=(Window,Worker), SecureContext]
 interface AILanguageModel : EventTarget {
-  Promise<DOMString> prompt(DOMString input, optional AILanguageModelPromptOptions options = {});
-  ReadableStream promptStreaming(DOMString input, optional AILanguageModelPromptOptions options = {});
+  Promise<DOMString> prompt(AILanguageModelPromptInput input, optional AILanguageModelPromptOptions options = {});
+  ReadableStream promptStreaming(AILanguageModelPromptInput input, optional AILanguageModelPromptOptions options = {});

-  Promise<unsigned long long> countPromptTokens(DOMString input, optional AILanguageModelPromptOptions options = {});
+  Promise<unsigned long long> countPromptTokens(AILanguageModelPromptInput input, optional AILanguageModelPromptOptions options = {});

   readonly attribute unsigned long long maxTokens;
   readonly attribute unsigned long long tokensSoFar;
   readonly attribute unsigned long long tokensLeft;
```
```diff
@@ -380,14 +439,19 @@ dictionary AILanguageModelCreateOptions {
   AICreateMonitorCallback monitor;

   DOMString systemPrompt;
-  sequence<AILanguageModelPrompt> initialPrompts;
+  sequence<AILanguageModelInitialPrompt> initialPrompts;
   [EnforceRange] unsigned long topK;
   float temperature;
 };

+dictionary AILanguageModelInitialPrompt {
+  required AILanguageModelInitialPromptRole role;
+  required DOMString content;
+};
+
 dictionary AILanguageModelPrompt {
-  AILanguageModelPromptRole role;
-  DOMString content;
+  required AILanguageModelPromptRole role;
+  required DOMString content;
 };

 dictionary AILanguageModelPromptOptions {
@@ -398,7 +462,10 @@ dictionary AILanguageModelCloneOptions {
   AbortSignal signal;
 };

-enum AILanguageModelPromptRole { "system", "user", "assistant" };
+typedef (DOMString or AILanguageModelPrompt or sequence<AILanguageModelPrompt>) AILanguageModelPromptInput;
+
+enum AILanguageModelInitialPromptRole { "system", "user", "assistant" };
+enum AILanguageModelPromptRole { "user", "assistant" };
```
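The `AILanguageModelPromptInput` typedef above means `prompt()` accepts a plain string, a single prompt object, or a sequence of prompt objects. A sketch of how an implementation might normalize these three shapes (a hypothetical helper, not part of the spec):

```js
// Normalize the three accepted input shapes into a uniform list of
// { role, content } prompts, defaulting a bare string to the user role.
function normalizePromptInput(input) {
  if (typeof input === "string") {
    return [{ role: "user", content: input }];
  }
  return Array.isArray(input) ? input : [input];
}

normalizePromptInput("Hello"); // → [{ role: "user", content: "Hello" }]
```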
### Instruction-tuned versus base models
