
Commit 2703a9c

Update token-counting and context overflow API surface
This is mostly to align with webmachinelearning/writing-assistance-apis#43:

* Use the proposed QuotaExceededError when appropriate, instead of a "QuotaExceededError" DOMException.
* Rename maxTokens/tokensSoFar to inputQuota/inputUsage.
* Rename countPromptTokens() to measureInputUsage().
* Remove tokensLeft.
* Rename "contextoverflow" to "quotaoverflow".
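To illustrate, a minimal sketch of how calling code changes under these renames (`session` is an `AILanguageModel` session created as shown elsewhere in the README):

```js
// Old surface, removed by this commit:
//   console.log(`${session.tokensSoFar}/${session.maxTokens} (${session.tokensLeft} left)`);
//   const numTokens = await session.countPromptTokens(promptString);
//   session.addEventListener("contextoverflow", () => { /* ... */ });

// New surface:
console.log(`${session.inputUsage} tokens used, out of ${session.inputQuota} available.`);
const usage = await session.measureInputUsage(promptString);
// tokensLeft has no direct replacement; compute it instead:
const tokensLeft = session.inputQuota - session.inputUsage;
session.addEventListener("quotaoverflow", () => { /* ... */ });
```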
1 parent 331914a commit 2703a9c

1 file changed, +23 -35 lines changed

README.md

@@ -81,7 +81,7 @@ console.log(await session.prompt("What is your favorite food?"));
 
 The system prompt is special, in that the language model will not respond to it, and it will be preserved even if the context window otherwise overflows due to too many calls to `prompt()`.
 
-If the system prompt is too large (see [below](#tokenization-context-window-length-limits-and-overflow)), then the promise will be rejected with a `"QuotaExceededError"` `DOMException`.
+If the system prompt is too large, then the promise will be rejected with a `QuotaExceededError` exception. See [below](#tokenization-context-window-length-limits-and-overflow) for more details on token counting and this new exception type.
 
 ### N-shot prompting
 
@@ -114,7 +114,7 @@ const result2 = await predictEmoji("This code is so good you should get promoted
 Some details on error cases:
 
 * Using both `systemPrompt` and a `{ role: "system" }` prompt in `initialPrompts`, or using multiple `{ role: "system" }` prompts, or placing the `{ role: "system" }` prompt anywhere besides at the 0th position in `initialPrompts`, will reject with a `TypeError`.
-* If the combined token length of all the initial prompts (including the separate `systemPrompt`, if provided) is too large, then the promise will be rejected with a `"QuotaExceededError"` `DOMException`.
+* If the combined token length of all the initial prompts (including the separate `systemPrompt`, if provided) is too large, then the promise will be rejected with a [`QuotaExceededError` exception](#tokenization-context-window-length-limits-and-overflow).
 
 ### Customizing the role per prompt
 
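For illustration, a minimal sketch of handling that rejection at creation time, assuming the `ai.languageModel.create()` method described elsewhere in this README (`oversizedPrompts` is a hypothetical too-large input):

```js
try {
  const session = await ai.languageModel.create({ initialPrompts: oversizedPrompts });
} catch (e) {
  // Assumption: the proposed QuotaExceededError keeps the name "QuotaExceededError".
  if (e.name === "QuotaExceededError") {
    console.error("The initial prompts exceed the model's input quota.");
  } else {
    throw e;
  }
}
```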

@@ -389,31 +389,37 @@ Note that because sessions are stateful, and prompts can be queued, aborting a s
 A given language model session will have a maximum number of tokens it can process. Developers can check their current usage and progress toward that limit by using the following properties on the session object:
 
 ```js
-console.log(`${session.tokensSoFar}/${session.maxTokens} (${session.tokensLeft} left)`);
+console.log(`${session.inputUsage} tokens used, out of ${session.inputQuota} tokens available.`);
 ```
 
-To know how many tokens a string will consume, without actually processing it, developers can use the `countPromptTokens()` method:
+To know how many tokens a string will consume, without actually processing it, developers can use the `measureInputUsage()` method:
 
 ```js
-const numTokens = await session.countPromptTokens(promptString);
+const usage = await session.measureInputUsage(promptString);
 ```
 
 Some notes on this API:
 
 * We do not expose the actual tokenization to developers since that would make it too easy to depend on model-specific details.
 * Implementations must include in their count any control tokens that will be necessary to process the prompt, e.g. ones indicating the start or end of the input.
-* The counting process can be aborted by passing an `AbortSignal`, i.e. `session.countPromptTokens(promptString, { signal })`.
+* The counting process can be aborted by passing an `AbortSignal`, i.e. `session.measureInputUsage(promptString, { signal })`.
+* We use the phrases "input usage" and "input quota" in the API, to avoid being specific to the current language model tokenization paradigm. In the future, even if we change paradigms, we anticipate some concept of usage and quota still being applicable, even if it's just string length.
 
-It's possible to send a prompt that causes the context window to overflow. That is, consider a case where `session.countPromptTokens(promptString) > session.tokensLeft` before calling `session.prompt(promptString)`, and then the web developer calls `session.prompt(promptString)` anyway. In such cases, the initial portions of the conversation with the language model will be removed, one prompt/response pair at a time, until enough tokens are available to process the new prompt. The exception is the [system prompt](#system-prompts), which is never removed. If it's not possible to remove enough tokens from the conversation history to process the new prompt, then the `prompt()` or `promptStreaming()` call will fail with an `"QuotaExceededError"` `DOMException` and nothing will be removed.
+It's possible to send a prompt that causes the context window to overflow. That is, consider a case where `session.measureInputUsage(promptString) > session.inputQuota - session.inputUsage` before calling `session.prompt(promptString)`, and then the web developer calls `session.prompt(promptString)` anyway. In such cases, the initial portions of the conversation with the language model will be removed, one prompt/response pair at a time, until enough tokens are available to process the new prompt. The exception is the [system prompt](#system-prompts), which is never removed.
 
-Such overflows can be detected by listening for the `"contextoverflow"` event on the session:
+Such overflows can be detected by listening for the `"quotaoverflow"` event on the session:
 
 ```js
-session.addEventListener("contextoverflow", () => {
-  console.log("Context overflow!");
+session.addEventListener("quotaoverflow", () => {
+  console.log("We've gone past the quota, and some inputs will be dropped!");
 });
 ```
 
+If it's not possible to remove enough tokens from the conversation history to process the new prompt, then the `prompt()` or `promptStreaming()` call will fail with a `QuotaExceededError` exception and nothing will be removed. This is a proposed new type of exception, which subclasses `DOMException`, and replaces the web platform's existing `"QuotaExceededError"` `DOMException`. See [whatwg/webidl#1465](https://github.com/whatwg/webidl/pull/1465) for this proposal. For our purposes, the important part is that it has the following properties:
+
+* `requested`: how many tokens the input consists of
+* `quota`: how many tokens were available (which will be less than `requested`, and equal to the value of `session.inputQuota - session.inputUsage` at the time of the call)
+
 ### Multilingual content and expected languages
 
 The default behavior for a language model session assumes that the input languages are unknown. In this case, implementations will use whatever "base" capabilities they have available for the language model, and might throw `"NotSupportedError"` `DOMException`s if they encounter languages they don't support.
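For concreteness, a minimal sketch of catching this exception from `prompt()` and reading the two properties listed above (`hugePrompt` is a hypothetical input that cannot fit even after eviction):

```js
try {
  await session.prompt(hugePrompt);
} catch (e) {
  if (e.name === "QuotaExceededError") {
    // Per the list above: e.quota equals session.inputQuota - session.inputUsage
    // at the time of the call, and e.requested exceeds it.
    console.error(`Needed ${e.requested} tokens, but only ${e.quota} were available.`);
  } else {
    throw e;
  }
}
```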
@@ -533,28 +539,12 @@ Finally, note that there is a sort of precedent in the (never-shipped) [`FetchOb
 ### Full API surface in Web IDL
 
 ```webidl
-// Shared self.ai APIs
-
-partial interface WindowOrWorkerGlobalScope {
-  [Replaceable, SecureContext] readonly attribute AI ai;
-};
+// Shared self.ai APIs:
+// See https://webmachinelearning.github.io/writing-assistance-apis/#shared-ai-api for most of them.
 
-[Exposed=(Window,Worker), SecureContext]
-interface AI {
+partial interface AI {
   readonly attribute AILanguageModelFactory languageModel;
 };
-
-[Exposed=(Window,Worker), SecureContext]
-interface AICreateMonitor : EventTarget {
-  attribute EventHandler ondownloadprogress;
-
-  // Might get more stuff in the future, e.g. for
-  // https://github.com/webmachinelearning/prompt-api/issues/4
-};
-
-callback AICreateMonitorCallback = undefined (AICreateMonitor monitor);
-
-enum AIAvailability { "unavailable", "downloadable", "downloading", "available" };
 ```
 
 ```webidl
@@ -579,19 +569,17 @@ interface AILanguageModel : EventTarget {
     optional AILanguageModelPromptOptions options = {}
   );
 
-  Promise<unsigned long long> countPromptTokens(
+  Promise<double> measureInputUsage(
     AILanguageModelPromptInput input,
     optional AILanguageModelPromptOptions options = {}
   );
-  readonly attribute unsigned long long maxTokens;
-  readonly attribute unsigned long long tokensSoFar;
-  readonly attribute unsigned long long tokensLeft;
+  readonly attribute double inputUsage;
+  readonly attribute unrestricted double inputQuota;
+  attribute EventHandler onquotaoverflow;
 
   readonly attribute unsigned long topK;
   readonly attribute float temperature;
 
-  attribute EventHandler oncontextoverflow;
-
   Promise<AILanguageModel> clone(optional AILanguageModelCloneOptions options = {});
   undefined destroy();
 };
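Tying the renamed IDL members together, a minimal sketch of a pre-flight quota check before prompting (best-effort only, since usage can change between the measurement and the `prompt()` call):

```js
const usage = await session.measureInputUsage(promptString);
const available = session.inputQuota - session.inputUsage;
if (usage > available) {
  // Either older prompt/response pairs will be evicted (firing "quotaoverflow"),
  // or prompt() will reject with a QuotaExceededError if eviction cannot free enough.
  console.warn(`This prompt needs ${usage} tokens, but only ${available} remain.`);
}
const result = await session.prompt(promptString);
```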
