
Commit 12bb7fe

Fix t5 doc typos (#3978)
* Fix typo in into and add line under
* Add missing blank line under
* Correct types under
1 parent 97a3754 commit 12bb7fe

1 file changed: +4 -2 lines changed


docs/source/model_doc/t5.rst

@@ -20,13 +20,14 @@ Training
 ~~~~~~~~~~~~~~~~~~~~
 T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher forcing.
 This means that for training we always need an input sequence and a target sequence.
-The input sequence is fed to the model using ``input_ids``. The target sequence is shifted to the right, *i.e.* perprended by a start-sequence token and fed to the decoder using the `decoder_input_ids`. In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the ``lm_labels``. The PAD token is hereby used as the start-sequence token.
+The input sequence is fed to the model using ``input_ids``. The target sequence is shifted to the right, *i.e.* prepended by a start-sequence token and fed to the decoder using the `decoder_input_ids`. In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the ``lm_labels``. The PAD token is hereby used as the start-sequence token.
 T5 can be trained / fine-tuned both in a supervised and unsupervised fashion.
 
 - Unsupervised denoising training
+
 In this setup spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens)
 and the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens.
-Each sentinel tokens represents a unique mask token for this sentence and should start with ``<extra_id_1>``, ``<extrac_id_2>``, ... up to ``<extra_id_100>``. As a default 100 sentinel tokens are available in ``T5Tokenizer``.
+Each sentinel token represents a unique mask token for this sentence and should start with ``<extra_id_1>``, ``<extra_id_2>``, ... up to ``<extra_id_100>``. As a default 100 sentinel tokens are available in ``T5Tokenizer``.
 *E.g.* the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be processed as follows:
 
 ::
@@ -37,6 +38,7 @@ T5 can be trained / fine-tuned both in a supervised and unsupervised fashion.
 
     model(input_ids=input_ids, lm_labels=lm_labels)
 
 - Supervised training
+
 In this setup the input sequence and output sequence are standard sequence to sequence input output mapping.
 In translation, *e.g.* the input sequence "The house is wonderful." and output sequence "Das Haus ist wunderbar." should
 be processed as follows:
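
For reference, the call pattern this doc describes can be sketched as follows, assuming the transformers API of this release (``T5Tokenizer``, ``T5ForConditionalGeneration``, and the ``lm_labels`` argument, which later releases rename to ``labels``); the ``t5-small`` checkpoint and the exact sentinel placement are illustrative choices, not part of the commit::

    # A sketch of the unsupervised denoising example described above.
    # Assumes T5ForConditionalGeneration still accepts ``lm_labels``
    # (newer transformers releases call this argument ``labels``).
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained("t5-small")  # illustrative checkpoint
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # "The cute dog walks in the park" with "cute dog" and "the" masked:
    # each masked span becomes one sentinel token in the input, and the
    # target lists the sentinels followed by the text they replace.
    input_ids = tokenizer.encode(
        "The <extra_id_1> walks in <extra_id_2> park", return_tensors="pt"
    )
    lm_labels = tokenizer.encode(
        "<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>", return_tensors="pt"
    )

    # Teacher forcing: given lm_labels, the model derives the right-shifted
    # decoder inputs itself, starting from the PAD token as described above.
    loss = model(input_ids=input_ids, lm_labels=lm_labels)[0]

The supervised case follows the same pattern, with the target sentence (*e.g.* "Das Haus ist wunderbar.") encoded as ``lm_labels``.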
