
Commit 5a64082

Updated Transformer doc notebooks with commit 3f05e7c98710bd6131f846197635676ff8146e0e

See: huggingface/transformers@3f05e7c
1 parent ebf9702 commit 5a64082

24 files changed: +291 −291 lines changed

transformers_doc/custom_datasets.ipynb

+16 −16
@@ -126,7 +126,7 @@
 "source": [
 "The next step is to tokenize the text into a readable format by the model. It is important to load the same tokenizer a\n",
 "model was trained with to ensure appropriately tokenized words. Load the DistilBERT tokenizer with the\n",
-"[AutoTokenizer](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoTokenizer) because we will eventually train a classifier using a pretrained [DistilBERT](https://huggingface.co/distilbert-base-uncased) model:"
+"[AutoTokenizer](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoTokenizer) because we will eventually train a classifier using a pretrained [DistilBERT](https://huggingface.co/distilbert-base-uncased) model:"
 ]
 },
 {
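The hunk above touches the cell that introduces the tokenizer-loading step. A minimal sketch of that step, assuming the DistilBERT checkpoint named in the cell (the sample sentence is made up for illustration):

    from transformers import AutoTokenizer

    # Load the tokenizer that matches the pretrained DistilBERT checkpoint
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    # Tokenize a single example; truncation keeps inputs within the model's maximum length
    encoded = tokenizer("This movie was surprisingly good.", truncation=True)
    print(encoded["input_ids"])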
@@ -207,7 +207,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now load your model with the [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForSequenceClassification) class along with the number of expected labels:"
+"Now load your model with the [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoModelForSequenceClassification) class along with the number of expected labels:"
 ]
 },
 {
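A sketch of the model-loading step this cell describes, assuming a binary (two-label) classification task:

    from transformers import AutoModelForSequenceClassification

    # num_labels=2 assumes a binary sentiment task; adjust to your label set
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2
    )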
@@ -227,8 +227,8 @@
 "source": [
 "At this point, only three steps remain:\n",
 "\n",
-"1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.TrainingArguments).\n",
-"2. Pass the training arguments to a [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, and data collator.\n",
+"1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.TrainingArguments).\n",
+"2. Pass the training arguments to a [Trainer](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.Trainer) along with the model, dataset, tokenizer, and data collator.\n",
 "3. Call `Trainer.train()` to fine-tune your model."
 ]
 },
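The three remaining steps listed in this cell could look roughly like the following. The dataset and collator variables (tokenized_train, tokenized_test, data_collator) are assumptions standing in for objects created earlier in the notebook, and the hyperparameter values are placeholders:

    from transformers import Trainer, TrainingArguments

    # 1. Define training hyperparameters
    training_args = TrainingArguments(
        output_dir="./results",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        num_train_epochs=2,
    )

    # 2. Hand the model, arguments, datasets, tokenizer, and collator to the Trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_train,
        eval_dataset=tokenized_test,
        tokenizer=tokenizer,
        data_collator=data_collator,
    )

    # 3. Fine-tune
    trainer.train()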
@@ -274,7 +274,7 @@
 "source": [
 "Fine-tuning with TensorFlow is just as easy, with only a few differences.\n",
 "\n",
-"Start by batching the processed examples together with dynamic padding using the [DataCollatorWithPadding](https://huggingface.co/docs/transformers/master/en/main_classes/data_collator#transformers.DataCollatorWithPadding) function.\n",
+"Start by batching the processed examples together with dynamic padding using the [DataCollatorWithPadding](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/data_collator#transformers.DataCollatorWithPadding) function.\n",
 "Make sure you set `return_tensors=\"tf\"` to return `tf.Tensor` outputs instead of PyTorch tensors!"
 ]
 },
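A sketch of the TensorFlow collator set-up this cell refers to; the tokenized_train dataset and its column names are assumptions:

    from transformers import DataCollatorWithPadding

    # return_tensors="tf" makes the collator emit tf.Tensor batches instead of PyTorch tensors
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

    # One possible way to turn a tokenized 🤗 Dataset into a tf.data.Dataset
    tf_train = tokenized_train.to_tf_dataset(
        columns=["attention_mask", "input_ids"],
        label_cols=["label"],
        shuffle=True,
        batch_size=16,
        collate_fn=data_collator,
    )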
@@ -345,7 +345,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load your model with the [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) class along with the number of expected labels:"
+"Load your model with the [TFAutoModelForSequenceClassification](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.TFAutoModelForSequenceClassification) class along with the number of expected labels:"
 ]
 },
 {
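A possible continuation for the TensorFlow model this cell loads; the tf_train/tf_validation datasets and the compile settings are assumptions rather than values from the diff:

    import tensorflow as tf
    from transformers import TFAutoModelForSequenceClassification

    model = TFAutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2
    )

    # Keras workflow: compile with an optimizer and loss, then fit on the tf.data datasets
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.fit(tf_train, validation_data=tf_validation, epochs=3)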
@@ -548,7 +548,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now you need to tokenize the text. Load the DistilBERT tokenizer with an [AutoTokenizer](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoTokenizer):"
+"Now you need to tokenize the text. Load the DistilBERT tokenizer with an [AutoTokenizer](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoTokenizer):"
 ]
 },
 {
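For the token-classification section referenced here, tokenization typically receives pre-split words. A sketch under that assumption (the example tokens are invented):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    # NER examples arrive as lists of words, so flag the input as already split
    encoding = tokenizer(
        ["HuggingFace", "is", "based", "in", "NYC"],
        is_split_into_words=True,
        truncation=True,
    )
    # word_ids() maps each subword back to its original word, which helps align labels
    print(encoding.word_ids())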
@@ -680,7 +680,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load your model with the [AutoModelForTokenClassification](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForTokenClassification) class along with the number of expected labels:"
+"Load your model with the [AutoModelForTokenClassification](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoModelForTokenClassification) class along with the number of expected labels:"
 ]
 },
 {
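A sketch of this model-loading step; label_list is an assumed list of entity tags defined elsewhere in the notebook:

    from transformers import AutoModelForTokenClassification

    # The number of expected labels must match the tag set of the dataset
    model = AutoModelForTokenClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=len(label_list)
    )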
@@ -698,7 +698,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Gather your training arguments in [TrainingArguments](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.TrainingArguments):"
+"Gather your training arguments in [TrainingArguments](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.TrainingArguments):"
 ]
 },
 {
@@ -722,7 +722,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Collect your model, training arguments, dataset, data collator, and tokenizer in [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer):"
+"Collect your model, training arguments, dataset, data collator, and tokenizer in [Trainer](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.Trainer):"
 ]
 },
 {
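Taken together with the TrainingArguments cell in the previous hunk, the assembly step for token classification could look like this; tokenized_dataset and the collator choice (DataCollatorForTokenClassification) are assumptions:

    from transformers import (
        DataCollatorForTokenClassification,
        Trainer,
        TrainingArguments,
    )

    # Pads inputs and label sequences to the same length within each batch
    data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

    training_args = TrainingArguments(output_dir="./results", num_train_epochs=3)

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_dataset["train"],
        eval_dataset=tokenized_dataset["test"],
        tokenizer=tokenizer,
        data_collator=data_collator,
    )
    trainer.train()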
@@ -814,7 +814,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load the model with the [TFAutoModelForTokenClassification](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.TFAutoModelForTokenClassification) class along with the number of expected labels:"
+"Load the model with the [TFAutoModelForTokenClassification](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.TFAutoModelForTokenClassification) class along with the number of expected labels:"
 ]
 },
 {
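The TensorFlow counterpart for token classification, as a sketch (the label count is again an assumption):

    from transformers import TFAutoModelForTokenClassification

    model = TFAutoModelForTokenClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=len(label_list)
    )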
@@ -990,7 +990,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load the DistilBERT tokenizer with an [AutoTokenizer](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoTokenizer):"
+"Load the DistilBERT tokenizer with an [AutoTokenizer](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoTokenizer):"
 ]
 },
 {
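For the question-answering section referenced here, the tokenizer is typically called on (question, context) pairs. A sketch with made-up inputs; the truncation and length settings are common choices, not values taken from the diff:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

    # Truncate only the context ("only_second") so the question is never cut off
    inputs = tokenizer(
        "Who maintains the Transformers library?",
        "The Transformers library is maintained by Hugging Face.",
        truncation="only_second",
        max_length=384,
        return_offsets_mapping=True,
    )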
@@ -1123,7 +1123,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load your model with the [AutoModelForQuestionAnswering](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForQuestionAnswering) class:"
+"Load your model with the [AutoModelForQuestionAnswering](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.AutoModelForQuestionAnswering) class:"
 ]
 },
 {
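A sketch of this step; extractive QA heads predict start and end positions, so no num_labels argument is needed:

    from transformers import AutoModelForQuestionAnswering

    model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")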
@@ -1141,7 +1141,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Gather your training arguments in [TrainingArguments](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.TrainingArguments):"
+"Gather your training arguments in [TrainingArguments](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.TrainingArguments):"
 ]
 },
 {
@@ -1165,7 +1165,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Collect your model, training arguments, dataset, data collator, and tokenizer in [Trainer](https://huggingface.co/docs/transformers/master/en/main_classes/trainer#transformers.Trainer):"
+"Collect your model, training arguments, dataset, data collator, and tokenizer in [Trainer](https://huggingface.co/docs/transformers/doc-builder-html/en/main_classes/trainer#transformers.Trainer):"
 ]
 },
 {
@@ -1284,7 +1284,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Load your model with the [TFAutoModelForQuestionAnswering](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.TFAutoModelForQuestionAnswering) class:"
+"Load your model with the [TFAutoModelForQuestionAnswering](https://huggingface.co/docs/transformers/doc-builder-html/en/model_doc/auto#transformers.TFAutoModelForQuestionAnswering) class:"
 ]
 },
 {
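And the TensorFlow counterpart for question answering, again as a sketch:

    from transformers import TFAutoModelForQuestionAnswering

    model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")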
