|[TensorFlow 2.0 models on GLUE](./text-classification)| Examples running BERT TensorFlow 2.0 model on the GLUE tasks. |
|[Running on TPUs](#running-on-tpus)| Examples on running fine-tuning tasks on Google TPUs to accelerate workloads. |
|[Language Model training](./language-modeling)| Fine-tuning (or training from scratch) the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
|[Language Generation](./text-generation)| Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
|[GLUE](./text-classification)| Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks. Examples feature distributed training as well as half-precision. |
|[SQuAD](./question-answering)| Using BERT/RoBERTa/XLNet/XLM for question answering, examples with distributed training. |
|[Multiple Choice](./multiple-choice)| Examples running BERT/XLNet/RoBERTa on the SWAG/RACE/ARC tasks. |
|[Named Entity Recognition](./token-classification)| Using BERT for Named Entity Recognition (NER) on the CoNLL 2003 dataset, examples with distributed training. |
|[XNLI](./text-classification)| Examples running BERT/XLM on the XNLI benchmark. |
|[Adversarial evaluation of model performances](./adversarial)| Testing a model with adversarial evaluation of natural language inference on the Heuristic Analysis for NLI Systems (HANS) dataset (McCoy et al., 2019). |
## Important note
**Important**
To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements.
Execute the following steps in a new virtual environment:
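A minimal sketch of those steps, assuming the example-specific requirements are collected in `examples/requirements.txt` at the repository root:

```bash
# Clone the repository and install the library from source
git clone https://github.com/huggingface/transformers
cd transformers
pip install .

# Install the example-specific requirements
# (assumed path: examples/requirements.txt)
pip install -r ./examples/requirements.txt
```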