Commit 612fa1b

Examples readme.md (#4215)
* README
* Update README.md
1 parent 2e57824 commit 612fa1b

File tree

2 files changed: +42 -16 lines


docs/source/usage.rst

+1 -1
@@ -45,7 +45,7 @@ Sequence classification is the task of classifying sequences according to a give
of sequence classification is the GLUE dataset, which is entirely based on that task. If you would like to fine-tune
a model on a GLUE sequence classification task, you may leverage the
`run_glue.py <https://github.com/huggingface/transformers/tree/master/examples/text-classification/run_glue.py>`_ or
- `run_tf_glue.py <https://github.com/huggingface/transformers/tree/master/examples/run_tf_glue.py>`_ scripts.
+ `run_tf_glue.py <https://github.com/huggingface/transformers/tree/master/examples/text-classification/run_tf_glue.py>`_ scripts.

Here is an example using the pipelines to do sentiment analysis: identifying if a sequence is positive or negative.
It leverages a fine-tuned model on sst2, which is a GLUE task.
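
For reference, the pipeline call that passage describes looks roughly like the sketch below; the example sentence and the printed score are illustrative, and the exact default checkpoint depends on the library version:

```python
from transformers import pipeline

# The "sentiment-analysis" pipeline defaults to a checkpoint fine-tuned on SST-2.
nlp = pipeline("sentiment-analysis")

print(nlp("This library makes fine-tuning remarkably easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]
```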

examples/README.md

+41 -15
@@ -1,10 +1,46 @@
# Examples

- In this section a few examples are put together. All of these examples work for several models, making use of the very
- similar API between the different models.
+ Version 2.9 of `transformers` introduces a new `Trainer` class for PyTorch, and its equivalent `TFTrainer` for TF 2.
+
+ Here is the list of all our examples:
+ - **grouped by task** (all official examples work for multiple models)
+ - with information on whether they are **built on top of `Trainer`/`TFTrainer`** (if not, they still work; they might just lack some features),
+ - links to **Colab notebooks** to walk through the scripts and run them easily,
+ - links to **Cloud deployments** to be able to deploy large-scale trainings in the Cloud with little to no setup.
+
+ This is still a work in progress – in particular, documentation is still sparse – so please **contribute improvements/pull requests.**
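
As a rough illustration of the `Trainer`/`TrainingArguments` pattern these examples build on, here is a minimal fine-tuning sketch. The toy dataset and the `bert-base-cased` checkpoint are illustrative choices, and argument names have shifted slightly across library versions, so this targets a recent release rather than 2.9 exactly:

```python
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

class ToyDataset(Dataset):
    """Tiny in-memory dataset of tokenized sentences with binary labels (illustrative only)."""

    def __init__(self, tokenizer):
        texts = ["a great example", "a terrible example"]
        self.labels = [1, 0]
        self.encodings = tokenizer(texts, truncation=True, padding=True)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

training_args = TrainingArguments(output_dir="./output", num_train_epochs=1)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=ToyDataset(tokenizer),
    eval_dataset=ToyDataset(tokenizer),
)
trainer.train()            # fine-tunes the model on the toy dataset
print(trainer.evaluate())  # returns metrics such as eval_loss
```

The `TFTrainer` counterpart follows roughly the same shape, taking a `tf.data.Dataset` and TF-specific training arguments instead of the PyTorch objects above.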
+
+ ## Tasks built on Trainer
+
+ | Task | Example datasets | Trainer support | TFTrainer support | Colab | One-click Deploy to Azure (wip) |
+ |---|---|:---:|:---:|:---:|:---:|
+ | [`language-modeling`](./language-modeling) | Raw text | ✅ | - | - | - |
+ | [`text-classification`](./text-classification) | GLUE, XNLI | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/trainer/01_text_classification.ipynb) | [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json) |
+ | [`token-classification`](./token-classification) | CoNLL NER | ✅ | ✅ | - | - |
+ | [`multiple-choice`](./multiple-choice) | SWAG, RACE, ARC | ✅ | - | - | - |
+
+ ## Other examples and how-to's
+
+ | Section | Description |
+ |---|---|
+ | [TensorFlow 2.0 models on GLUE](./text-classification) | Examples running the BERT TensorFlow 2.0 model on the GLUE tasks. |
+ | [Running on TPUs](#running-on-tpus) | Examples of running fine-tuning tasks on Google TPUs to accelerate workloads. |
+ | [Language Model training](./language-modeling) | Fine-tuning (or training from scratch) the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
+ | [Language Generation](./text-generation) | Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
+ | [GLUE](./text-classification) | Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks. Examples feature distributed training as well as half-precision. |
+ | [SQuAD](./question-answering) | Using BERT/RoBERTa/XLNet/XLM for question answering, examples with distributed training. |
+ | [Multiple Choice](./multiple-choice) | Examples running BERT/XLNet/RoBERTa on the SWAG/RACE/ARC tasks. |
+ | [Named Entity Recognition](./token-classification) | Using BERT for Named Entity Recognition (NER) on the CoNLL 2003 dataset, examples with distributed training. |
+ | [XNLI](./text-classification) | Examples running BERT/XLM on the XNLI benchmark. |
+ | [Adversarial evaluation of model performances](./adversarial) | Testing a model with adversarial evaluation of natural language inference on the Heuristic Analysis for NLI Systems (HANS) dataset (McCoy et al., 2019). |
+
+ ## Important note

**Important**
- To run the latest versions of the examples, you have to install from source and install some specific requirements for the examples.
+ To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements.
Execute the following steps in a new virtual environment:

```bash
@@ -14,16 +50,6 @@ pip install .
pip install -r ./examples/requirements.txt
```

- | Section | Description |
- |----------------------------|-----------------------------------------------------
- | [TensorFlow 2.0 models on GLUE](#TensorFlow-2.0-Bert-models-on-GLUE) | Examples running BERT TensorFlow 2.0 model on the GLUE tasks. |
- | [Running on TPUs](#running-on-tpus) | Examples on running fine-tuning tasks on Google TPUs to accelerate workloads. |
- | [Language Model training](#language-model-training) | Fine-tuning (or training from scratch) the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
- | [Language Generation](#language-generation) | Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
- | [GLUE](#glue) | Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks. Examples feature distributed training as well as half-precision. |
- | [SQuAD](#squad) | Using BERT/RoBERTa/XLNet/XLM for question answering, examples with distributed training. |
- | [Multiple Choice](#multiple-choice) | Examples running BERT/XLNet/RoBERTa on the SWAG/RACE/ARC tasks. |
- | [Named Entity Recognition](https://github.com/huggingface/transformers/tree/master/examples/ner) | Using BERT for Named Entity Recognition (NER) on the CoNLL 2003 dataset, examples with distributed training. |
- | [XNLI](#xnli) | Examples running BERT/XLM on the XNLI benchmark. |
- | [Adversarial evaluation of model performances](#adversarial-evaluation-of-model-performances) | Testing a model with adversarial evaluation of natural language inference on the Heuristic Analysis for NLI Systems (HANS) dataset (McCoy et al., 2019.) |
+ ## Running on TPUs

+ Documentation to come.
