
Commit 7e22609

Tensorflow LM examples (#12358)

* Tensorflow MLM example
* Add CLM example
* Style fixes, adding missing checkpoint code from the CLM example
* Fix TPU training, avoid massive dataset warnings
* Fix incorrect training length calculation for multi-GPU training
* Refactors and nitpicks from the review
* Style pass
* Adding README

1 parent 2d70c91 commit 7e22609

File tree

3 files changed (+1212, -0 lines)

@@ -0,0 +1,63 @@
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Language modelling examples

This folder contains some scripts showing examples of *language model pre-training* with the 🤗 Transformers library.
For straightforward use-cases you may be able to use these scripts without modification, although we have also
included comments in the code to indicate areas that you may need to adapt to your own projects. The two scripts
have almost identical arguments, but they differ in the type of LM they train - a causal language model (like GPT) or a
masked language model (like BERT). Masked language models generally train more quickly and perform better when
fine-tuned on new tasks with a task-specific output head, like text classification. However, their ability to generate
text is weaker than that of causal language models.

## Pre-training versus fine-tuning

These scripts can be used both to *pre-train* a language model completely from scratch and to *fine-tune*
a language model on text from your domain of interest. To start with an existing pre-trained language model you
can use the `--model_name_or_path` argument, or to train from scratch you can use the `--model_type` argument
to indicate the class of model architecture to initialize.
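
For example, a from-scratch run might look like the sketch below. This is only a sketch, not a command taken verbatim
from the scripts: the `distilbert` value for `--model_type` and the `--tokenizer_name` argument are assumptions here
(training from scratch still needs an existing tokenizer), so check the exact argument names against
`python run_mlm.py --help` before running.
```
# Hypothetical from-scratch MLM pre-training run; verify argument names with --help
python run_mlm.py \
--model_type distilbert \
--tokenizer_name distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1
```
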
### Multi-GPU and TPU usage

By default, these scripts use a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs
can also be used by passing the name of the TPU resource with the `--tpu` argument.
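
As a sketch, a TPU run could look like the following, where `my-tpu-name` is a placeholder for the name of your own
TPU resource:
```
# Sketch of a TPU run; replace my-tpu-name with your actual TPU resource name
python run_mlm.py \
--model_name_or_path distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--tpu my-tpu-name
```
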
## run_mlm.py

This script trains a masked language model.

### Example command
```
python run_mlm.py \
--model_name_or_path distilbert-base-cased \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1
```

## run_clm.py

This script trains a causal language model.

### Example command
```
python run_clm.py \
--model_name_or_path distilgpt2 \
--output_dir output \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1
```
