|[Quick tour: TF 2.0 and PyTorch ](#Quick-tour-TF-20-training-and-PyTorch-interoperability)| Train a TF 2.0 model in 10 lines of code, load it in PyTorch (see the sketch below this table) |
|[Quick tour: Fine-tuning/usage scripts](#quick-tour-of-the-fine-tuningusage-scripts)| Using provided scripts: GLUE, SQuAD and Text generation |
|[Quick tour: Share your models ](#Quick-tour-of-model-sharing)| Upload and share your fine-tuned models with the community |
|[Migrating from pytorch-transformers to transformers](#Migrating-from-pytorch-transformers-to-transformers)| Migrating your code from pytorch-transformers to transformers |
|[Migrating from pytorch-pretrained-bert to pytorch-transformers](#Migrating-from-pytorch-pretrained-bert-to-transformers)| Migrating your code from pytorch-pretrained-bert to transformers |
|[Documentation](https://huggingface.co/transformers) [(v2.2.0/v2.2.1/v2.2.2)](https://huggingface.co/transformers/v2.2.0) [(v2.1.1)](https://huggingface.co/transformers/v2.1.1) [(v2.0.0)](https://huggingface.co/transformers/v2.0.0) [(v1.2.0)](https://huggingface.co/transformers/v1.2.0) [(v1.1.0)](https://huggingface.co/transformers/v1.1.0) [(v1.0.0)](https://huggingface.co/transformers/v1.0.0) [(master)](https://huggingface.co/transformers)| Full API documentation and more |
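To make the TF 2.0 / PyTorch interoperability referenced above concrete, here is a minimal sketch; the checkpoint name, save path, and the elided fine-tuning step are illustrative assumptions, not the quick-tour code itself:

```python
from transformers import TFBertForSequenceClassification, BertForSequenceClassification

# Instantiate a TF 2.0 (Keras) model from pretrained weights; the checkpoint name is just an example
tf_model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

# ... fine-tune with the usual Keras workflow (tf_model.compile(...), tf_model.fit(...)) on your own data ...

# Save the fine-tuned weights, then load them into the equivalent PyTorch class
tf_model.save_pretrained("./my_finetuned_bert")
pt_model = BertForSequenceClassification.from_pretrained("./my_finetuned_bert", from_tf=True)
```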
## Installation
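A minimal sketch of the standard installation from PyPI, assuming Python 3.5+ and an environment where PyTorch and/or TensorFlow 2.0 is already set up:

```shell
# Install the library from PyPI (deep-learning backend is assumed to be installed separately)
pip install transformers
```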
9. **[CTRL](https://github.com/salesforce/ctrl/)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
10. **[CamemBERT](https://camembert-model.fr)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
11. **[ALBERT](https://github.com/google-research/ALBERT)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma and Radu Soricut.
12. **[T5](https://github.com/google-research/text-to-text-transfer-transformer)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li and Peter J. Liu.
13. Want to contribute a new model? We have added a **detailed guide and templates** to help you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations (e.g. ~93 F1 on SQuAD for BERT Whole-Word-Masking, ~88 F1 on RocStories for OpenAI GPT, ~18.3 perplexity on WikiText 103 for Transformer-XL, ~0.916 Pearson R coefficient on STS-B for XLNet). You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).

## Quick tour of model sharing

New in `v2.2.2`: you can now upload and share your fine-tuned models with the community, using the <abbr title="Command-line interface">CLI</abbr> that's built into the library.
**First, create an account on [https://huggingface.co/join](https://huggingface.co/join)**. Then:
```shell
transformers-cli login
# log in using the same credentials as on huggingface.co
```
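After logging in, the upload itself goes through the same CLI. A minimal sketch, where the path is a placeholder and the exact `upload` arguments should be checked against `transformers-cli upload --help` for your release:

```shell
# Upload the folder containing the weights, tokenizer and config files
# written by model.save_pretrained() / tokenizer.save_pretrained()
# (the path below is a placeholder)
transformers-cli upload ./path/to/my_finetuned_model/
```

The shared files can then typically be loaded back with `from_pretrained()` using a `username/model_name` style identifier (the exact identifier format depends on the release).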