@@ -54,105 +54,117 @@ The documentation is organized in five parts:
The library currently contains PyTorch and TensorFlow implementations, pre-trained model weights, usage scripts and
conversion utilities for the following models:

- 1. `ALBERT <https://github.com/google-research/ALBERT>`_ (from Google Research), released together with the paper
- `ALBERT: A Lite BERT for Self-supervised Learning of Language Representations <https://arxiv.org/abs/1909.11942>`_
- by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
- 2. `BART <https://github.com/pytorch/fairseq/tree/master/examples/bart>`_ (from Facebook) released with the paper
- `BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
- <https://arxiv.org/pdf/1910.13461.pdf>`_ by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
- Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.
- 3. `BERT <https://github.com/google-research/bert>`_ (from Google) released with the paper `BERT: Pre-training of Deep
- Bidirectional Transformers for Language Understanding <https://arxiv.org/abs/1810.04805>`_ by Jacob Devlin, Ming-Wei
- Chang, Kenton Lee, and Kristina Toutanova.
- 4. `BERT For Sequence Generation <https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder>`_
- (from Google) released with the paper `Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
- <https://arxiv.org/abs/1907.12461>`_ by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
- 5. `CamemBERT <https://huggingface.co/transformers/model_doc/camembert.html>`_ (from FAIR, Inria, Sorbonne Université)
- released together with the paper `CamemBERT: a Tasty French Language Model <https://arxiv.org/abs/1911.03894>`_ by
- Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suarez, Yoann Dupont, Laurent Romary, Eric Villemonte de la
- Clergerie, Djame Seddah, and Benoît Sagot.
- 6. `CTRL <https://github.com/pytorch/fairseq/tree/master/examples/ctrl>`_ (from Salesforce), released together with the
- paper `CTRL: A Conditional Transformer Language Model for Controllable Generation
- <https://www.github.com/salesforce/ctrl>`_ by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong,
- and Richard Socher.
- 7. `DeBERTa <https://huggingface.co/transformers/model_doc/deberta.html>`_ (from Microsoft Research) released with the
- paper `DeBERTa: Decoding-enhanced BERT with Disentangled Attention <https://arxiv.org/abs/2006.03654>`_ by Pengcheng
- He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
- 8. `DialoGPT <https://github.com/microsoft/DialoGPT>`_ (from Microsoft Research) released with the paper `DialoGPT:
- Large-Scale Generative Pre-training for Conversational Response Generation <https://arxiv.org/abs/1911.00536>`_ by
- Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu,
- and Bill Dolan.
- 9. `DistilBERT <https://huggingface.co/transformers/model_doc/distilbert.html>`_ (from HuggingFace) released together
+ ..
+     This list is updated automatically from the README with `make fix-copies`. Do not update manually!
+
+ 1. `ALBERT <https://huggingface.co/transformers/model_doc/albert.html>`__ (from Google Research and the Toyota
+ Technological Institute at Chicago) released with the paper `ALBERT: A Lite BERT for Self-supervised Learning of
+ Language Representations <https://arxiv.org/abs/1909.11942>`__, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman,
+ Kevin Gimpel, Piyush Sharma, Radu Soricut.
+ 2. `BART <https://huggingface.co/transformers/model_doc/bart.html>`__ (from Facebook) released with the paper `BART:
+ Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
+ <https://arxiv.org/pdf/1910.13461.pdf>`__ by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
+ Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
+ 3. `BERT <https://huggingface.co/transformers/model_doc/bert.html>`__ (from Google) released with the paper `BERT:
+ Pre-training of Deep Bidirectional Transformers for Language Understanding <https://arxiv.org/abs/1810.04805>`__ by
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
+ 4. `BERT For Sequence Generation <https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder>`__ (from
+ Google) released with the paper `Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
+ <https://arxiv.org/abs/1907.12461>`__ by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
+ 5. `CamemBERT <https://huggingface.co/transformers/model_doc/camembert.html>`__ (from Inria/Facebook/Sorbonne) released
+ with the paper `CamemBERT: a Tasty French Language Model <https://arxiv.org/abs/1911.03894>`__ by Louis Martin*,
+ Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé
+ Seddah and Benoît Sagot.
+ 6. `CTRL <https://huggingface.co/transformers/model_doc/ctrl.html>`__ (from Salesforce) released with the paper `CTRL:
+ A Conditional Transformer Language Model for Controllable Generation <https://arxiv.org/abs/1909.05858>`__ by Nitish
+ Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
+ 7. `DeBERTa <https://huggingface.co/transformers/model_doc/deberta.html>`__ (from Microsoft Research) released with the
+ paper `DeBERTa: Decoding-enhanced BERT with Disentangled Attention <https://arxiv.org/abs/2006.03654>`__ by
+ Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
+ 8. `DialoGPT <https://huggingface.co/transformers/model_doc/dialogpt.html>`__ (from Microsoft Research) released with
+ the paper `DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
+ <https://arxiv.org/abs/1911.00536>`__ by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang
+ Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
+ 9. `DistilBERT <https://huggingface.co/transformers/model_doc/distilbert.html>`__ (from HuggingFace), released together
with the paper `DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
- <https://arxiv.org/abs/1910.01108>`_ by Victor Sanh, Lysandre Debut, and Thomas Wolf. The same method has been
- applied to compress GPT2 into
- `DistilGPT2 <https://github.com/huggingface/transformers/tree/master/examples/distillation>`_.
- 10. `DPR <https://github.com/facebookresearch/DPR>`_ (from Facebook) released with the paper `Dense Passage Retrieval
- for Open-Domain Question Answering <https://arxiv.org/abs/2004.04906>`_ by Vladimir Karpukhin, Barlas Oğuz, Sewon
+ <https://arxiv.org/abs/1910.01108>`__ by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been
+ applied to compress GPT2 into `DistilGPT2
+ <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, RoBERTa into `DistilRoBERTa
+ <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, Multilingual BERT into
+ `DistilmBERT <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__ and a German version
+ of DistilBERT.
+ 10. `DPR <https://github.com/facebookresearch/DPR>`__ (from Facebook) released with the paper `Dense Passage Retrieval
+ for Open-Domain Question Answering <https://arxiv.org/abs/2004.04906>`__ by Vladimir Karpukhin, Barlas Oğuz, Sewon
Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
- 11. `ELECTRA <https://github.com/google-research/electra>`_ (from Google Research/Stanford University) released with
- the paper `ELECTRA: Pre-training text encoders as discriminators rather than generators
- <https://arxiv.org/abs/2003.10555>`_ by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.
- 12. `FlauBERT <https://github.com/getalp/Flaubert>`_ (from CNRS) released with the paper `FlauBERT: Unsupervised
- Language Model Pre-training for French <https://arxiv.org/abs/1912.05372>`_ by Hang Le, Loïc Vial, Jibril Frej,
- Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and
- Didier Schwab.
- 13. `Funnel Transformer <https://github.com/laiguokun/Funnel-Transformer>`_ (from CMU/Google Brain) released with the paper
- `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
- <https://arxiv.org/abs/2006.03236>`_ by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
- 14. `GPT <https://github.com/openai/finetune-transformer-lm>`_ (from OpenAI) released with the paper `Improving Language
- Understanding by Generative Pre-Training <https://blog.openai.com/language-unsupervised>`_ by Alec Radford, Karthik
- Narasimhan, Tim Salimans, and Ilya Sutskever.
- 15. `GPT-2 <https://blog.openai.com/better-language-models>`_ (from OpenAI) released with the paper `Language Models are
- Unsupervised Multitask Learners <https://blog.openai.com/better-language-models>`_ by Alec Radford, Jeffrey Wu,
- Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.
- 16. `LayoutLM <https://github.com/microsoft/unilm/tree/master/layoutlm>`_ (from Microsoft Research Asia) released with
+ 11. `ELECTRA <https://huggingface.co/transformers/model_doc/electra.html>`__ (from Google Research/Stanford University)
+ released with the paper `ELECTRA: Pre-training text encoders as discriminators rather than generators
+ <https://arxiv.org/abs/2003.10555>`__ by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
+ 12. `FlauBERT <https://huggingface.co/transformers/model_doc/flaubert.html>`__ (from CNRS) released with the paper
+ `FlauBERT: Unsupervised Language Model Pre-training for French <https://arxiv.org/abs/1912.05372>`__ by Hang Le,
+ Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé,
+ Laurent Besacier, Didier Schwab.
+ 13. `Funnel Transformer <https://github.com/laiguokun/Funnel-Transformer>`__ (from CMU/Google Brain) released with the
+ paper `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
+ <https://arxiv.org/abs/2006.03236>`__ by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
+ 14. `GPT <https://huggingface.co/transformers/model_doc/gpt.html>`__ (from OpenAI) released with the paper `Improving
+ Language Understanding by Generative Pre-Training <https://blog.openai.com/language-unsupervised/>`__ by Alec
+ Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
+ 15. `GPT-2 <https://huggingface.co/transformers/model_doc/gpt2.html>`__ (from OpenAI) released with the paper `Language
+ Models are Unsupervised Multitask Learners <https://blog.openai.com/better-language-models/>`__ by Alec Radford*,
+ Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
+ 16. `LayoutLM <https://github.com/microsoft/unilm/tree/master/layoutlm>`__ (from Microsoft Research Asia) released with
the paper `LayoutLM: Pre-training of Text and Layout for Document Image Understanding
- <https://arxiv.org/abs/1912.13318>`_ by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
- 17. `Longformer <https://github.com/allenai/longformer>`_ (from AllenAI) released with the paper `Longformer: The
- Long-Document Transformer <https://arxiv.org/abs/2004.05150>`_ by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
- 18. `LXMERT <https://github.com/airsplay/lxmert>`_ (from UNC Chapel Hill) released with the paper `LXMERT: Learning
- Cross-Modality Encoder Representations from Transformers for Open-Domain Question
- Answering <https://arxiv.org/abs/1908.07490>`_ by Hao Tan and Mohit Bansal.
- 19. `MarianMT <https://marian-nmt.github.io/>`_ (developed by the Microsoft Translator Team) machine translation models
- trained using `OPUS <http://opus.nlpl.eu/>`_ data by Jörg Tiedemann.
- 20. `MBart <https://github.com/pytorch/fairseq/tree/master/examples/mbart>`_ (from Facebook) released with the paper
- `Multilingual Denoising Pre-training for Neural Machine Translation <https://arxiv.org/abs/2001.08210>`_ by Yinhan
+ <https://arxiv.org/abs/1912.13318>`__ by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
+ 17. `Longformer <https://huggingface.co/transformers/model_doc/longformer.html>`__ (from AllenAI) released with the
+ paper `Longformer: The Long-Document Transformer <https://arxiv.org/abs/2004.05150>`__ by Iz Beltagy, Matthew E.
+ Peters, Arman Cohan.
+ 18. `LXMERT <https://github.com/airsplay/lxmert>`__ (from UNC Chapel Hill) released with the paper `LXMERT: Learning
+ Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering
+ <https://arxiv.org/abs/1908.07490>`__ by Hao Tan and Mohit Bansal.
+ 19. `MarianMT <https://huggingface.co/transformers/model_doc/marian.html>`__ Machine translation models trained using
+ `OPUS <http://opus.nlpl.eu/>`__ data by Jörg Tiedemann. The `Marian Framework <https://marian-nmt.github.io/>`__ is
+ being developed by the Microsoft Translator Team.
+ 20. `MBart <https://github.com/pytorch/fairseq/tree/master/examples/mbart>`__ (from Facebook) released with the paper
+ `Multilingual Denoising Pre-training for Neural Machine Translation <https://arxiv.org/abs/2001.08210>`__ by Yinhan
Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
- 21. `MMBT <https://github.com/facebookresearch/mmbt/>`_ (from Facebook), released together with the paper `Supervised
- Multimodal Bitransformers for Classifying Images and Text <https://arxiv.org/pdf/1909.02950.pdf>`_ by Douwe Kiela,
- Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine.
- 22. `Pegasus <https://github.com/google-research/pegasus>`_ (from Google) released with the paper `PEGASUS:
- Pre-training with Extracted Gap-sentences for Abstractive Summarization <https://arxiv.org/abs/1912.08777>`_ by
+ 21. `MMBT <https://github.com/facebookresearch/mmbt/>`__ (from Facebook), released together with the paper
+ `Supervised Multimodal Bitransformers for Classifying Images and Text <https://arxiv.org/pdf/1909.02950.pdf>`__ by
+ Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine.
+ 22. `Pegasus <https://github.com/google-research/pegasus>`__ (from Google) released with the paper `PEGASUS:
+ Pre-training with Extracted Gap-sentences for Abstractive Summarization <https://arxiv.org/abs/1912.08777>`__ by
Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
- 23. `Reformer <https://github.com/google/trax/tree/master/trax/models/reformer>`_ (from Google Research) released with
- the paper `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ by Nikita Kitaev, Łukasz
- Kaiser, and Anselm Levskaya.
- 24. `RoBERTa <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`_ (from Facebook), released together with
- the paper `Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`_ by Yinhan Liu, Myle
- Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin
- Stoyanov.
- 25. `T5 <https://github.com/google-research/text-to-text-transfer-transformer>`_ (from Google) released with the paper
- `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- <https://arxiv.org/abs/1910.10683>`_ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
- Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.
- 26. `Transformer-XL <https://github.com/kimiyoung/transformer-xl>`_ (from Google/CMU) released with the paper
- `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context <https://arxiv.org/abs/1901.02860>`_ by
- Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov.
- 27. `XLM <https://github.com/facebookresearch/XLM>`_ (from Facebook) released together with the paper `Cross-lingual
- Language Model Pretraining <https://arxiv.org/abs/1901.07291>`_ by Guillaume Lample and Alexis Conneau.
- 28. `XLM-RoBERTa <https://github.com/pytorch/fairseq/tree/master/examples/xlmr>`_ (from Facebook AI), released together
- with the paper `Unsupervised Cross-lingual Representation Learning at Scale <https://arxiv.org/abs/1911.02116>`_ by
- Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard
- Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov.
- 29. `XLNet <https://github.com/zihangdai/xlnet>`_ (from Google/CMU) released with the paper `XLNet: Generalized
- Autoregressive Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`_ by Zhilin Yang, Zihang
- Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le.
- 30. SqueezeBERT (from UC Berkeley) released with the paper
- `SqueezeBERT: What can computer vision teach NLP about efficient neural networks? <https://arxiv.org/abs/2006.11316>`_
- by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
- 31. `Other community models <https://huggingface.co/models>`_, contributed by the `community
- <https://huggingface.co/users>`_.
+ 23. `Reformer <https://huggingface.co/transformers/model_doc/reformer.html>`__ (from Google Research) released with the
+ paper `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`__ by Nikita Kitaev, Łukasz Kaiser,
+ Anselm Levskaya.
+ 24. `RoBERTa <https://huggingface.co/transformers/model_doc/roberta.html>`__ (from Facebook), released together with
+ the paper `Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`__ by Yinhan Liu, Myle
+ Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
+ 25. `SqueezeBert <https://huggingface.co/transformers/model_doc/squeezebert.html>`__ released with the paper
+ `SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
+ <https://arxiv.org/abs/2006.11316>`__ by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
+ 26. `T5 <https://huggingface.co/transformers/model_doc/t5.html>`__ (from Google AI) released with the paper `Exploring
+ the Limits of Transfer Learning with a Unified Text-to-Text Transformer <https://arxiv.org/abs/1910.10683>`__ by
+ Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi
+ Zhou and Wei Li and Peter J. Liu.
+ 27. `Transformer-XL <https://huggingface.co/transformers/model_doc/transformerxl.html>`__ (from Google/CMU) released
+ with the paper `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
+ <https://arxiv.org/abs/1901.02860>`__ by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le,
+ Ruslan Salakhutdinov.
+ 28. `XLM <https://huggingface.co/transformers/model_doc/xlm.html>`__ (from Facebook) released together with the paper
+ `Cross-lingual Language Model Pretraining <https://arxiv.org/abs/1901.07291>`__ by Guillaume Lample and Alexis
+ Conneau.
+ 29. `XLM-RoBERTa <https://huggingface.co/transformers/model_doc/xlmroberta.html>`__ (from Facebook AI), released
+ together with the paper `Unsupervised Cross-lingual Representation Learning at Scale
+ <https://arxiv.org/abs/1911.02116>`__ by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary,
+ Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
+ 30. `XLNet <https://huggingface.co/transformers/model_doc/xlnet.html>`__ (from Google/CMU) released with the paper
+ `XLNet: Generalized Autoregressive Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`__ by
+ Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
+ 31. `Other community models <https://huggingface.co/models>`__, contributed by the `community
+ <https://huggingface.co/users>`__.
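
Each model in the list above is reachable through the library's auto classes, which resolve the right architecture from a checkpoint name. A minimal sketch (the checkpoint ``distilbert-base-uncased`` is an illustrative choice, and the first call downloads the weights from the model hub):

```python
from transformers import AutoModel, AutoTokenizer

# Any architecture listed above can be loaded the same way;
# "distilbert-base-uncased" is only one example checkpoint.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello world!", return_tensors="pt")
# First element of the model output is the final hidden states,
# shaped (batch, sequence_length, hidden_size).
last_hidden_state = model(**inputs)[0]
```

The TensorFlow variants follow the same pattern with the ``TF``-prefixed classes (e.g. ``TFAutoModel``) and ``return_tensors="tf"``.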

.. toctree::
    :maxdepth: 2