Commit ab3ef80

Update and rename 2021-6-14-pytorch-1.9-new-library-releases.md to 2021-6-15-pytorch-1.9-new-library-releases.md
1 parent 6f0bccc commit ab3ef80

1 file changed: +13 additions, −9 deletions

_posts/2021-6-14-pytorch-1.9-new-library-releases.md renamed to _posts/2021-6-15-pytorch-1.9-new-library-releases.md

@@ -103,25 +103,26 @@ We have added the model architectures from [Wav2Vec2.0](https://arxiv.org/abs/20
 The following code snippet illustrates such a use case. Please check out our [c++ example directory](https://github.com/pytorch/audio/tree/master/examples/libtorchaudio) for the complete example. Currently, it is designed for running inference. If you would like more support for training, please file a feature request.
 
 ```python
-{:.table.table-striped.table-bordered}
-|# Import fine-tuned model from Hugging Face Hub
+# Import fine-tuned model from Hugging Face Hub
 import transformers
 from torchaudio.models.wav2vec2.utils import import_huggingface_model
 
 original = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
-imported = import_huggingface_model(original)|
+imported = import_huggingface_model(original)
+```
 
-{:.table.table-striped.table-bordered}
-|# Import fine-tuned model from fairseq
+```python
+# Import fine-tuned model from fairseq
 import fairseq
 from torchaudio.models.wav2vec2.utils import import_fairseq_model
 
 original, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(
     ["wav2vec_small_960h.pt"], arg_overrides={'data': "<data_dir>"})
-imported = import_fairseq_model(original[0].w2v_encoder)|
+imported = import_fairseq_model(original[0].w2v_encoder)
+```
 
-{:.table.table-striped.table-bordered}
-|# Build uninitialized model and load state dict
+```python
+# Build uninitialized model and load state dict
 from torchaudio.models import wav2vec2_base
 
 model = wav2vec2_base(num_out=32)
@@ -132,7 +133,7 @@ quantized_model = torch.quantization.quantize_dynamic(
     model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
 scripted_model = torch.jit.script(quantized_model)
 optimized_model = optimize_for_mobile(scripted_model)
-optimized_model.save("model_for_deployment.pt")|
+optimized_model.save("model_for_deployment.pt")
 ```
 
 For more details, see [the documentation](https://pytorch.org/audio/0.9.0/models.html#wav2vec2-0).
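The deployment steps in the snippet above (dynamic quantization, TorchScript compilation, mobile optimization) can be sketched end to end. The `TinyAcousticModel` stand-in below is hypothetical — the post's snippet uses `wav2vec2_base` from torchaudio, which requires pretrained weights — but the quantize/script/optimize calls are the same torch APIs shown in the diff:

```python
# Hypothetical minimal pipeline mirroring the snippet above, using a tiny
# stand-in module instead of wav2vec2_base (which needs pretrained weights).
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class TinyAcousticModel(torch.nn.Module):
    def __init__(self, num_out: int = 32):
        super().__init__()
        self.encoder = torch.nn.Linear(16, 64)
        self.head = torch.nn.Linear(64, num_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(torch.relu(self.encoder(x)))

model = TinyAcousticModel().eval()
# Dynamic quantization converts the Linear weights to int8; activations
# are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
scripted_model = torch.jit.script(quantized_model)
optimized_model = optimize_for_mobile(scripted_model)
optimized_model.save("model_for_deployment.pt")
```

The same three calls apply to the real wav2vec2 model once its weights are loaded; only the module being quantized changes.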
@@ -198,3 +199,6 @@ For more details, refer to [the documentation](https://pytorch.org/text/stable/v
 
 Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join [the discussion](https://discuss.pytorch.org/) forums and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Facebook](https://www.facebook.com/pytorch/), [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch) or [LinkedIn](https://www.linkedin.com/company/pytorch).
 
+Cheers!
+
+-Team PyTorch
