1 parent c2575d1 commit 61650b0
advanced_source/dynamic_quantization_tutorial.py
@@ -13,7 +13,7 @@
 to int, which can result in smaller model size and faster inference with only a small
 hit to accuracy.
 
-In this tutorial, we'll apply the easiest form of quantization -
+In this tutorial, we will apply the easiest form of quantization -
 `dynamic quantization <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`_ -
 to an LSTM-based next word-prediction model, closely following the
 `word language model <https://github.com/pytorch/examples/tree/master/word_language_model>`_
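The docstring above refers to `torch.quantization.quantize_dynamic`. A minimal sketch of how it is applied to an LSTM-based model follows; the `WordLM` class here is a toy stand-in invented for illustration, not the tutorial's actual word language model:

```python
import torch
import torch.nn as nn

# Toy LSTM-based next-word-prediction model (illustrative only; the real
# tutorial uses the word_language_model example's larger architecture).
class WordLM(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden=None):
        out, hidden = self.lstm(self.embed(x), hidden)
        return self.decoder(out), hidden

model = WordLM()

# Dynamic quantization: weights of the listed module types are converted
# to int8 ahead of time; activations are quantized on the fly at inference.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for inference.
tokens = torch.randint(0, 100, (5, 1))  # (seq_len, batch)
logits, _ = qmodel(tokens)
print(logits.shape)
```

Only the module types passed in the set (`nn.LSTM`, `nn.Linear` here) are replaced with dynamically quantized counterparts; the embedding layer stays in float.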