
Commit 0b1644d

auto-generating sphinx docs
1 parent f47c4a3 · commit 0b1644d

File tree

1 file changed: 1 addition, 1 deletion


docs/master/quantization.html

Lines changed: 1 addition & 1 deletion
Original file line number · Diff line number · Diff line change
@@ -941,7 +941,7 @@ <h3>Top-level quantization APIs<a class="headerlink" href="#top-level-quantizati
941 941

942 942
<dl class="function">
943 943
<dt id="torch.quantization.prepare">
944 -
<code class="sig-prename descclassname">torch.quantization.</code><code class="sig-name descname">prepare</code><span class="sig-paren">(</span><em class="sig-param">model</em>, <em class="sig-param">inplace=False</em>, <em class="sig-param">white_list={&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.conv.Conv2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.normalization.LayerNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvBn2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.quantized.modules.functional_modules.FloatFunctional'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvBn2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.linear_relu.LinearReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.LinearReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.RNNCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvBnReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.GRUCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvBnReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.batchnorm.BatchNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.activations.Hardswish'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.linear.Linear'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.BNReLU3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.LayerNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.LSTM'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ELU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.GroupNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ReLU6'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.BNReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.normalization.GroupNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.LSTMCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.Hardswish'&gt;</em>, <em class="sig-param">&lt;class 'torch.quantization.stubs.QuantStub'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.linear.Linear'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.activations.ELU'&gt;</em>, <em 
class="sig-param">&lt;class 'torch.nn.modules.batchnorm.BatchNorm3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.container.Sequential'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm3d'&gt;}</em>, <em class="sig-param">observer_non_leaf_module_list=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/quantization/quantize.html#prepare"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#torch.quantization.prepare" title="Permalink to this definition"></a></dt>
944 +
<code class="sig-prename descclassname">torch.quantization.</code><code class="sig-name descname">prepare</code><span class="sig-paren">(</span><em class="sig-param">model</em>, <em class="sig-param">inplace=False</em>, <em class="sig-param">white_list={&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.conv.Conv2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.normalization.LayerNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvBn2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.quantized.modules.functional_modules.FloatFunctional'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvBn2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.linear_relu.LinearReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.LinearReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.RNNCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ReLU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvBnReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.GRUCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.qat.modules.conv_fused.ConvBnReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.batchnorm.BatchNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.activations.Hardswish'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.linear.Linear'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.BNReLU3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.LayerNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.ConvReLU3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.InstanceNorm2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.LSTM'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ELU'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.normalization.GroupNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.ReLU6'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.intrinsic.modules.fused.BNReLU2d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.conv.Conv3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.normalization.GroupNorm'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.rnn.LSTMCell'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.activation.Hardswish'&gt;</em>, <em class="sig-param">&lt;class 'torch.quantization.stubs.QuantStub'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.linear.Linear'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.qat.modules.activations.ELU'&gt;</em>, <em 
class="sig-param">&lt;class 'torch.nn.modules.batchnorm.BatchNorm3d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.container.Sequential'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm1d'&gt;</em>, <em class="sig-param">&lt;class 'torch.nn.modules.instancenorm.InstanceNorm3d'&gt;}</em>, <em class="sig-param">observer_non_leaf_module_list=None</em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/quantization/quantize.html#prepare"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#torch.quantization.prepare" title="Permalink to this definition"></a></dt>
945 945
<dd><p>Prepares a copy of the model for quantization calibration or quantization-aware training.</p>
946 946
<p>Quantization configuration should be assigned preemptively
947 947
to individual submodules in <cite>.qconfig</cite> attribute.</p>
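
The change in this hunk appears to be only a reordering of the default white_list set in the generated signature, as rendered by the doc generator. As background for the documented API, prepare is normally used in the eager-mode post-training static quantization flow: assign a .qconfig, call prepare to insert observers, run calibration data through the model, then call convert. A minimal sketch follows; the toy model, backend choice, and calibration loop are illustrative assumptions, not part of this commit.

import torch
import torch.nn as nn
import torch.quantization as tq

# Toy float model with quant/dequant stubs (illustrative placeholder only).
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where tensors enter the quantized region
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # marks where tensors return to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = ToyModel().eval()

# Quantization configuration is assigned to the (sub)modules up front
# via the .qconfig attribute, as the docstring above states.
model.qconfig = tq.get_default_qconfig("fbgemm")

# prepare() returns a copy of the model with observers inserted
# (inplace=False by default).
prepared = tq.prepare(model, inplace=False)

# Calibrate with representative inputs; random tensors stand in here.
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 32, 32))

# Convert the calibrated model to its quantized counterpart.
quantized = tq.convert(prepared, inplace=False)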
