
Commit 9656735

Generate Python docs from pytorch/pytorch@145a20b

1 parent: ab255ad

1,701 files changed (+1,900 -1,864 lines)


docs/master/__config__.html

Lines changed: 1 addition & 1 deletion

@@ -193,7 +193,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/index.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/__config__.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/_jit_internal.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/_lobpcg.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/_lowrank.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/_tensor.html

Lines changed: 13 additions & 3 deletions

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

@@ -526,6 +526,16 @@ Source code for torch._tensor
                     str(self.device),
                     self.requires_grad)
             return (torch._utils._rebuild_mlc_tensor, arg_mlc)
+        if self.device.type == 'meta':
+            # NB: This implementation BREAKS storage sharing.  Current
+            # hypothesis is that no one cares for meta tensors.
+            arg_meta = (
+                self.dtype,
+                tuple(self.size()),
+                self.stride(),
+                self.requires_grad,
+            )
+            return (torch._utils._rebuild_meta_tensor_no_storage, arg_meta)
         if self.is_quantized:
             # quantizer_params can be different type based on torch attribute
             quantizer_params: Union[Tuple[torch.qscheme, float, int], Tuple[Any, Tensor, Tensor, int]]

@@ -595,7 +605,7 @@ Source code for torch._tensor
         # All strings are unicode in Python 3.
         return torch._tensor_str._str(self)

-<div class="viewcode-block" id="Tensor.backward"><a class="viewcode-back" href="../../generated/torch.Tensor.backward.html#torch.Tensor.backward">[docs]</a>    def backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):
+    def backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):
         r"""Computes the gradient of current tensor w.r.t. graph leaves.

         The graph is differentiated using the chain rule. If the tensor is

@@ -651,7 +661,7 @@ Source code for torch._tensor
                                retain_graph=retain_graph,
                                create_graph=create_graph,
                                inputs=inputs)
-        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)</div>
+        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)

     def register_hook(self, hook):
         r"""Registers a backward hook.

docs/master/_modules/torch/_tensor_str.html

Lines changed: 1 addition & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

docs/master/_modules/torch/_utils.html

Lines changed: 5 additions & 1 deletion

@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git7fce0bb ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.10.0a0+git145a20b ) &#x25BC</a>
 </div>

@@ -575,6 +575,10 @@ Source code for torch._utils
     return tensor


+def _rebuild_meta_tensor_no_storage(dtype, size, stride, requires_grad):
+    return torch.empty_strided(size, stride, dtype=dtype, device='meta', requires_grad=requires_grad)
+
+
 def _rebuild_qtensor(storage, storage_offset, size, stride, quantizer_params, requires_grad, backward_hooks):
     qscheme = quantizer_params[0]
     if qscheme == torch.per_tensor_affine:
