
Commit cef1c37

Generate Python docs from pytorch/pytorch@bcee215
1 parent d611c4a

File tree

1,954 files changed: +2311 additions, -2257 deletions


docs/master/_images/RReLU.png

209 Bytes

docs/master/_modules/index.html

Lines changed: 1 addition & 1 deletion

@@ -217,7 +217,7 @@
       <div class="pytorch-left-menu-search">

         <div class="version">
-          <a href='https://pytorch.org/docs/versions.html'>master (1.12.0a0+git48ea440 ) &#x25BC</a>
+          <a href='https://pytorch.org/docs/versions.html'>master (1.12.0a0+gitbcee215 ) &#x25BC</a>
         </div>

docs/master/_modules/torch.html

Lines changed: 1 addition & 1 deletion
(same version-string hunk as docs/master/_modules/index.html above)

docs/master/_modules/torch/__config__.html

Lines changed: 1 addition & 1 deletion
(same version-string hunk as docs/master/_modules/index.html above)

docs/master/_modules/torch/_jit_internal.html

Lines changed: 1 addition & 1 deletion
(same version-string hunk as docs/master/_modules/index.html above)

docs/master/_modules/torch/_lobpcg.html

Lines changed: 1 addition & 1 deletion
(same version-string hunk as docs/master/_modules/index.html above)

docs/master/_modules/torch/_lowrank.html

Lines changed: 1 addition & 1 deletion
(same version-string hunk as docs/master/_modules/index.html above)
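All six files above receive the same one-line edit: the version switcher label in the generated docs is regenerated with the new pytorch/pytorch commit hash. For context, that label matches what a source build of PyTorch reports at runtime; a minimal check (output values are examples from this build, not guaranteed):

    import torch

    # The short hash appears in the version string of source builds,
    # e.g. '1.12.0a0+gitbcee215'.
    print(torch.__version__)

    # The full commit SHA the build was generated from.
    print(torch.version.git_version)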

docs/master/_modules/torch/_tensor.html

Lines changed: 10 additions & 9 deletions
@@ -217,7 +217,7 @@ (the same version-string hunk as the files above: git48ea440 → gitbcee215)

@@ -519,7 +519,7 @@ Source code for torch._tensor
         # doesn't work because of
         # https://github.com/pytorch/pytorch/issues/47442
         # Update the test in test_serialization if you remove 'meta' from here
-        if self.is_sparse or self.device.type in ['lazy', 'xla', 'mlc', 'ort', 'meta', 'hpu'] or \
+        if self.is_sparse or self.device.type in ['lazy', 'xla', 'mps', 'ort', 'meta', 'hpu'] or \
                 (type(self) is not Tensor and self.data_ptr() == 0):
             new_tensor = self.clone()
             if type(new_tensor) is not type(self):
@@ -635,7 +635,7 @@
         # See Note [Don't serialize hooks]
         torch.utils.hooks.warn_if_has_hooks(self)
         backward_hooks: Dict[Any, Any] = OrderedDict()
-        # Note: Numpy array is chosen to be the rebuild component for XLA, ORT, MLC Tensors.
+        # Note: Numpy array is chosen to be the rebuild component for XLA, ORT Tensors.
         # We considered a few options:
         # 1. CPU tensor can't be used here.
         #    Otherwise in torch.load CPU storage is reconstructed with randomly
@@ -645,7 +645,7 @@
         # 2. Python list is not a good fit due to performance reason.
         #    `tolist()` converts every single element in the tensor into python objects
         #    and serialize them one by one.
-        if self.device.type in ['xla', 'ort', 'mlc', 'hpu']:
+        if self.device.type in ['xla', 'ort', 'mps', 'hpu']:
            return (torch._utils._rebuild_device_tensor_from_numpy, (self.cpu().numpy(),
                                                                     self.dtype,
                                                                     str(self.device),
@@ -756,11 +756,12 @@
         # See Note [Don't serialize hooks]
         self.requires_grad, _, self._backward_hooks = state

-    def __repr__(self):
+    def __repr__(self, *, tensor_contents=None):
         if has_torch_function_unary(self):
-            return handle_torch_function(Tensor.__repr__, (self,), self)
+            return handle_torch_function(Tensor.__repr__, (self,), self,
+                                         tensor_contents=tensor_contents)
         # All strings are unicode in Python 3.
-        return torch._tensor_str._str(self)
+        return torch._tensor_str._str(self, tensor_contents=tensor_contents)

    [docs] def backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):
        r"""Computes the gradient of current tensor w.r.t. graph leaves.
This hunk only removes the viewcode wrapper around Tensor.norm, so the generated page no longer shows a [docs] source link for it; the underlying Python is unchanged:

@@ -949,11 +950,11 @@
         else:
             return self.flip(0)

-    <div class="viewcode-block" id="Tensor.norm">[docs] def norm(self, p="fro", dim=None, keepdim=False, dtype=None):
+    def norm(self, p="fro", dim=None, keepdim=False, dtype=None):
         r"""See :func:`torch.norm`"""
         if has_torch_function_unary(self):
             return handle_torch_function(Tensor.norm, (self,), self, p=p, dim=dim, keepdim=keepdim, dtype=dtype)
-        return torch.norm(self, p, dim, keepdim, dtype=dtype)</div>
+        return torch.norm(self, p, dim, keepdim, dtype=dtype)

    def lu(self, pivot=True, get_infos=False):
        r"""See :func:`torch.lu`"""
