Commit fbb5ab9

auto-generating sphinx docs
1 parent 515345c commit fbb5ab9

File tree

2 files changed: +2 -2 lines changed


docs/_sources/notes/cuda.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ Just pass an additional ``async=True`` argument to a :meth:`~torch.Tensor.cuda`
 call. This can be used to overlap data transfers with computation.
 
 You can make the :class:`~torch.utils.data.DataLoader` return batches placed in
-pinned memory by passing ``pinned=True`` to its constructor.
+pinned memory by passing ``pin_memory=True`` to its constructor.
 
 .. _cuda-nn-dataparallel-instead:
 
docs/notes/cuda.html

Lines changed: 1 addition & 1 deletion
@@ -536,7 +536,7 @@ <h3>Use pinned memory buffers<a class="headerlink" href="#use-pinned-memory-buff
 Just pass an additional <code class="docutils literal"><span class="pre">async=True</span></code> argument to a <a class="reference internal" href="../tensors.html#torch.Tensor.cuda" title="torch.Tensor.cuda"><code class="xref py py-meth docutils literal"><span class="pre">cuda()</span></code></a>
 call. This can be used to overlap data transfers with computation.</p>
 <p>You can make the <a class="reference internal" href="../data.html#torch.utils.data.DataLoader" title="torch.utils.data.DataLoader"><code class="xref py py-class docutils literal"><span class="pre">DataLoader</span></code></a> return batches placed in
-pinned memory by passing <code class="docutils literal"><span class="pre">pinned=True</span></code> to its constructor.</p>
+pinned memory by passing <code class="docutils literal"><span class="pre">pin_memory=True</span></code> to its constructor.</p>
 </div>
 <div class="section" id="use-nn-dataparallel-instead-of-multiprocessing">
 <span id="cuda-nn-dataparallel-instead"></span><h3>Use nn.DataParallel instead of multiprocessing<a class="headerlink" href="#use-nn-dataparallel-instead-of-multiprocessing" title="Permalink to this headline"></a></h3>
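For context, the corrected docs describe the pinned-memory workflow: construct the DataLoader with ``pin_memory=True`` so batches are collated into page-locked host memory, then copy them to the GPU asynchronously so the transfer can overlap with computation. The sketch below is not part of this commit; it illustrates that usage. Note the docs at this revision spell the copy flag ``async=True``, while later PyTorch releases renamed it to ``non_blocking=True`` (used here so the sketch runs on current versions). The toy dataset and batch size are arbitrary.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset; any map-style dataset works the same way.
dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))

# pin_memory=True makes the DataLoader collate batches into page-locked
# (pinned) host memory, which allows asynchronous host-to-GPU copies.
loader = DataLoader(dataset, batch_size=64, pin_memory=True)

for inputs, targets in loader:
    # The docs at this revision use ``.cuda(async=True)``; ``non_blocking=True``
    # is the later name for the same flag. With pinned source memory, the copy
    # is issued asynchronously and can overlap with work already queued on the GPU.
    inputs = inputs.cuda(non_blocking=True)
    targets = targets.cuda(non_blocking=True)
    # ... forward/backward pass would go here ...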
