Just pass an additional <code class="docutils literal"><span class="pre">async=True</span></code> argument to a <a class="reference internal" href="../tensors.html#torch.Tensor.cuda" title="torch.Tensor.cuda"><code class="xref py py-meth docutils literal"><span class="pre">cuda()</span></code></a>
call. This can be used to overlap data transfers with computation.</p>
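A minimal sketch of this overlap, assuming PyTorch is installed: the tensor is placed in pinned (page-locked) host memory so the host-to-device copy can run asynchronously. Note that this document's <code class="docutils literal"><span class="pre">async=True</span></code> is the older spelling of the argument; since <code class="docutils literal"><span class="pre">async</span></code> became a Python keyword, current PyTorch releases call it <code class="docutils literal"><span class="pre">non_blocking=True</span></code>, which the sketch uses. A CPU fallback is included so it runs without a GPU.

```python
import torch

x = torch.randn(64, 64)
if torch.cuda.is_available():
    x = x.pin_memory()             # page-locked host memory enables async copies
    y = x.cuda(non_blocking=True)  # asynchronous host-to-device transfer
    z = y.mm(y)                    # kernel is queued; it may overlap the copy
    torch.cuda.synchronize()       # wait for the copy and the compute to finish
else:
    z = x.mm(x)                    # CPU fallback so the sketch still runs

print(z.shape)
```

The queued matrix multiply illustrates the point of the asynchronous copy: subsequent GPU work can be enqueued without blocking the host.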
<p>You can make the <a class="reference internal" href="../data.html#torch.utils.data.DataLoader" title="torch.utils.data.DataLoader"><code class="xref py py-class docutils literal"><span class="pre">DataLoader</span></code></a> return batches placed in
pinned memory by passing <code class="docutils literal"><span class="pre">pin_memory=True</span></code> to its constructor.</p>
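A short sketch of that constructor argument, assuming PyTorch is installed: pinning only takes effect when CUDA is available, so the flag is guarded here to keep the example runnable on CPU-only machines.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 100 samples of 3 features each, with binary labels.
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# pin_memory=True makes the DataLoader place batches in page-locked
# host memory; guard it so the sketch also runs without CUDA.
loader = DataLoader(dataset, batch_size=25,
                    pin_memory=torch.cuda.is_available())

for inputs, labels in loader:
    # With a GPU, inputs.cuda(non_blocking=True) could now overlap
    # the transfer with computation, as described above.
    pass

print(inputs.shape)
```

Combined with the asynchronous transfer above, this lets each batch's copy to the GPU overlap with work on the previous batch.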
<span id="cuda-nn-dataparallel-instead"></span><h3>Use nn.DataParallel instead of multiprocessing<a class="headerlink" href="#use-nn-dataparallel-instead-of-multiprocessing" title="Permalink to this headline">¶</a></h3>