
Commit 7a432d8

Revert "Update rpc_ddp_tutorial.rst (pytorch#1516)" (pytorch#1518)
This reverts commit 7e19410.
1 parent 93a8fdf commit 7a432d8

2 files changed: 4 additions (+), 5 deletions (−)


advanced_source/rpc_ddp_tutorial.rst

Lines changed: 4 additions & 5 deletions
@@ -90,7 +90,7 @@ The parameter server just initializes the RPC framework and waits for RPCs from
 the trainers and master.
 
 
-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_worker
    :end-before: END run_worker
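
For context on the run_worker block this hunk points at: the parameter server only joins the RPC group and then blocks until shutdown. A minimal sketch, assuming the standard torch.distributed.rpc API; the worker name "ps", the rendezvous address, and the function name are illustrative, not the tutorial's exact code:

import os
import torch.distributed.rpc as rpc

def run_ps(rank, world_size):
    # Rendezvous address for the RPC group; values here are placeholders.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    # The parameter server joins under a well-known name and then simply
    # waits; trainers create modules on it via RPC.
    rpc.init_rpc("ps", rank=rank, world_size=world_size)
    # Blocks until every RPC participant has called shutdown.
    rpc.shutdown()
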
@@ -107,7 +107,7 @@ embedding lookup on the parameter server using RemoteModule's ``forward``
 and passes its output onto the FC layer.
 
 
-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN hybrid_model
    :end-before: END hybrid_model
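
As context for the hybrid_model snippet referenced above, a sketch of the idea, assuming torch.distributed.nn.RemoteModule and DistributedDataParallel; the layer sizes and device handling are placeholder assumptions:

import torch.nn as nn
from torch.distributed.nn import RemoteModule
from torch.nn.parallel import DistributedDataParallel as DDP

class HybridModel(nn.Module):
    """Embedding table held on the parameter server via RemoteModule;
    the FC layer is local and replicated across trainers with DDP."""

    def __init__(self, remote_emb_module: RemoteModule, device):
        super().__init__()
        self.remote_emb_module = remote_emb_module
        self.fc = DDP(nn.Linear(16, 8).cuda(device), device_ids=[device])
        self.device = device

    def forward(self, indices, offsets):
        # forward() on a RemoteModule executes on the remote worker and
        # returns the result to the caller.
        emb_lookup = self.remote_emb_module.forward(indices, offsets)
        return self.fc(emb_lookup.cuda(self.device))

A trainer would first construct the remote module on the parameter server, with something along the lines of RemoteModule("ps", nn.EmbeddingBag, args=(num_embeddings, embedding_dim), kwargs={"mode": "sum"}), before wrapping it here.
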
@@ -134,7 +134,7 @@ which is not supported by ``RemoteModule``.
 Finally, we create our DistributedOptimizer using all the RRefs and define a
 CrossEntropyLoss function.
 
-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN setup_trainer
    :end-before: END setup_trainer
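
For context on the setup_trainer snippet: the optimizer needs RRefs to every parameter it updates, both the remote embedding parameters and the local DDP ones. A sketch reusing the names from the HybridModel sketch above; the learning rate is an assumption:

import torch
import torch.optim as optim
from torch.distributed.optim import DistributedOptimizer
from torch.distributed.rpc import RRef

# RRefs for the remote embedding parameters (already remote), plus an
# RRef wrapping each local DDP parameter.
model_parameter_rrefs = remote_emb_module.remote_parameters()
model_parameter_rrefs += [RRef(p) for p in model.fc.parameters()]

opt = DistributedOptimizer(optim.SGD, model_parameter_rrefs, lr=0.05)
criterion = torch.nn.CrossEntropyLoss()
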
@@ -151,11 +151,10 @@ batch:
 4) Use Distributed Autograd to execute a distributed backward pass using the loss.
 5) Finally, run a Distributed Optimizer step to optimize all the parameters.
 
-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_trainer
    :end-before: END run_trainer
 .. code:: python
 
 Source code for the entire example can be found `here <https://github.com/pytorch/examples/tree/master/distributed/rpc/ddp_rpc>`__.
-
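
And for the run_trainer snippet, a sketch of steps 4 and 5 from the list above, assuming the torch.distributed.autograd API and the names from the previous sketches; the function name is hypothetical:

import torch.distributed.autograd as dist_autograd

def train_one_batch(model, opt, criterion, indices, offsets, target):
    # Each batch runs inside a distributed autograd context so the
    # backward pass can span the trainer and the parameter server.
    with dist_autograd.context() as context_id:
        output = model(indices, offsets)
        loss = criterion(output, target)
        # Step 4: distributed backward pass using the loss.
        dist_autograd.backward(context_id, [loss])
        # Step 5: distributed optimizer step over all parameter RRefs.
        opt.step(context_id)
        # No zero_grad(): gradients live in the context, which is
        # discarded when the with-block exits.
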
