@@ -90,7 +90,7 @@ The parameter server just initializes the RPC framework and waits for RPCs from
 the trainers and master.


-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_worker
    :end-before: END run_worker
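For readers skimming the diff without the included file open: the ``run_worker`` block referenced by this directive essentially just brings up the RPC framework on every process and lets the parameter server sit idle until shutdown. A minimal sketch under assumed settings (the rank layout, ``MASTER_ADDR``/``MASTER_PORT`` values, and worker names below are illustrative, not taken from the diff):

.. code:: python

    import os
    import torch.distributed.rpc as rpc

    PS_RANK, WORLD_SIZE = 3, 4  # assumed layout: 2 trainers, 1 master, 1 parameter server

    def run_worker(rank, world_size):
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "29500"
        if rank == PS_RANK:
            # The parameter server does no work of its own: it only joins the
            # RPC group so trainers can create RemoteModules on it and call
            # into them over RPC.
            rpc.init_rpc("ps", rank=rank, world_size=world_size)
        else:
            rpc.init_rpc(f"trainer{rank}", rank=rank, world_size=world_size)
            # ... master/trainer logic would go here ...
        # Blocks until every worker in the group has finished its RPC work.
        rpc.shutdown()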
@@ -107,7 +107,7 @@ embedding lookup on the parameter server using RemoteModule's ``forward``
 and passes its output onto the FC layer.


-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN hybrid_model
    :end-before: END hybrid_model
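A rough idea of the ``hybrid_model`` block this hunk points at: a ``RemoteModule`` owned by the parameter server performs the embedding lookup, and its output feeds a local FC layer wrapped in DDP. The layer sizes, device handling, and the ``"ps/cpu"`` placement below are assumptions for illustration only:

.. code:: python

    import torch.nn as nn
    from torch.distributed.nn import RemoteModule
    from torch.nn.parallel import DistributedDataParallel as DDP

    NUM_EMBEDDINGS, EMBEDDING_DIM = 100, 16  # illustrative sizes

    class HybridModel(nn.Module):
        """Embedding lookup on the parameter server, FC layer local to the trainer."""

        def __init__(self, remote_emb_module, device):
            super().__init__()
            self.remote_emb_module = remote_emb_module
            # Only the local FC layer is wrapped in DDP; the remote embedding
            # table is driven purely over RPC.
            self.fc = DDP(nn.Linear(EMBEDDING_DIM, 8).cuda(device), device_ids=[device])
            self.device = device

        def forward(self, indices, offsets):
            # RemoteModule.forward runs the EmbeddingBag lookup on the
            # parameter server and ships the result back to this trainer.
            emb_lookup = self.remote_emb_module.forward(indices, offsets)
            return self.fc(emb_lookup.cuda(self.device))

    # Created on a trainer, but the module itself lives on the "ps" worker's CPU.
    remote_emb_module = RemoteModule(
        "ps/cpu",
        nn.EmbeddingBag,
        args=(NUM_EMBEDDINGS, EMBEDDING_DIM),
        kwargs={"mode": "sum"},
    )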
@@ -134,7 +134,7 @@ which is not supported by ``RemoteModule``.
 Finally, we create our DistributedOptimizer using all the RRefs and define a
 CrossEntropyLoss function.

-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN setup_trainer
    :end-before: END setup_trainer
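To make this hunk easier to follow out of context, the referenced ``setup_trainer`` block amounts to collecting parameter RRefs from both the remote embedding module and the local DDP-wrapped layer, then handing them all to a single ``DistributedOptimizer``. A minimal sketch reusing names from the previous sketch; the learning rate and ``device`` are assumptions:

.. code:: python

    import torch
    import torch.optim as optim
    from torch.distributed.optim import DistributedOptimizer
    from torch.distributed.rpc import RRef

    # `remote_emb_module` and `HybridModel` come from the previous sketch;
    # `device` is this trainer's GPU index.
    model = HybridModel(remote_emb_module, device)

    # RRefs to the embedding parameters held on the parameter server...
    model_parameter_rrefs = model.remote_emb_module.remote_parameters()
    # ...plus the local DDP-wrapped FC parameters, wrapped in RRefs by hand.
    for param in model.fc.parameters():
        model_parameter_rrefs.append(RRef(param))

    # One optimizer instance drives updates on every owner of these parameters.
    opt = DistributedOptimizer(optim.SGD, model_parameter_rrefs, lr=0.05)
    criterion = torch.nn.CrossEntropyLoss()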
@@ -151,11 +151,10 @@ batch:
 4) Use Distributed Autograd to execute a distributed backward pass using the loss.
 5) Finally, run a Distributed Optimizer step to optimize all the parameters.

-.. literalinclude:: ../advanced_source/rpc_ddp/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
    :language: py
    :start-after: BEGIN run_trainer
    :end-before: END run_trainer
 .. code:: python

 Source code for the entire example can be found `here <https://github.com/pytorch/examples/tree/master/distributed/rpc/ddp_rpc>`__.
-
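The five-step loop listed in this hunk maps onto a distributed autograd context plus a DistributedOptimizer step. A minimal sketch of one iteration, assuming the objects from the previous sketch and a hypothetical ``get_next_batch`` helper that is not part of the tutorial:

.. code:: python

    import torch.distributed.autograd as dist_autograd

    # `model`, `criterion`, and `opt` come from the previous sketch.
    for indices, offsets, target in get_next_batch():
        with dist_autograd.context() as context_id:
            output = model(indices, offsets)  # remote lookup + local FC
            loss = criterion(output, target)
            # Distributed backward pass: gradients are recorded inside this
            # context on every worker that owns part of the model.
            dist_autograd.backward(context_id, [loss])
            # Distributed optimizer step over all parameter RRefs; gradients
            # are scoped to context_id, so no zero_grad() call is needed.
            opt.step(context_id)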