
Update rpc_ddp_tutorial.rst #1516


Merged 4 commits on May 10, 2021
9 changes: 5 additions & 4 deletions advanced_source/rpc_ddp_tutorial.rst
@@ -90,7 +90,7 @@ The parameter server just initializes the RPC framework and waits for RPCs from
the trainers and master.


-.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp/main.py
:language: py
:start-after: BEGIN run_worker
:end-before: END run_worker
@@ -107,7 +107,7 @@ embedding lookup on the parameter server using RemoteModule's ``forward``
and passes its output onto the FC layer.


-.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp/main.py
:language: py
:start-after: BEGIN hybrid_model
:end-before: END hybrid_model
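The ``hybrid_model`` section referenced above combines a remote sparse part with a local dense part. A simplified sketch of the idea — constructor arguments and layer shapes are assumptions for illustration, and the tutorial's real model wraps the FC layer in ``DistributedDataParallel``:

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """The embedding lookup runs on the parameter server via the
    remote module's ``forward``; the fully connected layer runs
    locally on the trainer."""

    def __init__(self, remote_emb_module, fc_layer):
        super().__init__()
        self.remote_emb_module = remote_emb_module
        # In the tutorial, fc_layer is wrapped in DDP, e.g.
        # DDP(nn.Linear(16, 8).cuda(device), device_ids=[device]).
        self.fc = fc_layer

    def forward(self, indices, offsets):
        emb_lookup = self.remote_emb_module.forward(indices, offsets)
        return self.fc(emb_lookup)
```

Because the sketch only requires an object exposing ``forward``, it can be exercised locally without any RPC setup.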
@@ -134,7 +134,7 @@ which is not supported by ``RemoteModule``.
Finally, we create our DistributedOptimizer using all the RRefs and define a
CrossEntropyLoss function.

-.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp/main.py
:language: py
:start-after: BEGIN setup_trainer
:end-before: END setup_trainer
@@ -151,10 +151,11 @@ batch:
4) Use Distributed Autograd to execute a distributed backward pass using the loss.
5) Finally, run a Distributed Optimizer step to optimize all the parameters.

-.. literalinclude:: ../advanced_source/rpc_ddp_tutorial/main.py
+.. literalinclude:: ../advanced_source/rpc_ddp/main.py
:language: py
:start-after: BEGIN run_trainer
:end-before: END run_trainer
+.. code:: python

Source code for the entire example can be found `here <https://github.com/pytorch/examples/tree/master/distributed/rpc/ddp_rpc>`__.