Commit 494f2b1

Update 2020-4-17-pytorch-1-dot-5-released-with-new-and-updated-apis.md
1 parent fa00377 commit 494f2b1

1 file changed: +3 -3 lines changed

_posts/2020-4-17-pytorch-1-dot-5-released-with-new-and-updated-apis.md

Lines changed: 3 additions & 3 deletions
@@ -70,13 +70,13 @@ You can try it out in the tutorial [here](https://pytorch.org/tutorials/recipes/

The Distributed [RPC framework](https://pytorch.org/docs/stable/rpc.html) was launched as experimental in the 1.4 release and the proposal is to mark Distributed RPC framework as stable and no longer experimental. This work involves a lot of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as adding a couple of new features, including profiling support, using TorchScript functions in RPC, and several enhancements for ease of use. Below is an overview of the various APIs within the framework:

-#### RPC API
+### RPC API
The RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.
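For orientation, here is a minimal sketch of the pattern this describes, assuming a two-process setup with workers named `worker0` and `worker1` and env-variable rendezvous (those names, the toy tensors, and the launch details are illustrative, not taken from the release notes):

```python
# Minimal RPC sketch: run this script in two processes with RANK=0 and RANK=1
# (MASTER_ADDR/MASTER_PORT must be set for the default env:// rendezvous).
import os
import torch
import torch.distributed.rpc as rpc

def remote_add(x, y):
    # Runs on whichever worker the caller targets.
    return x + y

def run(rank, world_size):
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)

    if rank == 0:
        # Synchronously execute a function on worker1 and get the result back.
        result = rpc.rpc_sync("worker1", remote_add,
                              args=(torch.ones(2), torch.ones(2)))
        print(result)  # tensor([2., 2.])

        # Create a value remotely and keep only a reference (RRef) to it.
        rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
        print(rref.to_here())  # tensor([4., 4.])

    # Block until outstanding RPCs finish, then tear down the agent.
    rpc.shutdown()

if __name__ == "__main__":
    run(int(os.environ["RANK"]), int(os.environ.get("WORLD_SIZE", "2")))
```

`rpc_sync` blocks for the return value, while `rpc.remote` returns an `RRef` immediately and the value is only fetched when `to_here()` is called.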

-#### Distributed Autograd
+### Distributed Autograd
Distributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backward pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd), and users must run their model’s forward pass under a `with dist_autograd.context()` manager to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see https://pytorch.org/docs/stable/notes/distributed_autograd.html#smart-mode-algorithm for the difference between FAST and SMART modes).
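A minimal sketch of that context-manager pattern, assuming the RPC group from the sketch above is already initialized (the toy tensors and the `torch.add` call are illustrative):

```python
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc

# Assumes rpc.init_rpc("worker0", rank=0, world_size=2) has already run
# and that a peer named "worker1" exists.
with dist_autograd.context() as context_id:
    t1 = torch.rand((3, 3), requires_grad=True)
    t2 = torch.rand((3, 3), requires_grad=True)

    # The remote call is recorded in the distributed autograd graph.
    loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()

    # Gradients accumulate in this context, not in t1.grad / t2.grad.
    dist_autograd.backward(context_id, [loss])
    grads = dist_autograd.get_gradients(context_id)  # {tensor: grad, ...}
```

Note that the gradients come back from `get_gradients(context_id)` as a dictionary keyed by tensor, rather than appearing in each tensor's `.grad` field.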

-#### Distributed Optimizer
+### Distributed Optimizer
The distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an `RRef`, as this is required input to the distributed optimizer. The user must also specify the distributed autograd `context_id` so that the optimizer knows in which context to look for gradients.
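A minimal sketch of that flow, reusing the same assumed two-worker setup (the SGD choice, learning rate, and toy parameters are illustrative):

```python
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

# Assumes an RPC group with "worker0"/"worker1" is already initialized.
t1 = torch.rand((3, 3), requires_grad=True)
t2 = torch.rand((3, 3), requires_grad=True)

# Wrap every parameter that requires gradients in an RRef, as described above.
param_rrefs = [rpc.RRef(t1), rpc.RRef(t2)]

# One local optimizer is created on each worker that owns some of the RRefs.
dist_optim = DistributedOptimizer(optim.SGD, param_rrefs, lr=0.05)

with dist_autograd.context() as context_id:
    loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()
    dist_autograd.backward(context_id, [loss])

    # step() takes the context_id so each worker-local optimizer knows which
    # distributed autograd context holds the accumulated gradients.
    dist_optim.step(context_id)
```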

Learn more about distributed RPC framework APIs [here](https://pytorch.org/docs/stable/rpc.html).
