diff --git a/_get_started/pytorch.md b/_get_started/pytorch.md
index ef5686ecf131..d68f5b8b510c 100644
--- a/_get_started/pytorch.md
+++ b/_get_started/pytorch.md
@@ -283,7 +283,7 @@ The minifier automatically reduces the issue you are seeing to a small snippet o
 
 If you are not seeing the speedups that you expect, then we have the **torch.\_dynamo.explain** tool that explains which parts of your code induced what we call “graph breaks”. Graph breaks generally hinder the compiler from speeding up the code, and reducing the number of graph breaks likely will speed up your code (up to some limit of diminishing returns).
 
-You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/master/dynamo/troubleshooting.html).
+You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/stable/dynamo/troubleshooting.html).
 
 ### Dynamic Shapes
 
@@ -363,9 +363,9 @@ We have built utilities for partitioning an FX graph into subgraphs that contain
 We are super excited about the direction that we’ve taken for PyTorch 2.0 and beyond. The road to the final 2.0 release is going to be rough, but come join us on this journey early-on. If you are interested in deep-diving further or contributing to the compiler, please continue reading below which includes more information on how to get started (e.g., tutorials, benchmarks, models, FAQs) and **Ask the Engineers: 2.0 Live Q&A Series** starting this month. Additional resources include:
 
-- Getting Started @ [https://pytorch.org/docs/master/dynamo/get-started.html](https://pytorch.org/docs/master/dynamo/get-started.html)
+- Getting Started @ [https://pytorch.org/docs/stable/dynamo/get-started.html](https://pytorch.org/docs/stable/dynamo/get-started.html)
 - Tutorials @ [https://pytorch.org/tutorials/](https://pytorch.org/tutorials/)
-- Documentation @ [https://pytorch.org/docs/master](https://pytorch.org/docs/master) and [http://pytorch.org/docs/master/dynamo](http://pytorch.org/docs/master/dynamo)
+- Documentation @ [https://pytorch.org/docs/stable](https://pytorch.org/docs/stable) and [http://pytorch.org/docs/stable/dynamo](http://pytorch.org/docs/stable/dynamo)
 - Developer Discussions @ [https://dev-discuss.pytorch.org](https://dev-discuss.pytorch.org)
 
@@ -496,7 +496,7 @@ In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goe
 3. Graph compilation, where the kernels call their corresponding low-level device-specific operations.
 
 9. **What new components does PT2.0 add to PT?**
-    - **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/master/dynamo/guards-overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
+    - **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/stable/dynamo/guards-overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
     - **AOTAutograd** to generate the backward graph corresponding to the forward graph captured by TorchDynamo ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-update-6-training-support-with-aotautograd/570)).
     - **PrimTorch** to decompose complicated PyTorch operations into simpler and more elementary ops ([read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-2/645)).
     - **\[Backend]** Backends integrate with TorchDynamo to compile the graph into IR that can run on accelerators. For example, **TorchInductor** compiles the graph to either **Triton** for GPU execution or **OpenMP** for CPU execution ([read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)).
 
@@ -511,10 +511,10 @@ DDP and FSDP in Compiled mode can run up to 15% faster than Eager-Mode in FP32
 
 The [PyTorch Developers forum](http://dev-discuss.pytorch.org/) is the best place to learn about 2.0 components directly from the developers who build them.
 
 13. **Help my code is running slower with 2.0’s Compiled Mode!**
-The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/master/dynamo/faq.html#why-am-i-not-seeing-speedups).
+The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/stable/dynamo/faq.html#why-am-i-not-seeing-speedups).
 
 14. **My previously-running code is crashing with 2.0’s Compiled Mode! How do I debug it?**
-Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/master/dynamo/faq.html#why-is-my-code-crashing](https://pytorch.org/docs/master/dynamo/faq.html#why-is-my-code-crashing).
+Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing](https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing).
 
 ## Ask the Engineers: 2.0 Live Q&A Series
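
The page being patched centers its troubleshooting advice on graph breaks and the **torch.\_dynamo.explain** tool. Below is a minimal sketch of how a graph break shows up and how to surface it; the toy function and tensor sizes are illustrative, and the exact return type of `explain` has varied between releases.

```python
import torch
import torch._dynamo as dynamo

def toy_example(x):
    y = x.sin()
    print("side effect")  # a plain print() in the middle of the function forces a graph break
    return y.cos()

# torch.compile is the PyTorch 2.0 entry point; TorchInductor is the default backend.
compiled_fn = torch.compile(toy_example)
out = compiled_fn(torch.randn(8))

# torch._dynamo.explain reports how many graphs were captured and the break reasons.
# Its return value has changed shape across releases, so it is printed as-is here.
print(dynamo.explain(toy_example, torch.randn(8)))
```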
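For the component list in the FAQ (TorchDynamo, AOTAutograd, PrimTorch, and the backends such as TorchInductor), here is a sketch of wrapping an `nn.Module` with `torch.compile` and inspecting the registered backends; the model, shapes, and backend choice are placeholders, not something prescribed by the page.

```python
import torch
import torch._dynamo
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel()

# TorchDynamo captures the FX graph, AOTAutograd produces the backward graph,
# and the backend (TorchInductor here, also the default) compiles the kernels.
compiled_model = torch.compile(model, backend="inductor")
out = compiled_model(torch.randn(4, 16))

# The TorchDynamo backends registered in this install can be listed directly.
print(torch._dynamo.list_backends())
```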