diff --git a/_posts/2024-07-24-pytorch2-4.md b/_posts/2024-07-24-pytorch2-4.md
index 04a95f5a7c6b..ac5567b2a61f 100644
--- a/_posts/2024-07-24-pytorch2-4.md
+++ b/_posts/2024-07-24-pytorch2-4.md
@@ -105,6 +105,13 @@ Pipeline Parallelism is one of the primitive parallelism techniques for deep lea
 
 For more information on this please refer to our [documentation](https://pytorch.org/docs/main/distributed.pipelining.html) and [tutorial](https://pytorch.org/tutorials/intermediate/pipelining_tutorial.html).
 
+### [PROTOTYPE] Intel GPU is available through source build
+
+Intel GPU support in PyTorch on Linux systems offers fundamental functionality on the Intel® Data Center GPU Max Series: eager mode and torch.compile.
+
+For eager mode, the commonly used ATen operators are implemented using the SYCL programming language, and the most performance-critical graphs and operators are highly optimized with the oneAPI Deep Neural Network Library (oneDNN). For torch.compile mode, the Intel GPU backend is integrated into Inductor on top of Triton.
+
+For more information on the Intel GPU source build, please refer to our [blog post](https://www.intel.com/content/www/us/en/developer/articles/technical/pytorch-2-4-supports-gpus-accelerate-ai-workloads.html) and [documentation](https://pytorch.org/docs/main/notes/get_start_xpu.html).
+
 ## Performance Improvements