# Invoke the code below to enable beta feature torch.compile
model = torch.compile(model, backend="ipex")
...
optimizer.zero_grad()
output = model(data)
...
```
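For orientation, here is a minimal, self-contained sketch of how such a training step could be wired up end to end. The linear model, random data, and hyperparameters are placeholders chosen for illustration; they are not taken from the official example.

```python
import torch
import intel_extension_for_pytorch as ipex

# Placeholder model, loss, and optimizer; substitute your own.
model = torch.nn.Linear(128, 10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()

# Apply IPEX optimizations, then enable the torch.compile beta feature
# with the "ipex" backend.
model, optimizer = ipex.optimize(model, optimizer=optimizer)
model = torch.compile(model, backend="ipex")

for _ in range(10):
    # Random tensors stand in for a real dataset / DataLoader.
    data = torch.randn(32, 128)
    target = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```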
Below you can find complete code examples demonstrating how to use the extension for training with different data types:
##### Float32

**Note:** You need to install the `torchvision` Python package to run the following example.

[//]: #(marker_train_single_fp32_complete)
[//]: #(marker_train_single_fp32_complete)
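The marker lines above are where the documentation build injects the full FP32 example from the repository. As a rough sketch of the overall pattern only, it might look like the following; the `torchvision` ResNet-18 and the random tensors standing in for a real dataset and `DataLoader` are illustrative assumptions, not the official example's choices.

```python
import torch
import torchvision
import intel_extension_for_pytorch as ipex

# Placeholder model and training objects for illustration.
model = torchvision.models.resnet18()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
model.train()

# channels_last memory format plus ipex.optimize is the usual FP32 setup
# shown in the IPEX training examples.
model = model.to(memory_format=torch.channels_last)
model, optimizer = ipex.optimize(model, optimizer=optimizer)

for step in range(5):
    # Random tensors stand in for a real dataset / DataLoader.
    data = torch.randn(8, 3, 224, 224).to(memory_format=torch.channels_last)
    target = torch.randint(0, 1000, (8,))
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```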
##### BFloat16

**Note:** You need to install the `torchvision` Python package to run the following example.

[//]: #(marker_train_single_bf16_complete)
[//]: #(marker_train_single_bf16_complete)
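Likewise, a rough BF16 sketch (reusing the same placeholder model and random data as above, which are illustrative assumptions) would differ mainly in passing `dtype=torch.bfloat16` to `ipex.optimize` and wrapping the forward pass in CPU autocast:

```python
import torch
import torchvision
import intel_extension_for_pytorch as ipex

model = torchvision.models.resnet18()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
model.train()

# Request BF16 optimization from IPEX, then run the forward pass under
# CPU autocast so eligible ops execute in bfloat16.
model = model.to(memory_format=torch.channels_last)
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for step in range(5):
    # Random tensors stand in for a real dataset / DataLoader.
    data = torch.randn(8, 3, 224, 224).to(memory_format=torch.channels_last)
    target = torch.randint(0, 1000, (8,))
    optimizer.zero_grad()
    with torch.cpu.amp.autocast():
        output = model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```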
#### Distributed Training
Distributed training with PyTorch DDP is accelerated by oneAPI Collective Communications Library Bindings for PyTorch\* (oneCCL Bindings for PyTorch\*). The extension supports FP32 and BF16 data types. More detailed information and examples are available at the [GitHub repo](https://github.com/intel/torch-ccl).
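As a rough, non-authoritative sketch of the pattern (not the full example from the torch-ccl repository): importing the bindings registers the `ccl` backend for `torch.distributed`. The environment-variable handling, placeholder model, and launcher assumptions below are illustrative only.

```python
import os
import torch
import torch.distributed as dist
import intel_extension_for_pytorch as ipex
# Importing the bindings registers the "ccl" backend; older releases
# used the module name torch_ccl instead.
import oneccl_bindings_for_pytorch  # noqa: F401

# Rank and world size are normally set by the launcher (e.g. mpirun);
# the variable names below are assumptions for this sketch.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
rank = int(os.environ.get("PMI_RANK", os.environ.get("RANK", 0)))
world_size = int(os.environ.get("PMI_SIZE", os.environ.get("WORLD_SIZE", 1)))

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)

# Placeholder model; apply IPEX optimizations before wrapping in DDP.
model = torch.nn.Linear(128, 10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer)
model = torch.nn.parallel.DistributedDataParallel(model)

for _ in range(10):
    data = torch.randn(32, 128)
    target = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()

dist.destroy_process_group()
```

In practice the script is started by an MPI or similar launcher that sets the rank and world size for each process; see the torch-ccl repository linked above for the supported launch flows.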