Commit 9469f8c

Authored by jingxu10, ZhaoqiongZ, NeoZhangJianyu, xiguiw, and YuningQiu
Move oneAPI IPEX samples back to IPEX repo (#2943)
* Restructure example directories
* Add Jupyter notebook of IntelPytorch Inference AMX BF16 and INT8
* Move 2 examples from oneSample (#2787); fix license format
* Add Jupyter notebook README
* Move oneAPI IPEX inference sample optimize (#2798)
* Clear output of notebook
* Update example; add example 'complete flag'
* Update README; remove AI Kit and refer to the IPEX installation guide
* Remove installation part in Jupyter notebook and add kernel select
* Use a separate conda env for each sample
* Update CPU example Jupyter notebook README
* Remove Jupyter install steps, refer to README, and fix table format
* Create IPEX_Getting_Started.ipynb
* Create IntelPytorch_Quantization.ipynb
* Remove training examples

Co-authored-by: Zheng, Zhaoqiong <zhaoqiong.zheng@intel.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
Co-authored-by: xiguiw <111278656+xiguiw@users.noreply.github.com>
Co-authored-by: Wang, Xigui <xigui.wang@intel.com>
Co-authored-by: yqiu-intel <113460727+YuningQiu@users.noreply.github.com>
1 parent 61ff58f commit 9469f8c

33 files changed: +3025 -199 lines

docs/tutorials/examples.md

Lines changed: 0 additions & 45 deletions
````diff
@@ -25,51 +25,6 @@ Before running these examples, please note the following:
 
 ### Training
 
-#### Single-instance Training
-
-To use Intel® Extension for PyTorch\* on training, you need to make the following changes in your code:
-
-1. Import `intel_extension_for_pytorch` as `ipex`.
-2. Invoke the `ipex.optimize` function to apply optimizations against the model and optimizer objects, as shown below:
-
-```python
-...
-import torch
-import intel_extension_for_pytorch as ipex
-...
-model = Model()
-criterion = ...
-optimizer = ...
-model.train()
-# For Float32
-model, optimizer = ipex.optimize(model, optimizer=optimizer)
-# For BFloat16
-model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
-# Invoke the code below to enable beta feature torch.compile
-model = torch.compile(model, backend="ipex")
-...
-optimizer.zero_grad()
-output = model(data)
-...
-```
-
-Below you can find complete code examples demonstrating how to use the extension on training for different data types:
-
-##### Float32
-
-**Note:** You need to install `torchvision` Python package to run the following example.
-
-[//]: # (marker_train_single_fp32_complete)
-
-##### BFloat16
-
-**Note:** You need to install `torchvision` Python package to run the following example.
-
-[//]: # (marker_train_single_bf16_complete)
-
 #### Distributed Training
 
 Distributed training with PyTorch DDP is accelerated by oneAPI Collective Communications Library Bindings for Pytorch\* (oneCCL Bindings for Pytorch\*). The extension supports FP32 and BF16 data types. More detailed information and examples are available at the [Github repo](https://github.com/intel/torch-ccl).
````
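For reference, the single-instance training pattern deleted above can be sketched as a self-contained script. This is a minimal illustration, not part of the commit: the tiny `torch.nn.Linear` model, the random data, and the `importlib` availability guard are assumptions added here so the sketch degrades gracefully when `torch` or `intel_extension_for_pytorch` is not installed.

```python
import importlib.util


def build_training_step():
    """Return a one-step training closure, or None if dependencies are missing.

    Mirrors the ipex.optimize usage shown in the removed docs: build the
    model and optimizer, switch to train mode, then let ipex.optimize
    rewrite both before the training loop runs.
    """
    if (importlib.util.find_spec("torch") is None
            or importlib.util.find_spec("intel_extension_for_pytorch") is None):
        return None

    import torch
    import intel_extension_for_pytorch as ipex

    # Hypothetical toy model/data purely for illustration.
    model = torch.nn.Linear(8, 2)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    model.train()

    # For Float32; pass dtype=torch.bfloat16 instead for BFloat16 training.
    model, optimizer = ipex.optimize(model, optimizer=optimizer)

    def step(data, target):
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
        return loss.item()

    return step


if __name__ == "__main__":
    step = build_training_step()
    if step is None:
        print("skipped: torch / intel_extension_for_pytorch not installed")
    else:
        import torch
        loss = step(torch.randn(4, 8), torch.tensor([0, 1, 0, 1]))
        print(f"one training step completed, loss = {loss:.4f}")
```

Note the ordering: `ipex.optimize` is called after `model.train()` and before the loop, so the returned (possibly rewritten) `model` and `optimizer` are the ones actually trained.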

examples/cpu/inference/python/jupyter-notebooks/.gitkeep

Whitespace-only changes.

0 commit comments
