Commit 1afa474

Jessica Lin authored and zou3519 committed
Add Custom Build tutorial for iOS and Android (#325)
1 parent 94f6dad commit 1afa474

File tree

2 files changed

+88
-27
lines changed


_mobile/android.md

Lines changed: 45 additions & 14 deletions
@@ -34,10 +34,10 @@ example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("app/src/main/assets/model.pt")
```
If everything works well, we should have our model, `model.pt`, generated in the assets folder of the Android application.
It will be packaged inside the Android application as an asset and can be used on the device.

You can find more details about TorchScript in the [tutorials on pytorch.org](https://pytorch.org/docs/stable/jit.html).

#### 2. Cloning from GitHub
```
@@ -67,14 +67,14 @@ dependencies {
}
```
Where `org.pytorch:pytorch_android` is the main dependency with the PyTorch Android API, including the libtorch native library for all 4 Android ABIs (armeabi-v7a, arm64-v8a, x86, x86_64).
Later in this doc you can find how to rebuild it for only a specific list of Android ABIs.

`org.pytorch:pytorch_android_torchvision` - an additional library with utility functions for converting `android.media.Image` and `android.graphics.Bitmap` objects to tensors.

#### 4. Reading an image from an Android asset

All the logic happens in [`org.pytorch.helloworld.MainActivity`](https://github.com/pytorch/android-demo-app/blob/master/HelloWorldApp/app/src/main/java/org/pytorch/helloworld/MainActivity.java#L31-L69).
As a first step we read `image.jpg` into an `android.graphics.Bitmap` using the standard Android API.
```
Bitmap bitmap = BitmapFactory.decodeStream(getAssets().open("image.jpg"));
```
@@ -93,13 +93,13 @@ Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
`org.pytorch.torchvision.TensorImageUtils` is part of the `org.pytorch:pytorch_android_torchvision` library.
The `TensorImageUtils#bitmapToFloat32Tensor` method creates tensors in the [torchvision format](https://pytorch.org/docs/stable/torchvision/models.html) using an `android.graphics.Bitmap` as the source.

> All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224.
> The images have to be loaded into a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]` and `std = [0.229, 0.224, 0.225]`.

`inputTensor`'s shape is `1x3xHxW`, where `H` and `W` are the bitmap height and width respectively.
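As a rough sketch of what that conversion does numerically — pure Python over a hypothetical 2x2 image, applying the mean/std values quoted above and rearranging to the `1x3xHxW` layout:

```python
# Hypothetical 2x2 RGB image (HxWxC layout) with pixel values in [0, 255].
image = [
    [[255, 0, 128], [64, 32, 200]],
    [[10, 250, 90], [180, 60, 30]],
]

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# Scale to [0, 1], normalize per channel, and rearrange to CxHxW.
tensor = [
    [[(image[h][w][c] / 255.0 - mean[c]) / std[c] for w in range(2)] for h in range(2)]
    for c in range(3)
]
input_tensor = [tensor]  # add the batch dimension: 1x3xHxW

print(len(input_tensor), len(input_tensor[0]),
      len(input_tensor[0][0]), len(input_tensor[0][0][0]))  # 1 3 2 2
```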

#### 7. Run Inference

```
Tensor outputTensor = module.forward(IValue.from(inputTensor)).toTensor();
float[] scores = outputTensor.getDataAsFloatArray();
```
@@ -109,7 +109,7 @@ float[] scores = outputTensor.getDataAsFloatArray();

#### 8. Processing results
Its content is retrieved using the `org.pytorch.Tensor.getDataAsFloatArray()` method, which returns a Java array of floats with scores for every ImageNet class.
After that we just find the index with the maximum score and retrieve the predicted class name from the `ImageNetClasses.IMAGENET_CLASSES` array that contains all the ImageNet classes.

```
@@ -123,8 +123,8 @@ for (int i = 0; i < scores.length; i++) {
}
String className = ImageNetClasses.IMAGENET_CLASSES[maxScoreIdx];
```

In the following sections you can find detailed explanations of the PyTorch Android API, a code walkthrough for a bigger [demo application](https://github.com/pytorch/android-demo-app/tree/master/PyTorchDemoApp),
and implementation details of the API, including how to customize and build it from source.

## PyTorch Demo Application
@@ -169,7 +169,7 @@ After getting predicted scores from the model it finds top K classes with the hi
#### Language Processing Example

Another example is natural language processing, based on an LSTM model trained on a Reddit comments dataset.
The logic happens in [`TextClassificationActivity`](https://github.com/pytorch/android-demo-app/blob/master/PyTorchDemoApp/app/src/main/java/org/pytorch/demo/nlp/TextClassificationActivity.java).

Result class names are packaged inside the TorchScript model and initialized just after module initialization.
The module has a `get_classes` method that returns `List[str]`, which can be called using the method `Module.runMethod(methodName)`:
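On the Python side, a method like that can be exposed with `@torch.jit.export` when the model is scripted. A minimal sketch — the tiny module and its class names here are hypothetical, not the demo's actual model:

```python
import torch

class TextClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 3)
        # Hypothetical class names; the demo's model ships its own list.
        self.classes = ["negative", "neutral", "positive"]

    def forward(self, x):
        return self.fc(x)

    @torch.jit.export
    def get_classes(self):
        # Returns List[str]; reachable from Java via Module.runMethod("get_classes").
        return self.classes

scripted = torch.jit.script(TextClassifier())
print(scripted.get_classes())
```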
@@ -199,7 +199,7 @@ Running inference of the model is similar to previous examples:
Tensor outputTensor = mModule.forward(IValue.from(inputTensor)).toTensor();
```

After that, the code processes the output, finding the classes with the highest scores.
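That post-processing step can be sketched in plain Python (hypothetical scores, top 3):

```python
# Hypothetical raw scores for a tiny 5-class example.
scores = [0.1, 2.3, -0.4, 1.7, 0.9]
k = 3

# Indices of the top k classes, highest score first.
top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
print(top_k)  # [1, 3, 4]
```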

## Building PyTorch Android from Source

@@ -219,7 +219,7 @@ The workflow contains several steps:
2\. Create symbolic links to the results of those builds:
`android/pytorch_android/src/main/jniLibs/${abi}` to the directory with the output libraries and
`android/pytorch_android/src/main/cpp/libtorch_include/${abi}` to the directory with the headers. These directories are used to build the `libpytorch.so` library that will be loaded on the Android device.

3\. Finally, run `gradle` in the `android/pytorch_android` directory with the `assembleRelease` task.

The script requires that the Android SDK, Android NDK and Gradle are installed.
@@ -263,6 +263,9 @@ dependencies {
    implementation(name:'pytorch_android', ext:'aar')
    implementation(name:'pytorch_android_torchvision', ext:'aar')
    implementation(name:'pytorch_android_fbjni', ext:'aar')
    ...
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.facebook.soloader:nativeloader:0.8.0'
}
```

@@ -273,7 +276,35 @@ packagingOptions {
}
```

We also have to add all the transitive dependencies of our aars. As `pytorch_android` depends on `com.android.support:appcompat-v7:28.0.0` and `com.facebook.soloader:nativeloader:0.8.0`, we need to add them. (When using Maven dependencies they are added automatically from the `pom.xml`.)

## Custom Build

To reduce the size of binaries you can do a custom build of PyTorch Android with only the set of operators required by your model.
This includes two steps: preparing the list of operators from your model, then rebuilding PyTorch Android with the specified list.

1\. Preparation of the list of operators

The list of operators of your serialized TorchScript model can be prepared in YAML format using the Python API function `torch.jit.export_opnames()`:
```
import torch, yaml

m = torch.jit.load("example.pt")
ops = torch.jit.export_opnames(m)
with open("test.yaml", "w") as f:
    yaml.dump(ops, f)
```
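The resulting `test.yaml` is simply a list of operator-name strings. An illustrative (hypothetical, model-dependent) example:

```yaml
- aten::_convolution
- aten::adaptive_avg_pool2d
- aten::addmm
- aten::relu_
```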
2\. Building PyTorch Android with the prepared operators list

To build PyTorch Android with the prepared YAML list of operators, specify it in the environment variable `SELECTED_OP_LIST`. Also, in the arguments, specify which Android ABIs it should build; by default it builds all 4 Android ABIs.

```
SELECTED_OP_LIST=test.yaml sh scripts/build_pytorch_android.sh x86
```

After a successful build you can integrate the resulting aar files into your Android Gradle project, following the steps from the previous section of this tutorial (Building PyTorch Android from Source).

## API Docs

You can find more details about the PyTorch Android API in the [Javadoc](https://pytorch.org/docs/stable/packages.html).

_mobile/ios.md

Lines changed: 43 additions & 13 deletions
@@ -1,4 +1,4 @@
---
layout: mobile
title: iOS
permalink: /mobile/ios/
@@ -10,17 +10,17 @@ published: true

# iOS

To get started with PyTorch on iOS, we recommend exploring the following [HelloWorld](https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld) example.

## Quickstart with a Hello World Example

HelloWorld is a simple image classification application that demonstrates how to use the PyTorch C++ libraries on iOS. The code is written in Swift and uses Objective-C as a bridge.

### Model Preparation

Let's start with model preparation. If you are familiar with PyTorch, you probably already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model - [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install it, run the command below.

> We highly recommend following the [PyTorch GitHub page](https://github.com/pytorch/pytorch) to set up the Python development environment on your local machine.

```shell
pip install torchvision
```
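Once torchvision is installed, the model is traced and saved much as in the Android tutorial. A minimal sketch of the tracing flow — using a tiny stand-in module instead of MobileNet v2 so it runs without downloading weights; the real `trace_model.py` lives in the HelloWorld repo:

```python
import torch

# Stand-in for the pre-trained model; the HelloWorld example traces
# torchvision's MobileNet v2 here instead.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```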
@@ -34,7 +34,7 @@ python trace_model.py

If everything works well, we should have our model, `model.pt`, generated in the `HelloWorld` folder. Now copy the model file to our application folder `HelloWorld/model`.

> To find out more details about TorchScript, please visit the [tutorials on pytorch.org](https://pytorch.org/tutorials/advanced/cpp_export.html).

### Install LibTorch via CocoaPods

@@ -94,7 +94,7 @@ private lazy var module: TorchModule = {
  }
}()
```
Note that the `TorchModule` class is an Objective-C wrapper of `torch::jit::script::Module`.

```cpp
torch::jit::script::Module module = torch::jit::load(filePath.UTF8String);
```
@@ -103,7 +103,7 @@ Since Swift can not talk to C++ directly, we have to either use an Objective-C c

#### Run Inference

Now it's time to run inference and get the results.

```swift
guard let outputs = module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)) else {
@@ -115,17 +115,16 @@ Again, the `predict` method is just an Objective-C wrapper. Under the hood, it c
```cpp
at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat);
torch::autograd::AutoGradMode guard(false);
auto outputTensor = _impl.forward({tensor}).toTensor();
float* floatBuffer = outputTensor.data_ptr<float>();
```
The C++ function `torch::from_blob` will create an input tensor from the pixel buffer. Note that the shape of the tensor is `{1,3,224,224}`, which represents `NxCxHxW` as we discussed in the section above.

```cpp
torch::autograd::AutoGradMode guard(false);
at::AutoNonVariableTypeMode non_var_type_mode(true);
```
The above two lines tell the PyTorch engine to do inference only. By default, PyTorch has built-in support for auto-differentiation, also known as [autograd](https://pytorch.org/docs/stable/notes/autograd.html). Since we don't do training on mobile, we can just disable autograd mode.
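The Python-side equivalent of this inference-only guard is `torch.no_grad()`; a small sketch:

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.rand(1, 4)

# Disable gradient tracking for inference, analogous to the
# AutoGradMode guard in the C++ snippet above.
with torch.no_grad():
    y = model(x)

print(y.requires_grad)  # False
```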

Finally, we can call this `forward` function to get the output tensor and convert it to a `float` buffer.

@@ -183,16 +182,47 @@ After the build succeeds, all static libraries and header files will be generate

Open your project in Xcode, and copy all the static libraries as well as the header files into your project. Navigate to the project settings and set the value of **Header Search Paths** to the path of the header files you just copied.

In the build settings, search for **other linker flags** and add the custom linker flag below:

```
-force_load $(PROJECT_DIR)/${path-to-libtorch.a}
```
Finally, disable bitcode for your target by selecting the Build Settings, searching for **Enable Bitcode**, and setting the value to **No**.

### API Docs

Currently, the iOS framework uses the PyTorch C++ front-end APIs directly. The C++ documentation can be found [here](https://pytorch.org/cppdocs/). To learn more about it, we recommend exploring the [C++ front-end tutorials](https://pytorch.org/tutorials/advanced/cpp_frontend.html) on the PyTorch website. In the meantime, we're working on providing the Swift/Objective-C API wrappers to PyTorch.

### Custom Build

Starting from 1.4.0, PyTorch supports custom builds. You can now build a PyTorch library that contains only the operators needed by your model. To do that, follow the steps below.

1\. Verify that your PyTorch version is 1.4.0 or above. You can do that by checking the value of `torch.__version__`.

2\. To dump the operators in your model, run the following lines of Python code:

```python
import torch, yaml

model = torch.jit.load("example.pt")
ops = torch.jit.export_opnames(model)
with open("example.yaml", "w") as f:
    yaml.dump(ops, f)
```
In the snippet above, you first load the ScriptModule. Then, use `export_opnames` to return a list of operator names of the ScriptModule and its submodules. Lastly, save the result to a YAML file.

3\. To run the iOS build script locally with the prepared YAML list of operators, pass the YAML file generated in the last step via the environment variable `SELECTED_OP_LIST`. Also, in the arguments, specify `BUILD_PYTORCH_MOBILE=1` as well as the platform/architecture type. Taking the arm64 build as an example, the command should be:

```
SELECTED_OP_LIST=example.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh
```
4\. After the build succeeds, you can integrate the resulting libraries into your project by following the [XCode Setup](#xcode-setup) section above.

5\. The last step is to add a single line of C++ code before running `forward`. This is because by default the JIT will do some optimizations on operators (fusion, for example), which might break consistency with the ops we dumped from the model.

```cpp
torch::jit::GraphOptimizerEnabledGuard guard(false);
```
## Issues and Contribution
