If everything works well, we should have our model - `model.pt` generated in the assets folder of the android application.
That will be packaged inside the android application as an `asset` and can be used on the device.

You can find more details about TorchScript in the [tutorials on pytorch.org](https://pytorch.org/docs/stable/jit.html).
#### 2. Cloning from GitHub
```
dependencies {
    ...
}
```
Where `org.pytorch:pytorch_android` is the main dependency with the PyTorch Android API, including the libtorch native library for all 4 Android ABIs (armeabi-v7a, arm64-v8a, x86, x86_64).
Later in this doc you can find how to rebuild it for only a specific list of Android ABIs.
`org.pytorch:pytorch_android_torchvision` - an additional library with utility functions for converting `android.media.Image` and `android.graphics.Bitmap` to tensors.
#### 4. Reading image from Android Asset
All the logic happens in [`org.pytorch.helloworld.MainActivity`](https://github.com/pytorch/android-demo-app/blob/master/HelloWorldApp/app/src/main/java/org/pytorch/helloworld/MainActivity.java#L31-L69).
As a first step we read `image.jpg` to an `android.graphics.Bitmap` using the standard Android API.
`org.pytorch.torchvision.TensorImageUtils` is part of `org.pytorch:pytorch_android_torchvision` library.
The `TensorImageUtils#bitmapToFloat32Tensor` method creates tensors in the [torchvision format](https://pytorch.org/docs/stable/torchvision/models.html) using `android.graphics.Bitmap` as a source.
> All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224.
> The images have to be loaded into a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]` and `std = [0.229, 0.224, 0.225]`
`inputTensor`'s shape is `1x3xHxW`, where `H` and `W` are the bitmap height and width respectively.
Its content is retrieved using the `org.pytorch.Tensor.getDataAsFloatArray()` method, which returns a Java array of floats with scores for every ImageNet class.
After that we just find the index with the maximum score and retrieve the predicted class name from the `ImageNetClasses.IMAGENET_CLASSES` array, which contains all ImageNet classes.
```
for (int i = 0; i < scores.length; i++) {
    ...
}
```

In the following sections you can find detailed explanations of the PyTorch Android API, a code walkthrough for a bigger [demo application](https://github.com/pytorch/android-demo-app/tree/master/PyTorchDemoApp), implementation details of the API, and how to customize and build it from source.
## PyTorch Demo Application
After getting the predicted scores from the model, it finds the top K classes with the highest scores.
#### Language Processing Example
Another example is natural language processing, based on an LSTM model trained on a Reddit comments dataset.
The logic happens in [`TextClassificationActivity`](https://github.com/pytorch/android-demo-app/blob/master/PyTorchDemoApp/app/src/main/java/org/pytorch/demo/nlp/TextClassificationActivity.java).
Result class names are packaged inside the TorchScript model and initialized just after module initialization.
The module has a `get_classes` method that returns `List[str]`, which can be called using the method `Module.runMethod(methodName)`.

Running inference of the model is similar to the previous examples.
After that, the code processes the output, finding the classes with the highest scores.
## Building PyTorch Android from Source
The workflow contains several steps:
2\. Create symbolic links to the results of those builds:
`android/pytorch_android/src/main/jniLibs/${abi}` to the directory with output libraries
`android/pytorch_android/src/main/cpp/libtorch_include/${abi}` to the directory with headers. These directories are used to build the `libpytorch.so` library that will be loaded on the android device.
3\. Finally, run `gradle` in the `android/pytorch_android` directory with the task `assembleRelease`
The script requires that the Android SDK, Android NDK and gradle are installed.

We also have to add all transitive dependencies of our aars. As `pytorch_android` depends on `com.android.support:appcompat-v7:28.0.0` and `com.facebook.soloader:nativeloader:0.8.0`, we need to add them. (When using maven dependencies, they are added automatically from `pom.xml`.)
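For example, when consuming the aar files from a local libs folder, the dependency block might look like the sketch below. This assumes a `flatDir` repository is configured for that folder; the aar names are illustrative and should match the files produced by your build.

```
dependencies {
    implementation(name: 'pytorch_android', ext: 'aar')
    implementation(name: 'pytorch_android_torchvision', ext: 'aar')
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.facebook.soloader:nativeloader:0.8.0'
}
```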
## Custom Build
To reduce the size of binaries you can do a custom build of PyTorch Android with only the set of operators required by your model.
This includes two steps: preparing the list of operators from your model, and rebuilding PyTorch Android with the specified list.
1\. Preparation of the list of operators
The list of operators of your serialized TorchScript model can be prepared in yaml format using the python API function `torch.jit.export_opnames()`:
```
import torch, yaml
m = torch.jit.load("example.pt")
ops = torch.jit.export_opnames(m)
f = open('test.yaml', 'w')
yaml.dump(ops, f)
```
2\. Building PyTorch Android with the prepared operators list.
To build PyTorch Android with the prepared yaml list of operators, specify it in the environment variable `SELECTED_OP_LIST`. Also in the arguments, specify which Android ABIs it should build; by default it builds all 4 Android ABIs.
```
SELECTED_OP_LIST=test.yaml sh scripts/build_pytorch_android.sh x86
```
After a successful build you can integrate the resulting aar files into your android gradle project, following the steps from the previous section of this tutorial (Building PyTorch Android from Source).
## API Docs
You can find more details about the PyTorch Android API in the [Javadoc](https://pytorch.org/docs/stable/packages.html).
---
layout: mobile
title: iOS
permalink: /mobile/ios/
published: true
---
# iOS
To get started with PyTorch on iOS, we recommend exploring the following [HelloWorld](https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld) example.
## Quickstart with a Hello World Example
HelloWorld is a simple image classification application that demonstrates how to use PyTorch C++ libraries on iOS. The code is written in Swift and uses Objective-C as a bridge.
### Model Preparation
Let's start with model preparation. If you are familiar with PyTorch, you probably already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model - [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install it, run the command below.
> We highly recommend following the [PyTorch GitHub page](https://github.com/pytorch/pytorch) to set up the Python development environment on your local machine.
```shell
pip install torchvision
```
If everything works well, we should have our model - `model.pt` generated in the `HelloWorld` folder. Now copy the model file to our application folder `HelloWorld/model`.
> To find out more details about TorchScript, please visit the [tutorials on pytorch.org](https://pytorch.org/tutorials/advanced/cpp_export.html)
The C++ function `torch::from_blob` will create an input tensor from the pixel buffer. Note that the shape of the tensor is `{1,3,224,224}`, which represents `NxCxHxW`, as we discussed in the above section.
The above two lines tell the PyTorch engine to do inference only. This is because, by default, PyTorch has built-in support for automatic differentiation, also known as [autograd](https://pytorch.org/docs/stable/notes/autograd.html). Since we don't do training on mobile, we can just disable the autograd mode.
Finally, we can call this `forward` function to get the output tensor and convert it to a `float` buffer.
After the build succeeds, all static libraries and header files will be generated.
Open your project in Xcode and copy all the static libraries as well as header files to your project. Navigate to the project settings and set the value **Header Search Paths** to the path of the header files you just copied.
In the build settings, search for **other linker flags**. Add the custom linker flag below:
```
-force_load $(PROJECT_DIR)/${path-to-libtorch.a}
```
Finally, disable bitcode for your target by selecting the Build Settings, searching for **Enable Bitcode**, and setting the value to **No**.
### API Docs
Currently, the iOS framework uses the PyTorch C++ front-end APIs directly. The C++ documentation can be found [here](https://pytorch.org/cppdocs/). To learn more, we recommend exploring the [C++ front-end tutorials](https://pytorch.org/tutorials/advanced/cpp_frontend.html) on the PyTorch webpage. In the meantime, we're working on providing Swift/Objective-C API wrappers for PyTorch.
### Custom Build
Starting from 1.4.0, PyTorch supports custom builds. You can now build the PyTorch library so that it contains only the operators needed by your model. To do that, follow the steps below.
1\. Verify your PyTorch version is 1.4.0 or above. You can do that by checking the value of `torch.__version__`.
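For instance, the check can be scripted as below (a hypothetical snippet, not from the original doc; it parses only the leading `major.minor` components of the version string):

```python
import torch

# Custom build requires PyTorch 1.4.0 or above
major, minor = (int(p) for p in torch.__version__.split(".")[:2])
print(torch.__version__)
assert (major, minor) >= (1, 4), "PyTorch 1.4.0 or above is required"
```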
2\. To dump the operators in your model, run the following lines of Python code:
```python
import torch, yaml
model = torch.jit.load("example.pt")
ops = torch.jit.export_opnames(model)
f = open('example.yaml', 'w')
yaml.dump(ops, f)
```
In the snippet above, you first need to load the ScriptModule. Then, use `export_opnames` to return a list of operator names of the ScriptModule and its submodules. Lastly, save the result in a yaml file.
213
+
214
+
3\. To run the iOS build script locally with the prepared yaml list of operators, pass the yaml file generated from the last step into the environment variable `SELECTED_OP_LIST`. Also in the arguments, specify `BUILD_PYTORCH_MOBILE=1` as well as the platform/architecture type (for example, arm64).
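The example command was elided from this excerpt; assuming the iOS build script is `scripts/build_ios.sh` in the PyTorch checkout (an assumption - verify the path in your tree), an arm64 invocation would take the form:

```shell
SELECTED_OP_LIST=example.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh
```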
4\. After the build succeeds, you can integrate the resulting libraries into your project by following the [XCode Setup](#xcode-setup) section above.
5\. The last step is to add a single line of C++ code before running `forward`. This is because by default JIT will do some optimizations on operators (fusion for example), which might break the consistency with the ops we dumped from the model.
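The exact line was elided from this excerpt; in PyTorch around 1.4, the guard that disables these JIT graph optimizations is presumably the following (an assumption - confirm the API against your PyTorch version):

```cpp
// Assumed: disable JIT graph optimizations (e.g. operator fusion) so that
// only the operators dumped from the model are required at runtime.
torch::jit::GraphOptimizerEnabledGuard guard(false);
```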