To get started with PyTorch on iOS, we recommend exploring the [HelloWorld example](https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld) on GitHub.
## HelloWorld example
HelloWorld is a simple image classification application that demonstrates how to use PyTorch C++ libraries on iOS. The code is written in Swift and uses an Objective-C class as a bridging header.
Before we jump into the details, we highly recommend following the PyTorch GitHub page to set up a Python development environment on your local machine.
### Model preparation
Let's start with model preparation. If you are familiar with PyTorch, you probably already know how to train and save your model. In case you don't, we are going to use a pre-trained image classification model (ResNet18) that is packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install TorchVision, run the command below:
```shell
pip install torchvision
```
Once we have TorchVision installed successfully, let's navigate to the HelloWorld folder and run a Python script to generate our model. The `trace_model.py` script contains the code for tracing and saving a [TorchScript model](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) that can run on mobile devices. Run the command below to generate our model:
```shell
python trace_model.py
```
If everything works well, we should see our model, `model.pt`, generated in the same folder. Now copy the model file to our application folder `HelloWorld/model`.
### Install PyTorch C++ libraries via CocoaPods
The PyTorch C++ library is available on [CocoaPods](https://cocoapods.org/). To integrate it into our project, we can simply run:
```shell
pod install
```
Now it's time to open `HelloWorld.xcworkspace` in Xcode, select an iOS simulator, and hit the build and run button (Cmd + R). If everything works well, we should see a wolf picture on the simulator screen along with the prediction result.
### Code Walkthrough
In this part, we are going to walk through the code step by step. The `ViewController.swift` contains most of the code.
- Image loading
Let's begin with image loading.
```swift
let image = UIImage(named: "image.jpg")!
imageView.image = image
let resizedImage = image.resized(to: CGSize(width: 224, height: 224))
```
We first load an image from the bundle and resize it to 224x224, which is the size of the model's input tensor. Then we call the `normalized()` category method on `UIImage` to get the normalized pixel data from the image. Let's take a look at the code below:
```swift
var normalizedBuffer: [Float32] = [Float32](repeating: 0, count: w * h * 3)
// normalize the pixel buffer
// see https://pytorch.org/hub/pytorch_vision_resnet/ for more detail
for i in 0 ..< w * h {
    normalizedBuffer[i] = (Float32(rawBytes[i * 4 + 0]) / 255.0 - 0.485) / 0.229 // R
    normalizedBuffer[w * h + i] = (Float32(rawBytes[i * 4 + 1]) / 255.0 - 0.456) / 0.224 // G
    normalizedBuffer[w * h * 2 + i] = (Float32(rawBytes[i * 4 + 2]) / 255.0 - 0.406) / 0.225 // B
}
```
The input to our model is a 3-channel RGB image of shape (3 x H x W), where H and W are expected to be at least 224. The image has to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
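The same per-channel arithmetic can be sketched in plain Python for reference. Here `raw_bytes`, `w`, and `h` are hypothetical stand-ins for the RGBA byte buffer and image dimensions used in the Swift code.

```python
# Reference sketch of the per-channel normalization above.
# raw_bytes is a hypothetical stand-in for the w*h*4 RGBA byte buffer.
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize_rgba(raw_bytes, w, h):
    # Output is a CHW float buffer: the R plane, then G, then B.
    out = [0.0] * (w * h * 3)
    for c in range(3):
        for i in range(w * h):
            out[c * w * h + i] = (raw_bytes[i * 4 + c] / 255.0 - MEAN[c]) / STD[c]
    return out
```

For example, a single white pixel (`[255, 255, 255, 255]`) maps to `(1.0 - 0.485) / 0.229` in the R plane, roughly `2.25`.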
- Init JIT interpreter
Now that we have preprocessed our input data and have a pre-trained TorchScript model, the next step is to use them to run the prediction. To do that, we'll first load our model into the application.

Now it's time to run inference and get the result. We pass the pixel buffer object as a raw pointer to the `predict` method and get the result from it.

Again, the `predict` method on the `module` is an Objective-C method. Under the hood, it calls the C++ version of `predict`, which is `forward`:
```cpp
auto outputTensor = _impl.forward({inputTensor}).toTensor();
```
### Collect results
The output tensor is a one-dimensional float array of shape 1x1000, where each value represents the confidence that a label is predicted from the image. The code below sorts the array and retrieves the top three results.
```swift
let zippedResults = zip(labels.indices, outputs)
let sortedResults = zippedResults.sorted { $0.1.floatValue > $1.1.floatValue }.prefix(3)
```
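The same selection logic can be mirrored in plain Python; the labels and scores below are made up for illustration.

```python
# Mirror of the Swift top-3 selection: zip labels with their scores,
# sort by score in descending order, and keep the first three pairs.
labels = ["wolf", "fox", "dog", "cat"]   # hypothetical label list
outputs = [0.70, 0.15, 0.10, 0.05]       # hypothetical confidence scores

top3 = sorted(zip(labels, outputs), key=lambda pair: pair[1], reverse=True)[:3]
```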
### PyTorch demo app
For more complex use cases, please check out the [PyTorch demo application](https://github.com/pytorch/ios-demo-app/tree/master/PyTorchDemo), an app that contains two showcases - a full-fledged image classification camera app that runs a quantized version of the MobileNetV2 model, and a text classification app that uses a self-trained NLP model to predict the topic of the input string.
## Build PyTorch iOS libraries from source
To track the latest progress on mobile, we can always build the PyTorch iOS libraries from source. Follow the steps below.
### Setup local Python development environment
- Follow the PyTorch GitHub page to set up the Python environment.
- Make sure you have `cmake` and Python installed correctly on your local machine.
### Build LibTorch.a for iOS simulator
- Open the terminal and navigate to the PyTorch root directory.
- After the build succeeds, all static libraries and header files are generated under `build_ios/install`.
### Xcode setup
- Open Xcode and copy all the static libraries as well as the header files into your project.
- Navigate to the project settings and set "Header Search Paths" to the path of the header files you copied in the first step.
- In the build settings, search for "Other Linker Flags" and add the custom linker flag below:
```
-force_load $(PROJECT_DIR)/path-to-libtorch.a
```
- Disable bitcode for your target by selecting Build Settings, searching for "Enable Bitcode", and setting the value to No.
## API Docs
Currently, the iOS framework uses the raw PyTorch C++ APIs directly. The C++ documentation can be found at https://pytorch.org/cppdocs/. To learn how to use them, we recommend exploring the [C++ front-end tutorials](https://pytorch.org/tutorials/advanced/cpp_frontend.html) on the PyTorch website. In the meantime, we're working on providing Swift/Objective-C API wrappers for PyTorch.
## Issues and Contribution
If you have any questions or want to contribute to PyTorch, please feel free to file issues or open a pull request to get in touch.