
[Mobile] Landing page update for v1.6.0 #409

Merged: 2 commits, Jul 28, 2020

2 changes: 1 addition & 1 deletion _layouts/mobile.html
@@ -12,7 +12,7 @@
<div class="container">
<h1>PyTorch Mobile</h1>

<p class="lead">End-to-end workflow from Python to deployment on iOS and Android</p>
<p class="lead">End-to-end workflow from Training to Deployment for iOS and Android mobile devices</p>
</div>
</div>

20 changes: 13 additions & 7 deletions _mobile/home.md
@@ -11,17 +11,23 @@ redirect_from: "/mobile/"

# PyTorch Mobile

Running ML on edge devices is growing in importance as applications continue to demand lower latency. It is also a foundational element for privacy-preserving techniques such as federated learning. As of PyTorch 1.3, PyTorch supports an end-to-end workflow from Python to deployment on iOS and Android.
There is a growing need to execute ML models on edge devices to reduce latency, preserve privacy, and enable new interactive use cases. In the past, engineers trained models separately and then went through a multi-step, error-prone, and often complex process to transform them for execution on a mobile device. The mobile runtime often differed significantly from the operators available during training, leading to an inconsistent developer experience and, ultimately, an inconsistent user experience.

This is an early, experimental release that we will be building on in several areas over the coming months:
PyTorch Mobile removes these friction points by providing a seamless path from training to deployment while staying entirely within the PyTorch ecosystem. It provides an end-to-end workflow that simplifies moving from research to production for mobile devices. In addition, it paves the way for privacy-preserving features via federated learning techniques.

- Provide APIs that cover common preprocessing and integration tasks needed for incorporating ML in mobile applications
- Support for QNNPACK quantized kernel libraries and support for ARM CPUs
- Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)
- Further improvements to performance and coverage on mobile CPUs and GPUs
PyTorch Mobile is currently in beta and is already in wide-scale production use. It will move to a stable release once the APIs are locked down.

Learn more or get started on [Android]({{site.baseurl}}/mobile/android) or [iOS]({{site.baseurl}}/mobile/ios).
Key features of PyTorch Mobile:

* Available for [iOS]({{site.baseurl}}/mobile/ios), [Android]({{site.baseurl}}/mobile/android) and Linux
* Provides APIs that cover common preprocessing and integration tasks needed for incorporating ML in mobile applications
* Support for tracing and scripting via the TorchScript IR (see the sketch after this list)
* Support for XNNPACK floating-point kernel libraries for Arm CPUs
* Integration of QNNPACK for 8-bit quantized kernels. Includes support for per-channel quantization, dynamic quantization and more
* Build-level optimization and selective compilation depending on the operators needed by a user's application, i.e., the app's final binary size is determined by the operators it actually uses
* Support for hardware backends like GPU, DSP, and NPU will be available soon
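
As a minimal, hedged sketch of the tracing/scripting and dynamic quantization items above: `TinyNet`, the input shape, and the file name are made up for illustration, and the QNNPACK engine is selected only when the local build reports it as supported. This is one plausible path, not the canonical tutorial code.

```python
import torch
import torch.nn as nn

# TinyNet is a made-up toy model used only to illustrate the workflow.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 32)
        self.fc2 = nn.Linear(32, 4)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet().eval()

# Prefer the QNNPACK backend for quantized kernels when the build provides it
# (QNNPACK is the engine used for 8-bit kernels on Arm CPUs).
if "qnnpack" in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = "qnnpack"

# Dynamic quantization: eligible layers (nn.Linear here) get 8-bit weights,
# with activations quantized on the fly at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# TorchScript: scripting preserves Python control flow, while tracing records
# the operators executed for one example input.
scripted = torch.jit.script(quantized)
traced = torch.jit.trace(model, torch.randn(1, 16))

# Serialize the scripted module; the Android (Java) and iOS APIs load a
# serialized TorchScript file like this one at runtime.
scripted.save("tinynet_quantized.pt")
```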

A typical workflow from training to mobile deployment with the optional model optimization steps is outlined in the following figure.
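
As a rough sketch of the optional optimization step before export, assuming `optimize_for_mobile` from `torch.utils.mobile_optimizer` is available in the installed release (file names are illustrative):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load a previously scripted model (the path is illustrative).
scripted = torch.jit.load("tinynet_quantized.pt")

# Optional mobile-specific optimization pass (e.g., operator fusion and
# dropout removal for inference) applied before bundling the model in an app.
mobile_ready = optimize_for_mobile(scripted)
torch.jit.save(mobile_ready, "tinynet_mobile.pt")
```
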
<div class="text-center">
<img src="{{ site.baseurl }}/assets/images/pytorch-mobile.png" width="100%">
Contributor

Recommendation to add a simple caption above this image. Not just helpful for sighted users, but also useful for visually impaired users or in situations where the image doesn't render for some reason.

Contributor Author

Good call. Added a string above the figure. Let me know if that works for you?

Contributor

That description sounds good 👍

</div>
Binary file modified assets/images/pytorch-mobile.png