@@ -131,77 +131,71 @@ can be easily integrated into your code. For more information on Tensorflow Lite
#### ONNX Runtime

**ONNX Runtime** is a high-performance inference engine designed to run models in the
- [ONNX](https://onnx.ai/) format, an open standard that facilitates representing and
- transferring neural network models across various AI frameworks such as PyTorch,
- TensorFlow, or scikit-learn.
+ [ONNX](https://onnx.ai/) format, an open standard for representing neural network models
+ across various AI frameworks such as PyTorch, TensorFlow, and scikit-learn.

- Thanks to this interoperability, you can run models trained in multiple frameworks using a
- single unified runtime. The general idea is:
+ Thanks to this interoperability, ONNX Runtime allows you to use models trained in
+ different frameworks with a single unified runtime. Here’s the general workflow:

- 1. **Get a model**: Train it using your preferred framework and export or convert it to
- the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/) showing
- how to do this for popular frameworks and libraries.
+ 1. **Get a model**: Train a model using your preferred framework and export or convert it
+ to the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/)
+ available for popular frameworks and libraries.

2. **Load and run the model with ONNX Runtime**: Check out these [C++ inference
examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx)
- to quickly get started with some code samples.
-
- From there, ONNX Runtime offers options to tune performance using various runtime
- configurations or hardware accelerators. There are many possibilities—check [the
- Performance section in the documentation](https://onnxruntime.ai/docs/performance/) for a
- more in-depth look.
+ to get started quickly; a minimal sketch follows below.

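To give a flavor of step 2, here is a minimal sketch using the ONNX Runtime C++ API. The model path (`model.onnx`) and the tensor names (`"input"`, `"output"`) are placeholders that depend on how your model was exported; a viewer like Netron shows the real names:

```cpp
#include <onnxruntime_cxx_api.h>

#include <array>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;
    // "model.onnx" is a placeholder; point this at your exported model.
    Ort::Session session(env, "model.onnx", options);

    // Dummy input tensor; shape and element type depend on your model.
    std::array<float, 4> values{0.f, 1.f, 2.f, 3.f};
    std::array<int64_t, 2> shape{1, 4};
    auto memory = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        memory, values.data(), values.size(), shape.data(), shape.size());

    // Assumed tensor names; inspect your model to find the actual ones.
    const char* input_names[] = {"input"};
    const char* output_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1, output_names, 1);

    std::cout << "first output value: "
              << outputs[0].GetTensorMutableData<float>()[0] << "\n";
}
```
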
- ONNX Runtime’s flexibility allows you to experiment with models from diverse sources,
- integrate them into your C++ applications, and scale as needed. For more details, check
- out the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
+ Additionally, ONNX Runtime offers multiple options for tuning performance using various
+ runtime configurations or hardware accelerators. Explore [the Performance section in the
+ documentation](https://onnxruntime.ai/docs/performance/) for a deeper look, and the
+ [ONNX Runtime documentation](https://onnxruntime.ai/docs/) for the full picture.

#### OpenVINO

**OpenVINO** (Open Visual Inference and Neural Network Optimization) is an
[Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning
- inference across a range of devices. It supports models from popular frameworks like
- PyTorch, TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI
- applications efficiently.
+ inference on a wide range of devices. It supports models from frameworks like PyTorch,
+ TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI applications
+ efficiently.

- You can check some of their [C++
- examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html)
- demonstrating tasks like model loading, inference, and performance benchmarking, to help
- you get started.
+ The [OpenVINO C++
+ examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) demonstrate
+ tasks such as model loading, inference, and performance benchmarking. Explore these
+ examples to see how you can integrate OpenVINO into your projects.

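As a rough sketch of what those samples do, here is a minimal inference loop using the OpenVINO 2.x C++ API. It assumes a single-input, single-output float model, and `model.xml` is a placeholder path:

```cpp
#include <openvino/openvino.hpp>

#include <algorithm>
#include <iostream>
#include <memory>

int main() {
    ov::Core core;
    // "model.xml" is a placeholder for an OpenVINO IR (or ONNX) model file.
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    ov::CompiledModel compiled = core.compile_model(model, "CPU");

    ov::InferRequest request = compiled.create_infer_request();

    // Fill the (assumed single) input tensor with dummy values.
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.5f);

    request.infer();

    ov::Tensor output = request.get_output_tensor();
    std::cout << "output elements: " << output.get_size() << "\n";
}
```
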
For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/).

#### mlpack

- **mlpack** is a fast and flexible header-only C++ library for machine learning, designed
- for both lightweight deployment and interactive prototyping via tools like C++ notebooks.
- It offers a broad range of algorithms for classification, regression, clustering, and
- more, along with preprocessing utilities and transformations.
+ **mlpack** is a fast, flexible, and lightweight header-only C++ library for machine
+ learning, well suited to both deployment and rapid prototyping. It offers a broad range
+ of machine learning algorithms for classification, regression, clustering, and more, along
+ with preprocessing utilities and data transformations.

- To explore [mlpack](https://www.mlpack.org/), visit the [examples
- repository](https://github.com/mlpack/examples/tree/master/cpp), which showcases C++
- applications like training neural networks for digit recognition, using decision trees to
- predict loan defaults, and applying clustering to find patterns in healthcare datasets.
+ Explore [mlpack’s examples
+ repository](https://github.com/mlpack/examples/tree/master/cpp), where you’ll find C++
+ applications such as training neural networks for digit recognition, decision tree models
+ for predicting loan defaults, and clustering algorithms for identifying patterns in
+ healthcare data.

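For a small taste of the API, here is a k-means clustering sketch, assuming mlpack 4's single-header include; the four-point dataset is made up purely for illustration:

```cpp
#include <mlpack.hpp>  // mlpack 4.x single-header include

int main() {
    // Tiny made-up dataset; mlpack stores one data point per column.
    arma::mat data = {{1.0, 1.2, 8.0, 8.3},
                      {1.1, 0.9, 8.1, 7.9}};

    // Group the four points into two clusters with k-means.
    arma::Row<size_t> assignments;
    mlpack::KMeans<> kmeans;
    kmeans.Cluster(data, 2, assignments);

    assignments.print("cluster assignment per point:");
}
```
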
- For more details, visit the [mlpack documentation](https://www.mlpack.org/).
+ For further details, visit the [mlpack documentation](https://www.mlpack.org/).

- ### Dlib
+ #### Dlib

- **Dlib** is a modern C++ library offering advanced machine learning algorithms and
- computer vision functionalities, widely adopted in research and industry. Its
- well-designed API and comprehensive documentation make it easy to integrate ML
- capabilities into existing projects.
+ **Dlib** is a modern C++ library widely used in research and industry for advanced machine
+ learning algorithms and computer vision tasks. Its comprehensive documentation and
+ well-designed API make it straightforward to integrate into existing projects.

- It provides algorithms for facial detection, landmark recognition, object classification,
- and tracking. Examples showcasing these algorithms can be found in [their GitHub
- repository](https://github.com/davisking/dlib/tree/master/examples). For more details,
- visit the [Dlib official site](http://dlib.net/).
+ Dlib provides a variety of algorithms, including facial detection, landmark recognition,
+ object classification, and tracking. Examples of these functionalities can be found in
+ [their GitHub repository](https://github.com/davisking/dlib/tree/master/examples).

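For instance, counting the faces in an image takes only a few lines with the HOG-based detector bundled in Dlib, a sketch along the lines of the repository's own face-detection example:

```cpp
#include <dlib/image_io.h>
#include <dlib/image_processing/frontal_face_detector.h>

#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: face_count <image>\n";
        return 1;
    }

    // HOG-based detector shipped with Dlib; no external model file needed.
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

    dlib::array2d<unsigned char> img;
    dlib::load_image(img, argv[1]);

    std::vector<dlib::rectangle> faces = detector(img);
    std::cout << "faces found: " << faces.size() << "\n";
}
```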
+
+ For more information, visit the [Dlib official site](http://dlib.net/).

## Conclusion

- There is a wide variety of libraries available in C++ for working with AI. An additional
- advantage is the ability to customize optimizations for different platforms, enabling
- faster and more energy-efficient AI workflows. With Conan, integrating these libraries
- into your projects is both straightforward and flexible.
+ C++ offers high-performance AI libraries and the flexibility to optimize for your
+ hardware. With Conan, integrating these tools is straightforward, enabling efficient,
+ scalable AI workflows.

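For instance, pulling some of these libraries into a project can start from a `conanfile.txt` like the sketch below. The version numbers are illustrative only; check ConanCenter for the recipes and versions currently available:

```ini
[requires]
# Versions below are illustrative; check ConanCenter for current releases.
onnxruntime/1.18.1
dlib/19.24.2

[generators]
CMakeDeps
CMakeToolchain
```

Running `conan install . --build=missing` then fetches the packages and generates the CMake integration files.
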
- With C++ and these libraries, getting started with AI is easier than you think. Give them
- a try and see what you can build!
+ Now, give these tools a go and see your AI ideas come to life in C++!
+ Now, give these tools a go and see your AI ideas come to life in C++!
0 commit comments