@@ -34,7 +34,7 @@ Below are some notable libraries available in Conan Center Index. These librarie
from running large language models locally to optimizing model inference on edge devices
or using specialized toolkits for tasks like computer vision and numerical optimization.

- #### LLaMA.cpp
+ #### [LLaMA.cpp](https://conan.io/center/recipes/llama-cpp)

**LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/)
and others, enabling local inference with minimal dependencies and high performance. It
@@ -100,7 +100,13 @@ integrate LLMs into your own applications. For example, here is the code for the
we just executed. For more information on the LLaMA.cpp project, please [check their
repository on GitHub](https://github.com/ggerganov/llama.cpp).

- #### TensorFlow Lite
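+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search llama-cpp
+ ```
+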
+ #### [TensorFlow Lite](https://conan.io/center/recipes/tensorflow-lite)

**TensorFlow Lite** is a specialized version of [TensorFlow](https://www.tensorflow.org/)
designed for deploying machine learning models on mobile, embedded systems, and other
@@ -128,7 +134,13 @@ on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tas
can be easily integrated into your code. For more information on TensorFlow Lite, please
[check their documentation](https://www.tensorflow.org/lite/guide).

- #### ONNX Runtime
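+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search tensorflow-lite
+ ```
+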
+ #### [ONNX Runtime](https://conan.io/center/recipes/onnxruntime)

**ONNX Runtime** is a high-performance inference engine designed to run models in the
[ONNX](https://onnx.ai/) format, an open standard for representing network models across
@@ -150,7 +162,13 @@ runtime configurations or hardware accelerators. Explore [the Performance sectio
documentation](https://onnxruntime.ai/docs/performance/) for more details. For more
information, visit the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).

- #### OpenVINO
+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search onnxruntime
+ ```
+
+ #### [OpenVINO](https://conan.io/center/recipes/openvino)

**OpenVINO** (Open Visual Inference and Neural Network Optimization) is an
[Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning
@@ -165,7 +183,13 @@ examples to see how you can integrate OpenVINO into your projects.

For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/).

- #### mlpack
+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search openvino
+ ```
+
+ #### [mlpack](https://conan.io/center/recipes/mlpack)

**mlpack** is a fast, flexible, and lightweight header-only C++ library for machine
learning. It is ideal for lightweight deployments and prototyping. It offers a broad range
@@ -180,7 +204,13 @@ healthcare data.

For further details, visit the [mlpack documentation](https://www.mlpack.org/).

- #### Dlib
+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search mlpack
+ ```
+
+ #### [Dlib](https://conan.io/center/recipes/dlib)

**Dlib** is a modern C++ library widely used in research and industry for advanced machine
learning algorithms and computer vision tasks. Its comprehensive documentation and
@@ -192,6 +222,12 @@ object classification, and tracking. Examples of these functionalities can be fo

For more information, visit the [Dlib official site](http://dlib.net/).

+ Check all available versions in the Conan Center Index by running:
+
+ ```shell
+ conan search dlib
+ ```
+
## Conclusion
C++ offers high-performance AI libraries and the flexibility to optimize for your