@@ -30,10 +30,9 @@ most representative AI libraries available in Conan Center Index.
### An Overview of Some AI and ML Libraries Available in Conan Center
- Below are some notable libraries you can easily integrate with your C++ projects through
- Conan Center. These libraries range from running large language models locally to
- optimizing model inference on edge devices or using specialized toolkits for tasks like
- computer vision and numerical optimization.
+ Below are some notable libraries available in Conan Center Index. These libraries range
+ from running large language models locally to optimizing model inference on edge devices
+ or using specialized toolkits for tasks like computer vision and numerical optimization.

#### LLaMA.cpp
@@ -45,12 +44,13 @@ models like [LLaMA 3](https://huggingface.co/models?search=llama),
as well as multimodal models like [LLaVA](https://github.com/haotian-liu/LLaVA).

One of the most interesting aspects of this library is that it includes a collection of
- CLI tools as examples, making it easy to run your own LLMs straight out of the box. To
- install the library with Conan, ensure that you enable building the examples and activate
- the network options (which require `libcurl`). Then, use a [Conan
- deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the
- installed files from the Conan cache to the user space. To accomplish this, simply run the
- following command:
+ CLI tools as examples, making it easy to run your own LLMs straight out of the box.
+
+ Let's try one of those tools. First, install the library with Conan and ensure that you
+ enable building the examples and activate the network options (which require `libcurl`).
+ Then, use a [Conan deployer](https://docs.conan.io/2/reference/extensions/deployers.html)
+ to move the installed files from the Conan cache to the user space. To accomplish this,
+ simply run the following command:

```shell
# Install llama-cpp using Conan and deploy to the local folder