@@ -44,11 +44,13 @@ and others, enabling local inference with minimal dependencies and high performa
works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text
models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6.
- One of the most interesting aspects of this library is that it includes CLI tools that
- allow you to run your own LLMs out of the box. To install the library with Conan, enabling
- the examples and network options, and using a [Conan
- deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the files
- to the user space, you can run the following command:
+ One of the most interesting aspects of this library is that it includes some CLI tools
+ that will make it easy to run your own LLMs straight out of the box. To install the
+ library with Conan, ensure you enable building the examples and activate the network
+ options (which will require `libcurl`). Then, use a [Conan
+ deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the
+ installed files from the Conan cache to the user space. To do all that, just run the
+ following command:
```shell
# Install llama-cpp using Conan and deploy to the local folder
@@ -58,7 +60,7 @@ $ conan install --requires=llama-cpp/b4079 --build=missing \
--deployer=full_deploy
```
- Running your own chatbot locally is as simple as invoking the packaged `llama-cli`
+ You can run your chatbot locally simply by invoking the packaged `llama-cli`
application with a model from a Hugging Face repository (in this case we will be using a
Llama 3.2 model with 1 billion parameters and 6 bit quantization from the [ unsloth
repo](https://huggingface.co/unsloth)) and starting to ask questions:
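
The full invocation sits outside this hunk, but as a rough sketch of what such a command might look like — the deploy path, repository name, and model file name below are illustrative assumptions, not taken from the diff:

```shell
# Hypothetical sketch: the full_deploy layout depends on your platform and
# build configuration, and the model/repo names are assumptions.
./full_deploy/host/llama-cpp/b4079/Release/x86_64/bin/llama-cli \
  --hf-repo unsloth/Llama-3.2-1B-Instruct-GGUF \
  --hf-file Llama-3.2-1B-Instruct-Q6_K.gguf \
  -cnv -p "You are a helpful assistant"
```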
@@ -110,7 +112,7 @@ performance while minimizing power consumption.
alt="Pose estimation with TensorFlow Lite"/>
</figure>
- To explore TensorFlow Lite in action, we previously published a [blog
+ If you'd like to see TensorFlow Lite in action, we previously published a [blog
post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html)
showcasing how to build a real-time human pose detection application using TensorFlow Lite
and OpenCV. If you haven't read it yet, we recommend checking it out for a detailed