Commit 7016d16

wip
1 parent 8ff7957 commit 7016d16

File tree

1 file changed (+9 lines, -7 lines)

_posts/2024-12-20-You-can-do-AI-with-cpp.markdown

Lines changed: 9 additions & 7 deletions
@@ -44,11 +44,13 @@ and others, enabling local inference with minimal dependencies and high performa
 works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text
 models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6.
 
-One of the most interesting aspects of this library is that it includes CLI tools that
-allow you to run your own LLMs out of the box. To install the library with Conan, enabling
-the examples and network options, and using a [Conan
-deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the files
-to the user space, you can run the following command:
+One of the most interesting aspects of this library is that it includes some CLI tools
+that will make it easy to run your own LLMs straight out of the box. To install the
+library with Conan, ensure you enable building the examples and activate the network
+options (which will require `libcurl`). Then, use a [Conan
+deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the
+installed files from the Conan cache to the user space. To do all that, just run the
+following command:
 
 ```shell
 # Install llama-cpp using Conan and deploy to the local folder
@@ -58,7 +60,7 @@ $ conan install --requires=llama-cpp/b4079 --build=missing \
   --deployer=full_deploy
 ```
 
-Running your own chatbot locally is as simple as invoking the packaged `llama-cli`
+You can run your chatbot locally simply by invoking the packaged `llama-cli`
 application with a model from a Hugging Face repository (in this case we will be using a
 Llama 3.2 model with 1 billion parameters and 6 bit quantization from the [unsloth
 repo](https://huggingface.co/unsloth)) and starting to ask questions:
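
The middle of the install command falls between the two hunks above, so the diff shows only its first and last lines. A minimal sketch of the complete workflow follows; the recipe option names (`with_examples`, `with_curl`), the deployed binary path, and the exact GGUF file name are assumptions for illustration, while `--requires=llama-cpp/b4079`, `--build=missing`, `--deployer=full_deploy`, and the unsloth repo come from the diff itself.

```shell
# Sketch only: the option names and paths below are assumptions, not part
# of the commit; check the llama-cpp Conan recipe for the real option names.
$ conan install --requires=llama-cpp/b4079 --build=missing \
    -o "llama-cpp/*:with_examples=True" \
    -o "llama-cpp/*:with_curl=True" \
    --deployer=full_deploy

# Run the deployed llama-cli with the Llama 3.2 1B Q6_K model from the
# unsloth Hugging Face repo (the exact .gguf file name is an assumption)
$ ./full_deploy/host/llama-cpp/b4079/Release/x86_64/bin/llama-cli \
    --hf-repo unsloth/Llama-3.2-1B-Instruct-GGUF \
    --hf-file Llama-3.2-1B-Instruct-Q6_K.gguf \
    -cnv -p "You are a helpful assistant"
```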
@@ -110,7 +112,7 @@ performance while minimizing power consumption.
       alt="Pose estimation with TensorFlow Lite"/>
 </figure>
 
-To explore TensorFlow Lite in action, we previously published a [blog
+If you'd like to see TensorFlow Lite in action, we previously published a [blog
 post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html)
 showcasing how to build a real-time human pose detection application using TensorFlow Lite
 and OpenCV. If you haven't read it yet, we recommend checking it out for a detailed