llama.cpp
llama.cpp is a C++ implementation focused on running transformer models efficiently on consumer hardware with minimal dependencies. It emphasizes CPU inference optimization and quantization techniques to enable local model execution across diverse platforms including mobile and edge devices.
The deployment steps in this document use Trinity-Nano-6B; however, they work exactly the same way for all Arcee AI models. To deploy a different model, simply replace the model name with the model you'd like to deploy.
Prerequisites
Sufficient RAM (refer to Hardware Prerequisites)
A Hugging Face account
Deployment
Set up a Python virtual environment. In this guide, we'll use uv.
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
uv venv
source .venv/bin/activate
Clone the llama.cpp repo
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
Build and Install Dependencies
cmake .
make -j8
uv pip install -r requirements.txt --prerelease=allow --index-strategy unsafe-best-match
Install Hugging Face and Login
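The exact commands for this step aren't shown above; as a minimal sketch, you can install the Hugging Face Hub CLI into the same virtual environment and authenticate with an access token from your Hugging Face account:
uv pip install huggingface_hub
huggingface-cli login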
Download the model size you want to run
The larger the model, the more memory it will require and the slower it will run.
model_name: The exact name of the model you want to deploy, like trinity-nano-6b. This tells the system which model to download from Hugging Face.
model_quant: Indicates the quantization format of the model, such as bf16, q4_0, or q8_0. Choose based on your hardware; lower-bit formats run faster and use less memory but may reduce accuracy slightly.
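As a sketch of this step, the command below downloads a single GGUF file with huggingface-cli; the repository name arcee-ai/Trinity-Nano-6B-GGUF and the file naming pattern are assumptions here, so adjust them to match the model's actual listing on Hugging Face and the model_name/model_quant you chose:
huggingface-cli download arcee-ai/Trinity-Nano-6B-GGUF trinity-nano-6b-q4_0.gguf --local-dir models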
Host the model
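A minimal sketch of serving the model with llama.cpp's built-in server follows; the binary location and model file name depend on your build output and the file you downloaded, so adjust the paths accordingly:
./bin/llama-server -m models/trinity-nano-6b-q4_0.gguf --host 0.0.0.0 --port 8080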
Run Inference using the Chat Completions endpoint.
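llama-server exposes an OpenAI-compatible API. Assuming the server is running on port 8080 as in the example above, a request to the Chat Completions endpoint looks like this (the model field in the request body is illustrative):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "trinity-nano-6b",
    "messages": [
      {"role": "user", "content": "Hello! Who are you?"}
    ]
  }'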