SGLang
SGLang is a fast serving framework for language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and the frontend language.
The deployment in this document is for Trinity-Nano-6B; however, the same steps work for all Arcee AI models. To deploy a different model, simply change the model name (the --model-path value below) to the model you'd like to deploy.
Docker Container for SGLang
Prerequisites
Sufficient VRAM (refer to Hardware Prerequisites)
A Hugging Face account
Docker and NVIDIA Container Toolkit installed on your instance
If you need assistance, see Install Docker Engine and Installing the NVIDIA Container Toolkit. A quick way to verify the GPU setup is shown below.
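Before continuing, you can confirm that containers can access the GPU. This one-liner is a minimal sketch; the CUDA image tag is an assumption, and any recent tag works:

# Should print a table listing your GPU(s); failures here usually point to
# the NVIDIA Container Toolkit installation.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi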
Deployment
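Run the container, replacing your_hf_token_here with your Hugging Face access token: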
docker run --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=your_hf_token_here" \
-p 8000:8000 \
--ipc=host \
lmsysorg/sglang:latest \
python -m sglang.launch_server \
--model-path arcee-ai/trinity-nano-thinking \
--host 0.0.0.0 \
--port 8000 \
--max-total-tokens 8192 \
--served-model-name afm \
--trust-remote-code

Run Inference using the Chat Completions endpoint.
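For example, you can query the OpenAI-compatible Chat Completions endpoint with curl once the server is ready (a minimal sketch; the prompt and max_tokens value are illustrative, and the model name afm matches the --served-model-name flag above):

# Send a chat request to the server started above
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "afm",
        "messages": [{"role": "user", "content": "Write a haiku about fast inference."}],
        "max_tokens": 128
      }'

The server also exposes a /health endpoint you can poll while the model loads.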