# Trinity-Mini (26B)

**Overview**

Trinity-Mini is a 26B-parameter sparse mixture-of-experts language model with 3.5B active parameters per token, engineered for efficient inference over long contexts with robust function calling and multi-step agent workflows.

**Key Features**

* Efficient attention mechanism: reduces memory and compute requirements while preserving long-context coherence.
* 128K-token context window: supports multi-turn interactions and extended document processing.
* Strong context utilization: fully leverages long inputs for coherent multi-turn reasoning and reliable function/tool calls.
* High inference efficiency: generates tokens rapidly while minimizing compute, delivering an outstanding price-to-performance ratio.

### Deployment Quickstart

To deploy Trinity-Mini, download the model [here](https://huggingface.co/arcee-ai) and follow the [Quick Deploys](/quick-deploys/hardware-prerequisites.md) guide.

### Model Summary

| Specification                    | Details                                                                                                 |
| -------------------------------- | ------------------------------------------------------------------------------------------------------- |
| Name                             | Trinity-Mini-26B                                                                                        |
| Architecture                     | Mixture-of-Experts                                                                                      |
| Parameters                       | 26 Billion Total, 3.5 Billion Active                                                                    |
| Experts                          | 128 Experts, 8 Active                                                                                   |
| Attention Mechanism              | Grouped Query Attention (GQA)                                                                           |
| Training Tokens                  | 10 trillion                                                                                             |
| License                          | Apache 2.0                                                                                              |
| Recommended Inference Parameters | temperature: 0.15, top\_p: 0.75, top\_k: 50, min\_p: 0.06                                              |

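The recommended sampling parameters above can be supplied to any OpenAI-compatible serving stack. A minimal sketch of the request body follows; the model identifier and the note about vLLM are assumptions, so substitute the values from your own deployment:

```python
# Sketch of a chat-completion request body using the recommended
# sampling parameters. The model identifier below is an illustrative
# assumption -- use the name your server registers for Trinity-Mini.
import json

payload = {
    "model": "arcee-ai/Trinity-Mini-26B",  # assumed identifier
    "messages": [
        {"role": "user", "content": "Summarize this page in one sentence."}
    ],
    "temperature": 0.15,
    "top_p": 0.75,
    # top_k and min_p are not part of the core OpenAI schema; servers
    # such as vLLM accept them as additional body parameters.
    "top_k": 50,
    "min_p": 0.06,
}

# POST this JSON to <your-server>/v1/chat/completions with any HTTP client.
print(json.dumps(payload, indent=2))
```

Low temperature plus a moderate top\_p keeps agentic and function-calling outputs deterministic while still allowing some lexical variation.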

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.arcee.ai/language-models/trinity-mini-26b.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
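The steps above amount to URL-encoding the question into the `ask` query parameter. A minimal sketch, which only constructs the request URL (sending it requires network access):

```python
# Build a documentation query URL for the `ask` mechanism described above.
from urllib.parse import urlencode

PAGE_URL = "https://docs.arcee.ai/language-models/trinity-mini-26b.md"
question = "What context window does Trinity-Mini support?"

# urlencode percent-escapes the question so it is safe in a URL.
url = f"{PAGE_URL}?{urlencode({'ask': question})}"
print(url)

# Issue an HTTP GET on `url` to receive the answer and supporting excerpts.
```

Keep each question self-contained, since the endpoint answers it without any conversational history.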
