# Model Selection

Discover our collection of Small Language Models (SLMs) fine-tuned by Arcee AI, each optimized for specific tasks and designed to power efficient, production-ready applications.

<details>

<summary>Blitz - General Purpose</summary>

### **Arcee Blitz**

* **Description:** Arcee-Blitz is a Mistral-based 24B model distilled from DeepSeek, designed to be both **fast and efficient**. We view it as a practical “workhorse” model that can tackle a range of tasks without the overhead of larger architectures.
  * **#Parameters:** 24B
  * **Base Model:** Mistral-Small-24B-Instruct-2501
  * Open-source and available on Hugging Face under the Apache-2.0 license: [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz)
* **Top Use Cases:**
  * General-purpose task handling
  * Business communication
  * Automated document processing for mid-scale applications

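Since Arcee-Blitz is openly available on Hugging Face, it can be run locally with the `transformers` library. The sketch below uses the model ID from the card above; the dtype and device settings are illustrative assumptions, not official guidance, and a 24B model requires substantial GPU memory.

```python
# Sketch: running Arcee-Blitz locally with Hugging Face transformers.
# The model ID comes from the card above; dtype and device settings
# are illustrative assumptions.

MODEL_ID = "arcee-ai/Arcee-Blitz"

def build_chat(user_message: str) -> list:
    """Wrap a user message in the chat-message format used by instruct models."""
    return [{"role": "user", "content": user_message}]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the sketch can be read without the heavy dependencies.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

For example, `generate("Summarize this memo in two sentences.")` would return the model's completion as a string.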
</details>

<figure><img src="/files/UQmMv66S49T3AFJ83XIs" alt="" width="250"><figcaption><p>Blitz</p></figcaption></figure>

<details>

<summary>Virtuoso (Large, Medium, Small) - General Purpose</summary>

### **Virtuoso Large**

* **Description:** Our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.
  * **#Parameters:** 72B
  * **Base Model:** Qwen-2.5-72B
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
* **Top Use Cases:**
  * Advanced content creation, such as technical writing and creative storytelling
  * Data summarization and report generation for cross-functional domains
  * Detailed knowledge synthesis and deep-dive insights from diverse datasets
  * Multilingual support for international operations and communications

### **Virtuoso Medium**

* **Description:** A versatile and powerful model, capable of handling complex and varied tasks with precision and adaptability across multiple domains. Ideal for dynamic use cases requiring significant computational power.
  * **#Parameters:** 32B
  * **Base Model:** Qwen-2.5-32B
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
* **Top Use Cases:**
  * Content generation
  * Knowledge retrieval
  * Advanced language understanding
  * Comprehensive data interpretation

### **Virtuoso Small**

* **Description:** A streamlined version of Virtuoso, maintaining robust capabilities for handling complex tasks across domains while offering enhanced cost-efficiency and quicker response times.
  * **#Parameters:** 14B
  * **Base Model:** Qwen-2.5-14B
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
  * Open-source and available on Hugging Face under the Apache-2.0 license: [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)
* **Top Use Cases:**
  * General-purpose task handling
  * Business communication
  * Automated document processing for mid-scale applications

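The Virtuoso models served through Arcee Conductor can be called over HTTP. The sketch below assumes Conductor exposes an OpenAI-compatible chat-completions endpoint; the URL path, model identifier, and `CONDUCTOR_API_KEY` environment-variable name are assumptions — check your Conductor account for the exact values.

```python
# Sketch: calling a Virtuoso model via Arcee Conductor, assuming an
# OpenAI-compatible chat-completions endpoint. Endpoint path, model ID,
# and env-var name are assumptions, not confirmed values.
import json
import os
import urllib.request

CONDUCTOR_URL = "https://conductor.arcee.ai/v1/chat/completions"  # assumed path

def build_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        CONDUCTOR_URL,
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['CONDUCTOR_API_KEY']}",  # assumed env var
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A call such as `chat("virtuoso-small", "Draft a customer follow-up email.")` would return the completion text (the model ID shown is hypothetical).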

</details>

<figure><img src="/files/D43NQz6903eIrpJQElkS" alt="" width="250"><figcaption><p>Virtuoso</p></figcaption></figure>

<details>

<summary>Coder (Small, Large) - Coding</summary>

### Coder Large

* **Description:** A high-performance model tailored for intricate programming tasks, Coder-Large thrives in software development environments. With its focus on efficiency, reliability, and adaptability, it supports developers in crafting, debugging, and refining code for complex systems.
  * **#Parameters:** 32B
  * **Base Model:** Qwen-2.5-32B-Instruct
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
* **Top Use Cases:**
  * Writing modular, reusable code across various programming languages
  * Debugging and optimizing performance in large-scale applications
  * Generating efficient algorithms for computationally intensive tasks
  * Supporting DevOps processes, such as script automation and CI/CD pipelines

### Coder Small

* **Description:** A compact, high-performance coding model designed for efficient programming tasks, including generating code, debugging, and optimizing scripts for smaller projects.
  * **#Parameters:** 14B
  * **Base Model:** Qwen-2.5-14B-Instruct
* **Top Use Cases:**
  * Lightweight development tasks
  * Automated code reviews
  * Generating templates or prototypes quickly
  * Code completion

</details>

<figure><img src="/files/5A5VbznhaGdj91e2ynLz" alt="" width="250"><figcaption><p>Coder</p></figcaption></figure>

<details>

<summary>Caller (Large) - Tool Use and Function Calling</summary>

### Caller

* **Description:** Engineered for seamless integrations, Caller-Large is a robust model optimized for managing complex tool-based interactions and API function calls. Its strength lies in precise execution, intelligent orchestration, and effective communication between systems, making it indispensable for sophisticated automation pipelines.
  * **#Parameters:** 32B
  * **Base Model:** Qwen-2.5-32B
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
* **Top Use Cases:**
  * Managing integrations between CRMs, ERPs, and other enterprise systems
  * Running multi-step workflows with intelligent condition handling
  * Orchestrating external tool interactions like calendar scheduling, email parsing, or data extraction
  * Real-time monitoring and diagnostics in IoT or SaaS environments

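The kind of tool-based interaction Caller is built for can be sketched as a tool schema plus a dispatcher that executes whatever call the model selects. The schema below follows the widely used OpenAI-style "function" format; whether Conductor uses exactly this shape is an assumption, and the `schedule_meeting` tool is purely hypothetical.

```python
# Sketch: a minimal tool-calling round trip of the kind Caller handles.
# The schema follows the common OpenAI-style "function" format (an
# assumption here); the schedule_meeting tool is hypothetical.
import json

CALENDAR_TOOL = {
    "type": "function",
    "function": {
        "name": "schedule_meeting",
        "description": "Book a meeting on the user's calendar.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "start": {"type": "string", "description": "ISO-8601 start time"},
                "minutes": {"type": "integer"},
            },
            "required": ["title", "start"],
        },
    },
}

def dispatch_tool_call(name: str, arguments: str) -> dict:
    """Execute the tool the model selected.

    `arguments` is the JSON string carried in the model's tool-call message.
    """
    args = json.loads(arguments)
    if name == "schedule_meeting":
        # A real pipeline would hit a calendar API here.
        return {"status": "booked", "title": args["title"], "start": args["start"]}
    raise ValueError(f"unknown tool: {name}")
```

In practice the tool list is sent with the chat request, the model replies with the tool name and a JSON argument string, and the dispatcher's result is fed back to the model for the final answer.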
</details>

<figure><img src="/files/7zm6py57w5wEMTenpIZA" alt="" width="250"><figcaption><p>Caller</p></figcaption></figure>

<details>

<summary>Maestro - Reasoning</summary>

### Maestro

* **Description:** An advanced reasoning model optimized for high-performance enterprise applications. Building on the innovative training techniques first deployed in maestro-7b-preview, Maestro-32B offers significantly enhanced reasoning capabilities at scale, rivaling or surpassing leading models such as OpenAI’s o1 and DeepSeek’s R1, but at substantially reduced computational cost.
  * **#Parameters:** 32B
  * **Base Model:** Qwen-2.5-32B
  * API access is available via Arcee Conductor: [https://conductor.arcee.ai](https://conductor.arcee.ai)
  * Hybrid training method:
    1. Warm-up (SFT phase): a brief supervised fine-tuning pass that primes the model with high-quality reasoning exemplars.
    2. RL optimization phase: applies reinforcement learning techniques designed to boost logical coherence, depth of reasoning, and inference accuracy by encouraging problem-solving from fundamental principles.
* **Top Use Cases:**
  * Enterprise decision support systems
  * Complex analytical and logical inference tasks
  * Automated research and analysis workflows
  * Generative reasoning for technical and professional contexts

</details>

<figure><img src="/files/6ndLrL6VnECNrHL833ck" alt="" width="250"><figcaption><p>Maestro</p></figcaption></figure>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.arcee.ai/arcee-conductor/arcee-small-language-models/model-selection.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
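The GET request above can be issued from any HTTP client; the only detail to get right is URL-encoding the question. A minimal sketch using only the standard library:

```python
# Minimal sketch of the `ask` query described above: URL-encode the
# question and issue a GET against this page's .md URL.
import urllib.parse
import urllib.request

PAGE_URL = "https://docs.arcee.ai/arcee-conductor/arcee-small-language-models/model-selection.md"

def build_ask_url(question: str) -> str:
    """Append the question as the `ask` query parameter, URL-encoded."""
    return f"{PAGE_URL}?{urllib.parse.urlencode({'ask': question})}"

def ask(question: str) -> str:
    """Fetch the answer text for a natural-language question."""
    with urllib.request.urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")
```

For example, `ask("Which models are open source?")` would return the documentation service's answer along with relevant excerpts.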
