
Compare


Last updated 1 month ago

The Compare feature in Arcee Conductor lets you evaluate Conductor's performance and output against models you specify, so you can directly compare responses, response times, and costs for your most important prompts.

To get started with Compare, select a model to compare against Conductor by clicking the model name on the right side of the comparison window (in the example above, this is where it says "Virtuoso Large"). You can compare against a suite of Arcee SLMs, such as Virtuoso Small, Medium, and Large, Blitz, Coder, and Maestro, or against closed-source LLMs such as Sonnet 3.7, Sonnet 3.5, DeepSeek R1, GPT-4o, and others.

When you submit a prompt, it is automatically sent to both models so you can see a comparison in real time. In addition to the outputs, you can see whether Conductor or the selected model had the quicker response time and which was more cost effective for that specific prompt.
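Conceptually, a side-by-side comparison like this amounts to sending the same prompt to two models and recording each one's output, latency, and cost. The sketch below illustrates that idea only; it is not the Conductor API. The function and parameter names (`compare_models`, `invoke_a`, `invoke_b`) are hypothetical, and each model is assumed to be wrapped in a callable that returns an `(output, cost)` pair.

```python
import time

def compare_models(prompt, invoke_a, invoke_b):
    """Send one prompt to two model callables and record output,
    latency, and reported cost for each (illustrative sketch)."""
    results = {}
    for name, invoke in (("model_a", invoke_a), ("model_b", invoke_b)):
        start = time.perf_counter()
        output, cost = invoke(prompt)  # callable returns (text, cost in USD)
        results[name] = {
            "output": output,
            "latency_s": time.perf_counter() - start,
            "cost_usd": cost,
        }
    return results
```

In practice each callable would wrap a real API request; keeping the invocation behind a callable makes the timing and cost bookkeeping identical for any pair of models.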

The Usage button at the top can be used to evaluate price performance across a series of prompts.

When you click the Usage button, a graph shows the cost comparison between Conductor and your selected model for the entire session. The example above shows the price differential after three prompts, where Conductor is the pink line and Sonnet 3.7 is the purple line.
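Each line on the Usage graph is, in effect, a running total of per-prompt cost for one model over the session. A minimal sketch of that calculation, with an illustrative function name and example cost figures (not real Conductor data):

```python
from itertools import accumulate

def session_cost_series(costs_by_model):
    """Turn per-prompt costs into the running totals a session
    cost graph would plot -- one series per model."""
    return {model: list(accumulate(costs))
            for model, costs in costs_by_model.items()}
```

Comparing the final entry of each series gives the total price differential for the session so far.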
