
Code Generation

In this example, you will learn how to use the Arcee Coder model for coding tasks: explaining technical concepts with code examples, and reviewing and improving existing code.

Prerequisites

  • Python 3.12 or higher

  • httpx library

  • openai library

  • API key for accessing the Arcee.ai models

Step 1: Setting Up the Environment

  1. Create a new Python virtual environment:

python -m venv env-openai-client
source env-openai-client/bin/activate  # On Unix/macOS
# or
.\env-openai-client\Scripts\activate  # On Windows
  2. Install the required packages (the http2 extra installs the h2 dependency required by httpx.Client(http2=True) in a later step):

pip install "httpx[http2]" openai
  3. Create a file named api_key.py containing your API key (keep this file out of version control):

api_key = "your_api_key_here"
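If you prefer not to keep the key in a source file, a common alternative (not specific to Arcee) is to read it from an environment variable. The variable name ARCEE_API_KEY below is an illustrative choice, not one mandated by the docs:

```python
import os

# Illustrative alternative to api_key.py: read the key from the environment.
# ARCEE_API_KEY is a made-up variable name; use whatever your deployment sets.
api_key = os.environ.get("ARCEE_API_KEY", "your_api_key_here")
```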

Step 2: Initialize the Coder Client

Create a new Jupyter Notebook or Python script and set up the OpenAI client specifically for the Coder model:

import httpx
import os
from openai import OpenAI
from api_key import api_key

endpoint = "https://models.arcee.ai/v1"
model = "coder"  # Arcee's specialized SLM for coding tasks

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
    http_client=httpx.Client(http2=True)
)

Step 3: Set Up the Response Handler

Create a helper function to handle streaming responses:

def print_streaming_response(response):
    num_tokens = 0  # counts streamed chunks, roughly one token each
    for message in response:
        if len(message.choices) > 0:
            num_tokens += 1
            # the final chunk's delta may carry no content; skip printing None
            if message.choices[0].delta.content is not None:
                print(message.choices[0].delta.content, end="")
    print(f"\n\nNumber of tokens: {num_tokens}")
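You can see what the handler does without making an API call by feeding it a small fake stream. The SimpleNamespace objects below stand in for the openai streaming chunk objects; they are illustrative, not part of the openai library:

```python
from types import SimpleNamespace

# Condensed version of the Step 3 handler, repeated so this snippet runs standalone.
def print_streaming_response(response):
    num_tokens = 0
    for message in response:
        if len(message.choices) > 0:
            num_tokens += 1
            if message.choices[0].delta.content is not None:
                print(message.choices[0].delta.content, end="")
    print(f"\n\nNumber of tokens: {num_tokens}")

def fake_chunk(text):
    # mimics the .choices[0].delta.content shape of a streaming chunk
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

print_streaming_response([fake_chunk("Hello, "), fake_chunk("world!")])
# → Hello, world!  followed by the chunk count
```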

Step 4: Testing Technical Explanation Capabilities

Test the model's ability to explain complex technical concepts with code examples:

response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': """Explain the difference between logit-based distillation 
         and hidden state distillation. Show an example for both with PyTorch code, 
         with BERT-Large as the teacher model, and BERT-Base as the student model."""
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=16384
)

print_streaming_response(response)

Step 5: Testing Code Review and Improvement Capabilities

You can use the model to review and improve existing code:

code_example = """
def print_streaming_response(response):
    num_tokens=0
    for message in response:
        if len(message.choices) > 0:
            num_tokens+=1
            print(message.choices[0].delta.content, end="")
    print(f"\\n\\nNumber of tokens: {num_tokens}")
"""

response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': f"Improve the following code: {code_example}. Explain why your changes are an improvement."
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=2048
)

print_streaming_response(response)

Best Practices for Using the Coder Model

  1. Specific Prompts:

    • Be specific about the programming language

    • Specify the framework or library you're using

    • Mention any version requirements

    • Include context about the problem you're trying to solve

  2. Code Review Requests:

    • Include the complete code snippet you want to review

    • Specify what aspects you want to improve (performance, readability, security, etc.)

    • Ask for explanations of suggested improvements

  3. Technical Explanations:

    • Request specific examples alongside theoretical explanations

    • Ask for comparisons between different approaches

    • Request code snippets that demonstrate the concepts
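Putting the prompting guidelines together, a request might be built like the sketch below. The prompt text and parameter values are illustrative, not from the docs:

```python
# A prompt that applies the practices above: explicit language, library,
# version constraint, and context about the problem being solved.
prompt = (
    "Using Python 3.12 and httpx 0.27, write a client function that retries "
    "failed GET requests with exponential backoff. We call a rate-limited "
    "API, so honor the Retry-After header. Explain each design choice."
)

request_args = dict(
    model="coder",
    messages=[{"role": "user", "content": prompt}],
    stream=True,
    max_tokens=2048,
)
# client.chat.completions.create(**request_args) would send this request
```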
