# Code Generation

In this example, you will learn how to use the Arcee `coder` model for coding tasks such as technical explanation and code review.

### Prerequisites

* Python 3.12 or higher
* `httpx` library
* `openai` library
* API key for accessing the Arcee.ai models

### Step 1: Setting Up the Environment

1. Create a new Python virtual environment:

```bash
python -m venv env-openai-client
source env-openai-client/bin/activate  # On Unix/macOS
# or
.\env-openai-client\Scripts\activate  # On Windows
```

2. Install the required packages:

```bash
pip install httpx openai
```

3. Create a file named `api_key.py` containing your API key:

```python
api_key = "your_api_key_here"
```
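Alternatively, you can read the key from an environment variable instead of keeping it in a file. This is a common pattern rather than a requirement of this tutorial; the variable name `ARCEE_API_KEY` below is illustrative:

```python
import os

# Hypothetical environment variable name -- adjust to your own convention.
# Falls back to a placeholder if the variable is not set.
api_key = os.environ.get("ARCEE_API_KEY", "your_api_key_here")
```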

### Step 2: Initialize the Coder Client

Create a new Jupyter Notebook or Python script and set up the OpenAI client specifically for the Coder model:

```python
import httpx
from openai import OpenAI
from api_key import api_key

endpoint = "https://models.arcee.ai/v1"
model = "coder"  # Arcee's specialized SLM for coding tasks

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
    http_client=httpx.Client(http2=True)
)
```

### Step 3: Set Up the Response Handler

Create a helper function to handle streaming responses:

```python
def print_streaming_response(response):
    num_tokens = 0
    for message in response:
        if len(message.choices) > 0:
            content = message.choices[0].delta.content
            if content is not None:  # the final streamed chunk may carry no content
                num_tokens += 1
                print(content, end="")
    print(f"\n\nNumber of tokens: {num_tokens}")
```
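To see the helper in action without calling the API, you can feed it a simulated stream. The stand-in chunk objects below only mimic the shape of the OpenAI streaming response (`chunk.choices[0].delta.content`); the helper is repeated here, with a small guard for empty final chunks, so the sketch runs standalone:

```python
from types import SimpleNamespace

def print_streaming_response(response):
    # Copy of the Step 3 helper so this sketch is self-contained
    num_tokens = 0
    for message in response:
        if len(message.choices) > 0:
            content = message.choices[0].delta.content
            if content is not None:
                num_tokens += 1
                print(content, end="")
    print(f"\n\nNumber of tokens: {num_tokens}")

def make_chunk(text):
    # Stand-in object mimicking one streamed chunk: chunk.choices[0].delta.content
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

fake_stream = [
    make_chunk("def add(a, b):"),
    make_chunk("\n    return a + b"),
    make_chunk(None),  # final chunk with no content, as real streams may send
]
print_streaming_response(fake_stream)
# Prints the two code fragments, then "Number of tokens: 2"
```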

### Step 4: Testing Technical Explanation Capabilities

Test the model's ability to explain complex technical concepts with code examples:

```python
response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': """Explain the difference between logit-based distillation 
         and hidden state distillation. Show an example for both with PyTorch code, 
         with BERT-Large as the teacher model, and BERT-Base as the student model."""
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=16384
)

print_streaming_response(response)
```

### Step 5: Testing Code Review and Improvement Capabilities

You can use the model to review and improve existing code:

```python
code_example = """
def print_streaming_response(response):
    num_tokens=0
    for message in response:
        if len(message.choices) > 0:
            num_tokens+=1
            print(message.choices[0].delta.content, end="")
    print(f"\\n\\nNumber of tokens: {num_tokens}")
"""

response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': f"Improve the following code: {code_example}. Explain why your changes are an improvement."
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=2048
)

print_streaming_response(response)
```

### Best Practices for Using the Coder Model

1. **Specific Prompts**:
   * Be specific about the programming language
   * Specify the framework or library you're using
   * Mention any version requirements
   * Include context about the problem you're trying to solve
2. **Code Review Requests**:
   * Include the complete code snippet you want to review
   * Specify what aspects you want to improve (performance, readability, security, etc.)
   * Ask for explanations of suggested improvements
3. **Technical Explanations**:
   * Request specific examples alongside theoretical explanations
   * Ask for comparisons between different approaches
   * Request code snippets that demonstrate the concepts
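Putting these practices together, a well-specified prompt might be assembled like this. The language, framework, versions, and task below are illustrative placeholders, not recommendations:

```python
# Illustrative placeholders -- substitute your own stack and problem
language = "Python 3.12"
framework = "FastAPI 0.110"
task = "add request-level rate limiting to an existing API"
context = "the service runs behind a single Uvicorn worker"

prompt = (
    f"Using {language} and {framework}, {task}. "
    f"Context: {context}. "
    "Explain each change and point out any trade-offs."
)
print(prompt)
```

A prompt built this way names the language and version, the framework, the concrete task, and the surrounding context, and asks for explanations, covering all three best-practice categories above.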


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.arcee.ai/arcee-conductor/arcee-small-language-models/model-capabilities/code-generation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
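One way to build such a request from Python (shown here without sending it) is to let the standard library handle the query-string encoding; the example question is illustrative:

```python
from urllib.parse import urlencode

base = (
    "https://docs.arcee.ai/arcee-conductor/arcee-small-language-models"
    "/model-capabilities/code-generation.md"
)
question = "What temperature is recommended for code generation?"

# urlencode percent-encodes the question so it is safe in a URL
url = f"{base}?{urlencode({'ask': question})}"
print(url)
# The URL can then be fetched with any HTTP client, e.g. httpx.get(url)
```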
