# Text Generation and Analysis

In this example, you will learn how to use `Virtuoso-Large` for text generation, creative writing, and text analysis and comparison.

### Prerequisites

* Python 3.10 or higher
* `httpx` library
* `openai` library
* API key for accessing the Arcee.ai models

### Step 1: Environment Setup

1. Create and activate a Python virtual environment:

```bash
python -m venv env-openai-client
source env-openai-client/bin/activate  # On Unix/macOS
# or
.\env-openai-client\Scripts\activate  # On Windows
```

2. Install required packages:

```bash
pip install httpx openai
```

3. Create `api_key.py` file:

```python
api_key = "your_api_key_here"
```

### Step 2: Initialize the Virtuoso Client

Set up the OpenAI client specifically for the Virtuoso Large model:

```python
import httpx
from openai import OpenAI
from api_key import api_key

endpoint = "https://models.arcee.ai/v1"
model = "virtuoso-large"  # Specific model for creative and analytical tasks

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
    http_client=httpx.Client(http2=True)
)
```

### Step 3: Create Response Handler

Set up a function to handle streaming responses:

```python
def print_streaming_response(response):
    num_tokens = 0
    for message in response:
        # The final streamed chunk may carry no content, so guard against None
        if message.choices and message.choices[0].delta.content is not None:
            num_tokens += 1
            print(message.choices[0].delta.content, end="")
    print(f"\n\nNumber of tokens: {num_tokens}")
```
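If you also want the generated text back as a string (for saving or post-processing), a small variant can collect the chunks instead of only printing them. This is a sketch along the same lines as the handler above, with the same `None` guard on the final chunk:

```python
def collect_streaming_response(response) -> str:
    """Accumulate streamed chunks into one string, printing as they arrive."""
    parts = []
    for message in response:
        if message.choices and message.choices[0].delta.content is not None:
            parts.append(message.choices[0].delta.content)
            print(parts[-1], end="")
    print()
    return "".join(parts)
```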

### Step 4: Testing Creative Writing Capabilities

Example of generating creative content:

```python
response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': 'Write a short horror story in the style of HP Lovecraft. It should take place in the 1920s in Antarctica. Write at least 2000 words.'
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=16384
)

print_streaming_response(response)
```

### Step 5: Text Analysis and Comparison

Example of analyzing and comparing literary texts:

```python
# Read and analyze the first text
with open("alice.txt", "r", encoding="utf-8") as file:
    book_text1 = file.read()

num_words = len(book_text1.split())
print(f"Number of words in text 1: {num_words}")

# Read and analyze the second text
with open("gatsby.txt", "r", encoding="utf-8") as file:
    book_text2 = file.read()

num_words = len(book_text2.split())
print(f"Number of words in text 2: {num_words}")

# Generate comparative analysis
response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': f"""Draw a parallel between the main characters of these two books.
         
         First text: {book_text1}
         
         Second text: {book_text2}"""
        }   
    ],
    temperature=0.7,  # Lower temperature for a more focused analysis
    stream=True,
    max_tokens=2048
)

print_streaming_response(response)
```
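The example above reads files without any error handling. A slightly more defensive reader, as a sketch (the `max_chars` limit is an arbitrary illustrative threshold, not a model requirement), might look like this:

```python
from pathlib import Path

def read_text_file(path: str, max_chars: int = 500_000) -> str:
    """Read a UTF-8 text file, failing early with a clear message.

    max_chars is an arbitrary illustrative limit, not a model requirement.
    """
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"No such file: {path}")
    text = p.read_text(encoding="utf-8")
    if len(text) > max_chars:
        raise ValueError(f"{path} is {len(text)} characters; consider chunking it")
    return text
```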

### Best Practices for Virtuoso Large

1. **Creative Writing Tasks**:
   * Be specific about style, genre, and length
   * Provide context and time period if relevant
   * Specify any particular themes or elements to include
   * Use a higher temperature (0.9) for more creative outputs
2. **Text Analysis Tasks**:
   * Provide complete texts for analysis
   * Specify the type of analysis needed
   * Consider token limits when analyzing large texts
   * Use a lower temperature (0.7) for a more focused analysis
3. **File Handling**:
   * Always use proper error handling when reading files
   * Check the file size before processing
   * Consider chunking large texts if needed
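The chunking suggestion above can be sketched as a simple word-based splitter. Word count is only a rough proxy for tokens (actual token counts depend on the tokenizer), so treat the `max_words` figure as illustrative:

```python
def chunk_text(text: str, max_words: int = 2000) -> list[str]:
    """Split text into chunks of at most max_words words.

    Word count is a rough proxy for tokens; tune max_words to your model's limits.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```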


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.arcee.ai/arcee-conductor/arcee-small-language-models/model-capabilities/text-generation-and-analysis.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
