Text Generation and Analysis

In this example, you will learn how to use Virtuoso-Large for text generation, creative writing, and text analysis and comparison.

Prerequisites

  • Python 3.10 or higher

  • httpx library

  • openai library

  • API key for accessing the Arcee.ai models

Step 1: Environment Setup

  1. Create and activate a Python virtual environment:

python -m venv env-openai-client
source env-openai-client/bin/activate  # On Unix/macOS
# or
.\env-openai-client\Scripts\activate  # On Windows
  2. Install the required packages:

pip install httpx openai
  3. Create an api_key.py file:

api_key = "your_api_key_here"

Step 2: Initialize the Virtuoso Client

Set up the OpenAI client specifically for the Virtuoso Large model:

import httpx
from openai import OpenAI
from api_key import api_key

endpoint = "https://models.arcee.ai/v1"
model = "virtuoso-large"  # Specific model for creative and analytical tasks

client = OpenAI(
    base_url=endpoint,
    api_key=api_key,
    http_client=httpx.Client(http2=True)
)
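
Optionally, you can verify that the endpoint and API key work before running the examples. The snippet below is a minimal sanity check; it assumes the Arcee endpoint exposes the OpenAI-compatible model listing route, and if it does not, you can skip straight to the next step.

# Optional sanity check: list the models available on the endpoint
# (assumes the OpenAI-compatible /v1/models route is supported)
for available_model in client.models.list():
    print(available_model.id)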

Step 3: Create Response Handler

Set up a function to handle streaming responses:

def print_streaming_response(response):
    num_tokens = 0
    for message in response:
        # Skip chunks without text content (e.g. the role-only first chunk and the final chunk)
        if len(message.choices) > 0 and message.choices[0].delta.content is not None:
            num_tokens += 1
            print(message.choices[0].delta.content, end="")
    # Each streamed chunk usually carries one token, so this is an approximate count
    print(f"\n\nNumber of tokens: {num_tokens}")

Step 4: Testing Creative Writing Capabilities

Example of generating creative content:

response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': 'Write a short horror story in the style of HP Lovecraft. It should take place in the 1920s in Antarctica. Write at least 2000 words.'
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=16384
)

print_streaming_response(response)

Step 5: Text Analysis and Comparison

Example of analyzing and comparing literary texts:

# Read and analyze the first text
with open("alice.txt", "r") as file:
    book_text1 = file.read()

num_words = len(book_text1.split())
print(f"Number of words in text 1: {num_words}")

# Read and analyze second text
with open("gatsby.txt", "r") as file:
    book_text2 = file.read()

num_words = len(book_text2.split())
print(f"Number of words in text 2: {num_words}")

# Generate comparative analysis
response = client.chat.completions.create(
    model=model,
    messages=[
        {'role': 'user', 
         'content': f"""Draw a parallel between the main characters of these two books.
         
         First text: {book_text1}
         
         Second text: {book_text2}"""
        }   
    ],
    temperature=0.9,
    stream=True,
    max_tokens=2048
)

print_streaming_response(response)

Best Practices for Virtuoso Large

  1. Creative Writing Tasks:

    • Be specific about style, genre, and length

    • Provide context and time period if relevant

    • Specify any particular themes or elements to include

    • Use a higher temperature (0.9) for more creative outputs

  2. Text Analysis Tasks:

    • Provide complete texts for analysis

    • Specify the type of analysis needed

    • Consider token limits when analyzing large texts

    • Use a lower temperature (0.7) for a more focused analysis

  3. File Handling (a minimal sketch follows this list):

    • Always use proper error handling when reading files

    • Check the file size before processing

    • Consider chunking large texts if needed
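
As a rough illustration of the file-handling points above, here is a minimal sketch that wraps the file read in error handling, checks the file size before processing, and splits an overly long text into word-based chunks. The word thresholds are illustrative assumptions, not published limits for Virtuoso-Large; tune them to your context window and task.

import os

MAX_WORDS = 100_000   # illustrative threshold, not an official Virtuoso-Large limit
CHUNK_WORDS = 20_000  # illustrative chunk size

def load_text(path):
    # Wrap the read in error handling and check the size before processing
    try:
        if os.path.getsize(path) == 0:
            print(f"{path} is empty")
            return None
        with open(path, "r", encoding="utf-8") as file:
            return file.read()
    except OSError as err:
        print(f"Could not read {path}: {err}")
        return None

def split_into_chunks(text, chunk_words=CHUNK_WORDS):
    # Split a long text into word-based chunks to stay within token limits
    words = text.split()
    return [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]

book_text = load_text("alice.txt")
if book_text is not None and len(book_text.split()) > MAX_WORDS:
    # Analyze each chunk in a separate request, then combine the partial results
    chunks = split_into_chunks(book_text)
    print(f"Text split into {len(chunks)} chunks")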
