Basic Usage

Execute a prompt using its key:
from switchport import Switchport

client = Switchport()

response = client.prompts.execute(
    prompt_key="welcome-message"
)

print(response.text)

Using Variables

Pass dynamic variables to your prompts:
response = client.prompts.execute(
    prompt_key="welcome-message",
    variables={
        "name": "Alice",
        "product": "Pro Plan"
    }
)
Your prompt template in the dashboard:
Write a friendly welcome message for {{name}} who just purchased {{product}}.
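Conceptually, each {{variable}} placeholder is filled in with the value you pass before the prompt reaches the model. A minimal sketch of that substitution (the `render` helper below is illustrative only, not part of the SDK):

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value from `variables`;
    # unknown placeholders are left untouched.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

template = "Write a friendly welcome message for {{name}} who just purchased {{product}}."
print(render(template, {"name": "Alice", "product": "Pro Plan"}))
# → Write a friendly welcome message for Alice who just purchased Pro Plan.
```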

Using Subject for A/B Testing

Provide a subject to enable deterministic A/B testing:
response = client.prompts.execute(
    prompt_key="product-pitch",
    subject={"user_id": "user_123"},
    variables={"product": "Enterprise Plan"}
)

print(f"Version: {response.version_name}")
The same subject always gets the same version. This ensures users have a consistent experience.
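Deterministic assignment can be thought of as hashing the subject into a bucket, so no per-user state needs to be stored. A minimal sketch of the idea (not Switchport's actual implementation; `pick_version` is illustrative):

```python
import hashlib
import json

def pick_version(subject, versions):
    # Hash the subject to a stable integer, then map it to a version index.
    # Dict subjects are serialized with sorted keys so key order doesn't matter.
    key = subject if isinstance(subject, str) else json.dumps(subject, sort_keys=True)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return versions[int(digest, 16) % len(versions)]

versions = ["v1", "v2"]
# The same subject always maps to the same version:
assert pick_version({"user_id": "user_123"}, versions) == pick_version({"user_id": "user_123"}, versions)
```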

Subject Types

The subject can be either a string or a dictionary. A plain string is the simplest form:
response = client.prompts.execute(
    prompt_key="greeting",
    subject="user_123"
)

Response Object

The execute method returns a response object with the following attributes:
response = client.prompts.execute(
    prompt_key="welcome-message",
    variables={"name": "Alice"}
)

# Generated text from the LLM
print(response.text)

# Model that was used
print(response.model)  # e.g., "gpt-5"

# Version that was selected
print(response.version_name)  # e.g., "v1"
print(response.version_id)    # e.g., "ver_abc123"

# Unique request identifier
print(response.request_id)  # e.g., "req_xyz789"

Response Attributes

| Attribute | Type | Description |
| --- | --- | --- |
| text | str | The generated text from the LLM |
| model | str | The model used (e.g., gpt-5, claude-3-5-sonnet-20241022) |
| version_name | str | Human-readable version name |
| version_id | str | Unique version identifier |
| request_id | str | Unique request identifier |

Error Handling

Handle common errors when executing prompts:
from switchport import (
    Switchport,
    PromptNotFoundError,
    AuthenticationError,
    APIError
)

client = Switchport()

try:
    response = client.prompts.execute(
        prompt_key="my-prompt",
        variables={"name": "Alice"}
    )
    print(response.text)

except PromptNotFoundError:
    print("Prompt not found - check your prompt key")

except AuthenticationError:
    print("Authentication failed - check your API key")

except APIError as e:
    print(f"API error: {e}")
    # Fallback to default behavior

Common Patterns

Pattern 1: Dynamic Email Generation

def send_welcome_email(user):
    response = client.prompts.execute(
        prompt_key="welcome-email",
        subject={"user_id": user.id},
        variables={
            "name": user.name,
            "email": user.email
        }
    )

    send_email(user.email, response.text)

Pattern 2: Chatbot Responses

def get_bot_response(user_id, user_message):
    response = client.prompts.execute(
        prompt_key="support-bot",
        subject={"user_id": user_id},
        variables={
            "user_message": user_message,
            "conversation_history": get_history(user_id)
        }
    )

    return response.text

Pattern 3: Content Generation

def generate_product_description(product_id, user_segment):
    product = get_product(product_id)

    response = client.prompts.execute(
        prompt_key="product-description",
        subject={"segment": user_segment},
        variables={
            "product_name": product.name,
            "features": product.features,
            "price": product.price
        }
    )

    return response.text

Pattern 4: Fallback Handling

def get_ai_response(prompt_key, variables):
    try:
        response = client.prompts.execute(
            prompt_key=prompt_key,
            variables=variables
        )
        return response.text

    except APIError:
        # Fallback to default template if API fails
        return get_default_response(prompt_key, variables)

Best Practices

- Always provide a subject when executing prompts if you want to track metrics or run A/B tests.
- Use the same subject (e.g., user ID) across prompt executions and metric recording for the same user or session.
- Implement error handling with fallback behavior so your application keeps working even if the API fails.
- For expensive or frequently-used prompts, consider caching responses to reduce API calls and latency.
- Store request_id values for debugging and support purposes.
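The caching practice above can be as simple as an in-memory dictionary keyed by prompt and variables with a time-to-live. A minimal sketch (`cached_execute` and the TTL are illustrative, not part of the SDK; if you use subjects, include the subject in the cache key so per-user version assignment is preserved):

```python
import time

_cache = {}

def cached_execute(client, prompt_key, variables, ttl=300):
    # Cache responses keyed by prompt and variables, expiring after `ttl` seconds.
    key = (prompt_key, tuple(sorted(variables.items())))
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < ttl:
        return hit[1]  # still fresh: skip the API call
    response = client.prompts.execute(prompt_key=prompt_key, variables=variables)
    _cache[key] = (time.time(), response)
    return response
```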

Next Steps

Recording Metrics

Learn how to track metrics for your prompts

A/B Testing

Set up A/B tests with multiple versions