What is Switchport?

Switchport is a platform that helps you manage, test, and optimize LLM prompts in production. Execute prompts, run A/B tests, and track metrics to continuously improve your AI applications.

Key Features

Prompt Management

Store and version your prompts in one place. Update prompts without deploying code.
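Conceptually, a stored prompt is a template whose named variables are filled in at execution time, so the text can change in the dashboard without a code deploy. A minimal sketch of that idea (the `render` helper and template syntax are illustrative, not the SDK's internals):

```python
def render(template: str, variables: dict[str, str]) -> str:
    """Fill named placeholders in a stored prompt template."""
    return template.format(**variables)

# The template lives in the dashboard; only the variables come from code.
template = "Write a welcome message for {name} in a {tone} tone."
print(render(template, {"name": "Alice", "tone": "friendly"}))
# → Write a welcome message for Alice in a friendly tone.
```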

A/B Testing

Test multiple prompt versions simultaneously with deterministic routing based on subject identification.
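Deterministic routing generally means hashing a stable subject identifier into a bucket, so the same subject always sees the same version for the life of the experiment. A sketch of that technique in plain Python (the `assign_version` function is illustrative, not the SDK's implementation):

```python
import hashlib

def assign_version(subject_id: str, versions: list[str]) -> str:
    """Deterministically map a subject to one prompt version.

    Hashing the subject ID means the same user never flips between
    versions mid-experiment, with no per-user state to store.
    """
    digest = hashlib.sha256(subject_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(versions)
    return versions[bucket]

# Repeated calls for the same subject return the same version:
first = assign_version("user_123", ["control", "variant_a"])
second = assign_version("user_123", ["control", "variant_a"])
assert first == second
```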

Metrics Tracking

Track performance metrics and aggregate results per prompt version automatically.
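Per-version aggregation boils down to grouping recorded metric values by the prompt version that served each subject, then summarizing each group. A minimal illustration of that grouping (the records and version names here are made up):

```python
from collections import defaultdict
from statistics import mean

# (prompt version, metric value) pairs as they might be recorded
records = [
    ("v1", 4.5), ("v1", 3.0),
    ("v2", 5.0), ("v2", 4.0),
]

# Group values by the version that produced them
by_version = defaultdict(list)
for version, value in records:
    by_version[version].append(value)

# Summarize each group, e.g. average satisfaction per version
averages = {v: mean(vals) for v, vals in by_version.items()}
print(averages)  # {'v1': 3.75, 'v2': 4.5}
```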

Multi-Model Support

Support for OpenAI, Anthropic, Google Gemini, and more.

Get Started

Choose your language to begin:

Python SDK

Get started with the Python SDK in 5 minutes.

Go SDK

Get started with the Go SDK in 5 minutes.

TypeScript SDK

Coming soon: the TypeScript/JavaScript SDK.

How it Works

1. Create a prompt config: Define your prompt in the Switchport dashboard with variables and multiple versions.

2. Execute from your code: Use the SDK to execute prompts with dynamic variables and subject identification for A/B testing.

3. Track metrics: Record user feedback, conversion rates, or any custom metrics linked to prompt versions.

4. Analyze results: View aggregated metrics per version in the dashboard to identify the best-performing prompts.

Quick Example

from switchport import Switchport

client = Switchport()

# Execute a prompt
response = client.prompts.execute(
    prompt_key="welcome-message",
    subject={"user_id": "user_123"},
    variables={"name": "Alice"}
)

print(response.text)

# Track user feedback
client.metrics.record(
    metric_key="satisfaction",
    value=4.5,
    subject={"user_id": "user_123"}
)

Resources

GitHub

Explore our open-source SDKs

Dashboard

Access the Switchport platform

Discord

Join our community