Prerequisites
Installation
Install the Switchport SDK using pip:

```shell
pip install switchport
```
Or install from source:
```shell
git clone https://github.com/switchport-ai/switchport-python.git
cd switchport-python
pip install -e .
```
Get Your API Key
1. Go to Settings → API Keys in the dashboard.
2. Copy your API key (it starts with `sp_`).
Set Your API Key
Set your API key as an environment variable:
```shell
export SWITCHPORT_API_KEY=sp_your_key_here
```
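If the variable is missing, the client will only fail later when it tries to authenticate. A small helper (illustrative, not part of the SDK) can validate the key up front:

```python
import os

def load_switchport_key() -> str:
    """Fetch the Switchport API key from the environment and sanity-check it."""
    key = os.environ.get("SWITCHPORT_API_KEY")
    if not key:
        raise RuntimeError("SWITCHPORT_API_KEY is not set; export it first.")
    if not key.startswith("sp_"):
        raise RuntimeError("Switchport API keys start with 'sp_'; check the value.")
    return key
```

Calling this before constructing the client turns a confusing request-time failure into an immediate, descriptive error.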
Create Your First Prompt
Before using the SDK, you need to create a prompt in the Switchport dashboard:
Create a prompt config
- Go to Prompts → New Prompt Config
- Name: “Welcome Message”
- Key: `welcome-message`
- Click Create
Add a version
- Click Add Version
- Model: Select `gpt-5` (or another model)
- Prompt: `Write a friendly welcome message for {{name}}.`
- Click Save
Publish the version
Click Publish on the version you just created.
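The `{{name}}` placeholder in the prompt is filled from the `variables` you pass at execution time. Switchport's exact templating rules aren't documented here, but a minimal local sketch of `{{var}}` substitution behaves like this:

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Replace {{var}} placeholders with values, erroring on missing variables."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
```

Rendering the prompt above with `{"name": "Alice"}` yields `Write a friendly welcome message for Alice.`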
Configure LLM Provider Keys
The SDK calls LLMs on your behalf, so you need to configure your LLM API keys:
1. Navigate to Settings → Organization Settings.
2. Add API keys for the LLM providers you want to use:
   - OpenAI API key (for GPT models)
   - Anthropic API key (for Claude models)
   - Google API key (for Gemini models)
3. Click Save to store your API keys securely.
Execute Your First Prompt
Create a file `test_switchport.py`:

```python
from switchport import Switchport

# Initialize client (reads API key from environment)
client = Switchport()

# Execute a prompt
response = client.prompts.execute(
    prompt_key="welcome-message",
    variables={"name": "Alice"},
)

print("Generated text:")
print(response.text)
print(f"\nModel: {response.model}")
print(f"Version: {response.version_name}")
```
Run it:

```shell
python test_switchport.py
```
You should see output like:

```
Generated text:
Hello Alice! Welcome to our platform. We're excited to have you here!

Model: gpt-5
Version: v1
```
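Calls that reach an LLM provider can fail transiently. The SDK's own retry behavior (if any) isn't covered here, but you can wrap `client.prompts.execute` in a generic backoff helper; this is a sketch, not part of the SDK:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(); on exception, retry with exponential backoff, re-raising at the end."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example (assumes a configured `client` as above):
# response = with_retries(lambda: client.prompts.execute(
#     prompt_key="welcome-message", variables={"name": "Alice"}))
```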
Record Your First Metric
Now let’s track a metric. First, create a metric definition in the dashboard:
Create metric definition
- Go to Metrics → New Metric
- Key: `satisfaction`
- Name: “User Satisfaction”
- Type: `float`
- Click Create
Then record a metric in your code:
```python
from switchport import Switchport

client = Switchport()

# Execute prompt with subject identification
response = client.prompts.execute(
    prompt_key="welcome-message",
    subject={"user_id": "user_123"},
    variables={"name": "Alice"},
)

# Simulate user feedback (1-5 stars)
user_rating = 4.5

# Record metric with same subject
result = client.metrics.record(
    metric_key="satisfaction",
    value=user_rating,
    subject={"user_id": "user_123"},  # Same context!
)

print(f"Metric recorded! Event ID: {result.metric_event_id}")
```
Always use the same subject when executing prompts and recording metrics. This ensures metrics are correctly aggregated per prompt version.
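To see why the shared subject matters: conceptually, each metric event is joined to the prompt version that served that subject, and per-version aggregates are computed over the joined events. A toy aggregation over hypothetical joined data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical joined events: (prompt_version, user_id, satisfaction)
events = [
    ("v1", "user_123", 4.5),
    ("v1", "user_456", 3.0),
    ("v2", "user_789", 5.0),
]

def average_by_version(events):
    """Group satisfaction scores by prompt version and average each group."""
    grouped = defaultdict(list)
    for version, _user_id, score in events:
        grouped[version].append(score)
    return {version: mean(scores) for version, scores in grouped.items()}
```

If the subjects don't match, an event can't be attributed to a version and drops out of this kind of per-version comparison.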
Next Steps
- A/B Testing: Learn how to run A/B tests with multiple prompt versions.
- API Reference: Explore the full API reference.
- Examples: See more code examples.
- Core Concepts: Understand key concepts.