## Overview

Metrics let you measure how well your prompts perform. When you record metrics with the same subject used for prompt execution, Switchport automatically aggregates them per prompt version.
## Basic Usage

Record a metric:

```python
from switchport import Switchport

client = Switchport()

result = client.metrics.record(
    metric_key="user_satisfaction",
    value=4.5,
    subject={"user_id": "user_123"},
)

print(f"Metric recorded: {result.metric_event_id}")
```
## Metric Types

Switchport supports three types of metrics:

### Float Metrics

Numerical values (e.g., ratings, scores, response times):

```python
# User satisfaction (1-5 scale)
client.metrics.record(
    metric_key="satisfaction",
    value=4.5,
    subject={"user_id": "user_123"},
)

# Response time in milliseconds
client.metrics.record(
    metric_key="response_time_ms",
    value=125.7,
    subject={"request_id": "req_abc"},
)
```
### Boolean Metrics

True/false values (e.g., conversions, success rates):

```python
# Conversion tracking
client.metrics.record(
    metric_key="conversion",
    value=True,
    subject={"user_id": "user_123"},
)

# Email opened
client.metrics.record(
    metric_key="email_opened",
    value=False,
    subject={"email_id": "email_456"},
)
```
### Enum Metrics

Categorical values (e.g., sentiment, outcome):

```python
# User sentiment
client.metrics.record(
    metric_key="sentiment",
    value="positive",
    subject={"user_id": "user_123"},
)

# Outcome classification
client.metrics.record(
    metric_key="outcome",
    value="resolved",
    subject={"ticket_id": "ticket_789"},
)
```
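If you want type mistakes to fail fast, before any network call, a small client-side check can mirror the three metric types. The `validate_metric_value` helper below is hypothetical, not part of the SDK; it is only a sketch of how you might guard your own code:

```python
# Hypothetical client-side check (not part of the Switchport SDK):
# verify a value matches one of the three supported metric types.
def validate_metric_value(metric_type: str, value) -> bool:
    if metric_type == "float":
        # bool is a subclass of int in Python, so exclude it explicitly
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if metric_type == "boolean":
        return isinstance(value, bool)
    if metric_type == "enum":
        return isinstance(value, str)
    raise ValueError(f"Unknown metric type: {metric_type!r}")
```

For example, `validate_metric_value("float", True)` returns `False`, catching the common mistake of sending a boolean to a float metric.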
## Creating Metric Definitions

Before recording metrics, create a metric definition in the dashboard:

1. **Navigate to Metrics.** Go to **Metrics → New Metric** in the dashboard.
2. **Configure the metric:**
   - **Key**: Unique identifier (e.g., `satisfaction`)
   - **Name**: Human-readable name (e.g., "User Satisfaction")
   - **Type**: `float`, `boolean`, or `enum`
3. **Create.** Click **Create** to save the metric definition.
## Linking Metrics to Prompts

To link metrics to prompt versions, use the same subject for both:

```python
# Execute prompt with subject identification
response = client.prompts.execute(
    prompt_key="welcome-message",
    subject={"user_id": "user_123"},
    variables={"name": "Alice"},
)

# Show message to user...

# Later, record the metric with the SAME subject
client.metrics.record(
    metric_key="satisfaction",
    value=4.5,
    subject={"user_id": "user_123"},  # Same context!
)
```

> Always use the same subject when executing prompts and recording metrics. This ensures metrics are attributed to the correct prompt version.
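One way to guarantee the subjects match is to capture the subject once and reuse it for both calls. The `SubjectContext` class below is an illustrative helper, not part of the SDK:

```python
# Illustrative helper (not part of the SDK): bind one subject dict so
# prompt execution and metric recording always share the same identity.
class SubjectContext:
    def __init__(self, client, subject):
        self.client = client
        self.subject = dict(subject)  # copy, so later mutation can't cause drift

    def execute(self, prompt_key, **variables):
        # Forwards the bound subject to prompt execution
        return self.client.prompts.execute(
            prompt_key=prompt_key, subject=self.subject, variables=variables
        )

    def record(self, metric_key, value):
        # Forwards the same bound subject to metric recording
        return self.client.metrics.record(
            metric_key=metric_key, value=value, subject=self.subject
        )
```

With this in place, `ctx = SubjectContext(client, {"user_id": "user_123"})` lets you call `ctx.execute("welcome-message", name="Alice")` and later `ctx.record("satisfaction", 4.5)` without repeating the subject.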
## Response Object

The `record` method returns a result object:

```python
result = client.metrics.record(
    metric_key="satisfaction",
    value=4.5,
    subject={"user_id": "user_123"},
)

# Unique event identifier
print(result.metric_event_id)  # e.g., "evt_abc123"
```
## Error Handling

Handle common errors when recording metrics:

```python
from switchport import (
    Switchport,
    MetricNotFoundError,
    ValidationError,
    APIError,
)

client = Switchport()

try:
    result = client.metrics.record(
        metric_key="satisfaction",
        value=4.5,
        subject={"user_id": "user_123"},
    )
except MetricNotFoundError:
    print("Metric definition not found - create it in the dashboard")
except ValidationError as e:
    print(f"Invalid metric value: {e}")
except APIError as e:
    print(f"API error: {e}")
```
## Common Patterns

### Pattern 1: User Feedback

```python
def handle_user_feedback(user_id, rating):
    """Record a user satisfaction rating."""
    client.metrics.record(
        metric_key="satisfaction",
        value=rating,
        subject={"user_id": user_id},
    )
```

### Pattern 2: Conversion Tracking

```python
def track_conversion(user_id, converted):
    """Track whether the user converted."""
    client.metrics.record(
        metric_key="conversion",
        value=converted,
        subject={"user_id": user_id},
    )
```
### Pattern 3: Email Campaign

```python
def send_email_and_track(user):
    # Execute prompt
    response = client.prompts.execute(
        prompt_key="marketing-email",
        subject={"user_id": user.id, "campaign": "summer_2025"},
        variables={"name": user.name},
    )

    # Send email
    send_email(user.email, response.text)

    # Track if opened (later)
    if email_was_opened(user.id):
        client.metrics.record(
            metric_key="email_opened",
            value=True,
            subject={"user_id": user.id, "campaign": "summer_2025"},
        )
```
### Pattern 4: Multi-Metric Tracking

```python
def complete_interaction(user_id, interaction_data):
    """Record multiple metrics for a single interaction."""
    subject = {"user_id": user_id}

    # Record satisfaction
    client.metrics.record(
        metric_key="satisfaction",
        value=interaction_data.rating,
        subject=subject,
    )

    # Record conversion
    client.metrics.record(
        metric_key="conversion",
        value=interaction_data.converted,
        subject=subject,
    )

    # Record sentiment
    client.metrics.record(
        metric_key="sentiment",
        value=interaction_data.sentiment,
        subject=subject,
    )
```
## Best Practices

**Use a consistent subject.** Always use the same subject for prompt execution and metric recording to ensure proper aggregation.

**Record metrics asynchronously.** Consider recording metrics in a background task to avoid blocking your main application flow.
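A minimal sketch of the background-task approach using a thread pool. The `record_metric_async` helper is illustrative, not part of the SDK, and any task queue works equally well; it takes a `record_fn` callable with the same keyword signature as `client.metrics.record`:

```python
# Illustrative sketch: record metrics off the request path. Failures are
# logged rather than raised, so metric recording can never break the app.
import logging
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=2)

def record_metric_async(record_fn, metric_key, value, subject):
    """Submit a metric-recording call to a background thread.

    record_fn: any callable accepting metric_key/value/subject keywords
    (e.g., client.metrics.record).
    """
    def _task():
        try:
            record_fn(metric_key=metric_key, value=value, subject=subject)
        except Exception:
            logging.exception("Metric recording failed: %s", metric_key)

    return _executor.submit(_task)
```

Call it as `record_metric_async(client.metrics.record, "satisfaction", 4.5, {"user_id": "user_123"})`; the returned `Future` can be awaited or ignored.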
**Handle failures gracefully.** Don't let metric recording failures break your application. Use try/except blocks and log errors.

**Choose appropriate metric types.** Use the right metric type for your use case:

- Float for numerical measurements
- Boolean for yes/no tracking
- Enum for categorical data

**Define metrics first.** Create metric definitions in the dashboard before recording metrics in your code.
## Viewing Metrics

Metrics are automatically aggregated per prompt version in the dashboard. You can:

- View average values for float metrics
- View success rates for boolean metrics
- View value distributions for enum metrics
- Compare metrics across different prompt versions
- Identify winning versions based on metric performance
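Conceptually, the per-version aggregation behaves like the local sketch below (illustrative only; the dashboard computes this server-side). Here `events` stands in for one prompt version's raw metric events as `(metric_type, value)` pairs:

```python
# Local sketch of the dashboard's per-version aggregation: mean for
# float metrics, success rate for boolean metrics, counts for enums.
from collections import Counter

def aggregate(events):
    """Summarize a list of (metric_type, value) pairs."""
    floats = [v for t, v in events if t == "float"]
    bools = [v for t, v in events if t == "boolean"]
    enums = [v for t, v in events if t == "enum"]
    return {
        "float_avg": sum(floats) / len(floats) if floats else None,
        "bool_success_rate": sum(bools) / len(bools) if bools else None,
        "enum_distribution": dict(Counter(enums)),
    }
```

For instance, two satisfaction ratings of 4.0 and 5.0 aggregate to a 4.5 average, and one conversion out of two events yields a 0.5 success rate.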
## Next Steps

- **A/B Testing**: Learn how to run A/B tests with metrics.
- **API Reference**: See the full metrics API reference.