## Overview

Metrics let you measure how well your prompts perform. When you record metrics with the same subject used for prompt execution, Switchport automatically aggregates them per prompt version.
## Basic Usage

Record a metric:
```go
package main

import (
	"fmt"
	"log"

	"github.com/switchport-ai/switchport-go/switchport"
)

func main() {
	client, err := switchport.NewClient("")
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	result, err := client.Metrics.Record(
		"user_satisfaction",
		4.5,
		map[string]interface{}{"user_id": "user_123"},
		nil, // timestamp (nil = current time)
	)
	if err != nil {
		log.Fatalf("Failed to record metric: %v", err)
	}

	fmt.Printf("Metric recorded: %s\n", result.MetricEventID)
}
```
## Metric Types

Switchport supports three types of metrics:

### Float Metrics

Numerical values (e.g., ratings, scores, response times):
```go
// User satisfaction (1-5 scale)
client.Metrics.Record(
	"satisfaction",
	4.5,
	map[string]interface{}{"user_id": "user_123"},
	nil,
)

// Response time in milliseconds
client.Metrics.Record(
	"response_time_ms",
	125.7,
	map[string]interface{}{"request_id": "req_abc"},
	nil,
)

// Int values are also accepted
client.Metrics.Record(
	"response_time_ms",
	125,
	map[string]interface{}{"request_id": "req_abc"},
	nil,
)
```
### Boolean Metrics

True/false values (e.g., conversions, success rates):
```go
// Conversion tracking
client.Metrics.Record(
	"conversion",
	true,
	map[string]interface{}{"user_id": "user_123"},
	nil,
)

// Email opened
client.Metrics.Record(
	"email_opened",
	false,
	map[string]interface{}{"email_id": "email_456"},
	nil,
)
```
### Enum Metrics

Categorical values (e.g., sentiment, outcome):
```go
// User sentiment
client.Metrics.Record(
	"sentiment",
	"positive",
	map[string]interface{}{"user_id": "user_123"},
	nil,
)

// Outcome classification
client.Metrics.Record(
	"outcome",
	"resolved",
	map[string]interface{}{"ticket_id": "ticket_789"},
	nil,
)
```
## Creating Metric Definitions

Before recording metrics, create a metric definition in the dashboard:

1. **Navigate to Metrics.** Go to **Metrics → New Metric** in the dashboard.
2. **Configure the metric:**
   - **Key**: unique identifier (e.g., `satisfaction`)
   - **Name**: human-readable name (e.g., "User Satisfaction")
   - **Type**: `float`, `boolean`, or `enum`
3. **Create.** Click **Create** to save the metric definition.
## Linking Metrics to Prompts

To link metrics to prompt versions, use the same subject for both:
```go
// Execute prompt with subject identification
subject := map[string]interface{}{"user_id": "user_123"}

response, err := client.Prompts.Execute(
	"welcome-message", subject,
	map[string]interface{}{"name": "Alice"},
)
if err != nil {
	log.Fatalf("Failed to execute prompt: %v", err)
}

// Show the message to the user...
fmt.Println(response.Text)

// Later, record the metric with the SAME subject
result, err := client.Metrics.Record(
	"satisfaction",
	4.5,
	subject, // same subject!
	nil,
)
if err != nil {
	log.Fatalf("Failed to record metric: %v", err)
}
```
> **Note:** Always use the same subject when executing prompts and recording metrics. This ensures metrics are correctly attributed to the right prompt version.
## Custom Timestamps

You can provide a custom timestamp for metrics:
```go
import "time"

// Record a metric with a custom timestamp
timestamp := time.Now().Add(-1 * time.Hour) // 1 hour ago

result, err := client.Metrics.Record(
	"satisfaction",
	4.5,
	map[string]interface{}{"user_id": "user_123"},
	&timestamp, // custom timestamp
)
```
If you pass nil, the current time is used.
## Response Object

The `Record` method returns a `MetricRecordResponse`:
```go
result, err := client.Metrics.Record(
	"satisfaction",
	4.5,
	map[string]interface{}{"user_id": "user_123"},
	nil,
)
if err != nil {
	log.Fatalf("Failed to record metric: %v", err)
}

// Check success
fmt.Printf("Success: %v\n", result.Success)

// Unique event identifier
fmt.Printf("Event ID: %s\n", result.MetricEventID)
```
## Error Handling

Handle common errors when recording metrics:
```go
package main

import (
	"errors"
	"log"

	"github.com/switchport-ai/switchport-go/switchport"
)

func main() {
	client, err := switchport.NewClient("")
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	result, err := client.Metrics.Record(
		"satisfaction",
		4.5,
		map[string]interface{}{"user_id": "user_123"},
		nil,
	)
	if err != nil {
		var metricNotFound *switchport.MetricNotFoundError
		var authErr *switchport.AuthenticationError
		var apiErr *switchport.APIError

		switch {
		case errors.As(err, &metricNotFound):
			log.Println("Metric definition not found - create it in the dashboard")
		case errors.As(err, &authErr):
			log.Println("Authentication failed - check your API key")
		case errors.As(err, &apiErr):
			log.Printf("API error (status %d): %v", apiErr.StatusCode, apiErr)
		default:
			log.Printf("Unknown error: %v", err)
		}
		return
	}

	log.Printf("Metric recorded: %s", result.MetricEventID)
}
```
## Common Patterns

### Pattern 1: User Feedback
```go
func handleUserFeedback(client *switchport.Client, userID string, rating float64) error {
	_, err := client.Metrics.Record(
		"satisfaction",
		rating,
		map[string]interface{}{"user_id": userID},
		nil,
	)
	return err
}
```
### Pattern 2: Conversion Tracking
```go
func trackConversion(client *switchport.Client, userID string, converted bool) error {
	_, err := client.Metrics.Record(
		"conversion",
		converted,
		map[string]interface{}{"user_id": userID},
		nil,
	)
	return err
}
```
### Pattern 3: Email Campaign
```go
func sendEmailAndTrack(client *switchport.Client, user User) error {
	subject := map[string]interface{}{
		"user_id":  user.ID,
		"campaign": "summer_2025",
	}

	// Execute prompt
	response, err := client.Prompts.Execute(
		"marketing-email", subject,
		map[string]interface{}{"name": user.Name},
	)
	if err != nil {
		return err
	}

	// Send email
	if err := sendEmail(user.Email, response.Text); err != nil {
		return err
	}

	// Track if opened (later)
	if emailWasOpened(user.ID) {
		_, err := client.Metrics.Record(
			"email_opened",
			true, subject, // same subject
			nil,
		)
		return err
	}
	return nil
}
```
### Pattern 4: Multi-Metric Tracking
```go
type InteractionData struct {
	Rating    float64
	Converted bool
	Sentiment string
}

func completeInteraction(client *switchport.Client, userID string, data InteractionData) error {
	subject := map[string]interface{}{"user_id": userID}

	// Record satisfaction
	if _, err := client.Metrics.Record("satisfaction", data.Rating, subject, nil); err != nil {
		return err
	}

	// Record conversion
	if _, err := client.Metrics.Record("conversion", data.Converted, subject, nil); err != nil {
		return err
	}

	// Record sentiment
	if _, err := client.Metrics.Record("sentiment", data.Sentiment, subject, nil); err != nil {
		return err
	}
	return nil
}
```
### Pattern 5: Goroutine for Async Recording
```go
func recordMetricAsync(client *switchport.Client, metricKey string, value interface{}, subject switchport.Subject) {
	go func() {
		_, err := client.Metrics.Record(metricKey, value, subject, nil)
		if err != nil {
			log.Printf("Failed to record metric %s: %v", metricKey, err)
		}
	}()
}
```
## Best Practices

- **Use the same subject** for prompt execution and metric recording to ensure proper aggregation.
- **Record metrics asynchronously.** Consider recording metrics in a goroutine to avoid blocking your main application flow.
- **Handle failures gracefully.** Don't let metric recording failures break your application; handle and log the errors instead.
- **Choose appropriate metric types.** Use float (or int) for numerical measurements, boolean for yes/no tracking, and string for categorical (enum) data.
- **Create metric definitions first.** Define each metric in the dashboard before recording it in your code.
- **Wrap the generic API.** Consider creating wrapper functions with concrete types for your metrics to avoid passing `interface{}` everywhere.
## Viewing Metrics

Metrics are automatically aggregated per prompt version in the dashboard. You can:

- View average values for float metrics
- View success rates for boolean metrics
- View value distributions for enum metrics
- Compare metrics across different prompt versions
- Identify winning versions based on metric performance
## Next Steps

- **A/B Testing**: Learn how to run A/B tests with metrics.
- **API Reference**: See the full metrics API reference.