Prompt Configs
A prompt config is a container for all versions of a specific prompt. It has:
- Key: A unique identifier used in your code (e.g., welcome-message)
- Name: A human-readable name for the dashboard
- Description: Optional documentation about the prompt’s purpose
- Variables: Placeholders that get filled in at execution time
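The fields above can be sketched as a simple data structure. This is an illustrative model only; the class name and field names are assumptions, not the SDK's actual types:

```python
from dataclasses import dataclass, field

@dataclass
class PromptConfig:
    """Illustrative container for all versions of a prompt (hypothetical sketch)."""
    key: str                 # unique identifier used in code, e.g. "welcome-message"
    name: str                # human-readable name for the dashboard
    description: str = ""    # optional documentation about the prompt's purpose
    variables: list = field(default_factory=list)  # placeholder names filled at execution time

config = PromptConfig(
    key="welcome-message",
    name="Welcome Message",
    description="Greets new users on signup",
    variables=["user_name"],
)
```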
Prompt Versions
Each prompt config can have multiple versions. A version includes:
- Model: The LLM to use (e.g., gpt-5, claude-3-5-sonnet-20241022)
- Prompt Template: The actual prompt text with variable placeholders
- Version Name: A label for this version (e.g., v1, formal-tone)
- Status: Published or draft
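A prompt template carries variable placeholders written in double curly braces (see the Variables section below). A minimal sketch of how such a template could be rendered, assuming `{{name}}`-style placeholders; the template text and variable names here are made up for illustration:

```python
import re

# Hypothetical prompt template with {{variable}} placeholders.
template = "Hello {{user_name}}, welcome to {{product}}!"

def render(template: str, variables: dict) -> str:
    """Substitute each {{name}} placeholder with its value from `variables`."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

rendered = render(template, {"user_name": "Ada", "product": "Acme"})
# -> "Hello Ada, welcome to Acme!"
```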
Subject Identification
Subject identification is data you provide to ensure consistent version assignment. It’s used for:
- Deterministic Routing: Same subject always gets the same version
- Metrics Aggregation: Links metrics to the correct prompt version

Subject identification can be:
- A simple string: "user_123"
- A dictionary: {"user_id": "user_123", "tier": "premium"}
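For routing to be deterministic, both forms need to reduce to a stable key: two dictionaries with the same entries in a different order should be treated as the same subject. A minimal sketch of that normalization (the function name is an assumption, not an SDK API):

```python
import json

def canonical_subject(subject) -> str:
    """Reduce a subject (string or dict) to a stable string key.
    Dict keys are sorted so logically equal subjects produce identical keys."""
    if isinstance(subject, str):
        return subject
    return json.dumps(subject, sort_keys=True, separators=(",", ":"))

# Key order does not matter: both dicts yield the same canonical key.
a = canonical_subject({"user_id": "user_123", "tier": "premium"})
b = canonical_subject({"tier": "premium", "user_id": "user_123"})
```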
A/B Testing
A/B testing (or multivariate testing) lets you compare multiple prompt versions:
- Create multiple versions of a prompt
- Set up a traffic config that defines the distribution (e.g., 50% v1, 50% v2)
- Execute the prompt with subject identification
- Users are deterministically assigned to versions based on subject hash
The same subject always gets the same version. This ensures users have a consistent experience across sessions.
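One common way to implement hash-based deterministic assignment is to hash the subject key to a point in [0, 1) and walk the cumulative traffic weights. This is a sketch of the general technique, not the product's actual algorithm:

```python
import hashlib

def assign_version(subject_key: str, weights: dict) -> str:
    """Deterministically map a subject to a version per traffic weights.
    The same subject_key always yields the same version."""
    digest = hashlib.sha256(subject_key.encode()).hexdigest()
    point = int(digest, 16) / 16 ** len(digest)  # uniform in [0, 1)
    cumulative = 0.0
    for version, weight in sorted(weights.items()):
        cumulative += weight
        if point < cumulative:
            return version
    return version  # guard against floating-point rounding at the top edge

v = assign_version("user_123", {"v1": 0.5, "v2": 0.5})
```
Because the assignment depends only on the subject key and the weights, a user sees the same version on every session until the traffic config changes.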
Metrics
Metrics are measurements you track to evaluate prompt performance. They can be:
- Float: Numerical values (e.g., satisfaction score, response time)
- Boolean: True/false values (e.g., conversion, email opened)
- Enum: Categorical values (e.g., sentiment: positive/negative/neutral)
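The three metric types map naturally onto typed values. A small sketch, with hypothetical metric names and a validation helper that is illustrative rather than part of the SDK:

```python
# One example metric per supported type (names are hypothetical).
metrics = [
    {"name": "satisfaction_score", "type": "float",   "value": 4.5},
    {"name": "converted",          "type": "boolean", "value": True},
    {"name": "sentiment",          "type": "enum",    "value": "positive",
     "allowed": ["positive", "negative", "neutral"]},
]

def validate(metric: dict) -> bool:
    """Check a metric's value against its declared type."""
    value = metric["value"]
    if metric["type"] == "float":
        return isinstance(value, (int, float)) and not isinstance(value, bool)
    if metric["type"] == "boolean":
        return isinstance(value, bool)
    if metric["type"] == "enum":
        return value in metric["allowed"]
    return False

ok = all(validate(m) for m in metrics)
```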
Variables
Variables are dynamic values you pass to prompts at execution time. They’re defined using double curly braces in your prompt template.
Traffic Configs
A traffic config defines how users are distributed across prompt versions:
- Gradual rollouts: Start with 10% on new version, increase if metrics look good
- A/B tests: Equal split (50/50) between two versions
- Multivariate tests: Test 3+ versions simultaneously
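A traffic config can be represented as a mapping from version name to traffic percentage, where the percentages must cover all traffic. A minimal sketch of that invariant (the function and the example version names are assumptions):

```python
def check_traffic_config(distribution: dict) -> None:
    """Ensure the version percentages cover exactly 100% of traffic."""
    total = sum(distribution.values())
    if total != 100:
        raise ValueError(f"traffic percentages sum to {total}, expected 100")

check_traffic_config({"v1": 50, "v2": 50})      # A/B test: equal split
check_traffic_config({"v1": 90, "v2": 10})      # gradual rollout: 10% on new version
check_traffic_config({"v1": 40, "v2": 30, "v3": 30})  # multivariate: 3 versions
```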
Request IDs
Every prompt execution returns a request ID. This unique identifier:
- Links prompts to metrics in the dashboard
- Helps with debugging and support
- Enables detailed audit trails
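The link between an execution and its metrics works by carrying the request ID from the execution result into each metric report. A hypothetical sketch of that flow; the function names and return shape are illustrative, not the SDK's actual API:

```python
import uuid

def execute_prompt(key: str, variables: dict) -> dict:
    """Hypothetical execution call: returns the response plus a unique request ID."""
    return {"request_id": str(uuid.uuid4()), "text": f"<response for {key}>"}

metrics_log = []

def report_metric(request_id: str, name: str, value) -> None:
    """Attach a metric to the execution it measures via the request ID."""
    metrics_log.append({"request_id": request_id, "name": name, "value": value})

result = execute_prompt("welcome-message", {"user_name": "Ada"})
report_metric(result["request_id"], "satisfaction_score", 4.5)
```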
Next Steps
- Python Quickstart: Get started with the Python SDK
- A/B Testing Guide: Learn how to run A/B tests

