We’re transitioning to OpenAPI spec-driven documentation. The new API Reference features interactive “Try It” functionality and auto-generated examples, and you can access the full OpenAPI spec there. This page will remain available for detailed descriptions during the transition.
The Freeplay HTTP API provides programmatic access to all platform capabilities. While the Freeplay SDKs offer language-native bindings for common operations, the API exposes the full range of functionality.
SDK vs. API: The Freeplay SDKs are designed to cover the most common integration patterns—prompt management, recording completions, and running tests. The HTTP API is a superset that includes additional capabilities like bulk operations, advanced search, and administrative endpoints.
Getting Started
Base URL
Your API root is your Freeplay instance URL plus /api/v2/.
| Deployment Type | Base URL |
|---|---|
| Cloud | https://app.freeplay.ai/api/v2 |
| Private | https://{your-subdomain}.freeplay.ai/api/v2 |
Authentication
Authenticate requests using your API key in the Authorization header:
Authorization: Bearer {freeplay_api_key}
API keys are managed at https://app.freeplay.ai/settings/api-access.
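For example, a minimal authenticated request using Python's requests library might look like the following sketch (the environment variable names are illustrative):
import os
import requests

# Read credentials from environment variables (names are illustrative)
api_key = os.environ["FREEPLAY_API_KEY"]
project_id = os.environ["FREEPLAY_PROJECT_ID"]

resp = requests.get(
    f"https://app.freeplay.ai/api/v2/projects/{project_id}/prompt-templates",
    headers={"Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()
print(resp.json())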
Error Handling
Freeplay uses standard HTTP status codes:
| Code | Description |
|---|---|
| 200 | Success |
| 400 | Bad Request - Malformed request or invalid data |
| 401 | Unauthorized - Invalid or missing API key |
| 404 | Not Found - Resource doesn’t exist or no access |
| 500 | Server Error - Transient issue, retry with backoff |
For 500 errors, retry up to three times with at least 5 seconds between attempts. Do not retry 400-level errors—these indicate client issues that won’t resolve on retry.
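A simple retry helper following that guidance might look like this sketch (the function and parameter names are illustrative; only 5xx responses are retried):
import time
import requests

def request_with_retry(method, url, *, max_attempts=3, delay_seconds=5, **kwargs):
    # Retry only on 5xx responses; 4xx errors indicate client issues and are not retried.
    for attempt in range(1, max_attempts + 1):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code < 500:
            return resp
        if attempt < max_attempts:
            time.sleep(delay_seconds)
    return resp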
Example Error Response:
{
"message": "Session ID 456 is not a valid uuid4 format."
}
Observability
Freeplay organizes observability data in a hierarchy:
Session (user conversation or workflow)
└── Trace (optional grouping of related completions)
    └── Completion (a single LLM call)
- Sessions are created implicitly when you record your first completion with a given session ID; you generate the ID (UUID v4) client-side
- Traces are optional in general, but required for agent workflows; they group related completions together
- Completions are the atomic unit: a prompt sent to an LLM and its response
For conceptual background, see Sessions, Traces, and Completions. For SDK usage, see Organizing Principles.
Sessions
Sessions group related LLM completions together—typically representing a user conversation or workflow.
Base URL: /api/v2/projects/<project-id>/sessions
Sessions are created implicitly when you record a completion. You only need to generate a session ID (UUID v4) client-side. For SDK usage, see Sessions.
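A minimal sketch of that pattern in Python (the environment variable name is illustrative):
import os
import uuid

# Generate a UUID v4 client-side; the session is created when the first
# completion is recorded against this ID.
session_id = str(uuid.uuid4())
project_id = os.environ["FREEPLAY_PROJECT_ID"]
completions_url = (
    f"https://app.freeplay.ai/api/v2/projects/{project_id}"
    f"/sessions/{session_id}/completions"
)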
Retrieve Sessions
GET /
Returns sessions with their completions, ordered by most recent first.
Query Parameters:
| Parameter | Type | Description | Required |
|---|---|---|---|
| page | int | Page number | No (Default: 1) |
| page_size | int | Results per page (max: 100) | No (Default: 10) |
| from_date | str | Start date (inclusive). Format: YYYY-MM-DD | No |
| to_date | str | End date (exclusive). Format: YYYY-MM-DD | No |
| test_list | str | Filter by dataset name | No |
| test_run_id | str | Filter by test run ID | No |
| prompt_name | str | Filter by prompt template name | No |
| review_queue_id | str | Filter by review queue ID | No |
| custom_metadata.* | dict | Filter by session metadata | No |
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions?page_size=20&from_date=2025-01-01"
Response:
[
{
"session_id": "85d7d393-4e85-4664-8598-5dd91dc75b5b",
"start_time": "2024-07-05T14:33:02.721000",
"custom_metadata": {},
"messages": [
{
"completion_id": "0202ae57-1098-4d3e-94f7-1fbb034fbed5",
"prompt_template_name": "my-prompt",
"model_name": "claude-2.1",
"provider_name": "anthropic",
"environment": "latest",
"prompt": [...],
"response": "You bundle prompts by...",
"input_variables": {"question": "How do I bundle prompts?"}
}
]
}
]
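To page through all sessions, increment page until no results come back; a sketch in Python (this assumes the endpoint returns an empty list once results are exhausted):
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {"Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}"}

page = 1
all_sessions = []
while True:
    resp = requests.get(
        f"{api_root}/sessions",
        headers=headers,
        params={"page": page, "page_size": 100, "from_date": "2025-01-01"},
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    all_sessions.extend(batch)
    page += 1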
Delete Session
DELETE /<session-id>
Permanently deletes a session and all associated completions.
This operation cannot be undone.
curl -X DELETE \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID"
Traces
Traces group related completions within a session. They’re essential for agent workflows where multiple LLM calls work together to accomplish a task.
Base URL: /api/v2/projects/<project-id>/sessions/<session-id>/traces
For SDK usage, see Traces.
Record a Trace
POST /id/<trace-id>
Records a trace within a session. Like sessions, traces are created implicitly—generate a UUID v4 client-side and use it as the trace ID.
Request Payload:
| Parameter | Type | Description | Required |
|---|---|---|---|
| input | any | Input to the trace (e.g., user query) | Yes |
| output | any | Output from the trace (e.g., final response) | Yes |
| agent_name | str | Name of the agent (required for agent workflows) | No |
| name | str | Display name for the trace | No |
| custom_metadata | dict[str, str \| int \| float \| bool] | Custom metadata | No |
| eval_results | dict[str, float \| bool] | Code evaluation results | No |
| parent_id | UUID | Parent trace ID for nested traces | No |
| test_run_info | {test_run_id, test_case_id} | Test run association | No |
| kind | "tool" | Set to "tool" for tool call traces | No |
| start_time | datetime | Trace start time | No |
| end_time | datetime | Trace end time | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/traces/id/$TRACE_ID" \
-d '{
"input": "What is the weather in San Francisco?",
"output": "The weather in San Francisco is 65°F and sunny.",
"agent_name": "weather-agent",
"custom_metadata": {"version": "1.0.0"}
}'
When building agents, record completions with trace_info to associate them with a trace, then call this endpoint to finalize the trace with its output.
Completions
Completions are the atomic unit of observability—a single LLM call with its prompt and response.
Base URL: /api/v2/projects/<project-id>/sessions/<session-id>/completions
For SDK usage, see Recording Completions.
Record a Completion
POST /
Records an LLM completion to a session. This is the primary endpoint for logging LLM interactions.
Request Payload:
| Parameter | Type | Description | Required |
|---|---|---|---|
| messages | list[{role, content}] | Messages sent to and received from the LLM | Yes |
| inputs | dict[str, any] | Input variables used in the prompt | Yes |
| prompt_info | {prompt_template_version_id, environment} | Prompt template info | Yes |
| trace_info | {trace_id} | Associate completion with a trace | No |
| tool_schema | list[{name, description, parameters}] | Tool definitions | No |
| session_info | {custom_metadata: dict} | Session metadata | No |
| call_info | {start_time, end_time, model, provider, usage, ...} | LLM call details | No |
| test_run_info | {test_run_id, test_case_id} | Test run association | No |
| completion_id | UUID | Custom completion ID | No |
| eval_results | dict[str, bool \| float] | Code evaluation results | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/completions" \
-d '{
"messages": [
{"role": "user", "content": "Generate an album name for Taylor Swift"},
{"role": "assistant", "content": "Rainy Melodies"}
],
"inputs": {"pop_star": "Taylor Swift"},
"prompt_info": {
"prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
"environment": "prod"
}
}'
Response:
{
"completion_id": "707bc301-85e9-4f02-aa97-faba8cd7774a"
}
With Trace Association:
To associate a completion with a trace (for agent workflows), include trace_info:
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/completions" \
-d '{
"messages": [...],
"inputs": {...},
"prompt_info": {...},
"trace_info": {
"trace_id": "abc123-trace-uuid"
}
}'
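With Call Details:
You can also attach timing and model metadata via call_info. The Python sketch below assumes ISO-8601 timestamps and token-count keys named prompt_tokens and completion_tokens inside usage; confirm the exact field names against the OpenAPI spec.
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {
    "Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}",
    "Content-Type": "application/json",
}

requests.post(
    f"{api_root}/sessions/{os.environ['SESSION_ID']}/completions",
    headers=headers,
    json={
        "messages": [
            {"role": "user", "content": "Generate an album name for Taylor Swift"},
            {"role": "assistant", "content": "Rainy Melodies"},
        ],
        "inputs": {"pop_star": "Taylor Swift"},
        "prompt_info": {
            "prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
            "environment": "prod",
        },
        "call_info": {
            "start_time": "2025-01-15T12:00:00Z",
            "end_time": "2025-01-15T12:00:02Z",
            "model": "gpt-4o",
            "provider": "openai",
            # Token usage keys are an assumption; check the OpenAPI spec.
            "usage": {"prompt_tokens": 42, "completion_tokens": 7},
        },
    },
)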
Prompt Templates
Create, retrieve, and manage prompt templates programmatically. For conceptual background on prompt management patterns, see Prompt Management.
Base URL: /api/v2/projects/<project-id>/prompt-templates
Create Prompt Template
POST /
Creates a new prompt template (without any versions). Typically used when you want to create a template first, then add versions separately.
NOTE: The same objective can be accomplished using the create_template_if_not_exists parameter on the Create Version by Name endpoint below.
Request Payload:
| Parameter | Type | Description | Required |
|---|---|---|---|
| name | str | Template name (unique within project) | Yes |
| id | uuid | Custom template ID | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates" \
-d '{"name": "my-assistant"}'
Response:
{
"id": "5cccc8e6-b163-4094-8bd2-90030f151ec8"
}
List Prompt Templates
GET /
Returns all prompt templates in a project with pagination.
Query Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| page | int | Page number | 1 |
| page_size | int | Results per page (max: 100) | 30 |
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates?page=1&page_size=50"
Create Version by Name
POST /name/<template-name>/versions
Creates a new prompt version by template name. This is the recommended endpoint for CI/CD workflows because it supports creating the template automatically if it doesn’t exist.
Query Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| create_template_if_not_exists | boolean | Create template if not found | false |
Request Payload:
| Parameter | Type | Description | Required |
|---|---|---|---|
| template_messages | list[{role, content}] | Message array with mustache variables | Yes |
| model | str | Model name | Yes |
| provider | str | Provider key (openai, anthropic, etc.) | Yes |
| llm_parameters | dict | Model parameters (temperature, etc.) | No |
| tool_schema | list | Tool definitions | No |
| output_schema | dict | Structured output schema | No |
| version_name | str | Display name | No |
| version_description | str | Description | No |
| environments | list[str] | Environments to deploy to | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/name/my-assistant/versions?create_template_if_not_exists=true" \
-d '{
"template_messages": [
{"role": "system", "content": "You are a helpful assistant. The user'\''s name is {{user_name}}."},
{"role": "user", "content": "{{user_input}}"}
],
"provider": "openai",
"model": "gpt-4o",
"llm_parameters": {"temperature": 0.2, "max_tokens": 1024},
"version_name": "v1.2.0",
"version_description": "Production release with improved system prompt"
}'
Response:
{
"prompt_template_id": "5cccc8e6-b163-4094-8bd2-90030f151ec8",
"prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
"prompt_template_name": "my-assistant",
"version_name": "v1.2.0",
"version_description": "Production release with improved system prompt",
"format_version": 2,
"project_id": "abc123-project-uuid",
"content": [...],
"metadata": {
"flavor": "openai_chat",
"model": "gpt-4o",
"provider": "openai",
"params": {"temperature": 0.2, "max_tokens": 1024}
}
}
Code-managed prompts: Use the create_template_if_not_exists=true parameter in your CI/CD pipeline to automatically sync prompts from your codebase to Freeplay. See Code as source of truth for the full workflow.
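A minimal CI sketch of that workflow, assuming prompt definitions live as local JSON files containing the version payload (template_messages, provider, model, etc.); the prompts/ directory layout and file naming are illustrative:
import json
import os
import pathlib
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {
    "Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}",
    "Content-Type": "application/json",
}

# Each prompts/<name>.json file holds the version payload for one template.
for path in pathlib.Path("prompts").glob("*.json"):
    payload = json.loads(path.read_text())
    resp = requests.post(
        f"{api_root}/prompt-templates/name/{path.stem}/versions",
        headers=headers,
        params={"create_template_if_not_exists": "true"},
        json=payload,
    )
    resp.raise_for_status()
    print(f"Synced {path.stem}: {resp.json()['prompt_template_version_id']}")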
Retrieve by Name
POST /name/<template-name>
Fetches a prompt template in one of three forms:
| Form | Description | How to Request |
|---|---|---|
| Raw | Template with {{variable}} placeholders | No body, no format param |
| Bound | Variables inserted, provider-agnostic format | Pass variables in body |
| Formatted | Variables inserted, provider-specific format | Pass variables + format=true |
Query Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| environment | str | Environment tag | latest |
| format | boolean | Return formatted prompt | false |
Formatted Prompt (most common):
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/name/album_bot?environment=prod&format=true" \
-d '{"pop_star": "Taylor Swift"}'
Response:
{
"format_version": 2,
"prompt_template_id": "5cccc8e6-b163-4094-8bd2-90030f151ec8",
"prompt_template_name": "album_bot",
"prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
"formatted_content": [
{
"role": "user",
"content": "Generate a two word album name in the style of Taylor Swift"
}
],
"formatted_tool_schema": [...],
"metadata": {
"flavor": "openai_chat",
"model": "gpt-3.5-turbo-0125",
"provider": "openai",
"params": {"max_tokens": 100, "temperature": 0.2}
}
}
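The raw and bound forms use the same endpoint. As a Python sketch, per the table above: send no body for the raw template, or pass variables without format=true for the bound form:
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {"Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}"}

# Raw: no body, no format parameter; returns {{variable}} placeholders.
raw = requests.post(
    f"{api_root}/prompt-templates/name/album_bot",
    headers=headers,
    params={"environment": "prod"},
)

# Bound: variables inserted, provider-agnostic format (no format=true).
bound = requests.post(
    f"{api_root}/prompt-templates/name/album_bot",
    headers=headers,
    params={"environment": "prod"},
    json={"pop_star": "Taylor Swift"},
)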
Retrieve by Version ID
POST /id/<template-id>/versions/<version-id>
Fetch a specific prompt version. Useful for pinning to a known version.
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions/$VERSION_ID"
Retrieve All Templates
GET /all/<environment-name>
Returns all prompt templates in an environment.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/all/prod"
Create Version by ID
POST /id/<template-id>/versions
Adds a new version to an existing prompt template, identified by ID. The new version is deployed to the latest environment by default.
Request Payload:
| Parameter | Type | Description | Required |
|---|---|---|---|
| template_messages | list[{role, content}] | Message array with mustache variables | Yes |
| model | str | Model name | Yes |
| provider | str | Provider key (openai, anthropic, etc.) | Yes |
| llm_parameters | dict | Model parameters (temperature, etc.) | No |
| tool_schema | list | Tool definitions | No |
| output_schema | dict | Structured output schema | No |
| version_name | str | Display name | No |
| version_description | str | Description | No |
| environments | list[str] | Environments to deploy to | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions" \
-d '{
"template_messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "{{user_input}}"}
],
"provider": "openai",
"model": "gpt-4o",
"llm_parameters": {"temperature": 0.2, "max_tokens": 256}
}'
Update Environments
POST /id/<template-id>/versions/<version-id>/environments
Assign a version to additional environments.
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions/$VERSION_ID/environments" \
-d '{"environments": ["staging", "prod"]}'
For SDK-based prompt management, see Prompts. For the conceptual guide on managing prompts programmatically, see Code as source of truth.
Test Runs
Execute batch tests using saved datasets.
Base URL: /api/v2/projects/<project-id>/test-runs
Create Test Run
POST /
Creates a new test run from an existing dataset.
| Parameter | Type | Description | Required |
|---|---|---|---|
| dataset_name | str | Name of the dataset | Yes |
| include_outputs | boolean | Include expected outputs | No (Default: true) |
| test_run_name | str | Display name | No |
| test_run_description | str | Description | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs" \
-d '{"dataset_name": "Example Tests"}'
Response:
{
"test_run_id": "bd3eb06c-f93b-46a4-aa3b-d240789c8a06",
"test_run_name": "",
"test_run_description": "",
"test_cases": [
{
"test_case_id": "91e60c9e-fbaa-4990-b4cc-7a8bd067f298",
"variables": {"question": "How do test runs work?"},
"output": null
}
]
}
Retrieve Test Run Results
GET /id/<test-run-id>
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs/id/$TEST_RUN_ID"
Response:
{
"id": "2a9dd8bd-6c29-47c8-9ca4-427f73174881",
"name": "regression-test",
"model_name": "gpt-4o-mini",
"prompt_name": "rag-qa",
"sessions_count": 132,
"summary_statistics": {
"auto_evaluation": {"Answer Accuracy": {"5": 104, "4": 9}},
"client_evaluation": {},
"human_evaluation": {}
}
}
List Test Runs
GET /
| Parameter | Type | Description | Default |
|---|---|---|---|
| page | int | Page number | 1 |
| page_size | int | Results per page (max: 100) | 100 |
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs"
For SDK-based testing, see Test Runs.
Customer Feedback
Record user feedback for completions and traces.
Completion Feedback
Base URL: /api/v2/projects/<project-id>/completion-feedback
POST /id/<completion-id>
| Parameter | Type | Description | Required |
|---|---|---|---|
| freeplay_feedback | str | "positive" or "negative" | Yes |
| * | str \| float \| int \| bool | Custom feedback attributes | No |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/completion-feedback/id/$COMPLETION_ID" \
-d '{
"freeplay_feedback": "positive",
"rating": 5,
"comment": "Great response"
}'
Trace Feedback
Base URL: /api/v2/projects/<project-id>/trace-feedback
POST /id/<trace-id>
Same parameters as completion feedback.
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/trace-feedback/id/$TRACE_ID" \
-d '{"freeplay_feedback": "negative", "reason": "incomplete answer"}'
For SDK-based feedback, see Customer Feedback.
Search API
Query sessions, traces, and completions with powerful filtering. This functionality is API-only and not available through the SDKs.
Endpoints
| Endpoint | Description |
|---|---|
| POST /search/sessions | Search sessions |
| POST /search/traces | Search traces |
| POST /search/completions | Search completions |
All endpoints support pagination via page and page_size query parameters.
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/search/completions?page=1&page_size=20" \
-d '{"filters": {"field": "cost", "op": "gte", "value": 0.01}}'
Filter Operators
| Operator | Description |
|---|---|
| eq | Equals |
| lt | Less than |
| gt | Greater than |
| lte | Less than or equal |
| gte | Greater than or equal |
| contains | Contains substring |
| between | Within numeric range |
Available Filters
| Field | Supported Operators | Example Value |
|---|---|---|
| cost | eq, lt, gt, lte, gte | 0.003 |
| latency | eq, lt, gt, lte, gte | 8 |
| start_time | eq, lt, gt, lte, gte | "2024-06-01 00:00:00" |
| environment | eq | "staging" |
| prompt_template | eq | "my-prompt" |
| prompt_template_id | eq | "uuid..." |
| model | eq | "gpt-4o" |
| provider | eq | "openai" |
| review_status | eq | "review_complete" |
| agent_name | eq | "support-agent" |
| trace_agent_name | eq | "my-agent" |
| api_key | eq | "production-key" |
| assignee | eq | "user@example.com" |
| review_theme | eq | "Response Quality Issues" |
| completion_output | contains | "weather" |
| completion_inputs.* | contains | "topic": "weather" |
| completion_feedback.* | contains | "rating": "positive" |
| session_custom_metadata.* | contains | "user_type": "premium" |
| trace_custom_metadata.* | contains | "workflow": "onboarding" |
| trace_input.* | contains | "query": "weather" |
| trace_output.* | contains | "response": "sunny" |
| trace_feedback.* | contains | "rating": "positive" |
| completion_evaluation_results.* | eq | "Response Quality": "4" |
| completion_client_evaluation_results.* | eq | "score": "85" |
| trace_evaluation_results.* | eq, gt, lt, gte, lte, contains | "Quality Score": 5 |
| trace_client_eval_results.* | eq, contains | "confidence_score": 0.95 |
| evaluation_notes.content | contains | "needs review" |
| evaluation_notes.author | eq | "user@example.com" |
| evaluation_notes.created_at | gt, lt, gte, lte | "2024-06-01 00:00:00" |
Compound Filters
Combine filters using and, or, and not:
{
"filters": {
"and": [
{"field": "cost", "op": "gte", "value": 0.001},
{
"or": [
{"field": "model", "op": "eq", "value": "gpt-4o"},
{"field": "model", "op": "eq", "value": "claude-3-opus"}
]
},
{
"not": {"field": "environment", "op": "eq", "value": "prod"}
}
]
}
}
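Sent from Python, the same kind of compound filter might look like this sketch (endpoint path and filter shape as documented above):
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {
    "Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}",
    "Content-Type": "application/json",
}

resp = requests.post(
    f"{api_root}/search/completions",
    headers=headers,
    params={"page": 1, "page_size": 20},
    json={
        "filters": {
            "and": [
                {"field": "cost", "op": "gte", "value": 0.001},
                {"not": {"field": "environment", "op": "eq", "value": "prod"}},
            ]
        }
    },
)
resp.raise_for_status()
print(resp.json())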
Additional API Endpoints
The following endpoints provide administrative and bulk operations not covered by the SDKs.
Projects
Base URL: /api/v2/projects
List All Projects
GET /all
Returns all projects in your workspace.
Does not work with project-scoped API keys. Private projects are excluded.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/all"
Agents
Base URL: /api/v2/projects/<project-id>/agents
List Agents
GET /
| Parameter | Type | Description | Default |
|---|---|---|---|
| page | int | Page number | 1 |
| page_size | int | Results per page (max: 100) | 30 |
| name | string | Filter by exact name | - |
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/agents"
Datasets
Base URL: /api/v2/projects/<project-id>/datasets
Retrieve Dataset
GET /name/<dataset-name> or GET /id/<dataset-id>
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/name/Sample"
Retrieve Dataset Test Cases
GET /name/<dataset-name>/test-cases or GET /id/<dataset-id>/test-cases
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/name/Sample/test-cases"
Upload Test Cases
POST /id/<dataset-id>/test-cases
Maximum 100 test cases per request.
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/id/$DATASET_ID/test-cases" \
-d '{
"examples": [
{"inputs": {"question": "What is Freeplay?"}, "output": "An LLM platform"},
{"inputs": {"question": "How do I integrate?"}, "output": "Use the SDK"}
]
}'
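Because each request is capped at 100 test cases, larger uploads need to be split into batches; a sketch in Python (the example data is illustrative):
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {
    "Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}",
    "Content-Type": "application/json",
}
dataset_id = os.environ["DATASET_ID"]

examples = [
    {"inputs": {"question": f"Question {i}"}, "output": f"Answer {i}"}
    for i in range(250)
]

# Upload in batches of at most 100 test cases per request.
for start in range(0, len(examples), 100):
    batch = examples[start:start + 100]
    resp = requests.post(
        f"{api_root}/datasets/id/{dataset_id}/test-cases",
        headers=headers,
        json={"examples": batch},
    )
    resp.raise_for_status()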
Completions Statistics
Base URL: /api/v2/projects/<project-id>/completions
Aggregate Statistics
POST /statistics
Returns evaluation statistics across all prompts for a date range (max 30 days).
| Parameter | Type | Description | Default |
|---|---|---|---|
| from_date | str | Start date (inclusive) | 7 days ago |
| to_date | str | End date (exclusive) | Today |
curl -X POST \
-H "Authorization: Bearer $FREEPLAY_API_KEY" \
-H "Content-Type: application/json" \
"https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/completions/statistics" \
-d '{"from_date": "2025-01-01", "to_date": "2025-01-15"}'
Statistics by Prompt
POST /statistics/<prompt-template-id>
Same parameters as aggregate statistics, filtered to a specific prompt template.
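For example, a Python sketch requesting statistics for a single prompt template (the environment variable holding the template ID is a placeholder):
import os
import requests

api_root = f"https://app.freeplay.ai/api/v2/projects/{os.environ['FREEPLAY_PROJECT_ID']}"
headers = {
    "Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}",
    "Content-Type": "application/json",
}

resp = requests.post(
    f"{api_root}/completions/statistics/{os.environ['PROMPT_TEMPLATE_ID']}",
    headers=headers,
    json={"from_date": "2025-01-01", "to_date": "2025-01-15"},
)
resp.raise_for_status()
print(resp.json())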
Complete Examples
End-to-End LLM Interaction
This example fetches a prompt, calls OpenAI, and records the completion:
import requests
import os
import json
import uuid
project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}
# 1. Fetch formatted prompt
prompt_resp = requests.post(
f"{api_root}/prompt-templates/name/album_bot",
headers=headers,
params={"environment": "prod", "format": "true"},
json={"pop_star": "Taylor Swift"}
)
formatted_prompt = prompt_resp.json()
# 2. Call OpenAI
openai_resp = requests.post(
"https://api.openai.com/v1/chat/completions",
headers={
"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
"Content-Type": "application/json"
},
json={
"model": formatted_prompt["metadata"]["model"],
"messages": formatted_prompt["formatted_content"],
**formatted_prompt["metadata"]["params"]
}
)
response_message = openai_resp.json()['choices'][0]['message']
# 3. Record to Freeplay
messages = formatted_prompt["formatted_content"] + [response_message]
session_id = str(uuid.uuid4())
requests.post(
f"{api_root}/sessions/{session_id}/completions",
headers={**headers, "Content-Type": "application/json"},
json={
"messages": messages,
"inputs": {"pop_star": "Taylor Swift"},
"prompt_info": {
"prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
"environment": "prod"
}
}
)
Executing a Test Run
import requests
import os
import json
import uuid
project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}
# Create test run
test_run_resp = requests.post(
f"{api_root}/test-runs",
headers={**headers, "Content-Type": "application/json"},
json={"dataset_name": "Example Tests"}
)
test_run = test_run_resp.json()
# Process each test case
for test_case in test_run["test_cases"]:
# Fetch prompt with test case variables
prompt_resp = requests.post(
f"{api_root}/prompt-templates/name/rag-qa",
headers=headers,
params={"environment": "prod", "format": "true"},
json=test_case['variables']
)
formatted_prompt = prompt_resp.json()
# Call LLM (example with OpenAI)
openai_resp = requests.post(
"https://api.openai.com/v1/chat/completions",
headers={
"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
"Content-Type": "application/json"
},
json={
"model": formatted_prompt["metadata"]["model"],
"messages": formatted_prompt["formatted_content"],
**formatted_prompt["metadata"]["params"]
}
)
response_message = openai_resp.json()['choices'][0]['message']
# Record with test run info
messages = formatted_prompt["formatted_content"] + [response_message]
session_id = str(uuid.uuid4())
requests.post(
f"{api_root}/sessions/{session_id}/completions",
headers={**headers, "Content-Type": "application/json"},
json={
"messages": messages,
"inputs": test_case['variables'],
"prompt_info": {
"prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
"environment": "prod"
},
"test_run_info": {
"test_run_id": test_run["test_run_id"],
"test_case_id": test_case["test_case_id"]
}
}
)
Agent Workflow with Traces
This example shows how to record an agent workflow with multiple completions grouped by a trace:
import requests
import os
import uuid
project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}
# Generate IDs for session and trace
session_id = str(uuid.uuid4())
trace_id = str(uuid.uuid4())
user_query = "What's the weather in San Francisco and should I bring an umbrella?"
# 1. First LLM call - agent decides to check weather
prompt_resp = requests.post(
f"{api_root}/prompt-templates/name/weather-agent",
headers=headers,
params={"environment": "prod", "format": "true"},
json={"query": user_query}
)
formatted_prompt = prompt_resp.json()
# Call LLM...
# response_1 = call_llm(formatted_prompt)
# Record first completion with trace association
requests.post(
f"{api_root}/sessions/{session_id}/completions",
headers={**headers, "Content-Type": "application/json"},
json={
"messages": [...], # Include prompt + response
"inputs": {"query": user_query},
"prompt_info": {
"prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
"environment": "prod"
},
"trace_info": {"trace_id": trace_id} # Associate with trace
}
)
# 2. Second LLM call - agent formats final response
# ... make another LLM call and record with same trace_id ...
# 3. Finalize the trace with input/output
requests.post(
f"{api_root}/sessions/{session_id}/traces/id/{trace_id}",
headers={**headers, "Content-Type": "application/json"},
json={
"input": user_query,
"output": "The weather in San Francisco is 65°F and sunny. No umbrella needed!",
"agent_name": "weather-agent"
}
)
Next Steps