We’re transitioning to OpenAPI spec-driven documentation. The new API Reference features interactive “Try It” functionality and auto-generated examples, and you can access the full OpenAPI spec there. This page will remain available for detailed descriptions during the transition.

The Freeplay HTTP API provides programmatic access to all platform capabilities. While the Freeplay SDKs offer language-native bindings for common operations, the API exposes the full range of functionality.
SDK vs. API: The Freeplay SDKs are designed to cover the most common integration patterns—prompt management, recording completions, and running tests. The HTTP API is a superset that includes additional capabilities like bulk operations, advanced search, and administrative endpoints.

Getting Started

Base URL

Your API root is your Freeplay instance URL plus /api/v2/.
• Cloud: https://app.freeplay.ai/api/v2
• Private: https://{your-subdomain}.freeplay.ai/api/v2

Authentication

Authenticate requests using your API key in the Authorization header:
Authorization: Bearer {freeplay_api_key}
API keys are managed at https://app.freeplay.ai/settings/api-access.

Error Handling

Freeplay uses standard HTTP status codes:
• 200: Success
• 400: Bad Request - Malformed request or invalid data
• 401: Unauthorized - Invalid or missing API key
• 404: Not Found - Resource doesn't exist or no access
• 500: Server Error - Transient issue, retry with backoff
For 500 errors, retry up to three times with at least 5 seconds between attempts. Do not retry 400-level errors—these indicate client issues that won’t resolve on retry.
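The retry policy above can be sketched as a small wrapper. This is a minimal sketch: `send` is assumed to be any zero-argument callable that returns an object with a `status_code` attribute, such as a `requests` call.

```python
import time

def request_with_retry(send, max_attempts=3, delay=5):
    """Call send() and retry on 5xx responses.

    4xx responses are returned immediately: they indicate client
    errors that won't resolve on retry.
    """
    for attempt in range(1, max_attempts + 1):
        response = send()
        if response.status_code < 500:
            return response  # success, or a client error not worth retrying
        if attempt < max_attempts:
            time.sleep(delay)  # at least 5 seconds between attempts
    return response

# Usage (hypothetical):
# resp = request_with_retry(lambda: requests.get(url, headers=headers))
```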
Example Error Response:
{
  "message": "Session ID 456 is not a valid uuid4 format."
}

Observability

Freeplay organizes observability data in a hierarchy:
Session (user conversation or workflow)
  └── Trace (optional grouping of related completions)
        └── Completion (a single LLM call)
  • Sessions are created implicitly when you record your first completion with a session ID, or you can create one explicitly
  • Traces are optional in general but required for agent workflows; they group related completions together
  • Completions are the atomic unit: a prompt sent to an LLM and its response
For conceptual background, see Sessions, Traces, and Completions. For SDK usage, see Organizing Principles.

Sessions

Sessions group related LLM completions together, typically representing a user conversation or workflow.
Base URL: /api/v2/projects/<project-id>/sessions
Sessions are created implicitly when you record a completion. You only need to generate a session ID (UUID v4) client-side. For SDK usage, see Sessions.
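Because sessions are created implicitly, the only client-side step is generating the ID (a sketch; the endpoint path shown in the comment assumes the Cloud base URL and a hypothetical project ID):

```python
import uuid

# Generate a session ID client-side; no separate create call is needed.
session_id = str(uuid.uuid4())

# Recording a completion under this ID implicitly creates the session:
# POST https://app.freeplay.ai/api/v2/projects/{project_id}/sessions/{session_id}/completions
```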

Retrieve Sessions

GET /
Returns sessions with their completions, ordered by most recent first.
Query Parameters:
• page (int): Page number. Optional; default 1.
• page_size (int): Results per page (max: 100). Optional; default 10.
• from_date (str): Start date (inclusive). Format: YYYY-MM-DD. Optional.
• to_date (str): End date (exclusive). Format: YYYY-MM-DD. Optional.
• test_list (str): Filter by dataset name. Optional.
• test_run_id (str): Filter by test run ID. Optional.
• prompt_name (str): Filter by prompt template name. Optional.
• review_queue_id (str): Filter by review queue ID. Optional.
• custom_metadata.* (dict): Filter by session metadata. Optional.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions?page_size=20&from_date=2025-01-01"
Response:
[
  {
    "session_id": "85d7d393-4e85-4664-8598-5dd91dc75b5b",
    "start_time": "2024-07-05T14:33:02.721000",
    "custom_metadata": {},
    "messages": [
      {
        "completion_id": "0202ae57-1098-4d3e-94f7-1fbb034fbed5",
        "prompt_template_name": "my-prompt",
        "model_name": "claude-2.1",
        "provider_name": "anthropic",
        "environment": "latest",
        "prompt": [...],
        "response": "You bundle prompts by...",
        "input_variables": {"question": "How do I bundle prompts?"}
      }
    ]
  }
]
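Paging through all sessions can be sketched with a small generator (a sketch; `fetch_page` is a stand-in for an authenticated `requests.get` against the endpoint above, returning one page's JSON array):

```python
def iter_sessions(fetch_page, page_size=100):
    """Yield sessions across pages until a short page signals the end."""
    page = 1
    while True:
        batch = fetch_page(page)
        yield from batch
        if len(batch) < page_size:
            break  # a short (or empty) page means there are no more results
        page += 1

# Usage (hypothetical):
# sessions = list(iter_sessions(
#     lambda p: requests.get(url, headers=headers,
#                            params={"page": p, "page_size": 100}).json()))
```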

Delete Session

DELETE /<session-id>
Permanently deletes a session and all associated completions.
This operation cannot be undone.
curl -X DELETE \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID"

Traces

Traces group related completions within a session. They're essential for agent workflows where multiple LLM calls work together to accomplish a task.
Base URL: /api/v2/projects/<project-id>/sessions/<session-id>/traces
For SDK usage, see Traces.

Record a Trace

POST /id/<trace-id>
Records a trace within a session. Like sessions, traces are created implicitly: generate a UUID v4 client-side and use it as the trace ID.
Request Payload:
• input (any): Input to the trace (e.g., user query). Required.
• output (any): Output from the trace (e.g., final response). Required.
• agent_name (str): Name of the agent (required for agent workflows). Optional.
• name (str): Display name for the trace. Optional.
• custom_metadata (dict[str, str|int|float|bool]): Custom metadata. Optional.
• eval_results (dict[str, float|bool]): Code evaluation results. Optional.
• parent_id (UUID): Parent trace ID for nested traces. Optional.
• test_run_info ({test_run_id, test_case_id}): Test run association. Optional.
• kind ("tool"): Set to "tool" for tool call traces. Optional.
• start_time (datetime): Trace start time. Optional.
• end_time (datetime): Trace end time. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/traces/id/$TRACE_ID" \
  -d '{
    "input": "What is the weather in San Francisco?",
    "output": "The weather in San Francisco is 65°F and sunny.",
    "agent_name": "weather-agent",
    "custom_metadata": {"version": "1.0.0"}
  }'
When building agents, record completions with trace_info to associate them with a trace, then call this endpoint to finalize the trace with its output.

Completions

Completions are the atomic unit of observability: a single LLM call with its prompt and response.
Base URL: /api/v2/projects/<project-id>/sessions/<session-id>/completions
For SDK usage, see Recording Completions.

Record a Completion

POST /
Records an LLM completion to a session. This is the primary endpoint for logging LLM interactions.
Request Payload:
• messages (list[{role, content}]): Messages sent to and received from the LLM. Required.
• inputs (dict[str, any]): Input variables used in the prompt. Required.
• prompt_info ({prompt_template_version_id, environment}): Prompt template info. Required.
• trace_info ({trace_id}): Associate the completion with a trace. Optional.
• tool_schema (list[{name, description, parameters}]): Tool definitions. Optional.
• session_info ({custom_metadata: dict}): Session metadata. Optional.
• call_info ({start_time, end_time, model, provider, usage, ...}): LLM call details. Optional.
• test_run_info ({test_run_id, test_case_id}): Test run association. Optional.
• completion_id (UUID): Custom completion ID. Optional.
• eval_results (dict[str, bool|float]): Code evaluation results. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/completions" \
  -d '{
    "messages": [
      {"role": "user", "content": "Generate an album name for Taylor Swift"},
      {"role": "assistant", "content": "Rainy Melodies"}
    ],
    "inputs": {"pop_star": "Taylor Swift"},
    "prompt_info": {
      "prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
      "environment": "prod"
    }
  }'
Response:
{
  "completion_id": "707bc301-85e9-4f02-aa97-faba8cd7774a"
}
With Trace Association: To associate a completion with a trace (for agent workflows), include trace_info:
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/sessions/$SESSION_ID/completions" \
  -d '{
    "messages": [...],
    "inputs": {...},
    "prompt_info": {...},
    "trace_info": {
      "trace_id": "abc123-trace-uuid"
    }
  }'

Prompt Templates

Create, retrieve, and manage prompt templates programmatically. For conceptual background on prompt management patterns, see Prompt Management.
Base URL: /api/v2/projects/<project-id>/prompt-templates

Create Prompt Template

POST /
Creates a new prompt template (without any versions). Typically used when you want to create a template first, then add versions separately.
NOTE: The same objective can be accomplished using the create_template_if_not_exists parameter on the Create Version by Name endpoint below.
Request Payload:
• name (str): Template name (unique within the project). Required.
• id (uuid): Custom template ID. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates" \
  -d '{"name": "my-assistant"}'
Response:
{
  "id": "5cccc8e6-b163-4094-8bd2-90030f151ec8"
}

List Prompt Templates

GET /
Returns all prompt templates in a project with pagination.
Query Parameters:
• page (int): Page number. Default: 1.
• page_size (int): Results per page (max: 100). Default: 30.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates?page=1&page_size=50"

Create Version by Name

POST /name/<template-name>/versions
Creates a new prompt version by template name. This is the recommended endpoint for CI/CD workflows because it can create the template automatically if it doesn't exist.
Query Parameters:
• create_template_if_not_exists (boolean): Create the template if not found. Default: false.
Request Payload:
• template_messages (list[{role, content}]): Message array with mustache variables. Required.
• model (str): Model name. Required.
• provider (str): Provider key (openai, anthropic, etc.). Required.
• llm_parameters (dict): Model parameters (temperature, etc.). Optional.
• tool_schema (list): Tool definitions. Optional.
• output_schema (dict): Structured output schema. Optional.
• version_name (str): Display name. Optional.
• version_description (str): Description. Optional.
• environments (list[str]): Environments to deploy to. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/name/my-assistant/versions?create_template_if_not_exists=true" \
  -d '{
    "template_messages": [
      {"role": "system", "content": "You are a helpful assistant. The user'\''s name is {{user_name}}."},
      {"role": "user", "content": "{{user_input}}"}
    ],
    "provider": "openai",
    "model": "gpt-4o",
    "llm_parameters": {"temperature": 0.2, "max_tokens": 1024},
    "version_name": "v1.2.0",
    "version_description": "Production release with improved system prompt"
  }'
Response:
{
  "prompt_template_id": "5cccc8e6-b163-4094-8bd2-90030f151ec8",
  "prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
  "prompt_template_name": "my-assistant",
  "version_name": "v1.2.0",
  "version_description": "Production release with improved system prompt",
  "format_version": 2,
  "project_id": "abc123-project-uuid",
  "content": [...],
  "metadata": {
    "flavor": "openai_chat",
    "model": "gpt-4o",
    "provider": "openai",
    "params": {"temperature": 0.2, "max_tokens": 1024}
  }
}
Code-managed prompts: Use the create_template_if_not_exists=true parameter in your CI/CD pipeline to automatically sync prompts from your codebase to Freeplay. See Code as source of truth for the full workflow.
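A CI step that assembles the version payload from code can be sketched like this (a sketch; tying version_name to a GIT_TAG environment variable is our assumption, not a Freeplay convention):

```python
import os

def build_version_payload(template_messages, model, provider, **optional):
    """Assemble the request body for the Create Version by Name endpoint."""
    payload = {
        "template_messages": template_messages,
        "model": model,
        "provider": provider,
    }
    # Optional fields: llm_parameters, tool_schema, output_schema,
    # version_name, version_description, environments
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_version_payload(
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "{{user_input}}"}],
    model="gpt-4o",
    provider="openai",
    version_name=os.getenv("GIT_TAG", "dev"),  # hypothetical release tagging
)
# POST payload to:
#   .../prompt-templates/name/{template_name}/versions?create_template_if_not_exists=true
```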

Retrieve by Name

POST /name/<template-name>
Fetches a prompt template in one of three forms:
• Raw: template with {{variable}} placeholders. Request with no body and no format param.
• Bound: variables inserted, provider-agnostic format. Pass variables in the body.
• Formatted: variables inserted, provider-specific format. Pass variables in the body plus format=true.
Query Parameters:
• environment (str): Environment tag. Default: latest.
• format (boolean): Return the formatted prompt. Default: false.
Formatted Prompt (most common):
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/name/album_bot?environment=prod&format=true" \
  -d '{"pop_star": "Taylor Swift"}'
Response:
{
  "format_version": 2,
  "prompt_template_id": "5cccc8e6-b163-4094-8bd2-90030f151ec8",
  "prompt_template_name": "album_bot",
  "prompt_template_version_id": "f503c15e-2f0f-4ce4-b443-4c87d0b6435d",
  "formatted_content": [
    {
      "role": "user",
      "content": "Generate a two word album name in the style of Taylor Swift"
    }
  ],
  "formatted_tool_schema": [...],
  "metadata": {
    "flavor": "openai_chat",
    "model": "gpt-3.5-turbo-0125",
    "provider": "openai",
    "params": {"max_tokens": 100, "temperature": 0.2}
  }
}

Retrieve by Version ID

POST /id/<template-id>/versions/<version-id>
Fetches a specific prompt version. Useful for pinning to a known version.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions/$VERSION_ID"

Retrieve All Templates

GET /all/<environment-name>
Returns all prompt templates in an environment.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/all/prod"

Create Version by ID

POST /id/<template-id>/versions
Adds a new version to an existing prompt template (identified by ID only). Deployed to latest by default.
Request Payload:
• template_messages (list[{role, content}]): Message array with mustache variables. Required.
• model (str): Model name. Required.
• provider (str): Provider key (openai, anthropic, etc.). Required.
• llm_parameters (dict): Model parameters (temperature, etc.). Optional.
• tool_schema (list): Tool definitions. Optional.
• output_schema (dict): Structured output schema. Optional.
• version_name (str): Display name. Optional.
• version_description (str): Description. Optional.
• environments (list[str]): Environments to deploy to. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions" \
  -d '{
    "template_messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "{{user_input}}"}
    ],
    "provider": "openai",
    "model": "gpt-4o",
    "llm_parameters": {"temperature": 0.2, "max_tokens": 256}
  }'

Update Environments

POST /id/<template-id>/versions/<version-id>/environments
Assigns a version to additional environments.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/id/$TEMPLATE_ID/versions/$VERSION_ID/environments" \
  -d '{"environments": ["staging", "prod"]}'
For SDK-based prompt management, see Prompts. For the conceptual guide on managing prompts programmatically, see Code as source of truth.

Test Runs

Execute batch tests using saved datasets.
Base URL: /api/v2/projects/<project-id>/test-runs

Create Test Run

POST /
Creates a new test run from an existing dataset.
Request Payload:
• dataset_name (str): Name of the dataset. Required.
• include_outputs (boolean): Include expected outputs. Optional; default true.
• test_run_name (str): Display name. Optional.
• test_run_description (str): Description. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs" \
  -d '{"dataset_name": "Example Tests"}'
Response:
{
  "test_run_id": "bd3eb06c-f93b-46a4-aa3b-d240789c8a06",
  "test_run_name": "",
  "test_run_description": "",
  "test_cases": [
    {
      "test_case_id": "91e60c9e-fbaa-4990-b4cc-7a8bd067f298",
      "variables": {"question": "How do test runs work?"},
      "output": null
    }
  ]
}

Retrieve Test Run Results

GET /id/<test-run-id>
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs/id/$TEST_RUN_ID"
Response:
{
  "id": "2a9dd8bd-6c29-47c8-9ca4-427f73174881",
  "name": "regression-test",
  "model_name": "gpt-4o-mini",
  "prompt_name": "rag-qa",
  "sessions_count": 132,
  "summary_statistics": {
    "auto_evaluation": {"Answer Accuracy": {"5": 104, "4": 9}},
    "client_evaluation": {},
    "human_evaluation": {}
  }
}

List Test Runs

GET /
Query Parameters:
• page (int): Page number. Default: 1.
• page_size (int): Results per page (max: 100). Default: 100.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/test-runs"
For SDK-based testing, see Test Runs.

Customer Feedback

Record user feedback for completions and traces.

Completion Feedback

Base URL: /api/v2/projects/<project-id>/completion-feedback
POST /id/<completion-id>
Request Payload:
• freeplay_feedback (str): "positive" or "negative". Required.
• * (str|float|int|bool): Custom feedback attributes. Optional.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/completion-feedback/id/$COMPLETION_ID" \
  -d '{
    "freeplay_feedback": "positive",
    "rating": 5,
    "comment": "Great response"
  }'
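Since freeplay_feedback only accepts "positive" or "negative", it is worth validating the payload before sending (a sketch; the helper name and client-side validation are our own):

```python
def feedback_payload(freeplay_feedback, **custom):
    """Build a feedback body, validating the required freeplay_feedback field."""
    if freeplay_feedback not in ("positive", "negative"):
        raise ValueError('freeplay_feedback must be "positive" or "negative"')
    # Any extra keyword arguments become custom feedback attributes.
    return {"freeplay_feedback": freeplay_feedback, **custom}

payload = feedback_payload("positive", rating=5, comment="Great response")
# POST payload to .../completion-feedback/id/{completion_id}
```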

Trace Feedback

Base URL: /api/v2/projects/<project-id>/trace-feedback
POST /id/<trace-id>
Same parameters as completion feedback.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/trace-feedback/id/$TRACE_ID" \
  -d '{"freeplay_feedback": "negative", "reason": "incomplete answer"}'
For SDK-based feedback, see Customer Feedback.

Search API

Query sessions, traces, and completions with powerful filtering. This functionality is API-only and not available through the SDKs.

Endpoints

• POST /search/sessions: Search sessions
• POST /search/traces: Search traces
• POST /search/completions: Search completions
All endpoints support pagination via page and page_size query parameters.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/search/completions?page=1&page_size=20" \
  -d '{"filters": {"field": "cost", "op": "gte", "value": 0.01}}'

Filter Operators

• eq: Equals
• lt: Less than
• gt: Greater than
• lte: Less than or equal
• gte: Greater than or equal
• contains: Contains substring
• between: Within numeric range

Available Filters

Each filter is listed as field (supported operators): example value.
• cost (eq, lt, gt, lte, gte): e.g. 0.003
• latency (eq, lt, gt, lte, gte): e.g. 8
• start_time (eq, lt, gt, lte, gte): e.g. "2024-06-01 00:00:00"
• environment (eq): e.g. "staging"
• prompt_template (eq): e.g. "my-prompt"
• prompt_template_id (eq): e.g. "uuid..."
• model (eq): e.g. "gpt-4o"
• provider (eq): e.g. "openai"
• review_status (eq): e.g. "review_complete"
• agent_name (eq): e.g. "support-agent"
• trace_agent_name (eq): e.g. "my-agent"
• api_key (eq): e.g. "production-key"
• assignee (eq): e.g. "user@example.com"
• review_theme (eq): e.g. "Response Quality Issues"
• completion_output (contains): e.g. "weather"
• completion_inputs.* (contains): e.g. "topic": "weather"
• completion_feedback.* (contains): e.g. "rating": "positive"
• session_custom_metadata.* (contains): e.g. "user_type": "premium"
• trace_custom_metadata.* (contains): e.g. "workflow": "onboarding"
• trace_input.* (contains): e.g. "query": "weather"
• trace_output.* (contains): e.g. "response": "sunny"
• trace_feedback.* (contains): e.g. "rating": "positive"
• completion_evaluation_results.* (eq): e.g. "Response Quality": "4"
• completion_client_evaluation_results.* (eq): e.g. "score": "85"
• trace_evaluation_results.* (eq, gt, lt, gte, lte, contains): e.g. "Quality Score": 5
• trace_client_eval_results.* (eq, contains): e.g. "confidence_score": 0.95
• evaluation_notes.content (contains): e.g. "needs review"
• evaluation_notes.author (eq): e.g. "user@example.com"
• evaluation_notes.created_at (gt, lt, gte, lte): e.g. "2024-06-01 00:00:00"

Compound Filters

Combine filters using and, or, and not:
{
  "filters": {
    "and": [
      {"field": "cost", "op": "gte", "value": 0.001},
      {
        "or": [
          {"field": "model", "op": "eq", "value": "gpt-4o"},
          {"field": "model", "op": "eq", "value": "claude-3-opus"}
        ]
      },
      {
        "not": {"field": "environment", "op": "eq", "value": "prod"}
      }
    ]
  }
}
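Compound filters are easy to assemble programmatically (a sketch; the helper names are our own, only the JSON shape comes from the example above):

```python
def field(name, op, value):
    """A single filter clause."""
    return {"field": name, "op": op, "value": value}

def all_of(*filters):
    return {"and": list(filters)}

def any_of(*filters):
    return {"or": list(filters)}

def not_(f):
    return {"not": f}

# Reproduces the compound filter shown above.
body = {"filters": all_of(
    field("cost", "gte", 0.001),
    any_of(field("model", "eq", "gpt-4o"),
           field("model", "eq", "claude-3-opus")),
    not_(field("environment", "eq", "prod")),
)}
# POST body to .../search/completions
```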

Additional API Endpoints

The following endpoints provide administrative and bulk operations not covered by the SDKs.

Projects

Base URL: /api/v2/projects

List All Projects

GET /all
Returns all projects in your workspace.
Does not work with project-scoped API keys. Private projects are excluded.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/all"

Agents

Base URL: /api/v2/projects/<project-id>/agents

List Agents

GET /
Query Parameters:
• page (int): Page number. Default: 1.
• page_size (int): Results per page (max: 100). Default: 30.
• name (string): Filter by exact name. Optional.
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/agents"

Datasets

Base URL: /api/v2/projects/<project-id>/datasets

Retrieve Dataset Metadata

GET /name/<dataset-name> or GET /id/<dataset-id>
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/name/Sample"

Retrieve Dataset Test Cases

GET /name/<dataset-name>/test-cases or GET /id/<dataset-id>/test-cases
curl -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/name/Sample/test-cases"

Upload Test Cases

POST /id/<dataset-id>/test-cases
Maximum 100 test cases per request.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/datasets/id/$DATASET_ID/test-cases" \
  -d '{
    "examples": [
      {"inputs": {"question": "What is Freeplay?"}, "output": "An LLM platform"},
      {"inputs": {"question": "How do I integrate?"}, "output": "Use the SDK"}
    ]
  }'
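Because of the 100-case limit, larger datasets need to be uploaded in batches (a sketch; `post_batch` stands in for the authenticated `requests.post` call above):

```python
def chunked(items, size=100):
    """Split `items` into consecutive batches of at most `size`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def upload_test_cases(examples, post_batch):
    """Upload examples in batches that respect the 100-case limit."""
    for batch in chunked(examples, 100):
        post_batch({"examples": batch})

# Usage (hypothetical):
# upload_test_cases(examples,
#     lambda body: requests.post(url, headers=headers, json=body))
```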

Completions Statistics

Base URL: /api/v2/projects/<project-id>/completions

Aggregate Statistics

POST /statistics
Returns evaluation statistics across all prompts for a date range (max 30 days).
Request Payload:
• from_date (str): Start date (inclusive). Default: 7 days ago.
• to_date (str): End date (exclusive). Default: today.
curl -X POST \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/completions/statistics" \
  -d '{"from_date": "2025-01-01", "to_date": "2025-01-15"}'
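The 30-day maximum can be checked client-side before calling the endpoint (a sketch; the client-side check is our own, the YYYY-MM-DD format follows the convention used elsewhere in this API):

```python
from datetime import date

def statistics_body(from_date, to_date):
    """Build the request body, enforcing the 30-day maximum range."""
    span = (date.fromisoformat(to_date) - date.fromisoformat(from_date)).days
    if not 0 < span <= 30:
        raise ValueError("date range must cover between 1 and 30 days")
    return {"from_date": from_date, "to_date": to_date}

body = statistics_body("2025-01-01", "2025-01-15")
# POST body to .../completions/statistics
```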

Statistics by Prompt

POST /statistics/<prompt-template-id>
Same parameters as aggregate statistics, filtered to a specific prompt template.

Complete Examples

End-to-End LLM Interaction

This example fetches a prompt, calls OpenAI, and records the completion:
import requests
import os
import uuid

project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}

# 1. Fetch formatted prompt
prompt_resp = requests.post(
    f"{api_root}/prompt-templates/name/album_bot",
    headers=headers,
    params={"environment": "prod", "format": "true"},
    json={"pop_star": "Taylor Swift"}
)
formatted_prompt = prompt_resp.json()

# 2. Call OpenAI
openai_resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
        "Content-Type": "application/json"
    },
    json={
        "model": formatted_prompt["metadata"]["model"],
        "messages": formatted_prompt["formatted_content"],
        **formatted_prompt["metadata"]["params"]
    }
)
response_message = openai_resp.json()['choices'][0]['message']

# 3. Record to Freeplay
messages = formatted_prompt["formatted_content"] + [response_message]
session_id = str(uuid.uuid4())

requests.post(
    f"{api_root}/sessions/{session_id}/completions",
    headers={**headers, "Content-Type": "application/json"},
    json={
        "messages": messages,
        "inputs": {"pop_star": "Taylor Swift"},
        "prompt_info": {
            "prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
            "environment": "prod"
        }
    }
)

Executing a Test Run

import requests
import os
import uuid

project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}

# Create test run
test_run_resp = requests.post(
    f"{api_root}/test-runs",
    headers={**headers, "Content-Type": "application/json"},
    json={"dataset_name": "Example Tests"}
)
test_run = test_run_resp.json()

# Process each test case
for test_case in test_run["test_cases"]:
    # Fetch prompt with test case variables
    prompt_resp = requests.post(
        f"{api_root}/prompt-templates/name/rag-qa",
        headers=headers,
        params={"environment": "prod", "format": "true"},
        json=test_case['variables']
    )
    formatted_prompt = prompt_resp.json()

    # Call LLM (example with OpenAI)
    openai_resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
            "Content-Type": "application/json"
        },
        json={
            "model": formatted_prompt["metadata"]["model"],
            "messages": formatted_prompt["formatted_content"],
            **formatted_prompt["metadata"]["params"]
        }
    )
    response_message = openai_resp.json()['choices'][0]['message']

    # Record with test run info
    messages = formatted_prompt["formatted_content"] + [response_message]
    session_id = str(uuid.uuid4())

    requests.post(
        f"{api_root}/sessions/{session_id}/completions",
        headers={**headers, "Content-Type": "application/json"},
        json={
            "messages": messages,
            "inputs": test_case['variables'],
            "prompt_info": {
                "prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
                "environment": "prod"
            },
            "test_run_info": {
                "test_run_id": test_run["test_run_id"],
                "test_case_id": test_case["test_case_id"]
            }
        }
    )

Agent Workflow with Traces

This example shows how to record an agent workflow with multiple completions grouped by a trace:
import requests
import os
import uuid

project_id = os.getenv("FREEPLAY_PROJECT_ID")
api_root = f"https://app.freeplay.ai/api/v2/projects/{project_id}"
headers = {"Authorization": f"Bearer {os.getenv('FREEPLAY_API_KEY')}"}

# Generate IDs for session and trace
session_id = str(uuid.uuid4())
trace_id = str(uuid.uuid4())

user_query = "What's the weather in San Francisco and should I bring an umbrella?"

# 1. First LLM call - agent decides to check weather
prompt_resp = requests.post(
    f"{api_root}/prompt-templates/name/weather-agent",
    headers=headers,
    params={"environment": "prod", "format": "true"},
    json={"query": user_query}
)
formatted_prompt = prompt_resp.json()

# Call LLM...
# response_1 = call_llm(formatted_prompt)

# Record first completion with trace association
requests.post(
    f"{api_root}/sessions/{session_id}/completions",
    headers={**headers, "Content-Type": "application/json"},
    json={
        "messages": [...],  # Include prompt + response
        "inputs": {"query": user_query},
        "prompt_info": {
            "prompt_template_version_id": formatted_prompt["prompt_template_version_id"],
            "environment": "prod"
        },
        "trace_info": {"trace_id": trace_id}  # Associate with trace
    }
)

# 2. Second LLM call - agent formats final response
# ... make another LLM call and record with same trace_id ...

# 3. Finalize the trace with input/output
requests.post(
    f"{api_root}/sessions/{session_id}/traces/id/{trace_id}",
    headers={**headers, "Content-Type": "application/json"},
    json={
        "input": user_query,
        "output": "The weather in San Francisco is 65°F and sunny. No umbrella needed!",
        "agent_name": "weather-agent"
    }
)

Next Steps