Below are details of the Freeplay data model, which can be helpful to understand for more advanced usage. Note that parameter names are written here in snake_case; some SDK languages use camelCase instead.

Record Payload

```python
from freeplay import RecordPayload
```
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| project_id | str | Freeplay's project ID | Y |
| all_messages | List[dict[str, str]] | All messages in the conversation so far | Y |
| inputs | dict | The input variables | N |
| session_info | SessionInfo | The session ID with which the recording should be associated | N |
| prompt_version_info | PromptVersionInfo | The prompt info from a formatted prompt, used for version tracking | N |
| call_info | CallInfo | Information associated with the LLM call | N |
| trace_info | TraceInfo | The trace to associate this completion with (for agent workflows) | N |
| tool_schema | List[dict[str, Any]] | The tools/functions available to the model (for function calling/tool use) | N |
| test_run_info | TestRunInfo | Information associated with the Test Run, if this recording is part of a Test Run | N |
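To make the shape of a record payload concrete, here is a minimal sketch using plain Python structures (no SDK required), with field names taken from the table above. The project ID and message contents are hypothetical placeholders.

```python
# Sketch of the fields a record payload carries. Only project_id and
# all_messages are required; the remaining fields are optional.
payload = {
    "project_id": "my-project-id",  # hypothetical project ID
    "all_messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ],
    "inputs": {"question": "What is the capital of France?"},
}

# The required fields must be present; the rest may be omitted.
assert {"project_id", "all_messages"} <= payload.keys()
```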

Record Response

The recordings.create() method returns a RecordResponse containing:
| Parameter Name | Data Type | Description |
| --- | --- | --- |
| completion_id | str | The unique ID of the recorded completion. Use this as parent_id when creating child tool spans. |

Call Info

```python
from freeplay import CallInfo
```
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| provider | string | The name of your LLM provider (e.g., "openai", "anthropic", "mistral") | N |
| model | string | The name of your model (e.g., "gpt-4o-mini", "claude-3-5-sonnet") | N |
| start_time | float | The start time of the LLM call as a Unix timestamp, used to measure latency | N |
| end_time | float | The end time of the LLM call as a Unix timestamp, used to measure latency | N |
| model_parameters | LLMParameters | The parameters associated with your LLM call (e.g., temperature, max_tokens) | N |
| usage | UsageTokens | Token counts to record to Freeplay for cost calculation. If not included, Freeplay will estimate token counts using Tiktoken. | N |
| provider_info | Dict[str, Any] | Additional provider-specific information, e.g. azure_deployment | N |
| api_style | "batch" or "default" | Set to "batch" if using a batch API (e.g., the OpenAI Batch API) for accurate cost calculation. Use "default" or omit for standard API calls. | N |
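The timing fields are plain Unix timestamps captured around the provider call. Here is a hedged sketch of that pattern using only the standard library; `fake_llm_call` is a stand-in for a real provider request, and the dict mirrors the field names in the table above.

```python
import time

# fake_llm_call stands in for a real provider request so the timing
# pattern is runnable without network access.
def fake_llm_call() -> str:
    time.sleep(0.01)  # simulate provider latency
    return "Hello!"

start_time = time.time()   # Unix timestamp before the call
response = fake_llm_call()
end_time = time.time()     # Unix timestamp after the call

call_info = {
    "provider": "openai",          # e.g., "openai", "anthropic", "mistral"
    "model": "gpt-4o-mini",
    "start_time": start_time,
    "end_time": end_time,
    "model_parameters": {"temperature": 0.7, "max_tokens": 256},
}

# Latency is derived from the two timestamps.
latency = call_info["end_time"] - call_info["start_time"]
assert latency > 0
```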

LLM Parameters

```python
from freeplay.llm_parameters import LLMParameters
```
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| members | Dict[str, Any] | Any parameters associated with your LLM call that you want recorded | Y |
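`members` is a free-form dictionary, so any call parameters you pass to your provider can be recorded alongside the completion. A minimal sketch of the kind of dict it wraps (the specific parameters shown are illustrative):

```python
# Any provider call parameters you want recorded can go in one dict;
# keys are parameter names, values are whatever was sent to the model.
members = {
    "temperature": 0.2,
    "max_tokens": 1024,
    "top_p": 0.9,
}

assert all(isinstance(key, str) for key in members)
```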

Trace Info

TraceInfo is returned by session.create_trace() and used to group completions together and track agent workflows. See Traces for usage examples.
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| trace_id | string | The unique ID of the trace | Auto |
| session_id | string | The session this trace belongs to | Auto |
| input | str \| dict \| list | The input to the trace (user message or tool arguments) | N |
| agent_name | string | Name of the agent for this trace | N |
| parent_id | UUID | Parent trace or completion ID for creating nested hierarchies | N |
| kind | 'tool' \| 'agent' | Type of trace; use 'tool' for tool execution spans | N |
| name | string | Name of the trace or tool | N |
| custom_metadata | dict[str, str \| int \| bool] | Custom metadata to associate with the trace | N |
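The `parent_id` field is what builds nested hierarchies: a tool span points at the agent trace (or completion) that spawned it. Here is a hedged sketch of that linkage with plain dicts; in the SDK the IDs come from `session.create_trace()` and recorded completions, and `uuid4()` merely stands in for them here.

```python
import uuid

# A top-level agent trace; parent_id is None because nothing is above it.
agent_trace = {
    "trace_id": uuid.uuid4(),
    "kind": "agent",
    "agent_name": "research-agent",       # hypothetical agent name
    "input": "Summarize today's support tickets",
    "parent_id": None,
}

# A tool execution span nested under the agent trace via parent_id.
tool_span = {
    "trace_id": uuid.uuid4(),
    "kind": "tool",
    "name": "ticket_search",              # hypothetical tool name
    "input": {"query": "open tickets, last 24h"},
    "parent_id": agent_trace["trace_id"],
}

assert tool_span["parent_id"] == agent_trace["trace_id"]
```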

Test Run Info

```python
from freeplay import TestRunInfo
```
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| test_run_id | string | The ID of your Test Run | Y |
| test_case_id | string | The ID of your Test Case | Y |

OpenAI Function Call

```python
from freeplay.completions import OpenAIFunctionCall
```
| Parameter Name | Data Type | Description | Required |
| --- | --- | --- | --- |
| name | string | The name of the invoked function call | Y |
| arguments | string | The arguments for the invoked function call | Y |
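Note that `arguments` is typed as a string: in OpenAI-style function calling the model returns the arguments as a JSON-encoded string, which must be parsed before use. A minimal sketch (the function name and arguments are hypothetical):

```python
import json

# The model returns `arguments` as a JSON-encoded string, not a dict,
# so it must be decoded before the values can be used.
function_call = {
    "name": "get_weather",                               # hypothetical tool
    "arguments": '{"city": "Paris", "unit": "celsius"}',
}

args = json.loads(function_call["arguments"])
assert args["city"] == "Paris"
```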