Record Payload
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| project_id | str | Freeplay’s projectId | Y |
| all_messages | List[dict[str, str]] | All messages in the conversation so far | Y |
| inputs | dict | The input variables | N |
| session_info | SessionInfo | The session id for which the recording should be associated | N |
| prompt_version_info | PromptVersionInfo | The prompt info from a formatted prompt, used for version tracking | N |
| call_info | CallInfo | Information associated with the LLM call | N |
| trace_info | TraceInfo | The trace to associate this completion with (for agent workflows) | N |
| tool_schema | List[Dict[str, Any]] | The tools/functions available to the model (for function calling/tool use) | N |
| test_run_info | TestRunInfo | Information associated with the Test Run if this recording is part of a Test Run | N |
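The payload shape above can be sketched with plain dicts. This is an illustrative sketch only (the values and variable names are hypothetical, and real calls go through the SDK's recordings client rather than raw dicts):

```python
# Hypothetical record payload, shown as plain dicts for illustration.
# Field names follow the parameter table above; project_id and
# all_messages are the only required fields.
payload = {
    "project_id": "my-project-id",  # required: Freeplay's projectId
    "all_messages": [               # required: full conversation so far
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ],
    "inputs": {"question": "What is the capital of France?"},  # optional
}

# The two required fields are present; everything else is optional.
required = [k for k in ("project_id", "all_messages") if k in payload]
```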
Record Response
The recordings.create() method returns a RecordResponse containing:
| Parameter Name | Data Type | Description |
|---|---|---|
| completion_id | str | The unique ID of the recorded completion. Use this as parent_id when creating child tool spans. |
Call Info
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| provider | string | The name of your LLM provider (e.g., “openai”, “anthropic”, “mistral”) | N |
| model | string | The name of your model (e.g., “gpt-4o-mini”, “claude-3-5-sonnet”) | N |
| start_time | float | The start time of the LLM call as a Unix timestamp. This will be used to measure latency | N |
| end_time | float | The end time of the LLM call as a Unix timestamp. This will be used to measure latency | N |
| model_parameters | LLMParameters | The parameters associated with your LLM call (e.g., temperature, max_tokens) | N |
| usage | UsageTokens | Token count to record to Freeplay for cost calculation. If not included, Freeplay will estimate token counts using Tiktoken. | N |
| provider_info | Dict[str, Any] | Additional provider-specific information, e.g. azure_deployment. | N |
| api_style | “batch” or “default” | Set to “batch” if using a batch API (e.g., OpenAI Batch API) for accurate cost calculation. Use “default” or omit for standard API calls. | N |
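Since latency is derived from start_time and end_time, capture both as Unix timestamps around the LLM call. A minimal sketch, using plain dicts and simulated timing rather than the SDK's CallInfo class:

```python
import time

start_time = time.time()       # capture just before the LLM call
# ... the LLM call would happen here (simulated for this sketch) ...
end_time = start_time + 1.25   # capture just after; here a simulated 1.25 s call

# Hypothetical call-info values; names follow the table above.
call_info = {
    "provider": "openai",        # e.g. "openai", "anthropic", "mistral"
    "model": "gpt-4o-mini",
    "start_time": start_time,    # Unix timestamps; latency is
    "end_time": end_time,        # measured from their difference
    "model_parameters": {"temperature": 0.7, "max_tokens": 256},
}

latency = call_info["end_time"] - call_info["start_time"]  # seconds
```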
LLM Parameters
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| members | Dict[str, Any] | Any parameters associated with your LLM call that you want recorded | Y |
Trace Info
TraceInfo is returned by session.create_trace() and is used to group completions together and track agent workflows. See Traces for usage examples.
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| trace_id | string | The unique ID of the trace | Auto |
| session_id | string | The session this trace belongs to | Auto |
| input | str \| dict \| list | The input to the trace (user message or tool arguments) | N |
| agent_name | string | Name of the agent for this trace | N |
| parent_id | UUID | Parent trace or completion ID for creating nested hierarchies | N |
| kind | 'tool' \| 'agent' | Type of trace—use 'tool' for tool execution spans | N |
| name | string | Name of the trace or tool | N |
| custom_metadata | dict[str, str \| int \| bool] | Custom metadata to associate with the trace | N |
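The parent_id field is what builds nested hierarchies: a tool span points at the trace (or completion) it ran under. A sketch of that relationship using plain dicts and hypothetical names; in practice these objects come from session.create_trace():

```python
import uuid

# Hypothetical top-level agent trace.
agent_trace = {
    "trace_id": str(uuid.uuid4()),
    "session_id": str(uuid.uuid4()),
    "kind": "agent",
    "name": "research_agent",                 # hypothetical agent name
    "input": "Find recent papers on RLHF",
}

# A tool execution span nested under the agent trace.
tool_span = {
    "trace_id": str(uuid.uuid4()),
    "session_id": agent_trace["session_id"],  # same session as its parent
    "parent_id": agent_trace["trace_id"],     # links the span into the hierarchy
    "kind": "tool",                           # 'tool' marks tool execution spans
    "name": "web_search",                     # hypothetical tool name
    "input": {"query": "recent RLHF papers"},
}
```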
Test Run Info
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| test_run_id | string | The id of your Test Run | Y |
| test_case_id | string | The id of your Test Case | Y |
OpenAI Function Call
| Parameter Name | Data Type | Description | Required |
|---|---|---|---|
| name | string | The name of the invoked function call | Y |
| arguments | string | The arguments for the invoked function call | Y |
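Note that arguments is a string, not a dict: providers return function-call arguments as a JSON-encoded string, so consumers typically decode it before use. A short sketch with a hypothetical function call:

```python
import json

# A function call as recorded: name plus JSON-encoded argument string.
# The function name and arguments here are hypothetical.
function_call = {
    "name": "get_weather",
    "arguments": '{"city": "Paris", "unit": "celsius"}',
}

# Decode the JSON string into a dict before using the arguments.
args = json.loads(function_call["arguments"])
```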

