

Using Traces

Traces are an organizing component of a session. They are used to group completions together and provide a way to track the progress of a session. Named traces form the basis of Freeplay’s support for Agents. For a comprehensive guide on building agents with traces, see Agents.
session.create_trace — Creates a TraceInfo object that is used to group completions.
- input: str - Input to the trace.
- agent_name: str (optional) - Name of the agent/trace.
- parent_id: UUID (optional) - Parent trace or completion ID for nesting.
- kind: str (optional) - Either 'tool' or 'agent'.
- name: str (optional) - Name of the trace/tool.
- custom_metadata: dict[str, Any] (optional) - Metadata to associate with the trace.

trace.record_output — Records the final output to a trace.
- output: str - Output of the agent/trace.
- project_id: str - ID of the project to record to.
- eval_results: dict[str, Any] (optional) - Code evaluation results.
- test_run_info: TestRunInfo (optional) - Test run information.

traces.update — Updates a trace after it has been recorded.
- project_id: str - ID of the project.
- session_id: str - ID of the session.
- trace_id: str - ID of the trace.
- output: JSONValue (optional) - Updated output.
- metadata: dict (optional) - Custom metadata.
- feedback: dict (optional) - Customer feedback.
- eval_results: dict (optional) - Evaluation results.
- test_run_info: TestRunInfo (optional) - Test run information.
Traces are a more fine-grained way to group LLM interactions within a Session. A Trace can contain one or more Completions, and a Session can contain one or more Traces. Find a more detailed guide on how Sessions, Traces, and Completions fit together here. For a complete code example, see Record Traces. Traces are created from an existing session object:
# assumes fp_client is an initialized Freeplay client
input_question = "What color is the sky?"

# create or restore a session
session = fp_client.sessions.create()

# create the trace
trace_info = session.create_trace(
    input=input_question,
    agent_name="weather_agent",
    custom_metadata={
        "version": "1.0.8"
    }
)
To tie a Completion to a given Trace, pass the trace info in the record call:
from freeplay import Freeplay, RecordPayload, CallInfo, SessionInfo, TraceInfo

# create or restore a session
session = fp_client.sessions.create()

# create the trace
trace_info = session.create_trace(input=question)

# fetch prompt
# call LLM

# record with trace id
record_response = fp_client.recordings.create(
    RecordPayload(
        project_id=project_id,
        all_messages=all_messages,
        session_info=session_info,
        inputs=input_variables,
        prompt_version_info=formatted_prompt.prompt_info,
        call_info=call_info,
        trace_info=trace_info  # Pass the trace info along
    )
)
For a complete working example, see Record Traces. For agent-specific patterns, see Agents. Once you have recorded completions to the trace and reached the final output, you must close your trace in order to wrap the completions together. You can also optionally record eval results to the trace at this point:
# record output to the trace
output_answer = "blue" # from the LLM
trace_info.record_output(
    project_id=project_id,
    output=output_answer,
    eval_results={  # Optional trace eval logging
        "sentiment": 0.7,
        "valid_path": True,
    }
)

Adding Tools to Traces

When building agents that use tools, tool calls are recorded as the output of an LLM call by default. You can also add explicit tool spans to provide more data about tool execution, including latency and other metadata. These are recorded as a Trace with kind='tool' and linked to the parent completion using parent_id. For complete examples and code snippets, see Tool Calls.
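As a rough sketch of that pattern, you can time the tool execution in plain Python and attach the latency as metadata when creating the tool span. The tool function, completion_id, and metadata keys below are illustrative assumptions, not part of the SDK:

```python
import time

def look_up_weather(city: str) -> str:
    # Hypothetical tool implementation standing in for a real tool
    return "sunny"

# Time the tool execution so latency can be attached as metadata
start = time.perf_counter()
result = look_up_weather("Denver")
latency_ms = round((time.perf_counter() - start) * 1000, 2)

tool_metadata = {"latency_ms": latency_ms, "tool_version": "0.1"}

# The SDK call itself (assumes an initialized fp_client, an existing
# session, and the parent completion's ID from a prior record call):
# tool_trace = session.create_trace(
#     input="Denver",
#     kind="tool",               # mark this span as a tool execution
#     name="look_up_weather",
#     parent_id=completion_id,   # link the span to its parent completion
#     custom_metadata=tool_metadata,
# )
# tool_trace.record_output(project_id=project_id, output=result)
```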

Updating a Trace

Freeplay allows you to update a trace after it has been recorded. This is useful for adding evaluation results, customer feedback, metadata, or updating the output. To do this, you need the project_id, session_id, and trace_id. You must provide at least one of output, metadata, feedback, eval_results, or test_run_info.
from freeplay.resources.traces import TraceUpdatePayload

# Update the trace with evaluation results and feedback
fp_client.traces.update(
    TraceUpdatePayload(
        project_id=project_id,
        session_id=session.session_id,
        trace_id=trace_info.trace_id,
        output="updated output",
        metadata={"version": "1.0.8"},
        feedback={"freeplay_feedback": "positive", "satisfaction": True},
        eval_results={"accuracy": 0.99},
    )
)
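Because the update call requires at least one of those mutable fields, a small pre-flight check can be sketched in plain Python before invoking the SDK. The helper below is a hypothetical convenience, not part of the Freeplay library; the commented SDK call assumes the fp_client, project_id, session, and trace_info set up in the examples above:

```python
# The fields traces.update accepts; at least one must be provided
UPDATABLE_FIELDS = ("output", "metadata", "feedback", "eval_results", "test_run_info")

def has_updatable_field(**kwargs) -> bool:
    # True if at least one updatable field was supplied and is not None
    return any(kwargs.get(field) is not None for field in UPDATABLE_FIELDS)

update_args = {"feedback": {"freeplay_feedback": "positive"}}
assert has_updatable_field(**update_args)

# With the check passed, the SDK call would look like:
# fp_client.traces.update(
#     TraceUpdatePayload(
#         project_id=project_id,
#         session_id=session.session_id,
#         trace_id=trace_info.trace_id,
#         **update_args,
#     )
# )
```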