The Freeplay SDK is organized around a clear hierarchy of entities that map to how generative AI applications work in practice. Understanding this structure will help you integrate Freeplay effectively.

The Observability Hierarchy

Freeplay organizes your LLM application observability data in a three-level hierarchy of sessions, traces, and completions within each project:
Project
  └── Session (a complete user interaction, multi-turn conversation, or instance of your application running)
        └── Trace (optional grouping of related completions; required for complex agents)
              └── Completion (a single LLM call)
  • Completions are atomic LLM calls — a prompt sent to a model and its response.
  • Traces optionally group related completions, such as the multiple LLM calls that power a single agent action. When building agents, use named traces to group the completions that make up each agent run.
  • Sessions contain all completions and traces for a logical user interaction (e.g., a chat conversation). You can optionally provide a session ID when recording completions; sessions are created automatically if they don’t exist.
For more detail on when to use traces vs. sessions, see Sessions, Traces, and Completions.
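As a rough sketch of how these levels show up in code, using the same `sessions` and `recordings` calls as the full example later on this page (the message variable is a placeholder):

```python
# One session per logical user interaction (e.g., one chat conversation).
session = fp_client.sessions.create()

# Every completion recorded with this session's info is grouped under it.
fp_client.recordings.create(RecordPayload(
    project_id=project_id,
    all_messages=first_call_messages,   # placeholder: messages for the first LLM call
    session_info=session.session_info,
    # ... prompt/version info and other parameters (see the full example below)
))

# Traces (client.traces) can additionally sub-group completions within the
# session, e.g. one trace per agent run; see the Traces docs for that API.
```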

SDK Namespaces

All SDK operations are accessed through the Freeplay client object. The SDK is organized into namespaces that correspond to the core entities in Freeplay:
| Namespace | Purpose | Documentation |
| --- | --- | --- |
| client.sessions | Create and manage sessions to group related completions | Sessions |
| client.traces | Create traces to group related completions within a session | Traces |
| client.recordings | Record completions to Freeplay for observability | Recording Completions |
| client.customer_feedback | Log user feedback associated with completions | Customer Feedback |
| client.prompts | Fetch and format prompt templates from Freeplay | Prompts |
| client.test_runs | Execute batch tests using saved datasets | Test Runs |
Some SDKs use camelCase (TypeScript, Java, Kotlin) rather than snake_case (Python) for method names, following language conventions. The functionality is identical across languages.
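For reference, initializing the Python client and reaching these namespaces looks roughly like this. This is a minimal sketch assuming the Python SDK's Freeplay client takes an API key and base URL; the environment variable names are illustrative:

```python
import os

from freeplay import Freeplay

# One client instance per process is typically enough; every namespace in the
# table above is accessed through it.
fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],  # illustrative env var name
    api_base=os.environ["FREEPLAY_API_BASE"],         # e.g. https://<your-org>.freeplay.ai/api
)

# Namespaces from the table above:
# fp_client.prompts, fp_client.sessions, fp_client.traces,
# fp_client.recordings, fp_client.customer_feedback, fp_client.test_runs
```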

Common Integration Flow

A basic integration follows this pattern:
  1. Fetch a specific version of a prompt template from Freeplay, formatted with variables supplied by your code
  2. Call your LLM provider (OpenAI, Anthropic, etc.) with the formatted prompt
  3. Record the completion back to Freeplay for observability
```python
# 1. Fetch and format the prompt
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="my-prompt",
    environment="prod",
    variables={"question": user_question}
)

# 2. Call your LLM provider
response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)

# 3. Record to Freeplay
session = fp_client.sessions.create()
fp_client.recordings.create(RecordPayload(
    project_id=project_id,
    all_messages=formatted_prompt.all_messages({
        'role': response.choices[0].message.role,
        'content': response.choices[0].message.content
    }),
    session_info=session.session_info,
    prompt_version_info=formatted_prompt.prompt_info,
    # ... additional parameters
))
```
For complete examples, see the Recording Completions page or browse Code Recipes.

Choosing Your Integration Approach

Freeplay offers multiple ways to integrate, depending on your needs:
| Approach | Language | Best For | Observability | Prompt Management |
| --- | --- | --- | --- | --- |
| Freeplay SDK | Python, TS, Java/Kotlin | Direct integration with full control | ✅ | ✅ |
| LangGraph | Python | LangGraph agent workflows | ✅ Auto | |
| Vercel AI SDK | TypeScript | TypeScript/JS AI applications | ✅ Auto | |
| Google ADK | Python | Google Agent Development Kit | ✅ Auto | |
| OpenTelemetry | Any | Any framework, standard tracing | ✅ | |
| HTTP API | Any | Custom implementations, automation | | |
OpenTelemetry integration provides observability only. For prompt management with OTel-traced applications, use the Freeplay SDK alongside your OTel instrumentation.
For framework integrations, see AI Framework Integrations.

Production Best Practices

Many Freeplay customers configure different client setups for different environments:
  • Development/Staging: Fetch prompts from the Freeplay server for rapid iteration
  • Production: Use Prompt Bundling to read prompts from local files for zero latency and resilience
See Prompt Bundling for implementation details.
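A minimal sketch of that split, assuming an `APP_ENV` variable and a helper function of your own; the exact way to point the client at bundled prompt files is covered in the Prompt Bundling docs and is only hinted at in the comment below:

```python
import os

from freeplay import Freeplay


def build_freeplay_client() -> Freeplay:
    """Construct the Freeplay client differently per environment."""
    if os.environ.get("APP_ENV") == "production":
        # Production: configure the client to resolve prompt templates from
        # files bundled with your deploy (see Prompt Bundling) so prompt
        # fetches add no network latency and survive connectivity issues.
        return Freeplay(
            freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
            api_base=os.environ["FREEPLAY_API_BASE"],
            # bundled-prompt configuration goes here (see Prompt Bundling)
        )

    # Development/staging: fetch prompts from the Freeplay server so edits
    # made in the Freeplay UI take effect without a redeploy.
    return Freeplay(
        freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
        api_base=os.environ["FREEPLAY_API_BASE"],
    )
```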

Next Steps

When you need more control:
  • Sessions - Explicitly create sessions with custom metadata, as sketched after this list (sessions are auto-created if you don’t)
  • Traces - Group related completions for agent workflows with multiple LLM calls or tool invocations
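For example, attaching custom metadata when you create a session lets you filter and analyze sessions by those fields later. A brief sketch, assuming sessions.create accepts a custom_metadata dictionary as described in the Sessions docs (the field names are illustrative):

```python
# Create the session explicitly so you can attach your own metadata
# (otherwise Freeplay auto-creates a bare session when you record).
session = fp_client.sessions.create(
    custom_metadata={
        "user_id": "user-123",   # illustrative fields
        "plan": "enterprise",
    }
)

# Pass session.session_info into each RecordPayload, as in the example above,
# so every completion for this interaction lands in this session.
```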