The Observability Hierarchy
Freeplay organizes your LLM application observability data in a three-level hierarchy (sketched in code after this list):
- Completions are atomic LLM calls: a prompt sent to a model and its response.
- Traces optionally group related completions, such as the multiple LLM calls that power a single agent action. When building agents, name your traces so Freeplay can group related agent runs.
- Sessions contain all completions and traces for a logical user interaction (e.g., a chat conversation). You can provide a session ID when recording completions; sessions are created automatically if they don't exist.
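As a rough sketch of how the three levels map onto SDK calls (method names and arguments here are illustrative, not exact signatures; see the namespaces and linked docs below):

```python
from freeplay import Freeplay

client = Freeplay(freeplay_api_key="...", api_base="https://app.freeplay.ai/api")

# Session: one logical user interaction, e.g. a single chat conversation.
session = client.sessions.create()

# Trace (optional): groups the completions behind one agent action within the
# session. The call and its argument are illustrative placeholders.
trace = session.create_trace(input="Summarize this week's support tickets")

# Completions: each individual LLM call is recorded (via client.recordings)
# with the session, and optionally trace, identifiers so it nests under them.
```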
SDK Namespaces
All SDK operations are accessed through the Freeplay client object. The SDK is organized into namespaces that correspond to the core entities in Freeplay:

| Namespace | Purpose | Documentation |
|---|---|---|
| client.sessions | Create and manage sessions to group related completions | Sessions |
| client.traces | Create traces to group related completions within a session | Traces |
| client.recordings | Record completions to Freeplay for observability | Recording Completions |
| client.customer_feedback | Log user feedback associated with completions | Customer Feedback |
| client.prompts | Fetch and format prompt templates from Freeplay | Prompts |
| client.test_runs | Execute batch tests using saved datasets | Test Runs |
Common Integration Flow
A basic integration follows this pattern:
- Fetch a specific version of a prompt template from Freeplay with variables inserted by your code
- Call your LLM provider (OpenAI, Anthropic, etc.) with the formatted prompt
- Record the completion back to Freeplay for observability
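A minimal Python sketch of those three steps, assuming an OpenAI chat model. The class and method names used here (get_formatted, RecordPayload, CallInfo.from_prompt_info, and so on) follow the Python SDK's general shape but should be verified against the Prompts and Recording Completions pages for your SDK version:

```python
import os
import time

from freeplay import CallInfo, Freeplay, RecordPayload, ResponseInfo
from openai import OpenAI

fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base="https://app.freeplay.ai/api",
)
openai_client = OpenAI()

variables = {"question": "What plans do you offer?"}

# 1. Fetch a formatted prompt template from Freeplay (template name and
#    environment are example values).
prompt = fp_client.prompts.get_formatted(
    project_id=os.environ["FREEPLAY_PROJECT_ID"],
    template_name="support-answer",
    environment="latest",
    variables=variables,
)

# 2. Call the LLM provider with the formatted messages and model config.
start = time.time()
chat_response = openai_client.chat.completions.create(
    model=prompt.prompt_info.model,
    messages=prompt.llm_prompt,
    **prompt.prompt_info.model_parameters,
)
end = time.time()

assistant_message = {
    "role": "assistant",
    "content": chat_response.choices[0].message.content,
}

# 3. Record the completion back to Freeplay under a session.
session = fp_client.sessions.create()
fp_client.recordings.create(
    RecordPayload(
        all_messages=[*prompt.llm_prompt, assistant_message],
        inputs=variables,
        session_info=session.session_info,
        prompt_info=prompt.prompt_info,
        # Timing metadata; the exact constructor/keywords may differ by version.
        call_info=CallInfo.from_prompt_info(prompt.prompt_info, start_time=start, end_time=end),
        response_info=ResponseInfo(
            is_complete=chat_response.choices[0].finish_reason == "stop"
        ),
    )
)
```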
Choosing Your Integration Approach
Freeplay offers multiple ways to integrate, depending on your needs:

| Approach | Language | Best For | Observability | Prompt Management |
|---|---|---|---|---|
| Freeplay SDK | Python, TS, Java/Kotlin | Direct integration with full control | ✅ | ✅ |
| LangGraph | Python | LangGraph agent workflows | ✅ Auto | ✅ |
| Vercel AI SDK | TypeScript | TypeScript/JS AI applications | ✅ Auto | ✅ |
| Google ADK | Python | Google Agent Development Kit | ✅ Auto | ✅ |
| OpenTelemetry | Any | Any framework, standard tracing | ✅ | ❌ |
| HTTP API | Any | Custom implementations, automation | ✅ | ✅ |
OpenTelemetry integration provides observability only. For prompt management with OTel-traced applications, use the Freeplay SDK alongside your OTel instrumentation.
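For example, a minimal sketch of that pairing, assuming your application already configures an OpenTelemetry tracer provider and exporter; the span name and template details are arbitrary examples:

```python
import os

from opentelemetry import trace

from freeplay import Freeplay

tracer = trace.get_tracer(__name__)  # assumes an OTel TracerProvider is already set up
fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base="https://app.freeplay.ai/api",
)

with tracer.start_as_current_span("answer-support-question"):
    # Observability comes from your existing OTel pipeline; prompt management
    # still comes from Freeplay via the SDK.
    prompt = fp_client.prompts.get_formatted(
        project_id=os.environ["FREEPLAY_PROJECT_ID"],
        template_name="support-answer",  # example template name
        environment="production",
        variables={"question": "How do I reset my password?"},
    )
    # ...call your model with prompt.llm_prompt inside the span as usual...
```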
Production Best Practices
Many Freeplay customers configure different client setups for different environments:
- Development/Staging: Fetch prompts from the Freeplay server for rapid iteration
- Production: Use Prompt Bundling to read prompts from local files for zero latency and resilience (see the sketch after this list)
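A rough illustration of that split; the bundling configuration itself is defined in the Prompt Bundling docs, so it appears below only as a labeled placeholder rather than a guessed-at option:

```python
import os

from freeplay import Freeplay


def make_freeplay_client() -> Freeplay:
    """Build a Freeplay client suited to the current deployment environment."""
    client = Freeplay(
        freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
        api_base="https://app.freeplay.ai/api",
    )
    if os.environ.get("APP_ENV") == "production":
        # Production: enable Prompt Bundling so prompts are read from files
        # bundled with your deploy (no fetch latency, resilient to outages).
        # The specific option is documented under Prompt Bundling and is not
        # guessed at here.
        ...
    # Development/staging: prompts are fetched from the Freeplay server on
    # demand, so edits made in the Freeplay UI take effect immediately.
    return client
```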
Next Steps
Getting started:
- Setup - Install and configure the SDK
- Prompts - Fetch and format prompt templates
- Recording Completions - Log LLM interactions

