This integration method provides lightweight tracing of your AI applications. You can also use the UI to iterate on prompts, create offline evaluations, curate datasets, and run tests. Note that until you integrate prompt templates via manage in code or Freeplay prompt management, your prompts and variable inputs will not be connected to observability sessions, and functionality will be limited.
| Supported features | Unsupported features |
| --- | --- |
| Basic observability and search | Create datasets from observability records |
| Manual dataset curation | Run inline evaluations targeting inputs and output |
| Prompt template editor | Sophisticated observability and search (search prompts, prompt versions, etc.) |
| Evaluations | |
| Test individual versions | |

📌 Key Benefits

  • No prompt migration needed: Keep your existing prompts in your codebase
  • Start logging immediately: Begin analyzing your LLM interactions right away
  • Automatic prompt interpretation: Ability to create Freeplay templates from logged messages
  • Gradual adoption: Add prompt management and testing capabilities when you’re ready

Best For:

  • Teams wanting to start observing sessions immediately
Note: We recommend saving your prompts to Freeplay to allow strong linking between prompt template versions and observability sessions. Learn more.

Make Your LLM Call and Pass the Results to Freeplay

Continue using your existing LLM provider logic and prompts; collect all messages, then record the call to Freeplay.
from openai import OpenAI
import os
from freeplay import Freeplay, RecordPayload, CallInfo

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

project_id = "<YOUR-PROJECT-ID>"

# Your existing prompt, including user message
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant that provides advice about evaluating LLMs."
    },
    {
        "role": "user",
        "content": "How can I evaluate the output structure of my LLM response?"
    }
]

model = "gpt-4.1-mini-2025-04-14"
response = openai_client.chat.completions.create(
    messages=messages,
    model=model
)

# Append the response to your messages
# Note: each message should follow {"role": <role>, "content": <content>}
all_messages = messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]

# Create a Freeplay client object
fpClient = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"  # if self-hosted, replace with your instance URL
)

# Record the completion data to Freeplay
fpClient.recordings.create(
    RecordPayload(
        project_id=project_id,  # available in the Freeplay UI
        call_info=CallInfo(provider="openai", model=model),
        all_messages=all_messages
    )
)

That’s it! You’re now logging data to Freeplay 🙌. Go to the Observability tab in Freeplay to view your recorded data. Learn more about searching and reviewing your application in getting started with observability.
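The message-shape note in the snippet above can be captured in a small helper. This is a sketch, not part of the Freeplay SDK; `to_message_dict` is a hypothetical name, and the `SimpleNamespace` stand-in mimics the `response.choices[0].message` object from the OpenAI client.

```python
from types import SimpleNamespace

def to_message_dict(message) -> dict:
    """Convert an OpenAI-style message object into the plain
    {"role": <role>, "content": <content>} dict shape used in all_messages."""
    return {"role": message.role, "content": message.content}

# Demo with a stand-in for response.choices[0].message:
fake_message = SimpleNamespace(role="assistant", content="Use JSON schema checks.")
print(to_message_dict(fake_message))
```

In real code you would pass `response.choices[0].message` instead of the stand-in, keeping the conversion in one place as your message list grows.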

Better: Log with Variables

Freeplay accepts the following types for inputs: Dict[str, Union[str, int, bool, float, Dict[str, Any], List[Any]]]
Pass variables to the RecordPayload to support migration to Freeplay-hosted prompts. Freeplay auto-interprets these and adds them directly to your prompt template, making for a smooth transition.
# Separate variables to enable dataset generation in Freeplay
prompt_vars = {"name": "Jill", "topic": "evaluating llm models"}

messages = [
    {"role": "system", "content": "You are a helpful teacher."},
    {"role": "user", "content": f"Hi {prompt_vars['name']}, explain {prompt_vars['topic']} to me."}
]

# After getting completion...
fpClient.recordings.create(
    RecordPayload(
        project_id=project_id,
        all_messages=all_messages,
        inputs=prompt_vars,  # Send variables
    )
)
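The accepted `inputs` value types noted above can be sanity-checked with plain Python before recording. This is a standalone illustration, not part of the Freeplay SDK; `is_valid_inputs` is a hypothetical helper name.

```python
from typing import Any

# Scalar types accepted by Freeplay inputs, per the documented signature:
# Dict[str, Union[str, int, bool, float, Dict[str, Any], List[Any]]]
ALLOWED_SCALARS = (str, int, bool, float)

def is_valid_inputs(inputs: dict) -> bool:
    """Return True if every key is a string and every value is an
    accepted scalar, dict, or list."""
    return all(
        isinstance(k, str) and isinstance(v, (*ALLOWED_SCALARS, dict, list))
        for k, v in inputs.items()
    )

print(is_valid_inputs({"name": "Jill", "retries": 3, "verbose": True,
                       "scores": [0.9, 0.8], "meta": {"run": 1}}))  # True
print(is_valid_inputs({"bad": object()}))  # False
```

A check like this can run in a unit test so malformed variables are caught before they reach `RecordPayload`.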


Additional Recording Functionality

You can also record tool calls and media, and group calls into traces to improve search, observability, and review. Learn more in the Recording SDK or API documentation. You can also find additional examples in the common integration options guide.