Integrating Freeplay with your application unlocks the full platform: production monitoring, dataset creation from real traffic, online and offline evaluations, and optional prompt and model deployment tools.

Choose your integration pattern

Before writing code, decide how you want to manage prompts.

Pattern 1: Freeplay manages your prompts

Freeplay becomes the source of truth for your prompt templates. Your application fetches prompts from Freeplay either at runtime, as part of your build process, or both. Benefits:
  • Non-engineers can iterate on prompts and swap models without code changes
  • Deploy prompt updates like feature flags and/or as part of your build process
  • Automatic versioning and environment promotion
  • Detailed observability with each log connected directly to a specific version (prompt and model configuration)
Best for: Teams that want to empower PMs, domain experts, or anyone else to iterate on prompts independently of code.
You can configure your Freeplay client to retrieve prompts from the server at runtime, or “bundle” them as part of your build process (learn more about “prompt bundling”). Many Freeplay customers retrieve prompts at runtime in lower-level environments like dev or staging to get the benefit of fast server-side experimentation, then use prompt bundling in production for tighter release management and zero prompt-fetch latency.
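As a rough sketch of that environment-based switch, the helper below fetches from Freeplay in lower-level environments and reads a bundled copy in production. The load_bundled_prompt helper, the prompts/ bundle layout, and the APP_ENV variable are hypothetical stand-ins for whatever your build step produces; the runtime path uses the same prompts.get_formatted call shown in Step 3 below:
import json
import os

from freeplay import Freeplay

fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)

def load_bundled_prompt(template_name: str, variables: dict) -> list[dict]:
    # Hypothetical bundle layout: one JSON file per template, exported from
    # Freeplay by a CI step and shipped inside the build artifact.
    with open(f"prompts/{template_name}.json") as f:
        template_messages = json.load(f)["template_messages"]
    rendered = []
    for message in template_messages:
        content = message["content"]
        for name, value in variables.items():
            # Substitute mustache-style {{variable}} slots.
            content = content.replace("{{" + name + "}}", str(value))
        rendered.append({"role": message["role"], "content": content})
    return rendered

def get_prompt_messages(project_id: str, template_name: str, variables: dict):
    if os.getenv("APP_ENV") == "production":
        # Zero-fetch-latency path: read the prompt bundled at build time.
        return load_bundled_prompt(template_name, variables)
    # Fast-iteration path: fetch the latest version for this environment.
    formatted_prompt = fp_client.prompts.get_formatted(
        project_id=project_id,
        template_name=template_name,
        environment=os.getenv("APP_ENV", "dev"),
        variables=variables
    )
    return formatted_prompt.llm_messages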

Pattern 2: Code manages your prompts

The source of truth for your prompts remains your codebase. You push prompt templates and model configurations to Freeplay to enable experimentation and organize your observability data. Benefits:
  • Prompts stay entirely in your code
  • Use your existing code review process for prompt changes
  • Full flexibility over prompt structure
  • Sync prompts to Freeplay automatically via API in your CI/CD pipeline
Best for: Teams with complex prompt construction needs, strict infrastructure-as-code requirements, and/or those who prefer that prompt changes remain solely the domain of engineers.
With Pattern 2, you create prompt templates in Freeplay programmatically using the API. Use the create_template_if_not_exists parameter to sync prompts from your codebase:
curl -X POST \
  "https://app.freeplay.ai/api/v2/projects/$PROJECT_ID/prompt-templates/name/my-assistant/versions?create_template_if_not_exists=true" \
  -H "Authorization: Bearer $FREEPLAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "template_messages": [
      {"role": "system", "content": "You are a helpful assistant. The user'\''s name is {{user_name}}."},
      {"role": "user", "content": "{{user_input}}"}
    ],
    "provider": "openai",
    "model": "gpt-4o",
    "llm_parameters": {"temperature": 0.2, "max_tokens": 1024}
  }'
See the full guide: Code as source of truth and Create prompt template version by name.
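The same call is easy to script in a CI/CD pipeline. Here is a minimal sketch using the requests library, assuming FREEPLAY_API_KEY and FREEPLAY_PROJECT_ID are set in the pipeline environment; it mirrors the curl command above rather than adding anything new:
import os
import requests

project_id = os.environ["FREEPLAY_PROJECT_ID"]
url = (
    f"https://app.freeplay.ai/api/v2/projects/{project_id}"
    "/prompt-templates/name/my-assistant/versions"
)

response = requests.post(
    url,
    params={"create_template_if_not_exists": "true"},
    headers={"Authorization": f"Bearer {os.environ['FREEPLAY_API_KEY']}"},
    json={
        "template_messages": [
            {"role": "system", "content": "You are a helpful assistant. The user's name is {{user_name}}."},
            {"role": "user", "content": "{{user_input}}"}
        ],
        "provider": "openai",
        "model": "gpt-4o",
        "llm_parameters": {"temperature": 0.2, "max_tokens": 1024}
    }
)
response.raise_for_status()  # fail the pipeline step if the sync is rejected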

Optional: Start with observability only

You can begin by logging LLM calls to Freeplay without setting up prompt templates. This gets you started quickly, but it is not recommended as an end state, since it limits access to core Freeplay features. What works without prompt templates:
  • Basic observability and search, including any evaluation values or other metadata you choose to log
  • Manual agent dataset curation (trace level)
  • Configuring auto-evaluations to run at the trace / agent level
What requires prompt templates:
  • Experimenting in the Freeplay playground
  • Creating prompt datasets from observability logs (requires the inputs field, i.e. knowledge of the variables in your prompts)
  • Running evaluations that target prompt-level changes
  • Searching by prompt template or version
  • Running test runs against saved prompts
  • Tracking prompt version performance over time
When to use this approach:
  • You want to start logging immediately and add prompt templates later
  • You’re evaluating Freeplay before committing to a prompt management pattern
Upgrade path: Set up prompt templates as soon as possible to unlock the full set of Freeplay features. You can create templates either in the Freeplay UI or via the API, then update your code to include prompt_version_info when recording.
from freeplay import Freeplay, RecordPayload, CallInfo
from openai import OpenAI
import os

fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
openai_client = OpenAI()

# Your existing prompt
messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"}
]

# Make LLM call
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages
)

# Record to Freeplay (no prompt template)
all_messages = messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]

fp_client.recordings.create(
    RecordPayload(
        project_id=os.getenv("FREEPLAY_PROJECT_ID"),
        all_messages=all_messages,
        call_info=CallInfo(provider="openai", model="gpt-4o")
    )
)
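Once a template exists, upgrading is a small change to the call above: attach prompt_version_info so each log is tied to a specific version. A minimal sketch with a placeholder version ID (the structure mirrors the Step 3 example below):
fp_client.recordings.create(
    RecordPayload(
        project_id=os.getenv("FREEPLAY_PROJECT_ID"),
        all_messages=all_messages,
        call_info=CallInfo(provider="openai", model="gpt-4o"),
        prompt_version_info={
            # Placeholder: copy the real ID from your template in the Freeplay
            # UI, or capture it from the API response when creating the version.
            "prompt_template_version_id": "your-template-version-id",
            "environment": "production"
        }
    )
)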

Get started

Step 1: Install the SDK

You can integrate directly using one of Freeplay’s SDKs, or choose one of the framework integrations outlined below.
pip install freeplay

Alternative: Framework integrations

If you’re using a common AI framework, Freeplay provides native integrations, distributed as packages separate from the standard SDKs. See the framework integration guides for details.

Step 2: Create a prompt template

Create your first prompt template in the UI or via the API. Once you save a prompt, the Integration tab provides code snippets tailored to your template.

Step 3: Integrate

Fetch prompts from Freeplay and log completions:
from freeplay import Freeplay, RecordPayload
from openai import OpenAI
import os

fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
openai_client = OpenAI()

project_id = os.getenv("FREEPLAY_PROJECT_ID")

## FETCH PROMPT ##
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="my-template",
    environment="production",
    variables={"user_input": "Hello, world!"}
)

## LLM CALL ##
response = openai_client.chat.completions.create(
    model=formatted_prompt.model,
    messages=formatted_prompt.llm_messages
)

## RECORD ##
all_messages = formatted_prompt.all_messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]

fp_client.recordings.create(
    RecordPayload(
        project_id=project_id,
        all_messages=all_messages,
        prompt_version_info={
            "prompt_template_version_id": formatted_prompt.prompt_template_version_id,
            "environment": "production"
        }
    )
)
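Note that this example applies only the template’s model, not its llm_parameters (like the temperature pinned in the curl example above). If you want those applied too, you would forward them to the provider call; the llm_parameters attribute below is an assumption, not confirmed here, so check the Integration tab snippet for your template for the exact field:
response = openai_client.chat.completions.create(
    model=formatted_prompt.model,
    messages=formatted_prompt.llm_messages,
    # Assumed attribute name; verify against your template's Integration tab.
    **formatted_prompt.llm_parameters
)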

Next steps

Your integration is complete. Sessions will appear in the Observability dashboard as you log data from your application. With prompts and observability configured, you can now set up evaluations and run tests.