Integrating Freeplay with your application unlocks the full platform: production monitoring, dataset creation from real traffic, online and offline evaluations, and optional prompt deployment.

Choose your integration pattern

Before writing code, decide how you want to manage prompts. Freeplay supports two patterns.

Pattern 1: Freeplay manages your prompts

Freeplay becomes the source of truth for your prompt templates. Your application fetches prompts from Freeplay at runtime, as part of your build process, or both. Benefits:
  • Non-engineers can iterate on prompts without code changes
  • Deploy prompt updates like feature flags
  • Automatic versioning and environment promotion
  • Full observability with prompt input variables linked to evaluations
Best for: Teams that want PMs, domain experts, or other non-engineers to iterate on prompts independently of code. You can configure your Freeplay client to retrieve prompts at runtime or as part of your build process (learn more about "prompt bundling"). Many Freeplay customers retrieve prompts at runtime in lower environments like dev or staging for fast server-side experimentation, then use prompt bundling in production for tighter release management.
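The environment-based split described above can be sketched as a simple selector; the function and strategy names here are illustrative, not part of the Freeplay SDK:

```python
# Hypothetical sketch: pick a prompt-retrieval strategy per environment.
# "runtime" fetches the latest prompt version from Freeplay on each request;
# "bundled" assumes prompts were downloaded at build time (prompt bundling).

def prompt_strategy(environment: str) -> str:
    """Return how prompts should be retrieved for a given environment."""
    if environment in ("dev", "staging"):
        return "runtime"   # fast server-side iteration in lower environments
    return "bundled"       # tighter release management in production
```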

Pattern 2: Code manages your prompts

Your prompts live in your codebase. You sync prompt templates to Freeplay to organize your observability data. Benefits:
  • Prompts stay entirely in your code
  • Use your existing code review process for prompt changes
  • Full flexibility over prompt structure
Best for: Teams with strict infrastructure-as-code requirements, or those who prefer prompts versioned alongside application code.
Both patterns still require creating prompt templates in Freeplay. The difference is whether your application fetches prompt configurations from Freeplay (Pattern 1) or logs prompts defined in code (Pattern 2).
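Under Pattern 2, a prompt might live in your codebase as a mustache-style template that you render before each call. A minimal sketch, assuming `{{variable}}` placeholders; the `render` helper and the template itself are illustrative, not part of the Freeplay SDK:

```python
import re

# Prompt defined in code (Pattern 2). Freeplay templates use {{variable}}
# placeholders; this helper mimics that substitution for a local template.
SUPPORT_PROMPT = "You are a support agent. Answer the question: {{user_input}}"

def render(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders with values from `variables`."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

messages = [
    {"role": "system", "content": render(SUPPORT_PROMPT, {"user_input": "How do I reset my password?"})}
]
```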

Get started

Step 1: Install the SDK

pip install freeplay

Step 2: Create a prompt template

Create your first prompt template in the UI. See Start in the UI for a walkthrough. Once you save a prompt, the Integration tab provides code snippets tailored to your template.

Step 3: Integrate

Fetch prompts from Freeplay and log completions:
from freeplay import Freeplay
from openai import OpenAI
import os

fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
openai_client = OpenAI()

project_id = os.getenv("FREEPLAY_PROJECT_ID")

## FETCH PROMPT ##
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="my-template",
    environment="production",
    variables={"user_input": "Hello, world!"}
)

## LLM CALL ##
response = openai_client.chat.completions.create(
    model=formatted_prompt.model,
    messages=formatted_prompt.llm_messages
)

## RECORD ##
all_messages = formatted_prompt.all_messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]

fp_client.recordings.create(
    project_id=project_id,
    all_messages=all_messages,
    prompt_version_info={
        "prompt_template_version_id": formatted_prompt.prompt_template_version_id,
        "environment": "production"
    }
)
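The fetch, call, and record steps above can be wrapped in a reusable helper. A minimal sketch assuming the same client interfaces as the snippet; the function name and signature are illustrative:

```python
def run_and_record(fp_client, openai_client, project_id, template_name,
                   variables, environment="production"):
    """Fetch a formatted prompt, call the model, and record the completion."""
    # Fetch the prompt, formatted with the given input variables
    formatted_prompt = fp_client.prompts.get_formatted(
        project_id=project_id,
        template_name=template_name,
        environment=environment,
        variables=variables,
    )
    # Call the LLM with the model and messages from the prompt template
    response = openai_client.chat.completions.create(
        model=formatted_prompt.model,
        messages=formatted_prompt.llm_messages,
    )
    completion = response.choices[0].message.content
    # Record the full conversation back to Freeplay for observability
    fp_client.recordings.create(
        project_id=project_id,
        all_messages=formatted_prompt.all_messages
        + [{"role": "assistant", "content": completion}],
        prompt_version_info={
            "prompt_template_version_id": formatted_prompt.prompt_template_version_id,
            "environment": environment,
        },
    )
    return completion
```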

Framework integrations

If you’re using a common AI framework, Freeplay provides native integrations:

Next steps