Integrating Freeplay with your application unlocks the full platform: production monitoring, dataset creation from real traffic, online and offline evaluations, and optional prompt and model deployment tools.

Choose your integration pattern

Before writing code, decide how you want to manage prompts.

Pattern 1: Freeplay manages your prompts

Freeplay becomes the source of truth for your prompt templates. Your application fetches prompts from Freeplay either at runtime, as part of your build process, or both. Benefits:
  • Non-engineers can iterate on prompts and swap models without code changes
  • Deploy prompt updates like feature flags and/or as part of your build process
  • Automatic versioning and environment promotion
  • Detailed observability with each log connected directly to a specific version (prompt and model configuration)
Best for: Teams that want to empower PMs, domain experts, or anyone else to iterate on prompts independently of code.
You can configure your Freeplay client to retrieve prompts from the server at runtime, or “bundle” them as part of your build process (learn more about “prompt bundling”). Many Freeplay customers retrieve prompts at runtime in lower-level environments like dev or staging to get the benefit of fast server-side experimentation, then use prompt bundling in production for tighter release management and zero added latency.
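One way to wire up this split is a small resolver: read a prompt file bundled at build time when running in production, and fetch from the server everywhere else. The on-disk JSON layout and the helper names below are illustrative assumptions for this sketch, not the SDK’s actual bundling format:

```python
import json
import os


def fetch_from_server(template_name, environment):
    # Placeholder for a live SDK call (e.g. fp_client.prompts.get(...))
    # in a real integration; stubbed here so the sketch is self-contained.
    return {"name": template_name, "environment": environment, "source": "server"}


def resolve_prompt(template_name: str, environment: str, bundle_dir: str = "prompts"):
    """Return a prompt template dict, bundled from disk or fetched live.

    One-JSON-file-per-template is a hypothetical convention for this
    sketch; consult the SDK docs for real prompt bundling.
    """
    if environment == "production":
        # Bundled at build time: zero added latency, release-managed.
        path = os.path.join(bundle_dir, f"{template_name}.json")
        with open(path) as f:
            return json.load(f)
    # Lower environments: fetch from the server for fast iteration.
    return fetch_from_server(template_name, environment)
```

The point of the switch is that both branches return the same shape, so the calling code that formats variables and invokes the model does not care where the template came from.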

Pattern 2: Code manages your prompts

The source of truth for your prompts remains your codebase. You push prompt templates and model configurations to Freeplay to enable experimentation and organize your observability data. Benefits:
  • Prompts stay entirely in your code
  • Use your existing code review process for prompt changes
  • Full flexibility over prompt structure
Best for: Teams with complex prompt construction needs, strict infrastructure-as-code requirements, and/or those who prefer prompt changes remain solely the domain of engineers.
Both patterns still require creating prompt templates in Freeplay. The difference is whether your application fetches prompt configurations from Freeplay (Pattern 1) or logs prompts defined in code (Pattern 2).
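Under Pattern 2, prompt construction lives entirely in your codebase: templates are data in code, formatted locally before the model call, and the formatted messages are then logged to Freeplay for observability. A minimal sketch (the mustache-style `{{name}}` placeholder syntax and the template below are assumptions for illustration, not a guaranteed match for Freeplay’s template format):

```python
import re

# Pattern 2: the prompt is defined in code and reviewed like any other code.
SUPPORT_TEMPLATE = [
    {"role": "system", "content": "You are a concise support assistant."},
    {"role": "user", "content": "Customer question: {{user_input}}"},
]


def format_messages(template, variables):
    """Return a new message list with {{name}} placeholders substituted.

    The original template is left untouched so it can be reused per request.
    """
    def fill(text):
        return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), text)

    return [{**msg, "content": fill(msg["content"])} for msg in template]
```

In a real integration you would send the formatted messages to your model provider, then log them (with the model’s reply) to Freeplay so each completion is still tied to a known prompt version.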

Get started

Step 1: Install the SDK

You can integrate directly with your code using one of Freeplay’s SDKs, or select from integrations with common frameworks outlined below.
pip install freeplay

Step 2: Create a prompt template

Create your first prompt template in the UI. See Start in the UI for a walkthrough. Once you save a prompt, the Integration tab provides code snippets tailored to your template.

Step 3: Integrate

Fetch prompts from Freeplay and log completions:
from freeplay import Freeplay
from openai import OpenAI
import os

fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
openai_client = OpenAI()

project_id = os.getenv("FREEPLAY_PROJECT_ID")

## FETCH PROMPT ##
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="my-template",
    environment="production",
    variables={"user_input": "Hello, world!"}
)

## LLM CALL ##
response = openai_client.chat.completions.create(
    model=formatted_prompt.model,
    messages=formatted_prompt.llm_messages
)

## RECORD ##
all_messages = formatted_prompt.all_messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]

fp_client.recordings.create(
    project_id=project_id,
    all_messages=all_messages,
    prompt_version_info={
        "prompt_template_version_id": formatted_prompt.prompt_template_version_id,
        "environment": "production"
    }
)
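The recording step above hinges on one detail: the logged record is the full conversation, i.e. every formatted prompt message plus the assistant’s reply appended last. Isolated here with a stubbed OpenAI-style response object (the stub classes are illustrative only):

```python
# Messages as they were sent to the model (the role of
# formatted_prompt.all_messages in the snippet above).
prompt_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, world!"},
]


# Minimal stand-in for an OpenAI chat-completion response object.
class _Message:
    content = "Hi! How can I help?"


class _Choice:
    message = _Message()


class _Response:
    choices = [_Choice()]


response = _Response()

# The complete record: prompt messages, then the assistant turn.
all_messages = prompt_messages + [
    {"role": "assistant", "content": response.choices[0].message.content}
]
```

Logging the complete list (rather than only the reply) is what lets Freeplay show each completion in context and tie it back to the exact prompt version that produced it.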

Framework integrations

If you’re using a common AI framework, Freeplay provides native integrations:

Next steps