The OpenAI Responses API is OpenAI’s latest API for generating model responses. It supports text, images, tool calling, structured outputs, and more in a unified interface. Freeplay supports formatting prompts and recording completions with the Responses API so you can use it seamlessly in your code.

Setting up in the Prompt Playground

To use the Responses API with a prompt template in Freeplay:
  1. Open your prompt template in the Prompt Playground
  2. Select a compatible OpenAI model (e.g. gpt-5.x, gpt-4.x)
  3. Open Model Settings
  4. Change the API Format to Responses
Once configured, Freeplay formats the prompt for the Responses API whenever you fetch it via the SDK: formatted_prompt.llm_prompt returns the input array expected by openai.responses.create() instead of the Chat Completions message format.

How it works

When the API Format is set to Responses API (see the code sketch after this list):
  • formatted_prompt.llm_prompt returns the input array for responses.create()
  • formatted_prompt.system_content returns the system instructions (passed as instructions)
  • formatted_prompt.tool_schema returns tools in the Responses API format
  • formatted_prompt.formatted_output_schema returns the JSON schema for structured outputs
  • formatted_prompt.prompt_info.model_parameters contains model settings like temperature
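
Concretely, these fields map onto responses.create() arguments like this (a sketch; formatted_prompt is fetched as shown in the steps below):

input_items = formatted_prompt.llm_prompt                       # responses.create(input=...)
instructions = formatted_prompt.system_content                  # responses.create(instructions=...)
tools = formatted_prompt.tool_schema                            # responses.create(tools=...)
output_schema = formatted_prompt.formatted_output_schema        # text={"format": {"schema": ...}}
model_params = formatted_prompt.prompt_info.model_parameters    # e.g. temperature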

1. Set up clients

Initialize the Freeplay and OpenAI client SDKs.
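
A minimal setup sketch (environment variable names match the full example below):

import os
from freeplay import Freeplay
from openai import OpenAI

fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base=f"{os.environ['FREEPLAY_API_URL']}/api",
)
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])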

2. Fetch prompt from Freeplay

Pull in the formatted prompt. Since the API Format is set to Responses API, the prompt is formatted accordingly.
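
For example (the template name and environment are placeholders for your own):

formatted_prompt = fp_client.prompts.get_formatted(
    project_id=os.environ["FREEPLAY_PROJECT_ID"],
    template_name="my-openai-prompt",
    environment="latest",
    variables={"location": "San Francisco"},
)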

3. Build the Responses API call

Map the formatted prompt fields to the Responses API parameters — instructions, tools, structured output text format, etc.
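
A sketch of the mapping, mirroring the full example below:

response_params = {**formatted_prompt.prompt_info.model_parameters}
if formatted_prompt.system_content:
    response_params["instructions"] = formatted_prompt.system_content
if formatted_prompt.tool_schema:
    response_params["tools"] = formatted_prompt.tool_schema
if formatted_prompt.formatted_output_schema:
    response_params["text"] = {
        "format": {
            "type": "json_schema",
            "name": "structured_output",
            "strict": True,
            "schema": formatted_prompt.formatted_output_schema,
        }
    }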

4. Call OpenAI Responses API

Pass the formatted input and parameters to openai.responses.create().
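
For example:

completion = openai_client.responses.create(
    input=formatted_prompt.llm_prompt,
    model=formatted_prompt.prompt_info.model,
    **response_params,
)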

5. Handle the response

The Responses API returns an output array of items. Iterate through it to handle message items (assistant text) and function call items.
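
Continuing from the call above, a minimal sketch that prints text and surfaces tool calls (a real app would execute the tool and record or return the result):

for output in completion.output:
    if output.type == "message":
        # Text output: content holds one or more output_text parts
        print(output.content[0].text)
    elif output.type == "function_call":
        # output.arguments is a JSON-encoded string of the call's arguments
        print(f"Tool requested: {output.name}({output.arguments})")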

6. Record the interaction

Pass the messages, tool schema, and media inputs back to Freeplay for observability.
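
A compact sketch of the recording call (start, end, messages, and the other variables are built as in the full example below):

from freeplay import CallInfo, RecordPayload
from freeplay.resources.recordings import UsageTokens

call_info = CallInfo.from_prompt_info(
    formatted_prompt.prompt_info,
    start,
    end,
    UsageTokens(completion.usage.input_tokens, completion.usage.output_tokens),
)
session = fp_client.sessions.create()
fp_client.recordings.create(
    RecordPayload(
        project_id=project_id,
        all_messages=messages,
        session_info=session.session_info,
        inputs=input_variables,
        prompt_version_info=formatted_prompt.prompt_info,
        call_info=call_info,
        tool_schema=formatted_prompt.tool_schema,
    )
)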

Example

The full Python example below ties the steps together:

import base64
import os
import time
from pathlib import Path
from typing import Any, Dict, Optional

from openai import OpenAI

from freeplay import Freeplay, RecordPayload, CallInfo
from freeplay.model import MediaInputBase64
from freeplay.resources.recordings import UsageTokens

## SETUP ##
fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base=f"{os.environ['FREEPLAY_API_URL']}/api",
)
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

project_id = os.environ["FREEPLAY_PROJECT_ID"]
input_variables = {"location": "San Francisco"}

## IMAGE INPUT (OPTIONAL) ##
image_path: Optional[str] = None  # Set to a local image file path to include an image input
media_inputs = {}
if image_path:
    with open(Path(image_path), "rb") as f:
        encoded_image = base64.b64encode(f.read()).decode("utf-8")
    media_inputs["image_input"] = MediaInputBase64(
        type="base64",
        content_type="image/jpeg",
        data=encoded_image,
    )

## PROMPT FETCH ##
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="my-openai-prompt",
    environment="latest",
    variables=input_variables,
    media_inputs=media_inputs if media_inputs else None,
)

## BUILD RESPONSES API PARAMS ##
response_params: Dict[str, Any] = {
    **formatted_prompt.prompt_info.model_parameters,
}
if formatted_prompt.system_content:
    response_params["instructions"] = formatted_prompt.system_content
if formatted_prompt.tool_schema:
    response_params["tools"] = formatted_prompt.tool_schema
if formatted_prompt.formatted_output_schema:
    response_params["text"] = {
        "format": {
            "type": "json_schema",
            "strict": True,
            "schema": formatted_prompt.formatted_output_schema,
            "name": "structured_output",
        }
    }

## LLM CALL ##
start = time.time()
completion = openai_client.responses.create(
    input=formatted_prompt.llm_prompt,
    model=formatted_prompt.prompt_info.model,
    **response_params,
)
end = time.time()

## HANDLE RESPONSE ##
messages = [*formatted_prompt.llm_prompt]
for output in completion.output:
    if output.type == "function_call":
        tool_name = output.name
        tool_args = output.arguments  # JSON-encoded string of the call's arguments
        tool_call_id = output.call_id
        # Replace with your actual tool implementation
        tool_result = "70 and sunny"
        messages = [
            *messages,
            # Record the assistant's tool call first, then the tool's result
            {
                "role": "assistant",
                "content": None,
                "tool_calls": [
                    {
                        "id": tool_call_id,
                        "type": "function",
                        "function": {
                            "name": tool_name,
                            "arguments": tool_args,  # already a JSON string
                        },
                    }
                ],
            },
            {
                "role": "tool",
                "content": str(tool_result),
                "tool_call_id": tool_call_id,
                "name": tool_name,
            },
        ]
    elif output.type == "message":
        # Text output: a message item whose content holds output_text parts
        messages = [
            *messages,
            {"role": "assistant", "content": output.content[0].text},
        ]

## RECORD ##
session = fp_client.sessions.create()
call_info = CallInfo.from_prompt_info(
    formatted_prompt.prompt_info,
    start,
    end,
    UsageTokens(completion.usage.input_tokens, completion.usage.output_tokens),
)

fp_client.recordings.create(
    RecordPayload(
        project_id=project_id,
        all_messages=messages,
        session_info=session.session_info,
        inputs=input_variables,
        prompt_version_info=formatted_prompt.prompt_info,
        call_info=call_info,
        tool_schema=formatted_prompt.tool_schema,
        media_inputs=media_inputs if media_inputs else None,
    )
)