Developer Quick Start
Integrate Freeplay into your application in less than 15 minutes. This guide walks you through a complete end-to-end integration—from installation to seeing your first completion in the dashboard.
What You'll Build
By the end of this guide, you'll have a working integration that fetches prompts from Freeplay, calls your LLM, and logs everything for observability. You'll see your first completion in the dashboard and understand the foundation for testing, evaluation, and continuous improvement.
Before You Start
Make sure you have:
- A Freeplay account at app.freeplay.ai (or your custom domain)
- API keys for your LLM provider (OpenAI, Anthropic, etc.)
- Your Freeplay API key and Project ID
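Every snippet below reads credentials from environment variables via os.getenv. One common way to manage these locally is a .env file loaded with python-dotenv; this is a convenience assumption on our part, not a Freeplay requirement:

# pip install python-dotenv
from dotenv import load_dotenv

# Loads FREEPLAY_API_KEY, FREEPLAY_PROJECT_ID, and your LLM provider keys
# from a local .env file into the process environment
load_dotenv()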
Step 1: Install the Freeplay SDK
Freeplay offers native SDKs for Python, Node.js, and Java (for use with any JVM language). Don't see an SDK you need? Reach out at [email protected].
Python:
pip install freeplay

Node.js:
npm install freeplay

All languages:
- Java/Kotlin: See Maven/Gradle instructions on the Packages page
Step 2: Initialize the Freeplay Client
Set up your Freeplay client with your API key:
Python:
from freeplay import Freeplay
import os

# Initialize your Freeplay client
fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
project_id = os.getenv("FREEPLAY_PROJECT_ID")

Node.js:
import { Freeplay } from 'freeplay';

// Initialize your Freeplay client
const fpClient = new Freeplay({
  apiKey: process.env.FREEPLAY_API_KEY,
  apiBase: 'https://app.freeplay.ai/api'
});
const projectId = process.env.FREEPLAY_PROJECT_ID;

Custom subdomain? If your organization uses a custom Freeplay domain like https://acmecorp.freeplay.ai, update api_base to https://acmecorp.freeplay.ai/api.
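Because the client reads credentials from the environment, a missing variable tends to surface later as a confusing auth error. If you'd rather fail fast at startup, here's a minimal sketch (the helper name is ours):

import os
from freeplay import Freeplay

def make_freeplay_client() -> Freeplay:
    # Fail fast if credentials were never exported
    api_key = os.getenv("FREEPLAY_API_KEY")
    if not api_key or not os.getenv("FREEPLAY_PROJECT_ID"):
        raise RuntimeError("Set FREEPLAY_API_KEY and FREEPLAY_PROJECT_ID first.")
    return Freeplay(
        freeplay_api_key=api_key,
        api_base="https://app.freeplay.ai/api"
    )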
Step 3: Choose Your Integration Approach
There are three ways to integrate Freeplay, each with different tradeoffs. Choose the one that fits your workflow best.
Manage Prompts in Freeplay (Recommended)
Set up prompts in the Freeplay UI and fetch them dynamically in your application. This approach makes it easy for non-technical team members to iterate on prompts while you focus on building features.
📌 Key Benefits
- No-Code Prompt Updates: Change prompts without deploying new code
- Version Control: Track all prompt changes with automatic versioning
- Team Collaboration: Product managers and domain experts can iterate independently
- Environment Management: Deploy different prompt versions to dev, staging, and production
Best For:
- Teams who want non-technical stakeholders to iterate on prompts
- Organizations needing strong version control and deployment workflows
- Applications where prompt changes should be independent of code deployments
Create Your Prompt Template
Navigate to Prompts in the main menu and click "Create Prompt Template". Use mustache syntax ({{variable}}) for dynamic inputs:
System: You are a helpful assistant that provides concise answers.
User: {{user_question}}

Click "Save" and name it (e.g., "assistant-prompt"), then click "Deploy" to push it to the latest environment.
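Freeplay renders the template server-side when you fetch it, so you never substitute variables yourself. If you want to see what mustache substitution does, here's a toy illustration using the third-party chevron library (not part of the Freeplay SDK):

import chevron  # pip install chevron

template = "You are a helpful assistant that provides concise answers.\n{{user_question}}"
# Prints the template with {{user_question}} replaced by the supplied value
print(chevron.render(template, {"user_question": "What is the capital of France?"}))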
Fetch the Prompt from Freeplay
Python:
# Define your variables
variables = {"user_question": "What is the capital of France?"}

# Fetch the formatted prompt
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="assistant-prompt",
    environment="latest",
    variables=variables
)

Node.js:
// Define your variables
const variables = { user_question: "What is the capital of France?" };

// Fetch the formatted prompt
const formattedPrompt = await fpClient.prompts.getFormatted({
  projectId: projectId,
  templateName: "assistant-prompt",
  environment: "latest",
  variables: variables
});

What you get back:
- formatted_prompt.llm_prompt - Messages formatted for your LLM provider
- formatted_prompt.prompt_info.model - The model to use
- formatted_prompt.prompt_info.model_parameters - Temperature, max_tokens, etc.
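Before wiring up the provider call, it can help to print what the SDK returned so you can see the model and parameters your template resolved to. A quick sanity check (the values in comments are illustrative):

# Inspect the fetched prompt before calling your provider
print(formatted_prompt.prompt_info.model)             # e.g. "gpt-4o"
print(formatted_prompt.prompt_info.model_parameters)  # e.g. {"temperature": 0.7}
for message in formatted_prompt.llm_prompt:
    print(message)  # provider-formatted chat messages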
Call Your LLM
Use the formatted prompt to call your LLM provider:
Python:
from openai import OpenAI
import time

# Initialize your LLM client
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Call the LLM
start_time = time.time()
response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end_time = time.time()

print(f"AI Response: {response.choices[0].message.content}")

Node.js:
import OpenAI from 'openai';

// Initialize your LLM client
const openaiClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Call the LLM
const startTime = Date.now();
const response = await openaiClient.chat.completions.create({
  model: formattedPrompt.promptInfo.model,
  messages: formattedPrompt.llmPrompt,
  ...formattedPrompt.promptInfo.modelParameters
});
const endTime = Date.now();

console.log(`AI Response: ${response.choices[0].message.content}`);
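Provider calls can fail or time out, and it's worth deciding up front how failures should surface. A minimal sketch that keeps the latency measurement even when the call raises (the error-handling policy here is ours, not Freeplay's):

import time

start_time = time.time()
try:
    response = openai_client.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
except Exception as err:
    # Report the failure with its latency so slow errors are visible too
    print(f"LLM call failed after {time.time() - start_time:.2f}s: {err}")
    raise
end_time = time.time()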
Log to Freeplay
Record the interaction for observability and analysis:
Python:
from freeplay import RecordPayload, CallInfo, ResponseInfo

# Create a session to group related interactions
session = fp_client.sessions.create()

# Add the assistant's response to message history
all_messages = formatted_prompt.all_messages(response.choices[0].message)

# Record the completion
completion = fp_client.recordings.create(
    RecordPayload(
        all_messages=all_messages,
        inputs=variables,
        session_info=session,
        prompt_info=formatted_prompt.prompt_info,
        call_info=CallInfo.from_prompt_info(
            formatted_prompt.prompt_info,
            start_time=start_time,
            end_time=end_time
        ),
        response_info=ResponseInfo(
            is_complete=response.choices[0].finish_reason == 'stop'
        )
    )
)
print(f"✓ Logged to Freeplay: {completion.completion_id}")

Node.js:
import { RecordPayload, CallInfo, ResponseInfo } from 'freeplay';

// Create a session
const session = await fpClient.sessions.create();

// Add the assistant's response
const allMessages = formattedPrompt.allMessages(response.choices[0].message);

// Record the completion
const completion = await fpClient.recordings.create(
  new RecordPayload({
    allMessages: allMessages,
    inputs: variables,
    sessionInfo: session,
    promptInfo: formattedPrompt.promptInfo,
    callInfo: CallInfo.fromPromptInfo(
      formattedPrompt.promptInfo,
      startTime,
      endTime
    ),
    responseInfo: new ResponseInfo({
      isComplete: response.choices[0].finish_reason === 'stop'
    })
  })
);
console.log(`✓ Logged to Freeplay: ${completion.completionId}`);

Complete Example
Here's everything together in one script:
import os
import time
from freeplay import Freeplay, RecordPayload, CallInfo, ResponseInfo
from openai import OpenAI

# Initialize clients
fp_client = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://app.freeplay.ai/api"
)
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
project_id = os.getenv("FREEPLAY_PROJECT_ID")

# Fetch prompt from Freeplay
variables = {"user_question": "What is the capital of France?"}
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="assistant-prompt",
    environment="latest",
    variables=variables
)

# Call your LLM
start_time = time.time()
response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end_time = time.time()

# Log to Freeplay
session = fp_client.sessions.create()
all_messages = formatted_prompt.all_messages(response.choices[0].message)
completion = fp_client.recordings.create(
    RecordPayload(
        all_messages=all_messages,
        inputs=variables,
        session_info=session,
        prompt_info=formatted_prompt.prompt_info,
        call_info=CallInfo.from_prompt_info(
            formatted_prompt.prompt_info,
            start_time=start_time,
            end_time=end_time
        ),
        response_info=ResponseInfo(
            is_complete=response.choices[0].finish_reason == 'stop'
        )
    )
)

print(f"AI Response: {response.choices[0].message.content}")
print(f"✓ Logged to Freeplay: {completion.completion_id}")

From here, you'll see data flowing into Freeplay in the Observability tab.
View Your Data in Freeplay
Once you've logged your first completion, navigate to your project and go to Observability in the sidebar. You'll see your logged completion with:
- All messages (user input + AI response)
- Latency and performance metrics
- Token usage and costs
- Model and parameters used
- Session groupings for multi-turn conversations
Use filters to drill into specific completions, create review queues for quality assurance, or export data for analysis.
Common Integration Patterns
Multi-Turn Conversations
For chatbots and assistants, pass conversation history when fetching prompts:
# Your conversation history
history = [
    {'role': 'user', 'content': 'What is pasta?'},
    {'role': 'assistant', 'content': 'Pasta is an Italian dish...'}
]

# Fetch prompt with history
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="chat-assistant",
    environment="latest",
    variables={"user_question": "How do I make it?"},
    history=history  # Freeplay handles formatting
)
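To carry a conversation forward, append each completed exchange to the history you pass on the next fetch. A sketch of that loop, reusing the clients from earlier (the loop structure is ours, not a Freeplay requirement):

history = []
for question in ["What is pasta?", "How do I make it?"]:
    formatted_prompt = fp_client.prompts.get_formatted(
        project_id=project_id,
        template_name="chat-assistant",
        environment="latest",
        variables={"user_question": question},
        history=history
    )
    response = openai_client.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
    # Grow the history so the next turn has full context
    history.append({'role': 'user', 'content': question})
    history.append({'role': 'assistant', 'content': response.choices[0].message.content})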
Tool Calling
When the model requests a tool call, execute the tool and append its result to the message history before recording:
Python:
import json

# Append the completion to the list of messages
# (here `completion` is the LLM provider response)
messages = formatted_prompt.all_messages(completion.choices[0].message)

if completion.choices[0].message.tool_calls:
    for tool_call in completion.choices[0].message.tool_calls:
        if tool_call.function.name == "weather_of_location":
            args = json.loads(tool_call.function.arguments)
            temperature = get_temperature(args["location"])
            tool_response_message = {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": str(temperature),
            }
            messages.append(tool_response_message)

Node.js:
// Append the completion to list of messages
const messages = formattedPrompt.allMessages(completion.choices[0].message);

if (completion.choices[0].message.tool_calls) {
  for (const toolCall of completion.choices[0].message.tool_calls) {
    if (toolCall.function.name === "weather_of_location") {
      const args = JSON.parse(toolCall.function.arguments);
      const temperature = getTemperature(args.location);
      const toolResponseMessage = {
        tool_call_id: toolCall.id,
        role: "tool",
        content: temperature.toString(),
      };
      messages.push(toolResponseMessage);
    }
  }
}
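After appending tool results, the usual next step is a second provider call so the model can answer using them. A sketch under the same assumptions as above (get_temperature is the hypothetical tool from the snippet):

# Call the model again with the tool results appended
followup = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=messages,
    **formatted_prompt.prompt_info.model_parameters
)
print(followup.choices[0].message.content)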
See full tool calling example →
Adding Custom Metadata
Track user IDs, feature flags, or any custom data:
# Create session with metadata
session = fp_client.sessions.create(
    custom_metadata={
        "user_id": "user_123",
        "environment": "production",
        "feature_flag": "new_ui_enabled"
    }
)
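Metadata is most useful when it varies per request. A small sketch that tags each session from your request context (the helper and its arguments are hypothetical):

import os

def start_session_for_request(user_id: str, feature_flag: str):
    # Tag the session so you can filter by user or flag in Observability
    return fp_client.sessions.create(
        custom_metadata={
            "user_id": user_id,
            "environment": os.getenv("APP_ENV", "development"),
            "feature_flag": feature_flag
        }
    )

session = start_session_for_request("user_123", "new_ui_enabled")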
Logging User Feedback
Capture thumbs up/down or other user reactions:
# After the user rates your response
fp_client.customer_feedback.update(
    completion_id=completion.completion_id,
    feedback={
        'thumbs_up': True,
        'user_comment': 'Very helpful!'
    }
)
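In practice this call usually sits behind a feedback endpoint in your own API. A hypothetical sketch using Flask (the route and payload shape are ours):

from flask import Flask, request

app = Flask(__name__)

@app.post("/feedback/<completion_id>")
def record_feedback(completion_id: str):
    body = request.get_json()
    # Forward the user's rating to Freeplay for the given completion
    fp_client.customer_feedback.update(
        completion_id=completion_id,
        feedback={
            'thumbs_up': bool(body.get('thumbs_up')),
            'user_comment': body.get('comment', '')
        }
    )
    return {"ok": True}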
Tracking Multi-Step Workflows
For agents and complex workflows, use traces to group related completions:
# Create a trace for a multi-step workflow
trace_info = session.create_trace(
    input="Research and write a blog post about AI",
    agent_name="blog_writer",
    custom_metadata={"version": "2.0"}
)

# Log each LLM call with the trace_id
# ... your LLM calls here ...

# Record final output
trace_info.record_output(
    project_id=project_id,
    output="[Final blog post content]"
)

Now that you're integrated, here's how to level up:
Build Your Testing Workflow
- Create Datasets - Build collections of test cases to validate prompt changes
- Set Up Evaluations - Automatically measure quality with code, model-graded, or human evals
- Run Tests - Test prompts against datasets programmatically or in the UI