# Common integration patterns

This guide covers common integration requirements: multi-turn chat, tool calling, custom metadata, user feedback, and multi-step workflows.
## Multi-Turn Conversations
For chatbots and assistants, pass conversation history when fetching prompts:
```python
# Your conversation history
history = [
    {'role': 'user', 'content': 'What is pasta?'},
    {'role': 'assistant', 'content': 'Pasta is an Italian dish...'}
]

# Fetch prompt with history
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="chat-assistant",
    environment="latest",
    variables={"user_question": "How do I make it?"},
    history=history  # Freeplay handles formatting
)
```

## Tool Calling

When the model responds with tool calls, execute each one and append the result as a `tool`-role message before requesting the next completion:
```python
import json

if completion.choices[0].message.tool_calls:
    for tool_call in completion.choices[0].message.tool_calls:
        if tool_call.function.name == "weather_of_location":
            args = json.loads(tool_call.function.arguments)
            temperature = get_temperature(args["location"])
            tool_response_message = {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": str(temperature),
            }
            messages.append(tool_response_message)
```

```javascript
// Append the completion to the list of messages
const messages = formattedPrompt.allMessages(completion.choices[0].message);

if (completion.choices[0].message.tool_calls) {
  for (const toolCall of completion.choices[0].message.tool_calls) {
    if (toolCall.function.name === "weather_of_location") {
      const args = JSON.parse(toolCall.function.arguments);
      const temperature = getTemperature(args.location);
      const toolResponseMessage = {
        tool_call_id: toolCall.id,
        role: "tool",
        content: temperature.toString(),
      };
      messages.push(toolResponseMessage);
    }
  }
}
```

See full tool calling example →
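The snippets above branch on a single tool name. With several tools, a handler registry keeps the dispatch flat. This is a minimal, SDK-independent sketch; the `get_temperature` stub, `TOOL_HANDLERS` registry, and `run_tool_call` helper are illustrative names, not part of Freeplay:

```python
import json

# Hypothetical stand-in for a real weather lookup
def get_temperature(location):
    return 72

# Hypothetical registry: tool name -> handler taking parsed args, returning a string
TOOL_HANDLERS = {
    "weather_of_location": lambda args: str(get_temperature(args["location"])),
}

def run_tool_call(tool_call_id, name, arguments_json):
    """Execute one tool call and return a tool-role message for the history."""
    args = json.loads(arguments_json)
    content = TOOL_HANDLERS[name](args)
    return {"tool_call_id": tool_call_id, "role": "tool", "content": content}

message = run_tool_call("call_1", "weather_of_location", '{"location": "Denver"}')
# message == {"tool_call_id": "call_1", "role": "tool", "content": "72"}
```

Adding a tool then becomes a one-line registry entry rather than another `if` branch.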
## Adding Custom Metadata
Track user IDs, feature flags, or any custom data:
```python
# Create session with metadata
session = fp_client.sessions.create(
    custom_metadata={
        "user_id": "user_123",
        "environment": "production",
        "feature_flag": "new_ui_enabled"
    }
)
```

## Logging User Feedback
Capture thumbs up/down or other user reactions:
```python
# After the user rates your response
fp_client.customer_feedback.update(
    completion_id=completion.completion_id,
    feedback={
        'thumbs_up': True,
        'user_comment': 'Very helpful!'
    }
)
```

## Tracking Multi-Step Workflows
For agents and complex workflows, use traces to group related completions:
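The underlying pattern is simply a shared identifier attached to every step of a workflow. Here is a minimal, SDK-free sketch of that idea; the `record_step` helper and the step names are hypothetical, purely to illustrate the grouping:

```python
import uuid

# Every step of one workflow run shares a single trace id
trace_id = str(uuid.uuid4())
steps = []

def record_step(name, output):
    """Hypothetical helper: record one workflow step under the shared trace id."""
    steps.append({"trace_id": trace_id, "step": name, "output": output})

record_step("research", "Collected sources on AI")
record_step("draft", "[Draft blog post content]")
record_step("final", "[Final blog post content]")

# All steps can later be grouped by their shared trace id
assert all(step["trace_id"] == trace_id for step in steps)
```

Freeplay's trace API applies this same grouping to your logged completions: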
```python
# Create a trace for multi-step workflow
trace_info = session.create_trace(
    input="Research and write a blog post about AI",
    agent_name="blog_writer",
    custom_metadata={"version": "2.0"}
)

# Log each LLM call with the trace_id
# ... your LLM calls here ...

# Record final output
trace_info.record_output(
    project_id=project_id,
    output="[Final blog post content]"
)
```

Now that you're integrated, here's how to level up:
## Build Your Testing Workflow
- Create Datasets - Build collections of test cases to validate prompt changes
- Set Up Evaluations - Automatically measure quality with code, model-graded, or human evals
- Run Tests - Test prompts against datasets programmatically or in the UI
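To give a flavor of what a programmatic test can look like, here is a hypothetical, SDK-free sketch: each dataset case runs through a stand-in `generate` function and is checked against an expectation. Every name here is illustrative, not Freeplay API:

```python
# Hypothetical sketch of validating prompt behavior against a small dataset
def generate(question):
    # Stand-in for your real prompt fetch + model call
    if "pasta" in question.lower():
        return "Pasta is an Italian dish made from wheat."
    return "I'm not sure."

dataset = [
    {"input": "What is pasta?", "must_contain": "Italian"},
    {"input": "Explain PASTA basics", "must_contain": "Italian"},
]

# Collect cases whose output misses the expected substring
failures = [case for case in dataset
            if case["must_contain"] not in generate(case["input"])]
assert not failures, f"{len(failures)} case(s) failed"
```

The same shape scales up: swap the stub for your real generation call and the inline list for a dataset you maintain in Freeplay.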
