Multi-Turn Conversations

For chatbots and assistants, pass conversation history when fetching prompts:
# Your conversation history
history = [
    {'role': 'user', 'content': 'What is pasta?'},
    {'role': 'assistant', 'content': 'Pasta is an Italian dish...'}
]

# Fetch prompt with history
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="chat-assistant",
    environment="latest",
    variables={"user_question": "How do I make it?"},
    history=history  # Freeplay handles formatting
)
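
From here you can send the formatted prompt to your provider and extend the history for the next turn. A minimal sketch, assuming an OpenAI client and that the formatted prompt exposes provider-ready messages as formatted_prompt.llm_prompt (confirm the attribute name for your SDK version):

from openai import OpenAI

openai_client = OpenAI()

# Send the formatted messages to the model
# (using formatted_prompt.llm_prompt as the messages payload is an assumption)
chat_completion = openai_client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute the model configured on your template
    messages=formatted_prompt.llm_prompt,
)

# Extend the history so the next get_formatted() call sees this turn
assistant_reply = chat_completion.choices[0].message.content
history.append({'role': 'user', 'content': 'How do I make it?'})
history.append({'role': 'assistant', 'content': assistant_reply})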

See full multi-turn example →

Tool Calling

When the model requests a tool, run it and append the result to your message list:
import json  # needed to parse the tool call arguments

if completion.choices[0].message.tool_calls:
    for tool_call in completion.choices[0].message.tool_calls:
        if tool_call.function.name == "weather_of_location":
            args = json.loads(tool_call.function.arguments)
            temperature = get_temperature(args["location"])

            tool_response_message = {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": str(temperature),
            }
            messages.append(tool_response_message)
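
Once the tool results are appended, send the updated messages back to the model so it can produce its final answer. A minimal sketch, assuming an OpenAI client and the tools list from the original request:

# Let the model finish now that the tool results are in the message list
followup = openai_client.chat.completions.create(
    model="gpt-4o",   # assumption: same model as the original request
    messages=messages,
    tools=tools,      # assumption: the tool schema passed on the first call
)
print(followup.choices[0].message.content)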
See full tool calling example →

Adding Custom Metadata

Track user IDs, feature flags, or any custom data:
# Create session with metadata
session = fp_client.sessions.create(
    custom_metadata={
        "user_id": "user_123",
        "environment": "production",
        "feature_flag": "new_ui_enabled"
    }
)
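
custom_metadata is just a dictionary, so you can assemble it from whatever context your application already carries. A sketch using hypothetical current_user and feature_flags objects:

import os

# current_user and feature_flags are hypothetical stand-ins for your own app context
session = fp_client.sessions.create(
    custom_metadata={
        "user_id": str(current_user.id),
        "environment": os.environ.get("APP_ENV", "development"),
        "feature_flag": "new_ui_enabled" if feature_flags.is_enabled("new_ui") else "new_ui_disabled",
    }
)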

Logging User Feedback

Capture thumbs up/down or other user reactions:
# After the user rates your response
fp_client.customer_feedback.update(
    completion_id=completion.completion_id,
    feedback={
        'thumbs_up': True,
        'user_comment': 'Very helpful!'
    }
)
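
The completion ID is typically stored alongside the reply you show the user, so a later feedback event can reference it. A hypothetical HTTP handler (Flask here) that forwards the user's rating:

from flask import Flask, request

app = Flask(__name__)

@app.route("/feedback", methods=["POST"])
def record_feedback():
    payload = request.get_json()
    # payload["completion_id"] is the ID you saved when the reply was rendered
    fp_client.customer_feedback.update(
        completion_id=payload["completion_id"],
        feedback={
            'thumbs_up': payload["thumbs_up"],
            'user_comment': payload.get("comment", ''),
        },
    )
    return {"status": "ok"}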
See full feedback example →

Tracking Multi-Step Workflows

For agents and complex workflows, use traces to group related completions:
# Create a trace for multi-step workflow
trace_info = session.create_trace(
    input="Research and write a blog post about AI",
    agent_name="blog_writer",
    custom_metadata={"version": "2.0"}
)

# Log each LLM call with the trace_id
# ... your LLM calls here ...

# Record final output
trace_info.record_output(
    project_id=project_id,
    output="[Final blog post content]"
)
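
A rough sketch of what the elided LLM calls could look like, assuming an OpenAI client; in a real workflow each intermediate completion would also be recorded to Freeplay and associated with this trace so the steps show up grouped together:

# Step 1: outline (hypothetical prompts, not Freeplay-managed templates)
outline = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Outline a blog post about AI"}],
).choices[0].message.content

# Step 2: draft the post from the outline
draft = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Write the blog post from this outline:\n{outline}"}],
).choices[0].message.content

# draft is what you would pass as the output argument to trace_info.record_output above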

See full agent example →