Integrate Freeplay with LangGraph
Supercharge Your LangGraph Workflows with Freeplay
LangGraph gives developers the power to define complex, multi-step agent workflows with stateful orchestration. But as your graph grows, so does the challenge of managing prompts, tracking behavior across nodes, and evaluating with your team whether your system is getting better or worse. That’s where Freeplay comes in.
Freeplay connects directly to your LangGraph application, giving you full-stack visibility and control over every LLM interaction inside your graph. You get centralized prompt management, built-in evals tailored to your product, and powerful observability that makes debugging and iteration feel like a feature, not a fire drill.
If you're building with LangGraph, Freeplay helps you move faster with more confidence and gives your whole team the tools to ship and improve AI workflows, not just prototype them.
This guide walks through how to integrate Freeplay with your LangGraph application, how to monitor and evaluate your graph in production, and how to scale your prompt engineering process without drowning in complexity.
Before you get started
Integrating LangGraph with Freeplay requires you to first set up a Freeplay account and import your prompt and model configurations to Freeplay. You'll then be able to use Freeplay's full experimentation and prompt management workflow together with observability features.
This guide will walk you through:
- Setting up the Freeplay client in your code - Installing dependencies and configuring your environment
- Getting your prompts into LangGraph - Migrating prompt and model configurations to Freeplay and accessing them in your app
- Integrating observability - Adding Freeplay logging to your LangGraph nodes for comprehensive data capture and agent tracing
Let's get started!
Prerequisites
Before starting, ensure you have:
- A LangGraph application with defined nodes and workflows
- A Freeplay account with prompt templates configured (see our quick start guide for details)
Step 1: Environment Setup
First, configure your environment with the necessary credentials and dependencies by installing Freeplay and any required LangGraph libraries.
Install Dependencies
Add Freeplay to your requirements:
pip install freeplay
You'll also need the Freeplay integration files:
- FreeplayLLM - Handles prompt management and LLM initialization
- FreeplayRuntimeCallback - Manages observability and logging
Both of these and a full walkthrough can be found in this recipe.
Configure Environment Variables
Set up your environment variables:
FREEPLAY_API_KEY="your-api-key"
FREEPLAY_PROJECT_ID="your-project-id"
FREEPLAY_API_BASE="https://api.freeplay.ai"
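The Python snippets in this guide read these values with os.getenv. If you keep them in a local .env file, a small loader like the sketch below works; python-dotenv is an assumption here, not a Freeplay requirement, and any other way of exporting the variables is fine:

# Optional: load Freeplay credentials from a local .env file
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()

# Fail fast if the required variables are missing
assert os.getenv("FREEPLAY_API_KEY"), "FREEPLAY_API_KEY is not set"
assert os.getenv("FREEPLAY_PROJECT_ID"), "FREEPLAY_PROJECT_ID is not set"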
Step 2: Initialize Freeplay Components
Update your LangGraph application to use Freeplay components. You need to initialize two components: the FreeplayLLM class wraps your LLM creation, while the FreeplayRuntimeCallback handles logging data to Freeplay.
Basic Initialization
At the start of your application, initialize the Freeplay components:
import os

from freeplay import Freeplay
from freeplay_integration import FreeplayLLM, FreeplayRuntimeCallback, TraceContext

# Initialize Freeplay client
fp_client = Freeplay(
    api_key=os.getenv("FREEPLAY_API_KEY"),
    project_id=os.getenv("FREEPLAY_PROJECT_ID")
)

# Initialize FreeplayLLM for prompt management
freeplay_llm = FreeplayLLM(
    freeplay_client=fp_client,
    project_id=os.getenv("FREEPLAY_PROJECT_ID"),
)

# Create a session for tracking the related prompt and agent interactions below
fp_session = fp_client.sessions.create(os.getenv("FREEPLAY_PROJECT_ID"))

# Optionally, create a trace context for end-to-end agent tracing
trace_context = TraceContext()
Step 3: Update LLM Initialization
Replace your standard LLM instances with Freeplay-managed prompt and model configurations. Freeplay's SDK will fetch your prompt text and model configurations based on the versions you save in Freeplay. See our Prompt Management guide for more details.
In the example below, we pull the research_prompt, validation_prompt, and final_response_prompt templates from Freeplay.
Before Integration
# Standard LangGraph setup
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)
research_llm = llm.bind_tools([tavily_search])
validation_llm = llm.bind_tools([validate_sources])
response_llm = llm
# Prompt strings would then be managed in code for each node
After Integration
# Freeplay integration - prompts now managed centrally
research_llm = freeplay_llm.create_llm("research_prompt", tools=[tavily_search])
validation_llm = freeplay_llm.create_llm("validation_prompt", tools=[validate_sources])
response_llm = freeplay_llm.create_llm("final_response_prompt")
Step 4: Add Runtime Observability
Update your LangGraph nodes to log interactions to Freeplay. The prompts are bound and formatted based on the state you pass in, ensuring the right information is passed and logged at each node.
Data is then logged to Freeplay through runtime callbacks, which capture the assistant and human messages along with tool calls and results.
Before Integration
def research_node(state):
    user_question = state["question"]
    response = research_llm.invoke([HumanMessage(content=user_question)])
    return {"research_results": response.content}
After Integration
def research_node(state):
    user_question = state["question"]

    # Get formatted prompt from Freeplay
    formatted_prompt = freeplay_llm.bind_and_format_prompt(
        "research_prompt",
        {"user_question": user_question}
    )

    # Make LLM call with Freeplay observability
    response = research_llm.invoke(
        formatted_prompt._llm_prompt,
        config={
            "callbacks": [
                FreeplayRuntimeCallback(
                    client=fp_client,
                    session_id=fp_session.session_id,
                    formatted_prompt=formatted_prompt,
                    variables={"user_question": user_question},
                    trace_context=trace_context,
                )
            ]
        },
    )
    return {"research_results": response.content}
[Optional] Step 5: Implement Trace Management
You can use traces in Freeplay to group related completions together. This allows you to tie workflows and agents into logical groups. See our docs on agents and traces for more details.
To use traces with LangGraph, use the TraceContext in each node to pass the right trace information to both the LLM call and the recording call.
Trace Management
# Create trace for this graph execution
trace_context = TraceContext()
...
def research_node(state: State) -> dict:
    """Node 1: Search for information using Tavily."""
    messages = state["messages"]
    user_question = messages[-1].content

    # Create the trace at the first node
    trace_context.set_trace(fp_session.create_trace(user_question))
    print("User question: ", user_question)

    research_prompt = freeplay_llm.bind_and_format_prompt(
        "research_prompt",
        {"user_question": user_question},
    )
    response = research_llm.invoke(
        research_prompt._llm_prompt,
        config={
            "callbacks": [
                FreeplayRuntimeCallback(
                    client=fp_client,
                    session_id=fp_session.session_id,
                    formatted_prompt=research_prompt,
                    variables={"user_question": user_question},
                    trace_context=trace_context,  # Pass the trace context to the callback to tie this recording to the trace
                )
            ]
        },
    )
    return {"messages": [response]}
# Record the result at the end of the graph/agent
def respond_node(state: State) -> dict:
    """Node 3: Generate final response to user based on validated research."""
    messages = state["messages"]
    original_question = messages[0].content  # First message is the user's question

    response_prompt = freeplay_llm.bind_and_format_prompt(
        "final_response_prompt",
        {"original_question": original_question},
        history=messages,
    )
    response = response_llm.invoke(
        response_prompt._llm_prompt,
        config={
            "callbacks": [
                FreeplayRuntimeCallback(
                    client=fp_client,
                    session_id=fp_session.session_id,
                    formatted_prompt=response_prompt,
                    variables={"original_question": original_question},
                    trace_context=trace_context,  # Pass the trace context
                )
            ]
        },
    )

    # Record the final output of the trace
    trace_context.current_trace.record_output(
        os.getenv("FREEPLAY_PROJECT_ID"), str(response.content)
    )
    return {"messages": [response]}
Multi-Step Workflows
For nodes that depend on previous results, you can pass the history and any variables when you bind and format the prompt, providing the right context to your LLM call:
def validation_node(state):
    research_results = state["research_results"]
    original_question = state["question"]

    formatted_prompt = freeplay_llm.bind_and_format_prompt(
        "validation_prompt",
        variables={
            "research_results": research_results,
            "original_question": original_question
        },
        history=state["messages"]
    )
    response = validation_llm.invoke(
        formatted_prompt._llm_prompt,
        config={
            "callbacks": [
                FreeplayRuntimeCallback(
                    client=fp_client,
                    session_id=fp_session.session_id,
                    formatted_prompt=formatted_prompt,
                    variables={
                        "research_results": research_results,
                        "original_question": original_question
                    },
                    trace_context=trace_context,
                )
            ]
        },
    )
    return {"validation_results": response.content}
Next Steps
Once your integration is complete:
- Set up evaluations to monitor your graph's performance
- Create datasets from your logged interactions for testing
- Implement review queues for human evaluation of complex outputs
Your LangGraph application now has centralized prompt management, comprehensive observability, and the foundation for robust evaluation workflows through Freeplay's platform.