Retrieve your prompt templates from the Freeplay server. All methods associated with your Freeplay prompt template are accessible from the client.prompts namespace.
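If you have not yet initialized a client, a minimal setup might look like the sketch below. The environment variable names and the api_base value are assumptions drawn from common setup examples; substitute your own API key, base URL, and project configuration.

import os

from freeplay import Freeplay

# Initialize the Freeplay client (key and base URL shown here are placeholder assumptions)
fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base="https://app.freeplay.ai/api"
)

# Project id for the prompt templates you want to fetch (assumed to come from your own config)
project_id = os.environ["FREEPLAY_PROJECT_ID"]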
Get a formatted prompt object with variables inserted, messages formatted for the configured LLM provider (e.g. OpenAI, Anthropic), and model parameters to use for the LLM call. This is the most convenient method for most prompt-fetch use cases, since formatting is handled for you server-side in Freeplay.
The environment parameter determines which version of your prompt is fetched. Default environment names are:
"prod" - Production environment
"dev" - Development environment
"latest" - Latest version (use for development/testing)
# get a formatted prompt
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="template_name",
    environment="latest",
    variables={"keyA": "valueA"}
)

# Sample use in an LLM call
start = time.time()
chat_response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages({
    'role': chat_response.choices[0].message.role,
    'content': chat_response.choices[0].message.content
})
Get a prompt template object that does not yet have variables inserted and whose messages are in a consistent, provider-agnostic structure. This is particularly useful when you want to reuse a prompt template with different variables in the same execution path, such as Test Runs. This method gives you more control to handle formatting in your own code rather than server-side in Freeplay, but requires a few more lines of code.
# get an unformatted prompt template
template_prompt = fp_client.prompts.get(
    project_id=project_id,
    template_name="template_name",
    environment="latest"
)

# bind variables and format the prompt
formatted_prompt = template_prompt.bind({"keyA": "valueA"}).format()
history is a special object in Freeplay prompt templates for managing state over multiple LLM interactions by passing in previous messages. It accepts an array of prior messages when relevant.

Before using history in the SDK, you must configure it on your prompt template. See more details in the Multi-Turn Chat Support section.

Once you have history configured for a prompt template, you can pass it during the formatting process. The history messages will be inserted wherever you have your history placeholder in your prompt template.
previous_messages = [
    {"role": "user", "content": "what are some dinner ideas..."},
    {"role": "assistant", "content": "here are some dinner ideas..."}
]

prompt_vars = {"question": "how do I make them healthier?"}

formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="SamplePrompt",
    environment="latest",
    variables=prompt_vars,
    history=previous_messages  # pass the history messages here
)

# llm_prompt contains messages formatted for the provider
print(formatted_prompt.llm_prompt)

# output:
# [{'role': 'system', 'content': 'You are a polite assistant...'},
#  {'role': 'user', 'content': 'what are some dinner ideas...'},
#  {'role': 'assistant', 'content': 'here are some dinner ideas...'},
#  {'role': 'user', 'content': 'how do I make them healthier?'}]
See a full implementation of using history in the context of a multi-turn chatbot application here.
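As a rough sketch of that pattern, the example below accumulates messages turn by turn and passes them back as history on each call. The run_turn helper, the template name, and the overall loop structure are illustrative assumptions, not a prescribed structure.

# Illustrative multi-turn sketch: replay accumulated messages as history on each turn.
history = []

def run_turn(question: str) -> str:
    formatted_prompt = fp_client.prompts.get_formatted(
        project_id=project_id,
        template_name="SamplePrompt",
        environment="latest",
        variables={"question": question},
        history=history  # prior turns are inserted at the history placeholder
    )
    chat_response = openai_client.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
    answer = chat_response.choices[0].message.content
    # grow the history so the next turn sees this exchange
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return answer

run_turn("what are some dinner ideas...")
run_turn("how do I make them healthier?")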
You can define tool schemas alongside your prompt templates. The Freeplay SDK formats the tool schema for the configured LLM provider, so you can pass it to the provider as-is.
# get a formatted prompt
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="template_name",
    environment="latest",
    variables={"keyA": "valueA"}
)

# Sample use in an LLM call
start = time.time()
chat_response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    # Pass the tool schema to the LLM call (OpenAI expects it as `tools`)
    tools=formatted_prompt.tool_schema,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()
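When the model chooses to call a tool, the response carries tool calls instead of plain text. A minimal handling sketch for the OpenAI response shape is shown below; the my_tool_functions dispatch table is an assumption standing in for your own tool implementations.

import json

# Inspect the response for tool calls (OpenAI response shape)
message = chat_response.choices[0].message

if message.tool_calls:
    for tool_call in message.tool_calls:
        # name and JSON-encoded arguments follow the tool schema you defined in Freeplay
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)
        # my_tool_functions is an assumed mapping from tool name to your own implementation
        result = my_tool_functions[name](**args)
else:
    # no tool call; use the text content directly
    print(message.content)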
Freeplay allows you to record client-side evals during the record step (more here). Code evals are useful for running objective assertions or pairwise comparisons against ground-truth data. They are passed as key-value pairs and can be associated with either a regular Session or a Test Run session.
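As a hedged sketch of what this can look like, the example below computes two simple code evals and passes them when recording the completion. The eval_results field name and the expected_answer variable are assumptions for illustration; check the record documentation linked above for the exact parameter.

# Client-side evals computed in your own code
response_text = chat_response.choices[0].message.content

eval_results = {
    "contains_citation": "[source]" in response_text,                         # objective assertion
    "matches_ground_truth": response_text.strip() == expected_answer.strip()  # comparison against ground truth
}

# Include the key-value results when building your RecordPayload
# (the eval_results field name is an assumption; see the record docs for the exact parameter)
payload = RecordPayload(
    project_id=project_id,
    all_messages=all_messages,
    inputs=prompt_vars,
    prompt_version_info=formatted_prompt.prompt_info,
    eval_results=eval_results,
)
fp_client.recordings.create(payload)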
Freeplay allows you to provide your own client-side UUIDs for both Sessions and Completions. This can be useful if you already have natural identifiers in your application code. Providing your own Completion ID also lets you store it for recording customer feedback without waiting for the record call to complete, which makes the record call entirely non-blocking.
import time
from uuid import uuid4

from freeplay import Freeplay, RecordPayload, CallInfo, SessionInfo

## PROMPT FETCH
# set the prompt variables
prompt_vars = {"keyA": "valueA"}

# get a formatted prompt
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=project_id,
    template_name="template_name",
    environment="latest",
    variables=prompt_vars
)

## LLM CALL
# Make an LLM call to your provider of choice
start = time.time()
chat_response = openai_client.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages({
    'role': chat_response.choices[0].message.role,
    'content': chat_response.choices[0].message.content
})

## RECORD
### CUSTOM IDS
# create your ids (must be UUIDs)
session_id = uuid4()
completion_id = uuid4()

# create SessionInfo with a custom session id and metadata
session_info = SessionInfo(
    session_id=session_id,
    custom_metadata={'keyA': 'valueA'}
)

### Extra data
call_info = CallInfo(
    provider=formatted_prompt.prompt_info.provider,
    model=formatted_prompt.prompt_info.model,
    start_time=start,
    end_time=end,
    model_parameters=formatted_prompt.prompt_info.model_parameters  # pass the full parameter set
)

# build the record payload
payload = RecordPayload(
    project_id=project_id,
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session_info,
    completion_id=completion_id,
    prompt_version_info=formatted_prompt.prompt_info,
    call_info=call_info,
)

# record the LLM interaction
fp_client.recordings.create(payload)
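Because the completion_id above is generated client-side, you can attach customer feedback later without waiting for the record call to finish. The customer_feedback.update call and its argument names below are assumptions; verify the exact method in the SDK's customer feedback documentation.

# Later, once the user reacts, attach feedback to the stored completion id
# (method and argument names here are assumptions; check the customer feedback docs)
fp_client.customer_feedback.update(
    completion_id=str(completion_id),
    feedback={"thumbs_up": True}
)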