Retrieve your prompt templates from the Freeplay server. All methods associated with your Freeplay prompt template are accessible from the client.prompts namespace.

Methods Overview

Some SDKs use camel case rather than snake case, depending on the conventions of the given language.
Method Name | Parameters | Description
get_formatted | project_id: string, template_name: string, environment: string, variables: dict, history: list (optional) | Get a formatted prompt template object with variables inserted, messages formatted for the configured LLM provider (e.g. OpenAI, Anthropic), and model parameters for the LLM call.
get | project_id: string, template_name: string, environment: string | Get a prompt template by environment. Note: The prompt template will not have variables substituted or be formatted for the configured LLM provider.

Get a Formatted Prompt

Get a formatted prompt object with variables inserted, messages formatted for the associated LLM provider, and model parameters to use for the LLM call. This is the most convenient method for most prompt-fetch use cases, since formatting is handled for you server-side in Freeplay.

# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(
    project_id=project_id,
    template_name="template_name",
    environment="latest",
    variables={"keyA": "valueA"}
)

# Sample use in an LLM call
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages(
    {'role': chat_response.choices[0].message.role,
     'content': chat_response.choices[0].message.content}
)

Get a Prompt Template

Get a prompt template object that does not yet have variables inserted and whose messages are in a consistent, LLM-provider-agnostic structure. This is particularly useful when you want to reuse a prompt template with different variables in the same execution path, as in Test Runs (see the sketch after the example below). This method gives you more control to handle formatting in your own code rather than server-side in Freeplay, but requires a few more lines of code.
# get an unformatted prompt template
template_prompt = fpClient.prompts.get(project_id=project_id,
                                       template_name="template_name",
                                       environment="latest"
                                       )
# to format the prompt
formatted_prompt = template_prompt.bind({"keyA": "valueA"}).format()
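
Because the template object is reusable, you can bind different variable sets to the same template. A minimal sketch of reusing one template across several input rows, Test Run style; dataset_rows is a hypothetical list of input dicts, not part of the Freeplay SDK:

# reuse the same template with different variables, as in a Test Run
dataset_rows = [{"keyA": "valueA"}, {"keyA": "valueB"}]

for row in dataset_rows:
    formatted_prompt = template_prompt.bind(row).format()
    # formatted_prompt.llm_prompt is now ready for the LLM call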

Using History with Prompt Templates

history is a special object in Freeplay prompt templates for managing state over multiple LLM interactions by passing in previous messages. It accepts an array of prior messages when relevant. Before using history in the SDK, you must configure it on your prompt template; see the Multi-Turn Chat Support section for details. Once history is configured for a prompt template, you can pass it during the formatting process, and the history messages will be inserted wherever your history placeholder appears in the prompt template.
previous_messages = [{"role": "user", "content": "what are some dinner ideas..."},
                     {"role": "assistant", "content": "here are some dinner ideas..."}]
prompt_vars = {"question": "how do I make them healthier?"}

formatted_prompt = fpClient.prompts.get_formatted(
    project_id=project_id,
    template_name="SamplePrompt",
    environment="latest",
    variables=prompt_vars,
    history=previous_messages  # pass the history messages here
)

# llm_prompt contains messages formatted for the provider
print(formatted_prompt.llm_prompt)

# output:
[
    {'role': 'system', 'content': 'You are a polite assistant...'},
    {'role': 'user', 'content': 'what are some dinner ideas...'},
    {'role': 'assistant', 'content': 'here are some dinner ideas...'},
    {'role': 'user', 'content': 'how do I make them healthier?'}
]
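
To carry a conversation forward, extend the history with each new exchange before the next fetch. A minimal sketch reusing the SamplePrompt template and OpenAI client from above; the ask helper is illustrative, not part of the SDK:

# a minimal multi-turn sketch: extend history after every exchange
previous_messages = []

def ask(question):
    formatted_prompt = fpClient.prompts.get_formatted(
        project_id=project_id,
        template_name="SamplePrompt",
        environment="latest",
        variables={"question": question},
        history=previous_messages
    )
    chat_response = openaiClient.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
    answer = chat_response.choices[0].message.content
    # record this exchange so the next turn sees it in history
    previous_messages.extend([
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer}
    ])
    return answer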

See a full implementation of using history in the context of a multi-turn chatbot application here.

Using Tool Schemas with Prompt Templates

You can define tool schemas alongside your prompt templates. The Freeplay SDK formats the tool schema for the configured LLM provider, so you can pass it to the provider as-is.

# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="template_name",
                                                  environment="latest",
                                                  variables={"keyA": "valueA"})

# Sample use in an LLM call
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    # Pass the tool schema to the LLM call
    tools=formatted_prompt.tool_schema,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()
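
When the model decides to use a tool, the response carries tool calls instead of (or alongside) text. A minimal sketch of dispatching them, assuming the OpenAI Python SDK's response shape; lookup_recipe is a hypothetical tool of your own, not part of either SDK:

import json

message = chat_response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        # arguments arrive as a JSON string
        args = json.loads(tool_call.function.arguments)
        if tool_call.function.name == "lookup_recipe":  # hypothetical tool name
            result = lookup_recipe(**args)  # your own implementation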

Record Evals From Your Code

Freeplay allows you to record client-side executed evals during the record step (more here). Code evals are useful for running objective assertions or pairwise comparisons against ground-truth data. They are passed as key-value pairs and can be associated with either a regular Session or a Test Run session.
# record the results
payload = RecordPayload(
  project_id=project_id,
  all_messages=all_messages,
  inputs=prompt_vars,
  session_info=session.session_info,
  prompt_version_info=prompt_info,
  call_info=call_info,
  response_info=ResponseInfo(
    is_complete=chat_response.choices[0].finish_reason == 'stop'
  ),
  eval_results={
      "valid schema": True,
      "valid category": True,
      "string distance": 0.81
  }
)
completion_info = fpClient.recordings.create(payload)
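
The eval values above are placeholders. A minimal sketch of computing them client-side with the standard library; expected_answer is a hypothetical ground-truth string, and "valid schema" here is reduced to a JSON-parse check:

import json
from difflib import SequenceMatcher

response_text = chat_response.choices[0].message.content

# objective assertion: does the response parse as JSON?
try:
    json.loads(response_text)
    valid_schema = True
except json.JSONDecodeError:
    valid_schema = False

# pairwise comparison against ground truth (similarity in 0.0-1.0)
expected_answer = "..."  # your ground-truth string
string_distance = SequenceMatcher(None, response_text, expected_answer).ratio()

eval_results = {
    "valid schema": valid_schema,
    "string distance": round(string_distance, 2)
}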

Using Your Own IDs

Freeplay allows you to provide your own client-side UUIDs for both Sessions and Completions. This can be useful if you already have natural identifiers in your application code. Providing your own completion ID also lets you store it for recording customer feedback without waiting for the record call to complete, which makes the record call entirely non-blocking (see the sketch after the example below).
from freeplay import Freeplay, RecordPayload, CallInfo, ResponseInfo, SessionInfo
from uuid import uuid4

## PROMPT FETCH

# set the prompt variables
prompt_vars = {"keyA": "valueA"}

# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(
    project_id=project_id,
    template_name="template_name",
    environment="latest",
    variables=prompt_vars
)

## LLM CALL

# Make an LLM call to your provider of choice
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages(
    {'role': chat_response.choices[0].message.role,
     'content': chat_response.choices[0].message.content}
)

## RECORD

### CUSTOM IDS

# create your IDs (must be UUIDs)
session_id = uuid4()
completion_id = uuid4()

# create SessionInfo with a custom session ID
session_info = SessionInfo(
    session_id=session_id,
    custom_metadata={'keyA': 'valueA'}
)

### EXTRA DATA

call_info = CallInfo(
    provider=formatted_prompt.prompt_info.provider,
    model=formatted_prompt.prompt_info.model,
    start_time=start,  # the timestamps captured around the LLM call
    end_time=end,
    model_parameters=formatted_prompt.prompt_info.model_parameters  # pass the full parameter set
)

# build the record payload, passing the custom session_info and completion_id
payload = RecordPayload(
    project_id=project_id,
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session_info,
    completion_id=completion_id,
    prompt_version_info=formatted_prompt.prompt_info,
    call_info=call_info,
)

# record the LLM interaction

fpClient.recordings.create(payload)
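
Since completion_id is generated client-side, nothing downstream has to wait on the record call. One way to make it fully non-blocking is to run it on a background thread; a minimal sketch using the standard library (the SDK does not require this, it is just one option):

import threading

# fire-and-forget: completion_id is already known, so customer feedback
# can be attached later without waiting for this call to finish
threading.Thread(
    target=fpClient.recordings.create,
    args=(payload,),
    daemon=True
).start()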