Overview
Freeplay simplifies the process of iterating on and testing different versions of your prompts and provides a comprehensive prompt management system. Here’s how it works.
Prompt Templates
A prompt template in Freeplay is a pre-defined configuration for a given component of your LLM system. These prompt templates serve as the blueprint for your LLM interactions. Given the iterative nature of prompt development, any given prompt template can have many different prompt template versions associated with it.
Components of a Prompt Template Version
The configuration of a prompt template version is made up of the following three things. A change to any one of them is considered a new prompt template version. Creating new prompt template versions should be a very frequent action, since you’re constantly iterating on your prompt. You can think of creating a new version as akin to a commit to your code base. We’ll talk more about deployment below.
Content
This is the actual text content of your prompt. The constant parts of your prompt are written as normal text, while the variable parts are denoted with variable placeholders. When the prompt is invoked, application-specific content is injected into the variables at runtime. Take the following example: a template might define variables like question, conversation_history, and supporting_information. When invoked, the prompt becomes fully hydrated, and what is actually sent to the LLM has the runtime values substituted in place of those placeholders.
Variables in Freeplay are defined via mustache syntax. You can find more information on advanced mustache usage for things like conditionals and structured inputs in this guide.
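As a rough illustration (not Freeplay’s actual rendering code), the sketch below shows how a template with the hypothetical question, conversation_history, and supporting_information variables gets hydrated at runtime; the chevron library is used here purely to demonstrate the mustache substitution.

```python
import chevron  # any mustache renderer works; chevron is used here for illustration

# Illustrative template: constant instructions plus mustache placeholders.
TEMPLATE = """You are a helpful support assistant.

Conversation so far:
{{conversation_history}}

Relevant documentation:
{{supporting_information}}

Customer question: {{question}}"""

# At runtime, application-specific values are injected into the variables.
hydrated_prompt = chevron.render(TEMPLATE, {
    "conversation_history": "User: Hi, I can't log in.\nAssistant: Sorry to hear that!",
    "supporting_information": "Password resets are available at /reset.",
    "question": "How do I reset my password?",
})

print(hydrated_prompt)  # this fully hydrated text is what is actually sent to the LLM
```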
Model Config
Model configuration includes model selection as well as associated parameters like temperature, max tokens, etc.
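As a minimal sketch (the parameter names and values are illustrative, not Freeplay defaults), the model configuration stored with a prompt template version amounts to something like the following, which is then passed through to the provider call:

```python
# Illustrative model configuration for one prompt template version.
model_config = {
    "model": "gpt-4o",      # which model to call
    "temperature": 0.2,     # lower = more deterministic output
    "max_tokens": 1024,     # cap on the generated response length
    "top_p": 1.0,
}
```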
Tools
Tool schemas can also be managed in Freeplay as part of your prompt templates. Just like with message content and model configuration, the tool schema configured in Freeplay will be passed down through the SDK to be used in code. See more details on working with tools in Freeplay here.
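For context, a tool schema versioned alongside a prompt template typically follows the provider’s function-calling format. The example below uses the OpenAI-style shape with a hypothetical get_order_status tool; the tool name and parameters are assumptions for illustration only.

```python
# Hypothetical tool schema (OpenAI function-calling format) that could be
# versioned with a prompt template and passed into the model call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the current status of a customer order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The order identifier.",
                    },
                },
                "required": ["order_id"],
            },
        },
    }
]
```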
Structured Outputs
Structured outputs allow you to rely on the model returning its results in a consistent format. Many models support a JSON output mode that can be defined, and they will respect that format. Additionally, some model providers support the ability to define typed outputs that are more structured and formal than free-form JSON. Freeplay supports both. You can learn more here.
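As an illustrative sketch of the difference (the names are hypothetical and the exact request shape varies by provider), a JSON mode only asks the model for valid JSON, while a typed output pins the response to a specific schema:

```python
from pydantic import BaseModel

# Typed output: a formal schema the provider can be asked to conform to.
class SupportAnswer(BaseModel):
    answer: str
    sources: list[str]
    confidence: float

# JSON mode (OpenAI-style): the model is only required to return valid JSON.
json_mode = {"type": "json_object"}

# Typed / structured output (OpenAI-style): the model must match the schema.
structured_output = {
    "type": "json_schema",
    "json_schema": {
        "name": "support_answer",
        "schema": SupportAnswer.model_json_schema(),
    },
}
```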
Prompt Management
Freeplay offers a number of different features related to prompt management, including:
- Native versioning with full version history for transparent traceability
- An interactive prompt editor equipped with dozens of different models and integrated with your dataset
- A deployment mechanism tied into the Freeplay SDK
- A structured templating language for writing prompts and associated evaluations
There are two common usage patterns for managing prompts with Freeplay:
- Freeplay as the source of truth for prompts
- Code as the source of truth for prompts
Freeplay as the source of truth for prompts
In this usage pattern, new prompts are created within Freeplay and then passed down into code. Freeplay becomes the source of truth for the most up-to-date version of a given prompt. The flow follows the steps below.
Step 1: Create a new prompt template version
If you don’t already have a prompt template set up, you can create one by going to Prompts → Create Prompt Template. If you already have a prompt template set up, you can start making changes to it and you’ll see the prompt turn into an unsaved draft. As you make edits, you can run your prompt against your dataset examples in real time to understand the impact of your changes.
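On the code side of this pattern, the prompt is typically fetched and formatted through the Freeplay SDK at runtime. The sketch below is an assumption about what that looks like; the client constructor, the get_formatted method, and parameters like project_id and environment are illustrative, so check the SDK reference for the exact API.

```python
import os
from freeplay import Freeplay  # assumed import path; see the SDK docs

fp_client = Freeplay(
    freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base="https://app.freeplay.ai/api",  # illustrative base URL
)

# Hypothetical call: fetch the latest deployed version of a prompt template
# for a given environment and hydrate it with runtime variables.
formatted_prompt = fp_client.prompts.get_formatted(
    project_id=os.environ["FREEPLAY_PROJECT_ID"],
    template_name="support-assistant",   # illustrative template name
    environment="production",            # deployment environment to pull from
    variables={"question": "How do I reset my password?"},
)

# formatted_prompt would then carry the messages and model config
# that you pass along to your LLM provider.
```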



Code as the source of truth for prompts
In this usage pattern, new prompt versions originate in code and are pushed to Freeplay programmatically. Source code becomes the source of truth for the latest prompt version, but prompt versions are also reflected in Freeplay to help power experimentation and evaluation.
Step 1: Push a new prompt version to Freeplay
Freeplay provides APIs and SDK methods to push new versions of a prompt. When you create a new version in code, you push that version to Freeplay. (This exact method may not exist yet; the naming and structure below are not meant to be prescriptive.)
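In keeping with that caveat, the call below is purely hypothetical: the create_version method, its parameters, and the request shape are assumptions meant only to convey the push workflow, not the actual Freeplay API.

```python
# Hypothetical push of a prompt version defined in code (method and parameter
# names are illustrative only; consult the Freeplay API/SDK docs for the real shape).
# fp_client is constructed as in the earlier fetch sketch.
new_version = fp_client.prompts.create_version(
    project_id=os.environ["FREEPLAY_PROJECT_ID"],
    template_name="support-assistant",
    messages=[
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "Customer question: {{question}}"},
    ],
    model="gpt-4o",
    model_parameters={"temperature": 0.2, "max_tokens": 1024},
    environment="staging",  # optionally tag the new version to an environment
)
```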
What’s Next
Now that we’ve set up a robust Prompt Template and learned about Version Control and Environments, let’s set up some Evaluations.

