How it works
- You select a prompt template version to optimize and choose a dataset or a set of evaluated sessions
- You configure which data sources to use:
  - Human labels: Scores and feedback from your team’s reviews
  - Customer feedback: Direct feedback captured from end users
  - Best practices: Provider-specific prompting guides (OpenAI or Anthropic)
- You can optionally provide specific instructions about what to improve
- Freeplay’s AI analyzes the data and generates:
  - An optimized prompt template
  - An explanation of changes made
  - A description of the new version
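The inputs and outputs above can be sketched as a request/result pair. This is a minimal illustration only: the class and field names here are hypothetical, not Freeplay's actual SDK or API.

```python
# Hypothetical sketch of an optimization request and result.
# All names and fields are illustrative, not Freeplay's real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OptimizationRequest:
    prompt_template_version: str        # the template version to optimize
    data_source: str                    # "dataset" or "evaluated_sessions"
    use_human_labels: bool = True       # scores/feedback from team reviews
    use_customer_feedback: bool = True  # direct end-user feedback
    best_practices: Optional[str] = None  # "openai" or "anthropic"
    instructions: Optional[str] = None    # optional guidance on what to improve


@dataclass
class OptimizationResult:
    optimized_template: str   # the new prompt text
    explanation: str          # what was changed and why
    version_description: str  # description attached to the new version


# Example: optimize using evaluated sessions plus Anthropic's prompting guide
request = OptimizationRequest(
    prompt_template_version="support-summarizer@v3",
    data_source="evaluated_sessions",
    best_practices="anthropic",
    instructions="Reduce verbosity in the final answer",
)
```

The result mirrors what the UI shows: the rewritten template, a rationale, and a description you can save with the new version.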
Use cases
- Prompt iteration: Get AI-suggested improvements based on where your current prompt is failing
- Model migration: Update prompts optimized for one model to work well with another
- Data-driven improvement: Use production signals to guide prompt changes
Configuration
Prompt optimization is available from the prompt template editor:

- Open a prompt template and select a version
- Click Optimize to open the optimization panel
- Select your data source (dataset or evaluated sessions)
- Choose which signals to include (labels, feedback, best practices)
- Optionally add specific instructions
- Run the optimization
Prompt optimization works best with at least 10-20 evaluated examples that include a mix of good and poor outputs.
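That guideline can be expressed as a simple pre-flight check: enough evaluated examples, with both good and poor outputs represented. This is a hypothetical helper for illustration; the threshold and score field are assumptions, not part of Freeplay's product.

```python
# Hypothetical readiness check before running prompt optimization.
# Assumes each evaluated example carries a numeric 'score' (e.g. 0.0-1.0);
# the 0.5 cutoff and 10-example minimum are illustrative assumptions.
def ready_for_optimization(examples, min_examples=10, good_cutoff=0.5):
    """Return True if there are enough examples and a mix of good/poor outputs."""
    if len(examples) < min_examples:
        return False
    good = sum(1 for e in examples if e["score"] >= good_cutoff)
    poor = len(examples) - good
    # The optimizer needs contrast: examples of what works AND what fails.
    return good > 0 and poor > 0


mixed = [{"score": 0.9}] * 8 + [{"score": 0.2}] * 4
print(ready_for_optimization(mixed))                  # True: 12 examples, mixed
print(ready_for_optimization([{"score": 0.9}] * 12))  # False: no poor outputs
```

If the check fails, gather more evaluated sessions (or label more dataset rows) before optimizing, rather than running on a one-sided sample.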

