Prompt optimization uses AI to analyze your production data, evaluation results, and customer feedback to suggest improved prompts. It can also help update prompts when switching between models.

How it works

  1. You select a prompt template version to optimize and choose a dataset or set of evaluated sessions
  2. You configure what data sources to use:
    • Human labels: Scores and feedback from your team’s reviews
    • Customer feedback: Direct feedback captured from end users
    • Best practices: Provider-specific prompting guides (OpenAI or Anthropic)
  3. You can optionally provide specific instructions about what to improve
  4. Freeplay’s AI analyzes the data and generates:
    • An optimized prompt template
    • An explanation of changes made
    • A description of the new version

Use cases

  • Prompt iteration: Get AI-suggested improvements based on where your current prompt is failing
  • Model migration: Update prompts optimized for one model to work well with another
  • Data-driven improvement: Use production signals to guide prompt changes

Configuration

Prompt optimization is available from the prompt template editor:
  1. Open a prompt template and select a version
  2. Click Optimize to open the optimization panel
  3. Select your data source (dataset or evaluated sessions)
  4. Choose which signals to include (labels, feedback, best practices)
  5. Optionally add specific instructions
  6. Run the optimization
After optimization completes, Freeplay creates a new prompt version and automatically runs a comparative test so you can evaluate the results side-by-side.
Prompt optimization works best with at least 10-20 evaluated examples that include a mix of good and poor outputs.
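As a rough pre-flight check, you can verify a dataset meets that guideline before running optimization. This is a hypothetical helper (not part of Freeplay) that checks for both minimum size and a mix of good and poor outputs:

```python
def ready_for_optimization(examples: list[dict], minimum: int = 10) -> bool:
    """Return True if the evaluated dataset has at least `minimum`
    examples and contains both good and poor outputs."""
    if len(examples) < minimum:
        return False
    outcomes = {ex["passed"] for ex in examples}  # True = good, False = poor
    return outcomes == {True, False}  # need both kinds present

# 12 evaluated examples with a mix of passes and failures.
examples = [{"passed": i % 3 != 0} for i in range(12)]
print(ready_for_optimization(examples))  # True
```

A dataset of all-good or all-poor outputs gives the optimizer nothing to contrast, which is why the mix matters as much as the count.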