Datasets in Freeplay are an essential part of organizing data to test your LLM systems. They can also be used to curate data for human review or fine-tuning. Datasets are the foundation of Test Runs in Freeplay.
Freeplay supports two types of datasets:
  • Prompt Datasets (for component-level testing)
  • Agent Datasets (for end-to-end testing)
Each automatically enforces a schema that keeps it compatible with your prompts and agents, respectively.
A key benefit of using Freeplay to curate Datasets is that it’s seamless to save new examples that you observe in real-world testing or production to existing Datasets. This keeps the data fresh and representative of the actual use of your application. Datasets can be created to test LLM systems across a variety of scenarios, such as:
  • Golden Set: For detecting regressions vs. your ideal ground truth
  • Failure Cases: For tracking failures you observe and testing in the future to confirm they are fixed
  • Red Teaming: For managing adversarial test cases and confirming appropriate behavior by your system
  • Random Samples: For representative testing across a distributed set of values
Instructions on how to save observed data or upload data are below.

Understanding the output field

Every dataset entry has an output field. While not strictly required, we strongly recommend including an output for each example — it plays a central role in evaluations and test runs, and examples without an output have limited utility for testing. There are two primary ways to use the output field:
  • Golden output: The output represents the ideal, correct response for the given inputs. This is common in golden sets and broad-based datasets where you want to benchmark new prompt versions against a curated standard. When used in test runs or in the playground, these outputs can be viewed to see how the newly generated data compares to the ideal output.
  • Failure case: The output captures a real failure observed in production — such as a hallucination, incorrect answer, or off-tone response. This is useful for building targeted datasets that track known issues so you can confirm they are fixed in future prompt versions.
The output can come from any source — uploaded files, completions saved from observed logs, or manually written examples. When saving completions from production logs, you have the option to edit the output before saving, which allows you to curate it into a golden response or preserve it as a failure case depending on your testing goals.
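To make the two uses of the output field concrete, here is a minimal sketch of two dataset entries using the "inputs"/"output" shape from the JSONL upload format described later in this document. The specific questions and responses are invented for illustration.

```python
import json

# Golden output: the ideal, correct response for the given inputs.
golden = {
    "inputs": {"question": "What is our refund window?"},
    "output": "Refunds are accepted within 30 days of purchase.",
}

# Failure case: a bad response observed in production, preserved
# verbatim so future prompt versions can be checked against it.
failure = {
    "inputs": {"question": "What is our refund window?"},
    "output": "We never accept refunds.",  # hallucinated policy
}

for entry in (golden, failure):
    print(json.dumps(entry))
```

Note that both entries share the same inputs; only the intent of the output differs, which is why curating the output when saving from production logs matters.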

Curating Datasets

Datasets in Freeplay can be curated in one of two ways: by saving completions that are recorded to Freeplay straight from the Sessions view, or by uploading existing test cases to a Dataset.

Saving Data from Recorded Sessions

While working with recorded Sessions or Traces in Freeplay, if you encounter values that are relevant for future testing, you can save them directly. You will be given the option to curate the inputs and outputs before saving to the dataset, which is useful if you want the sample to represent a specific type of data, such as a golden or failure case. This can be done from the trace or completion view. To do this, simply:
  • Click + Dataset above the completion/trace view
  • Optionally, make adjustments to the inputs, history or outputs
  • Select the relevant dataset(s)
  • Optionally, click the + button to create a new dataset from this menu

Bulk Add

You can also select multiple completions or traces and add them to a dataset in a single action, even across pages.
  • Select the “Completions” or “Traces” view on Observability (instead of Sessions)
  • Click the selection buttons in the table for the rows you want

Adding Metadata to Dataset Entries

Metadata can be added to entries in your datasets, allowing you to store additional information with each entry. To add or edit metadata for a dataset entry:
  1. Navigate to a specific dataset entry
  2. Click the “Edit” option in the dropdown menu
  3. In edit mode, you’ll see a dedicated “Metadata” section at the top of the entry
  4. Add customizable key-value pairs such as:
    • Customer identifiers (e.g., “customerId”: “2382721”)
  5. Click “Add Metadata” to create additional fields as needed
  6. Click “Save” to store your changes
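The steps above are UI actions, but the resulting entry can be pictured as a record with a metadata map attached. The JSON shape below, including the "metadata" key name, is an assumption for illustration only; in the Freeplay UI you enter these as key-value pairs in the Metadata section.

```python
import json

# Hypothetical representation of a dataset entry with metadata.
# Key names under "metadata" are free-form, customizable pairs.
entry = {
    "inputs": {"question": "Where is my order?"},
    "output": "Your order shipped on Tuesday.",
    "metadata": {
        "customerId": "2382721",       # customer identifier example
        "source": "production-session",  # hypothetical extra field
    },
}

print(json.dumps(entry, indent=2))
```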

Uploading Datasets

Uploading Data Using JSONL

If you have existing data that you want to use for testing prompts in Freeplay, you can upload it directly as a JSONL file.
  • Navigate directly to the Dataset
  • Click the “Upload” button
  • Select a JSONL file that uses the following format. Be sure the filename ends with .jsonl
    • The "inputs" are your test cases, and are therefore required. At least one key name must match a variable value from your prompt template in Freeplay for it to be compatible for testing.
    • The "output" value is not strictly required but is strongly recommended. It represents the recorded or expected response for the given inputs — either a golden output (the ideal response) or a failure case captured from production. See Understanding the Output Field above.
    • Note that JSONL is NOT normal JSON. The syntax is the same, except each record must be flattened down to a single line. Normal JSON will not be accepted. (See https://jsonlines.org/)
{"inputs": {"tasks": "improve landing page"}, "output": "some good stuff"}
{"inputs": {"tasks": "do other stuff"}, "output": "some other stuff"}
{"inputs": {"tasks": "protect our website from bad actors"}, "output": "no more bad stuff"}

Uploading Data With CSV

Freeplay supports CSV uploads for datasets, so that you can easily upload your spreadsheets to use as datasets for testing and evaluation. This can be used to add data to a new or existing dataset.

Adding a Dataset With CSV

  1. Click the Upload Button: On the dataset page, select the Upload button.
  2. Download the CSV Template: In the bottom-left corner of the upload dialog, click Download CSV Template to get a CSV file with the correct column names for your dataset.
  3. Format Your Data: Replace the default CSV values with your dataset content, ensuring that each entry aligns with your selected prompt template. Follow these key formatting rules:
  • Use the inputs. prefix for prompt variables
    • Any variable referenced within a prompt must be prefixed with inputs. (e.g., inputs.name for a {{name}} variable).
    • This ensures that Freeplay correctly maps your dataset to your prompt template.
    • For more details on variable usage, see our Advanced Prompt Templating guide.
  • Add conversation history
    • Use history to provide previous interactions or context relevant to the prompt.
  • Specify the output (recommended)
    • Use output to define the output for each input — either a golden response or a captured failure case. See Understanding the Output Field above for details.
Before uploading your CSV file, ensure your dataset follows the required formatting rules to avoid import errors. If there are issues, such as invalid inputs, Freeplay will display a warning and block the upload.
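The column conventions above can be produced with any spreadsheet tool, or programmatically. Below is a minimal sketch using Python's csv module; the inputs.name column assumes a hypothetical {{name}} variable in your prompt template, and the history and output values are invented examples.

```python
import csv

# Columns follow the documented conventions: prompt variables get an
# "inputs." prefix, plus optional "history" and "output" columns.
fieldnames = ["inputs.name", "history", "output"]

rows = [
    {
        "inputs.name": "Ada",
        "history": "",
        "output": "Hello Ada, how can I help?",
    },
    {
        "inputs.name": "Grace",
        "history": "user: hi",
        "output": "Welcome back, Grace!",
    },
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```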

Dataset Compatibility

We’ve found that it’s important to allow for relatively flexible compatibility rules to accommodate complex prompting strategies. The following compatibility rules may be important to know:
  • Compatibility for testing is based on the input {{variable_names}} in your prompt templates. These must match the key names in your Datasets.
  • A Dataset is treated as compatible if one or more key names match for a given prompt template. This is important so that datasets can be treated as compatible even when some variable names are optional in practice. (See Advanced Prompt Templating Using Mustache)
  • Datasets can be used across multiple prompt templates in a Project, as long as at least one variable name is shared. For instance, if you have four prompt templates that all use the variable {{question}}, then any Dataset that contains values for {{question}} will be compatible.
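The "one or more matching keys" rule above can be sketched in a few lines. This is a simplified illustration of the compatibility logic, not Freeplay's actual implementation; the regex handles plain {{variable}} references only, not full mustache syntax.

```python
import re

def template_variables(template: str) -> set:
    """Extract {{variable}} names from a mustache-style prompt template."""
    return set(re.findall(r"\{\{\s*([\w.]+)\s*\}\}", template))

def is_compatible(template: str, dataset_keys: set) -> bool:
    """A dataset is compatible if at least one of its key names
    matches a variable in the prompt template."""
    return bool(template_variables(template) & dataset_keys)

prompt = "Answer the user's {{question}} using {{context}}."
print(is_compatible(prompt, {"question"}))           # True: one key matches
print(is_compatible(prompt, {"topic", "language"}))  # False: no overlap
```

Because only one shared key is required, a dataset with a "question" column stays compatible even when a template's other variables (like "context") are optional or unfilled.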

What’s Next