Quick Start
Get up and running with Freeplay in minutes. Choose the path that fits your workflow best.
Before you get started, make sure you have set up your Freeplay account, project, and models. Follow the guide here.
Choose Your Starting Point
Start from UI
Build and test prompts in Freeplay's visual editor with no integration required. Ideal for iterating on prompts before you write any code.
Lightweight observability
Get data flowing quickly with lightweight tracing, then evaluate and test prompts in the UI. Only a minimal logging integration is required.
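To make the shape of that integration concrete, here is a minimal sketch: time your existing model call, then record what you sent and what came back. The `record_completion` helper and the record fields are illustrative assumptions; the Freeplay SDK handles the actual transport and payload format.

```python
import time
import json

def call_llm(prompt: str) -> str:
    """Stand-in for your existing model call."""
    return "stubbed model response"

def record_completion(prompt: str, output: str, latency_ms: float) -> None:
    # In a real integration the SDK sends this for you; here we just
    # assemble the kind of record a lightweight trace carries.
    record = {"prompt": prompt, "output": output, "latency_ms": round(latency_ms, 1)}
    print(json.dumps(record))

prompt = "Summarize our refund policy in one sentence."
start = time.perf_counter()
output = call_llm(prompt)  # your existing call, unchanged
record_completion(prompt, output, (time.perf_counter() - start) * 1000)
```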
Manage prompts in code
Store prompts in your code and sync them with Freeplay to enable team collaboration and dataset curation from real sessions.
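A common shape for code-managed prompts is a small registry checked into source control, which your app reads at runtime and a sync step pushes to Freeplay. The `PROMPTS` layout below is an illustrative assumption, not Freeplay's required format:

```python
# Illustrative layout for prompts kept in source control. The dict shape
# and version field are assumptions; Freeplay's sync tooling defines its
# own format.
PROMPTS = {
    "support-answer": {
        "version": "1.2.0",
        "system": "You are a concise, friendly support agent.",
        "user": "Answer the customer question: {{question}}",
    },
}

def get_prompt(name: str) -> dict:
    """Look up a prompt template by name, as the app would at runtime."""
    return PROMPTS[name]

print(get_prompt("support-answer")["user"])
```

Keeping every template in one structure like this makes it straightforward for a sync script to diff local prompts against what Freeplay has.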
Full Integration
The complete path: prompt management, logging, testing, and evaluation. Unlock Freeplay's full platform capabilities.
Core Concepts
Master the foundational features that power your LLM workflow. These guides will help you build, test, and improve your AI applications systematically.
Prompt Management
Create, version, and deploy your prompts with Freeplay's template editor. Manage variables, test outputs, and iterate quickly across environments.
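Template variables are placeholders filled in per request. The sketch below renders double-brace placeholders with the standard library; the `{{name}}` syntax is a common templating convention and an assumption here, so check the template editor for the exact format Freeplay uses.

```python
import re

def render(template: str, variables: dict) -> str:
    """Fill {{name}} placeholders; raises KeyError if a variable is missing."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

template = "Write a {{tone}} reply to: {{message}}"
print(render(template, {"tone": "friendly", "message": "Where is my order?"}))
```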
Observability
Monitor LLM completions in real-time with searchable logs, filters, and graphs. Track costs, latency, and performance across sessions and traces.
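Cost and latency roll up from per-completion records. A minimal sketch of such a record, with placeholder per-million-token prices (substitute your provider's actual rates):

```python
from dataclasses import dataclass

# Placeholder prices per million tokens, for illustration only.
PRICE_PER_M = {"input": 3.00, "output": 15.00}

@dataclass
class CompletionRecord:
    session_id: str
    input_tokens: int
    output_tokens: int
    latency_ms: float

    @property
    def cost_usd(self) -> float:
        return (self.input_tokens * PRICE_PER_M["input"]
                + self.output_tokens * PRICE_PER_M["output"]) / 1_000_000

rec = CompletionRecord("sess-42", input_tokens=812, output_tokens=156, latency_ms=930.0)
print(f"{rec.session_id}: ${rec.cost_usd:.4f}, {rec.latency_ms:.0f} ms")
```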
Evaluations
Build custom model-graded, code-based, and human evals to measure quality. Align auto-evals with your team's standards for reliable testing.
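A code-based eval is just a deterministic function from a completion to a score. A minimal sketch that checks for valid JSON with required keys; the keys themselves are illustrative:

```python
import json

def eval_json_shape(output: str, required_keys=("answer", "sources")) -> float:
    """Code-based eval: 1.0 if the output is JSON containing every required key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return 0.0
    return 1.0 if all(k in data for k in required_keys) else 0.0

print(eval_json_shape('{"answer": "42", "sources": []}'))  # 1.0
print(eval_json_shape("not json at all"))                  # 0.0
```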
Review Queues
Organize human review workflows by assigning completions to team members. Generate insight reports and turn observations into improvements.
Datasets
Curate test datasets from production logs or CSV uploads. Build benchmark sets with ground truth labels for comprehensive testing.
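A CSV-backed dataset is rows of input variables plus a ground-truth column. The sketch below loads one into test cases; the `question` and `expected` column names are assumptions for illustration:

```python
import csv
import io

# Inline CSV so the sketch is self-contained; normally you'd open a file.
RAW = """question,expected
What is our refund window?,30 days
Do we ship internationally?,Yes
"""

cases = list(csv.DictReader(io.StringIO(RAW)))
for case in cases:
    print(case["question"], "->", case["expected"])
```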
Test Runs
Run automated batch tests to compare prompt versions head-to-head. Execute tests from the UI or SDK with full eval scoring.
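Conceptually, a test run is a loop: render each prompt version against every dataset row, call the model, score the output with your evals, and aggregate. A self-contained sketch with a stubbed model call and an exact-match eval (all names here are illustrative, not SDK calls):

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return "30 days" if "refund" in prompt else "Yes"

def exact_match(output: str, expected: str) -> float:
    return 1.0 if output.strip() == expected else 0.0

versions = {
    "v1": "Answer briefly: {question}",
    "v2": "You are a support bot. Answer in one phrase: {question}",
}
dataset = [
    {"question": "What is our refund window?", "expected": "30 days"},
    {"question": "Do we ship internationally?", "expected": "Yes"},
]

for name, template in versions.items():
    scores = [
        exact_match(call_llm(template.format(question=row["question"])),
                    row["expected"])
        for row in dataset
    ]
    print(f"{name}: {sum(scores) / len(scores):.0%} exact match")
```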
