Vercel AI SDK
Build a powerful, well-evaluated agent with Freeplay and the Vercel AI SDK
Build a State-of-the-Art Agent with Freeplay and the AI SDK
Vercel's AI SDK is a robust, end-to-end framework for powering your agent. Freeplay provides a simple, lightweight integration to instrument your agent, making it easy for you to:
- Log agent interactions
- Evaluate your agent's behavior and identify issues
- Iterate on your system prompt and monitor the impact
This guide will walk you through:
- Initializing Freeplay in your code - Installing dependencies and configuring your environment
- Managing your prompts in Freeplay - Migrating prompt and model configurations to Freeplay and accessing them in your app
- Integrating observability - Adding Freeplay logging to your SDK code for comprehensive data capture and agent tracing
Prerequisites
Before you begin, make sure you have:
- A Freeplay account with an active project
- Vercel AI SDK v5 or later
Installation
Install the Freeplay Vercel SDK along with the required dependencies:
```shell
npm install @freeplayai/vercel @vercel/otel @arizeai/openinference-vercel @opentelemetry/api @opentelemetry/sdk-trace-base
```

Note: If you're using the Next.js implementation, you can omit `@opentelemetry/sdk-trace-base`.
Configuration
Set up your credentials
Configure the following environment variables:
```
# Freeplay credentials
FREEPLAY_API_KEY=your_freeplay_api_key_here
FREEPLAY_PROJECT_ID=your_freeplay_project_id_here

# Optional: Override the default Freeplay OTEL endpoint - only relevant if you're on a custom deployment
# FREEPLAY_OTEL_ENDPOINT=https://api.freeplay.ai/api/v0/otel/v1/traces

# Allowed provider API keys, set at least one:
# AI Gateway - model must still be OpenAI, Anthropic, Google, or Vertex
AI_GATEWAY_API_KEY=
# OpenAI
OPENAI_API_KEY=
# Anthropic
ANTHROPIC_API_KEY=
# Google
GOOGLE_GENERATIVE_AI_API_KEY=
# Google Vertex
GOOGLE_VERTEX_LOCATION=
GOOGLE_VERTEX_PROJECT=
GOOGLE_CLIENT_EMAIL=
GOOGLE_PRIVATE_KEY=
GOOGLE_PRIVATE_KEY_ID=
```

You can find your API key and Project ID in your Freeplay project settings.
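As a sanity check, you can validate the required Freeplay variables at startup. The sketch below is not part of the Freeplay SDK; it simply fails fast when a required key is missing, using the variable names from this guide:

```typescript
// Sketch only - not part of the Freeplay SDK. Fails fast at startup
// if a required environment variable is missing.
function loadFreeplayConfig() {
  const requireEnv = (name: string): string => {
    const value = process.env[name];
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
    return value;
  };
  return {
    apiKey: requireEnv("FREEPLAY_API_KEY"),
    projectId: requireEnv("FREEPLAY_PROJECT_ID"),
    // Optional override, only relevant on custom deployments
    otelEndpoint:
      process.env.FREEPLAY_OTEL_ENDPOINT ??
      "https://api.freeplay.ai/api/v0/otel/v1/traces",
  };
}
```

Calling this once when your app boots surfaces configuration problems immediately instead of at the first traced request.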
Initialize the FreeplaySpanProcessor
Next.js (`instrumentation.ts`)
If you're using Next's included telemetry harness, you can simply add the Freeplay processor to your existing setup in `instrumentation.ts`:
```typescript
import { registerOTel } from "@vercel/otel";
import { createFreeplaySpanProcessor } from "@freeplayai/vercel";

export function register() {
  registerOTel({
    serviceName: "otel-nextjs-example",
    // Keep any span processors you already use alongside Freeplay's
    spanProcessors: [createFreeplaySpanProcessor(), ...otherProcessors],
  });
}
```

Node (manual telemetry setup)
If you're not using Vercel's telemetry, you can add Freeplay's span processor to another library when you initialize your app. For example:
```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { createFreeplaySpanProcessor } from "@freeplayai/vercel";

// Initialize OpenTelemetry with Freeplay
const sdk = new NodeSDK({
  spanProcessors: [createFreeplaySpanProcessor()],
});
sdk.start();
```

Using Freeplay-Hosted Prompts (Recommended)
One of Freeplay's most powerful features is centralized prompt management. Instead of hardcoding prompts in your application, fetch them from Freeplay, with version control and environment management built in.
```typescript
import { streamText } from "ai";
import {
  getPrompt,
  FreeplayModel,
  createFreeplayTelemetry,
} from "@freeplayai/vercel";

export async function POST(req: Request) {
  const { messages, chatId } = await req.json();

  const inputVariables = {
    customer_issue: "I can't log into my account",
  };

  // Get prompt from Freeplay
  const prompt = await getPrompt({
    templateName: "customer-support-agent", // Replace with your prompt name
    variables: inputVariables,
    messages,
  });

  // Automatically select the correct model provider based on the prompt
  const model = await FreeplayModel(prompt);

  const result = streamText({
    model,
    messages,
    system: prompt.systemContent,
    experimental_telemetry: createFreeplayTelemetry(prompt, {
      functionId: "my-streamText-agent",
      sessionId: chatId,
      inputVariables,
    }),
  });

  return result.toDataStreamResponse();
}
```
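To make the role of the `variables` map concrete: Freeplay resolves these values into the prompt template server-side when you call `getPrompt`. The toy function below only illustrates that idea; the `{{name}}` placeholder syntax is an assumption for illustration, not Freeplay's documented template format:

```typescript
// Illustrative only: Freeplay performs variable substitution server-side.
// This toy function shows the concept of filling a template from a
// variables map. The {{name}} placeholder syntax is an assumption.
function fillTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key: string) => variables[key] ?? "");
}
```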
Using Raw OTEL
With this method, you can use the AI SDK as you normally would; just add the following snippet to the `streamText` (or similar) call you're using for chat.
NOTE: This is not recommended except for initial testing, as many Freeplay features will not be available if you do not implement Freeplay prompt management.
```typescript
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-streamText-agent",
    metadata: {
      sessionId: chatId,
    },
  },
```
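If you use this raw approach in several routes, a small helper keeps the `functionId` and `sessionId` wiring consistent. This helper is hypothetical (not part of the AI SDK or Freeplay); it just builds the options object in the shape shown above:

```typescript
// Hypothetical helper - not part of the AI SDK or Freeplay. Builds the raw
// experimental_telemetry options object in the shape shown above, so the
// same functionId/sessionId wiring can be reused across routes.
function buildRawTelemetry(functionId: string, sessionId: string) {
  return {
    isEnabled: true,
    functionId,
    metadata: { sessionId },
  };
}
```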
Automatic Observability
Once initialized, the Freeplay SDK automatically instruments your Vercel AI SDK application with OpenTelemetry. This means every chat turn is traced and sent to Freeplay without any additional code.
What Gets Tracked
Freeplay automatically captures:
- LLM Interactions: Prompt version, inputs, provider, model name, tokens used, latency
- Tool executions: Which tools were called and their results
- Conversation flows: Multi-turn interactions and state transitions
You can view all of this data in the Freeplay dashboard, making it easy to debug issues, optimize performance, and understand how your application behaves in production.
Next Steps
Now that you've integrated Freeplay with your AI SDK application, you can:
- Create and manage prompts in the Freeplay dashboard with version control
- Set up environments to test changes in staging before deploying to production
- Build evaluation datasets to systematically test your application's performance
- Analyze traces to identify bottlenecks and optimize your agent workflows
- Collaborate with your team on prompt engineering and application improvements
Visit the Freeplay documentation to learn more about advanced features like prompt experiments, A/B testing, and custom evaluation metrics.