SDK Reference

Installation

We offer the SDK in several popular programming languages.

pip install freeplay
npm install freeplay
<!-- Add the Freeplay SDK to your pom.xml -->
<dependency>
    <groupId>ai.freeplay</groupId>
    <artifactId>client</artifactId>
    <version>x.x.xx</version>
</dependency>
<!-- If you are using Vertex/Gemini models, add the following dependency as well; otherwise it is not needed. -->
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-vertexai</artifactId>
    <!-- This is the current version Freeplay supports at the time of this writing. -->
    <!-- It may need to be updated to a newer version. -->
    <version>1.5.0</version>
</dependency>
// Add the Freeplay SDK to your build.gradle.kts
dependencies { 
   implementation("ai.freeplay:client:x.x.xx")
}

Freeplay Client

The Freeplay client object will be your point of entry for all SDK usage.

Custom Domain

Your organization will be assigned a custom Freeplay domain, which is used when instantiating the client. Freeplay domains look like this: https://acme.freeplay.ai, and your SDK connection URL will have an additional /api appended to it.
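For example, with the domain above, the connection URL you pass to the SDK would be https://acme.freeplay.ai/api. A minimal Python sketch of deriving it (the FREEPLAY_CUSTOMER_NAME environment variable here is an illustrative assumption, mirroring the Java and Kotlin examples later on this page):

import os

# assumes FREEPLAY_CUSTOMER_NAME holds your subdomain, e.g. "acme"
customer_name = os.getenv("FREEPLAY_CUSTOMER_NAME")
api_base = f"https://{customer_name}.freeplay.ai/api"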

Authentication

Freeplay authenticates your API requests using your API key, which can be managed through the Freeplay application at https://acme.freeplay.ai/settings/api-access.

Client Instantiation

The first step to using Freeplay is to create a client.

from freeplay import Freeplay
import os

# create your freeplay client object
fpClient = Freeplay(
    freeplay_api_key=os.getenv("FREEPLAY_API_KEY"),
    api_base="https://acme.freeplay.ai/api"
)
import Freeplay from "freeplay/thin";

// create your freeplay client
const fpClient = new Freeplay({
    freeplayApiKey: process.env["FREEPLAY_KEY"],
    baseUrl: "https://acme.freeplay.ai/api",
});
import ai.freeplay.client.thin.Freeplay;

import static ai.freeplay.client.thin.Freeplay.Config;

String freeplayApiKey = System.getenv("FREEPLAY_API_KEY");
String projectId = System.getenv("FREEPLAY_PROJECT_ID");
String customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME");

// create the freeplay client
Freeplay fpClient = new Freeplay(Config()
                                 .freeplayAPIKey(freeplayApiKey)
                                 .customerDomain(customerDomain)
                                );
// create the freeplay client
val freeplayApiKey = System.getenv("FREEPLAY_API_KEY")
val customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME")

val fpClient = Freeplay(
  Freeplay.Config()
    .freeplayAPIKey(freeplayApiKey)
    .customerDomain(customerDomain)
)

Organizing Principles

The Freeplay SDKs are organized into a set of core nouns, which form namespaces on the core client object.

Sessions

A Session is a collection of one or more completions, enabling you to tie completions together in whatever logical grouping makes sense for your application. All methods are accessible from the client.sessions namespace.

Methods Overview

Method Name | Parameters | Description
create | custom_metadata: dict (optional), metadata to associate with the session | Generate a SessionInfo object that will be used to log completions
delete | project_id: str; session_id: UUID | Delete a Session

Create a Session

# create a session
session = fpClient.sessions.create()
// create a session
const session = fpClient.sessions.create({});
import ai.freeplay.client.thin.Freeplay;
import ai.freeplay.client.thin.resources.sessions.Session;

// create a session
Session session = fpClient.sessions().create();
val session = fpClient.sessions().create()

Custom Metadata

Freeplay enables you to log Custom Metadata associated with any Session. This is fully customizable and can take arbitrary key-value pairs.

Use Custom Metadata to record things like customer ID, thread ID, RAG version, Git commit SHA, or any other information you want to track.

# create a session with custom metadata
session = fpClient.sessions.create(
  custom_metadata={
    "keyA": "valueA",
    "keyB": false
  }
)
// create a session with custom metadata
const session = fpClient.sessions.create({
  customMetadata: {
    keyA: "valueA",
    keyB: "valueB"
  }
});
import ai.freeplay.client.thin.resources.sessions.Session;

import java.util.Map;

// create a session with Custom Metadata
Session session = fpClient.sessions().create().customMetadata(Map.of("keyA", "valueA"));
val session = fpClient.sessions().create()
  .customMetadata(mapOf("keyA" to "valueA"))

Delete a Session

project_id = 'bf56b063-80dc-4ad5-91f6-f7067ad1fa06'
session_id = session.session_id

fpClient.sessions.delete(project_id, session_id)
const projectId = 'bf56b063-80dc-4ad5-91f6-f7067ad1fa06'
const sessionId = session.sessionId;

await fpClient.sessions.delete(projectId, sessionId);
String projectId = "bf56b063-80dc-4ad5-91f6-f7067ad1fa06";
SessionInfo sessionInfo = fpClient.sessions().create().getSessionInfo();

fpClient.sessions().delete(projectId, sessionInfo.getSessionId());

Traces

Traces are a more fine-grained way to group LLM interactions within a Session. A Trace can contain one or more completions and a Session can contain one or more Traces. Find a more detailed guide on how Sessions, Traces, and Completions fit together here.

Traces are created from an existing session object:

input_question = "What color is the sky?"

# create or restore a session
session = fpclient.sessions.create()

# create the trace
trace_info = session.create_trace(input=question)

# run series of LLM interactions here

# record the trace
output_answer = "blue" # from the LLM
trace_info.record_output(project_id=project_id, output=output_answer)
const inputQuestion = "Why is the sky blue?"
// create or restore the session
const session = await fpClient.sessions.create();

// create the trace
const traceInfo = await session.createTrace(inputQuestion);

// run series of LLM completions here

// from last llm call
const outputAnswer = "blue"

// record the trace
await traceInfo.recordOutput(projectId, outputAnswer);
String inputQuestion = "Why is the sky blue?";
  
// create or restore a session
Session session = fpClient.sessions().create();

// create the trace
TraceInfo traceInfo = session.createTrace(inputQuestion);

// run series of LLM completions here


String outputAnswer = "Blue"; // final LLM answer

// record the trace
traceInfo.recordOutput(projectId, outputAnswer);

To tie a Completion to a given Trace, pass the trace info in the record call:

from freeplay import Freeplay, RecordPayload, ResponseInfo, CallInfo, SessionInfo, TraceInfo

# create or restore a session
session = fpClient.sessions.create()

# create the trace
trace_info = session.create_trace(input=input_question)

# fetch prompt
# call LLM

# record with the trace info
record_response = fpClient.recordings.create(
  RecordPayload(
    all_messages=all_messages,
    session_info=session,
    inputs=input_variables,
    prompt_info=formatted_prompt.prompt_info,
    call_info=call_info,
    response_info=response_info,
    trace_info=trace_info,
  )
)
import Freeplay, {getCallInfo, getSessionInfo} from "freeplay/thin";

// create or restore the session
const session = await fpClient.sessions.create();

// create the trace
const traceInfo = await session.createTrace(inputQuestion);

// fetch prompt
// call LLM

const completionResponse = await fpClient.recordings.create(
  {
    allMessages: messages,
    inputs: inputVariables,
    sessionInfo: getSessionInfo(session),
    promptInfo: formattedPrompt.promptInfo,
    callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
    responseInfo: {
      isComplete: "stop_sequence" === llmResponse.stop_reason
    },
    traceInfo: traceInfo
  }
)
// create or restore a session
Session session = fpClient.sessions().create();

// create the trace
TraceInfo traceInfo = session.createTrace(inputQuestion);

// fetch prompt
// run llm completion

// record with the trace id
RecordInfo recordInfo = new RecordInfo(
  allMessages,
  variables,
  session.getSessionInfo(),
  formattedPrompt.getPromptInfo(),
  callInfo,
  responseInfo
 );
// add the trace info
recordInfo.traceInfo(traceInfo);

fpClient.recordings().create(recordInfo);

See a full code example here.
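For reference, the whole Trace lifecycle condenses to a few calls. A minimal Python sketch, with the prompt fetch and LLM calls elided as in the snippets above:

# create a session and a trace within it
session = fpClient.sessions.create()
trace_info = session.create_trace(input=input_question)

# fetch prompts, call your LLM, and record each completion,
# passing trace_info in each RecordPayload as shown above

# once the final answer is produced, close out the trace
trace_info.record_output(project_id=project_id, output=output_answer)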

Prompts

Retrieve your prompt templates from the Freeplay server. All methods associated with your Freeplay prompt templates are accessible from the client.prompts namespace.

Methods Overview

Some SDKs use camel case rather than snake case, depending on the convention of the given language.

Method Name | Parameters | Description
get_formatted | project_id: string; template_name: string; environment: string | Get a formatted prompt template object with variables inserted, messages formatted for the configured LLM provider (e.g. OpenAI, Anthropic), and model parameters for the LLM call.
get | project_id: string; template_name: string; environment: string | Get a prompt template by environment. Note: the prompt template will not have variables substituted or be formatted for the configured LLM provider.

Get a Formatted Prompt

Get a formatted prompt object with variables inserted, messages formatted for the associated LLM provider, and model parameters to use for the LLM call. This is the most convenient method for most prompt fetch use cases given that formatting is handled for you server side in Freeplay.


import time

# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="template_name",
                                                  environment="latest",
                                                  variables={"keyA": "valueA"})


# Sample use in an LLM call
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages(
    {'role': chat_response.choices[0].message.role, 
     'content': chat_response.choices[0].message.content}
)

// set the prompt variables
let promptVars = {"keyA": "valueA"};

// fetch a formatted prompt template
let formattedPrompt = await fpClient.prompts.getFormatted({
    projectId: projectID,
    templateName: "template_name",
    environment: "latest",
    variables: promptVars,
});
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;

// create the prompt variables
Map<String, Object> promptVars = Map.of("keyA", "valueA");
// get a formatted prompt
CompletableFuture<FormattedPrompt<Object>> formattedPrompt = fpClient.prompts()
  .getFormatted(projectId, "template_name", "latest", promptVars, null);
// set the prompt variables
val promptVars = mapOf("keyA" to "valA")

/* PROMPT FETCH */
val formattedPrompt = fpClient.prompts().getFormatted<String>(
  projectId,
  "template-name",
  "latest",
  promptVars
).await()

Get a Prompt Template

Get a prompt template object that does not yet have variables inserted and has messages formatted in a consistent, provider-agnostic structure. This is particularly useful when you want to reuse a prompt template with different variables in the same execution path, like Test Runs.

This method gives you more control to handle formatting in your own code rather than server side in Freeplay, but requires a few more lines of code.

# get an unformatted prompt template
template_prompt = fpClient.prompts.get(project_id=project_id,
                                       template_name="template_name",
                                       environment="latest"
                                       )
# to format the prompt
formatted_prompt = template_prompt.bind({"keyA": "valueA"}).format() 
// get a prompt template
let promptTemplate = await fpClient.prompts.get({
    projectId: projectID,
    templateName: "album_bot",
    environment: "latest",
});

// set the prompt variables
let promptVars = {"keyA": "valueA"};
// format the prompt template
let formattedPrompt = promptTemplate.bind(promptVars).format();
import ai.freeplay.client.thin.resources.prompts.TemplatePrompt;
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;

// create the prompt variables
Map<String, Object> promptVars = Map.of("keyA", "valueA");

// get prompt template
CompletableFuture<TemplatePrompt> promptTemplate = fpClient.prompts().get(
  projectId, "template_name", "environment");

// bind and format client side
FormattedPrompt<Object> formattedPrompt = promptTemplate.join().bind(promptVars).format(
  promptTemplate.get().getPromptInfo().getFlavorName()
);
// set the prompt variables
val promptVars = mapOf("keyA" to "valA")

// fetch prompt template
val promptTemplate = fpClient.prompts().get(
  projectId,
  "template-name",
  "latest"
).await()

// bind and format client side
val formattedPrompt = promptTemplate.bind(promptVars).format<String>()

Using History with Prompt Templates

history is a special object in Freeplay prompt templates for managing state over multiple LLM interactions by passing in previous messages. It accepts an array of prior messages when relevant.

Before using history in the SDK, you must configure it on your prompt template. See more details in the Multi-Turn Chat Support section.

Once you have history configured for a prompt template, you can pass it during the formatting process. The history messages will be inserted wherever you have your history placeholder in your prompt template.

previous_messages = [{"role": "user": "what are some dinner ideas...",
                      "role": "assistant": "here are some dinner ideas..."}]
prompt_vars = {"question": "how do I make them healthier?"}

formatted_prompt = fpClient.prompts.get_formatted(
    project_id=project_id,
    template_name="SamplePrompt",
    environment="latest",
    variables=prompt_vars,
    history=previous_messages # pass the history messages here
)

print(formatted_prompt.messages)
# output:
[
{'role': 'system', 'content': 'You are a polite assistant...'},
{'role': 'user', 'content': 'what are some dinner ideas...'},
{'role': 'assistant', 'content': 'here are some dinner ideas...'}, 
{'role': 'user', 'content': 'how do I make them healthier?'}
]

See a full implementation of using history in the context of a multi-turn chatbot application here.
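As a sketch of how history accumulates across turns: each turn appends the latest user and assistant messages before the next formatted fetch. A minimal Python loop, assuming the same OpenAI-style client as the examples above and a prompt template with a single question variable:

history = []

def ask(question):
    formatted_prompt = fpClient.prompts.get_formatted(
        project_id=project_id,
        template_name="SamplePrompt",
        environment="latest",
        variables={"question": question},
        history=history  # prior turns land at the history placeholder
    )
    chat_response = openaiClient.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
    answer = chat_response.choices[0].message.content
    # carry this turn forward into the next one
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return answer

ask("what are some dinner ideas...")
ask("how do I make them healthier?")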

Using Tool Schemas with Prompt Templates

You can define tool schemas alongside your prompt templates. The Freeplay SDK will format the tool schema based on the configured LLM provider, so you can pass the tool schema to the LLM provider as is.


import time

# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="template_name",
                                                  environment="latest",
                                                  variables={"keyA": "valueA"})


# Sample use in an LLM call
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    # Pass the tool schema to the LLM call
    tools=formatted_prompt.tool_schema,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

// set the prompt variables
let promptVars = {"keyA": "valueA"};

// fetch a formatted prompt template
let formattedPrompt = await fpClient.prompts.getFormatted({
    projectId: projectID,
    templateName: "template_name",
    environment: "latest",
    variables: promptVars,
});

// Sample use in an LLM call
const chatCompletion = await openai.chat.completions.create({
    messages: formattedPrompt.llmPrompt,
    model: formattedPrompt.promptInfo.model,
    tools: formattedPrompt.toolSchema,
    ...formattedPrompt.promptInfo.modelParameters
});

val variables = mapOf("keyA" to "valA")
fpClient.prompts()
    .getFormatted<List<ChatMessage>>(
        projectId,
        "my-prompt",
        "latest",
        variables,
    ).thenCompose { formattedPrompt ->
        val startTime = System.currentTimeMillis()
        callAnthropic(
            objectMapper,
            anthropicApiKey,
            formattedPrompt.promptInfo.model,
            formattedPrompt.promptInfo.modelParameters,
            formattedPrompt.formattedPrompt,
            formattedPrompt.systemContent.orElse(null),
            formattedPrompt.toolSchema
        ).thenApply { Triple(formattedPrompt, it, startTime) }
    }
    .join()

Recordings

Record LLM interactions to the Freeplay server for observability and evaluation. All methods associated with recordings are accessible via the client.recordings namespace.

Methods Overview

Method Name | Parameters | Description
create | RecordPayload | Log your LLM interaction

Record an LLM Interaction

Log your LLM interaction with Freeplay. This assumes you have already retrieved a formatted prompt and made an LLM call, as demonstrated in the Prompts section.

import time

from freeplay import Freeplay, RecordPayload, CallInfo, ResponseInfo, SessionInfo

## PROMPT FETCH ##
# set the prompt variables
prompt_vars = {"keyA": "valueA"}
# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="template_name",
                                                  environment="latest",
                                                  variables=prompt_vars)

## LLM CALL ##
# Make an LLM call to your provider of choice
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages(
    {'role': chat_response.choices[0].message.role, 
     'content': chat_response.choices[0].message.content}
)

## RECORD ##
# create a session
session = fpClient.sessions.create()

# build the record payload
payload = RecordPayload(
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session, 
    prompt_info=formatted_prompt.prompt_info,
    call_info=CallInfo.from_prompt_info(formatted_prompt.prompt_info, start_time=start, end_time=end), 
    response_info=ResponseInfo(
        is_complete=chat_response.choices[0].finish_reason == 'stop'
    )
)
# record the LLM interaction
fpClient.recordings.create(payload)
import Freeplay, { getCallInfo, getSessionInfo } from "freeplay/thin";
import OpenAI from "openai";

/* PROMPT FETCH */
// set the prompt variables
let promptVars = {"pop_star": "Taylor Swift"};
// fetch a formatted prompt template
let formattedPrompt = await fpClient.prompts.getFormatted({
    projectId: projectID,
    templateName: "template_name",
    environment: "latest",
    variables: promptVars,
});

/* LLM CALL */
// make the llm call
const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

let start = new Date();
const chatCompletion = await openai.chat.completions.create({
    messages: formattedPrompt.llmPrompt,
    model: formattedPrompt.promptInfo.model,
    ...formattedPrompt.promptInfo.modelParameters
});
let end = new Date();
console.log(chatCompletion.choices[0].message);

// update the messages
let messages = formattedPrompt.allMessages({
    role: chatCompletion.choices[0].message.role,
    content: chatCompletion.choices[0].message.content,
});

/* RECORD */
// create a session
let session = fpClient.sessions.create({});

// record the LLM interaction with Freeplay
await fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: getSessionInfo(session),
    promptInfo: formattedPrompt.promptInfo,
    callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "stop"
    }
});
package ai.freeplay.example.java;

import ai.freeplay.client.thin.Freeplay;
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;
import ai.freeplay.client.thin.resources.recordings.*;
import ai.freeplay.client.thin.resources.sessions.SessionInfo;
import ai.freeplay.client.thin.resources.prompts.ChatMessage;
import ai.freeplay.example.java.ThinExampleUtils.Tuple2;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;
import java.util.List;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;


import static java.lang.String.format;
import static ai.freeplay.client.thin.Freeplay.Config;
import static ai.freeplay.example.java.ThinExampleUtils.callOpenAI;


public class FreeplayRecord {
    private static final ObjectMapper objectMapper = new ObjectMapper();

    public static void main(String[] args) {

        // set your environment variables
        String openaiApiKey = System.getenv("OPENAI_API_KEY");
        String freeplayApiKey = System.getenv("FREEPLAY_API_KEY");
        String projectId = System.getenv("FREEPLAY_PROJECT_ID");
        String customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME");
        // create your url from your customer domain
        String baseUrl = format("https://%s.freeplay.ai/api", customerDomain);


        // initialize the freeplay client
        Freeplay fpClient = new Freeplay(Config()
                .freeplayAPIKey(freeplayApiKey)
                .customerDomain(customerDomain)
        );

        // create the prompt variables
        Map<String, Object> promptVars = Map.of("pop_star", "J Cole");
        /* FETCH PROMPT */
        var promptFuture = fpClient.prompts().<String>getFormatted(
                projectId,
                "album_bot",
                "latest",
                promptVars
        );

        /* LLM CALL */
        // set timer to be used to measure latency
        long startTime = System.currentTimeMillis();
        var llmFuture = promptFuture.thenCompose((FormattedPrompt<String> formattedPrompt) ->
                callOpenAI(
                        objectMapper,
                        openaiApiKey,
                        formattedPrompt.getPromptInfo().getModel(), // get the model name
                        formattedPrompt.getPromptInfo().getModelParameters(), // get the model parameters
                        formattedPrompt.getBoundMessages() // get the messages, formatted for your specified provider
                ).thenApply((HttpResponse<String> response) ->
                        new Tuple2<>(formattedPrompt, response)
                        )
                );

        /* RECORD TO FREEPLAY */
        var recordFuture = llmFuture.thenCompose((Tuple2<FormattedPrompt<String>, HttpResponse<String>> promptAndResponse) ->
                recordResult(
                        fpClient,
                        promptAndResponse.first, promptVars, startTime, promptAndResponse.second
                )
        );

        recordFuture.thenApply(recordResponse -> {
            System.out.println("Recorded call succeeded with completionId: " + recordResponse.getCompletionId());
            return null;
        }).exceptionally(exception -> {
            System.out.println("Got exception: " + exception.getMessage());
            return new RecordResponse(null);
        }).join();
    }

    public static CompletableFuture<RecordResponse> recordResult(
            Freeplay fpClient,
            FormattedPrompt<String> formattedPrompt,
            Map<String, Object> promptVars,
            long startTime,
            HttpResponse<String> response
    ) {
        JsonNode bodyNode;
        try{
            bodyNode = objectMapper.readTree(response.body());
        } catch (JsonProcessingException e){
            throw new RuntimeException("Unable to parse response body", e);
        }

        // add the returned message to the list of messages
        String role = bodyNode.path("choices").path(0).path("message").path("role").asText();
        String content = bodyNode.path("choices").path(0).path("message").path("content").asText();
        List<ChatMessage> allMessages = formattedPrompt.allMessages(
                new ChatMessage(role, content)
        );

        // construct the call info from the formatted prompt
        CallInfo callInfo = CallInfo.from(
                formattedPrompt.getPromptInfo(),
                startTime,
                System.currentTimeMillis()
        );
        // construct the response info
        ResponseInfo responseInfo = new ResponseInfo(
                "stop".equals(bodyNode.path("choices").path(0).path("finish_reason").asText())
        );

        // construct the session info from the session object
        SessionInfo sessionInfo = fpClient.sessions().create().getSessionInfo();

        System.out.println("Completion: " + content);

        // make the record call to freeplay
        return fpClient.recordings().create(
                new RecordInfo(
                        allMessages,
                        promptVars,
                        sessionInfo,
                        formattedPrompt.getPromptInfo(),
                        callInfo,
                        responseInfo
                )
        );
    }
}
package ai.freeplay.example.kotlin

import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo
import ai.freeplay.example.java.ThinExampleUtils.callOpenAI
import com.fasterxml.jackson.databind.ObjectMapper
import kotlinx.coroutines.future.await
import kotlinx.coroutines.runBlocking


private val objectMapper = ObjectMapper()

fun main(): Unit = runBlocking {
    // set your environment variables
    val openaiApiKey = System.getenv("OPENAI_API_KEY")
    val freeplayApiKey = System.getenv("FREEPLAY_API_KEY")
    val projectId = System.getenv("FREEPLAY_PROJECT_ID")
    val customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME")

    // create your url from your customer domain
    val baseUrl = String.format("https://%s.freeplay.ai/api", customerDomain)

    // create the freeplay client
    val fpClient = Freeplay(
            Freeplay.Config()
                    .freeplayAPIKey(freeplayApiKey)
                    .customerDomain(customerDomain)
    )

    // set the prompt variables
    val promptVars = mapOf("keyA" to "valueA")

    /* PROMPT FETCH */
    val formattedPrompt = fpClient.prompts().getFormatted<String>(
            projectId,
            "template-name",
            "latest",
            promptVars
    ).await()

    /* LLM CALL */
    // set timer to measure latency
    val startTime = System.currentTimeMillis()
    val llmResponse = callOpenAI(
            objectMapper,
            openaiApiKey,
            formattedPrompt.promptInfo.model, // get the model name
            formattedPrompt.promptInfo.modelParameters, // get the model params
            formattedPrompt.getBoundMessages()
    ).await()
    val endTime = System.currentTimeMillis()

    // add the LLM response to the message set
    val bodyNode = objectMapper.readTree(llmResponse.body())
    val role = bodyNode.path("choices").path(0).path("message").path("role").asText()
    val content = bodyNode.path("choices").path(0).path("message").path("content").asText()
    println("Completion: " + content)

    val allMessage: List<ChatMessage> = formattedPrompt.allMessages(
            ChatMessage(role, content)
    )

    /* RECORD */
    // construct the call info from the prompt object
    val callInfo = CallInfo.from(
            formattedPrompt.getPromptInfo(),
            startTime,
            endTime
    )

    // create the response info
    val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())

    // create the session and get the session info
    val sessionInfo = fpClient.sessions().create().sessionInfo

    // build the final record payload
    val recordResponse = fpClient.recordings().create(
            RecordInfo(
                    allMessage,
                    promptVars,
                    sessionInfo,
                    formattedPrompt.getPromptInfo(),
                    callInfo,
                    responseInfo
            )
    ).await()
    println("Completion Record Succeeded with ${recordResponse.completionId}")

}

Recording Tools

You can record tool calls and their associated schemas for both OpenAI and Anthropic. These recorded completions and tool schemas can be viewed in the observability tab.

import time

from freeplay import Freeplay, RecordPayload, ResponseInfo, CallInfo
from openai import OpenAI

## FETCH PROMPT ##
# get your formatted prompt from freeplay including any associated tool schemas
question = "What is the latest AI news?"
prompt_vars = {"question": question}
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="NewsSummarizerFuncEnabled",
                                                  variables=prompt_vars,
                                                  environment="latest")

## LLM CALL ##
# make your llm call with your tool schemas passed in
start = time.time()
openaiClient = OpenAI(api_key=openai_key)
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters,
    tools=formatted_prompt.tool_schema
)
end = time.time()

## RECORD ##
# create a session
session = fpClient.sessions.create()

# Append the response to the messages
messages = formatted_prompt.all_messages(chat_response.choices[0].message)


fpClient.recordings.create(
    RecordPayload(
        all_messages=messages,
        session_info=session.session_info,
        inputs=prompt_vars,
        prompt_info=formatted_prompt.prompt_info,
        call_info=CallInfo.from_prompt_info(formatted_prompt.prompt_info, start, end),
        tool_schema=formatted_prompt.tool_schema, #record the tool schema as well
        response_info=ResponseInfo(
            is_complete=chat_response.choices[0].finish_reason == 'stop'
        ),
    )
)
import Freeplay, { getCallInfo, getSessionInfo } from "freeplay/thin";

/* FETCH PROMPT */
// set the prompt variables
let promptVars = {"question": "What is the latest AI news?"};
// fetch a formatted prompt template
let formattedPrompt = await fpClient.prompts.getFormatted({
    projectId: projectID,
    templateName: "NewsSummarizerFuncEnabled",
    environment: "latest",
    variables: promptVars,
});

/* LLM CALL */
const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

let start = new Date();
const chatCompletion = await openai.chat.completions.create({
    messages: formattedPrompt.llmPrompt,
    model: formattedPrompt.promptInfo.model,
    tools: formattedPrompt.toolSchema,
    ...formattedPrompt.promptInfo.modelParameters
});
let end = new Date();
console.log(chatCompletion.choices[0]);

/* RECORD */
// create the session
let session = fpClient.sessions.create({});

// Append the response to the messages
let messages = formattedPrompt.allMessages(chatCompletion.choices[0].message);

fpClient.recordings.create({
    // Last message captures the tool call if it exists
    allMessages: messages,
    sessionInfo: session,
    inputs: promptVars,
    promptInfo: formattedPrompt.promptInfo,
    callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
    // Records the associated tool schema
    toolSchema: formattedPrompt.toolSchema,
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "function_call"
    }
});

import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo

object OpenAIToolsExample {
    @JvmStatic
    fun main(args: Array<String>) {
        /* FETCH PROMPT */
        // set the prompt variables
        val variables = mapOf("question" to "What is the latest AI news?")
        fpClient.prompts()
            .getFormatted<List<ChatMessage>>(
                projectId,
                "NewsSummarizerFuncEnabled",
                "latest",
                variables,
                null
            ).thenCompose { formattedPrompt ->
                /* LLM CALL */
                val startTime = System.currentTimeMillis()
                callOpenAIWithTools(
                    openaiApiKey,
                    formattedPrompt.promptInfo.model,
                    formattedPrompt.promptInfo.modelParameters,
                    formattedPrompt.formattedPrompt,
                    formattedPrompt.toolSchema
                ).thenApply { response ->
                    Triple(formattedPrompt, response, startTime)
                }
            }.thenCompose { (formattedPrompt, bodyNode, startTime) ->
                // messageNode is a nested object, i.e. Map<String, Object>
                val messageNode = bodyNode["choices"][0]["message"]

                // Append the response to the messages
                val allMessages = formattedPrompt.allMessages(messageNode).toMutableList()

                val callInfo = CallInfo.from(
                    formattedPrompt.promptInfo,
                    startTime,
                    System.currentTimeMillis()
                )

                val responseInfo = ResponseInfo(
                    "stop" == bodyNode["choices"][0]["finish_reason"].asText()
                )

                /* RECORD */
                // create the session                    
                val sessionInfo = fpClient.sessions().create().sessionInfo

                fpClient.recordings().create(
                    RecordInfo(
                        allMessages,
                        variables,
                        sessionInfo,
                        formattedPrompt.promptInfo,
                        callInfo,
                        responseInfo
                    ).toolSchema(formattedPrompt.toolSchema)
                )
            }
            .join()
    }
}

Calling Any Model

Freeplay allows you to record LLM interactions from any model or provider, including hosts or formats Freeplay doesn't natively support in our application (see that list here). When calling other models, you'll retrieve a Freeplay prompt template in our common format and reformat it as needed for the LLM you want to use.

Here is an example of calling Mistral 7B hosted on Baseten:

from freeplay import Freeplay, RecordPayload, ResponseInfo, CallInfo
from freeplay.llm_parameters import LLMParameters
import os
import requests
import time

# retrieve your prompt template from freeplay
prompt_vars = {"keyA": "valueA"}
prompt = fpClient.prompts.get(project_id=project_id,
                              template_name="template_name",
                              environment="latest")
# bind your variables to the prompt
formatted_prompt = prompt.bind(prompt_vars).format()
# customize the messages for your provider API
# In this case, mistral does not support system messages
# we need to merge the system message into the initial user message
messages = [{'role': 'user',
             'content': formatted_prompt.messages[0]['content'] + ' ' + formatted_prompt.messages[1]['content']}]

# make your LLM call to your custom provider
# call mistral 7b hosted with baseten
s = time.time()
baseten_url = "https://model-xyz.api.baseten.co/production/predict"
headers = {
    "Authorization": "Api-Key " + baseten_key,
}
data = {'messages': messages}
req = requests.post(
    url=baseten_url,
    headers=headers,
    json=data
)
e = time.time()

# add the response to ongoing list of messages
resText = req.json()
messages.append({'role': 'assistant', 'content': resText})

# create a freeplay session
session = fpClient.sessions.create()

# record the LLM interaction with Freeplay
payload = RecordPayload(
    all_messages=messages,
    inputs=prompt_vars,
    session_info=session,
    prompt_info=prompt.prompt_info,
    call_info=CallInfo(
        provider="mistral",
        model="mistral-7b",
        start_time=s,
        end_time=e,
        model_parameters=LLMParameters(
            {"paramA": "valueA", "paramB": "valueB"}
        )
    ),
    response_info=ResponseInfo(
        is_complete=True
    )
)

fpClient.recordings.create(payload)
import Freeplay from "freeplay/thin";
import axios from "axios";

// configure freeplay client
const fpClient = new Freeplay({...});

/* PROMPT FETCH */
// set the prompt variables
let promptVars = {"keyA": "valueA"};
// fetch your prompt from freeplay
let promptTemplate = await fpClient.prompts.get({
    projectId: projectID,
    templateName: "template_name",
    environment: "latest"
});
// format the prompt
let formattedPrompt = promptTemplate.bind(promptVars).format();


/* LLM CALL */
// build the messages for mistral
// mistral does not support system messages
// we need to merge the system message into the initial user message
let messages = [{'role': 'user', 'content': formattedPrompt.messages[0]['content'] + ' ' + formattedPrompt.messages[1]['content']}];

// configure baseten requests
const basetenUrl = "https://model-xyz.api.baseten.co/production/predict";
const headers = {
    "Authorization": "Api-Key " + basetenKey
};
let data = {'messages': messages};

async function pingBaseTen(data) {
    // send the requests
    let start = new Date();
    let responseData;
    const response = await axios.post(basetenUrl, data, {headers: headers})
    responseData = response.data;
    console.log(responseData);
    let end = new Date();
    return [responseData, start, end];
}

let [responseData, start, end] = await pingBaseTen(data);

messages.push({'role': 'assistant', 'content': responseData});
console.log(messages);

/* RECORD */
// create a session
let session = fpClient.sessions.create({});

// log the interaction with freeplay
fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: session,
    promptInfo: formattedPrompt.promptInfo,
    callInfo: {provider: "mistral", model: "mistral-7B", startTime: start, endTime: end, modelParams: {'paramA': 'valueA', 'paramB': 'valueB'}},
    responseInfo: {isComplete: true}
});

import ai.freeplay.client.thin.Freeplay;
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;
import ai.freeplay.client.thin.resources.recordings.*;
import ai.freeplay.client.thin.resources.recordings.CallInfo;
import ai.freeplay.client.thin.resources.sessions.SessionInfo;
import ai.freeplay.client.thin.resources.prompts.ChatMessage;
import ai.freeplay.example.java.ThinExampleUtils.Tuple2;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.List;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;


import static java.lang.String.format;
import static ai.freeplay.client.thin.Freeplay.Config;
import static java.net.http.HttpRequest.BodyPublishers.ofString;

public class CustomModel {
    private static final ObjectMapper objectMapper = new ObjectMapper();

    public static void main(String[] args) {

        // set your environment variables
        String basetenApiKey = System.getenv("BASETEN_KEY");
        String freeplayApiKey = System.getenv("FREEPLAY_API_KEY");
        String projectId = System.getenv("FREEPLAY_PROJECT_ID");
        String customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME");
        // create your url from your customer domain
        String baseUrl = format("https://%s.freeplay.ai/api", customerDomain);

        // initialize the freeplay client
        Freeplay fpClient = new Freeplay(Config()
                .freeplayAPIKey(freeplayApiKey)
                .customerDomain(customerDomain)
        );

        // create the prompt variables
        Map<String, Object> promptVars = Map.of("keyA", "varA");
        /* FETCH PROMPT */
        var promptFuture = fpClient.prompts().<String>getFormatted(
                projectId,
                "template-name",
                "latest",
                promptVars
        );

        /* LLM CALL */
        // set timer to be used to measure latency
        long startTime = System.currentTimeMillis();
        var llmFuture = promptFuture.thenCompose((FormattedPrompt<String> formattedPrompt) ->

                callMistral(
                        objectMapper,
                        basetenApiKey,
                        formattedPrompt.getPromptInfo().getModel(), // get the model name
                        formattedPrompt.getPromptInfo().getModelParameters(), // get the model parameters
                        formattedPrompt.getBoundMessages() // get the messages, formatted for your specified provider
                ).thenApply((HttpResponse<String> response) ->
                        new Tuple2<>(formattedPrompt, response)
                        )
                );

        /* RECORD TO FREEPLAY */
        var recordFuture = llmFuture.thenCompose((Tuple2<FormattedPrompt<String>, HttpResponse<String>> promptAndResponse) ->
                recordResult(
                        fpClient,
                        promptAndResponse.first, promptVars, startTime, promptAndResponse.second
                )
        );

        recordFuture.thenApply(recordResponse -> {
            System.out.println("Recorded call succeeded with completionId: " + recordResponse.getCompletionId());
            return null;
        }).exceptionally(exception -> {
            System.out.println("Got exception: " + exception.getMessage());
            return new RecordResponse(null);
        }).join();
    }

    public static CompletableFuture<RecordResponse> recordResult(
            Freeplay fpClient,
            FormattedPrompt<String> formattedPrompt,
            Map<String, Object> promptVars,
            long startTime,
            HttpResponse<String> response
    ) {
        JsonNode bodyNode;
        try{
            bodyNode = objectMapper.readTree(response.body());
            System.out.println(bodyNode);
        } catch (JsonProcessingException e){
            throw new RuntimeException("Unable to parse response body", e);
        }

        // add the returned message to the list of messages
        String role = "assistant";
        String content = bodyNode.asText();
        List<ChatMessage> allMessages = formattedPrompt.allMessages(
                new ChatMessage(role, content)
        );

        // construct the call info from scratch
        CallInfo callInfo = new CallInfo(
                "mistral",
                "mistral-7b",
                startTime,
                System.currentTimeMillis(),
                formattedPrompt.getPromptInfo().getModelParameters()
        );
        // construct the response info
        ResponseInfo responseInfo = new ResponseInfo(
                "stop_sequence".equals(bodyNode.path("stop_sequence").asText())

        );

        // construct the session info from the session object
        SessionInfo sessionInfo = fpClient.sessions().create().getSessionInfo();

        System.out.println("Completion: " + content);

        // make the record call to freeplay
        return fpClient.recordings().create(
                new RecordInfo(
                        allMessages,
                        promptVars,
                        sessionInfo,
                        formattedPrompt.getPromptInfo(),
                        callInfo,
                        responseInfo
                )
        );
    }

    public static CompletableFuture<HttpResponse<String>> callMistral(
            ObjectMapper objectMapper,
            String basetenApiKey,
            String model,
            Map<String, Object> llmParameters,
            List<ChatMessage> messages
    ) {
        try {
            String basetenChatURL = "https://model-xyz.api.baseten.co/production/predict";

            Map<String, Object> bodyMap = new LinkedHashMap<>();
            bodyMap.put("messages", messages);

            bodyMap.putAll(llmParameters);
            String body = objectMapper.writeValueAsString(bodyMap);

            HttpRequest.Builder requestBuilder = HttpRequest
                    .newBuilder(new URI(basetenChatURL))
                    .header("Content-Type", "application/json")
                    .header("Authorization", format("Api-Key %s", basetenApiKey))
                    .POST(ofString(body));

            return HttpClient.newBuilder()
                    .build()
                    .sendAsync(requestBuilder.build(), HttpResponse.BodyHandlers.ofString());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
package ai.freeplay.example.kotlin

import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo
import com.fasterxml.jackson.databind.ObjectMapper
import kotlinx.coroutines.future.await
import kotlinx.coroutines.runBlocking

// for baseten mistral call
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.net.http.HttpRequest.BodyPublishers
import java.util.concurrent.CompletableFuture

private val objectMapper = ObjectMapper()

fun callMistral(
        objectMapper: ObjectMapper,
        basetenApiKey: String,
        model: String,
        llmParameters: Map<String, Any>,
        messages: List<ChatMessage>
): CompletableFuture<HttpResponse<String>> {
    return try {
        val basetenChatURL = "https://model-xyz.api.baseten.co/production/predict"

        val bodyMap = LinkedHashMap<String, Any>()
        bodyMap["messages"] = messages
        bodyMap.putAll(llmParameters)
        val body = objectMapper.writeValueAsString(bodyMap)

        val requestBuilder = HttpRequest.newBuilder(URI(basetenChatURL))
                .header("Content-Type", "application/json")
                .header("Authorization", "Api-Key $basetenApiKey")
                .POST(BodyPublishers.ofString(body))

        HttpClient.newBuilder()
                .build()
                .sendAsync(requestBuilder.build(), HttpResponse.BodyHandlers.ofString())
    } catch (e: Exception) {
        throw RuntimeException(e)
    }
}

fun main(): Unit = runBlocking {
    // set your environment variables
    val openaiApiKey = System.getenv("OPENAI_API_KEY")
    val freeplayApiKey = System.getenv("FREEPLAY_API_KEY")
    val projectId = System.getenv("FREEPLAY_PROJECT_ID")
    val customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME")
    val basetenApiKey = System.getenv("BASETEN_KEY")

    // create your url from your customer domain
    val baseUrl = String.format("https://%s.freeplay.ai/api", customerDomain)

    // create the freeplay client
    val fpClient = Freeplay(
            Freeplay.Config()
                    .freeplayAPIKey(freeplayApiKey)
                    .customerDomain(customerDomain)
    )

    // set the prompt variables
    val promptVars = mapOf("keyA" to "varA")

    /* PROMPT FETCH */
    val formattedPrompt = fpClient.prompts().getFormatted<String>(
            projectId,
            "template-name",
            "latest",
            promptVars
    ).await()

    /* LLM CALL */
    // set timer to measure latency
    val startTime = System.currentTimeMillis()
    val llmResponse = callMistral(
            objectMapper,
            basetenApiKey,
            formattedPrompt.promptInfo.model, // get the model name
            formattedPrompt.promptInfo.modelParameters, // get the model params
            formattedPrompt.getBoundMessages()
    ).await()
    val endTime = System.currentTimeMillis()

    // add the LLM response to the message set
    val bodyNode = objectMapper.readTree(llmResponse.body())
    println(bodyNode)
    val role = "assistant"
    val content = bodyNode.asText()
    println("Completion: " + content)

    val allMessage: List<ChatMessage> = formattedPrompt.allMessages(
            ChatMessage(role, content)
    )

    /* RECORD */
    // construct the call info from the prompt object
    val callInfo = CallInfo(
            "mistral", // hard code the provider
            "mistral-7b", // hard code the model
            startTime,
            System.currentTimeMillis(),
            formattedPrompt.promptInfo.modelParameters
    )


    // create the response info
    val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())

    // create the session and get the session info
    val sessionInfo = fpClient.sessions().create().sessionInfo

    // build the final record payload
    val recordResponse = fpClient.recordings().create(
            RecordInfo(
                    allMessage,
                    promptVars,
                    sessionInfo,
                    formattedPrompt.getPromptInfo(),
                    callInfo,
                    responseInfo
            )
    ).await()
    println("Completion Record Succeeded with ${recordResponse.completionId}")

}

Record Custom Model Parameters

The majority of critical model parameters, like temperature and max_tokens, can be configured in the Freeplay UI. However, if you are using additional parameters, these can still be recorded during the record call and will be displayed in the UI alongside the Completion.

from freeplay import Freeplay, RecordPayload, ResponseInfo, CallInfo

# create a session which will create a UID
session = fpClient.sessions.create()

# build call info from scratch to log additional params
# get the base params
start_params = formatted_prompt.prompt_info.model_parameters
# set the additional parameters
additional_params = {"presence_penalty": 0.8, "n": 5}
# combine the two parameter sets
all_params = {**start_params, **additional_params}
call_info = CallInfo(
    provider=formatted_prompt.prompt_info.provider,
    model=formatted_prompt.prompt_info.model,
    start_time=s,
    end_time=e,
    model_parameters=all_params # pass the full parameter set
)

# record the results
payload = RecordPayload(
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session, 
    prompt_info=formatted_prompt.prompt_info,
    call_info=call_info,
    response_info=ResponseInfo(
        is_complete=chat_response.choices[0].finish_reason == 'stop'
    )
)
completion_info = fpClient.recordings.create(payload)
import Freeplay, { getSessionInfo } from "freeplay/thin";

// create a session
let session = fpClient.sessions.create({});

// get the primary parameters
const baseParams = formattedPrompt.promptInfo.modelParameters;
// set the additional parameters
const additionalParams = { presence_penalty: 0.8, n: 5 };
// merge the parameters
const allParams = { ...baseParams, ...additionalParams };
console.log(allParams);

// construct the callInfo
const callInfoPayload = {
    provider: formattedPrompt.promptInfo.provider,
    model: formattedPrompt.promptInfo.model,
    startTime: start,
    endTime: end,
    modelParameters: allParams,
};

// record the interaction with Freeplay
const completionInfo = await fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: getSessionInfo(session),
    promptInfo: formattedPrompt.promptInfo,
    callInfo: callInfoPayload,
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "stop"
    }
});
import ai.freeplay.client.thin.Freeplay;
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;
import ai.freeplay.client.thin.resources.recordings.*;
import ai.freeplay.client.thin.resources.sessions.SessionInfo;

import java.util.HashMap;
import java.util.Map;

// construct the response info
ResponseInfo responseInfo = new ResponseInfo(
  "stop_sequence".equals(bodyNode.path("stop_sequence").asText())
);

// construct the session info from the session object
SessionInfo sessionInfo = fpClient.sessions().create().getSessionInfo();

// get the existing parameters (copied into a mutable map)
Map<String, Object> modelParams = new HashMap<>(formattedPrompt.getPromptInfo().getModelParameters());
// add the additional parameters
modelParams.put("presence_penalty", 0.8);
modelParams.put("n", 5);

// construct the call info
CallInfo callInfo = new CallInfo(
  formattedPrompt.getPromptInfo().getProvider(),
  formattedPrompt.getPromptInfo().getModel(),
  startTime,
  System.currentTimeMillis(),
  modelParams
);
System.out.println("Completion: " + content);

// make the record call to freeplay
fpClient.recordings().create(
  new RecordInfo(
    allMessages,
    promptVars,
    sessionInfo,
    formattedPrompt.getPromptInfo(),
    callInfo,
    responseInfo
    )
);
import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo

/* RECORD */
// get the existing parameters (copied into a mutable map)
val modelParams = formattedPrompt.promptInfo.modelParameters.toMutableMap()
// add the additional parameters
modelParams["presence_penalty"] = 0.8
modelParams["n"] = 5
// construct the call info from the prompt object
val callInfo = CallInfo(
  formattedPrompt.promptInfo.provider,
  formattedPrompt.promptInfo.model,
  startTime,
  endTime,
  modelParams
)

// create the response info
val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())

// create the session and get the session info
val sessionInfo = fpClient.sessions().create().sessionInfo

// build the final record payload
val recordResponse = fpClient.recordings().create(
  RecordInfo(
    allMessage,
    promptVars,
    sessionInfo,
    formattedPrompt.promptInfo,
    callInfo,
    responseInfo
  )
).await()

Record Evals From Your Code

Freeplay allows you to record client-side executed evals to Freeplay during the record step (more here). Code evals are useful for running objective assertions or pairwise comparisons against ground truth data. They are passed as a key-value pair and can be associated with either a regular Session or a Test Run session.
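For example, the eval values shown in the snippets below could be computed client side with nothing but the standard library. A sketch of two such evals (a JSON validity assertion and a SequenceMatcher string distance against ground truth; both metrics are illustrative choices, not part of the SDK):

import json
from difflib import SequenceMatcher

ground_truth = "expected answer..."
output = chat_response.choices[0].message.content

def is_valid_json(text):
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

eval_results = {
    "valid schema": is_valid_json(output),
    "string distance": SequenceMatcher(None, output, ground_truth).ratio()
}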

# record the results
payload = RecordPayload(
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session, 
    prompt_info=prompt_info,
    call_info=call_info,
    response_info=ResponseInfo(
        is_complete=chat_response.choices[0].finish_reason == 'stop'
    ),
    eval_results={
      "valid schema": True,
      "valid category": True,
      "string distance": 0.81
    }
)
completion_info = fpClient.recordings.create(payload)
// record the interaction with Freeplay
const completionInfo = await fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: getSessionInfo(session),
    promptInfo: formattedPrompt.promptInfo,
    callInfo: callInfoPayload,
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "stop"
    },
    evalResults: {
      "valid schema": true,
      "valid category": true,
      "string distance": 0.81
    }
});
import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo

/* RECORD */
// get the existing parameters (copied into a mutable map)
val modelParams = formattedPrompt.promptInfo.modelParameters.toMutableMap()
// add the additional parameters
modelParams["presence_penalty"] = 0.8
modelParams["n"] = 5
// construct the call info from the prompt object
val callInfo = CallInfo(
  formattedPrompt.promptInfo.provider,
  formattedPrompt.promptInfo.model,
  startTime,
  endTime,
  modelParams
)

// create the response info
val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())

// create the session and get the session info
val sessionInfo = fpClient.sessions().create().sessionInfo

val evalResults = mapOf(
    "valid schema" to true,
    "valid category" to true,
    "string distance" to 0.81
)

// build the final record payload
val recordResponse = fpClient.recordings().create(
  RecordInfo(
    allMessage,
    promptVars,
    sessionInfo,
    formattedPrompt.promptInfo,
    callInfo,
    responseInfo,
    evalResults
  )
).await()

Using Your Own IDs

Freeplay allows you to provide your own client-side UUIDs for both Sessions and Completions. This is useful if you already have natural identifiers in your application code. Providing your own Completion Id also lets you store it for recording customer feedback without waiting for the record call to complete, making the record call entirely non-blocking.
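
For example, once the Completion Id is generated client side, the record call can run in the background while feedback is logged immediately against the same Id. A minimal sketch, assuming `payload` was built with completion_id=completion_id as in the full example below; the threading is illustrative, not part of the SDK:

from threading import Thread
from uuid import uuid4

completion_id = uuid4()

# dispatch the record call in the background; nothing below waits on it
Thread(target=fpClient.recordings.create, args=(payload,), daemon=True).start()

# the completion id is already known, so feedback can be logged right away
fpClient.customer_feedback.update(
    completion_id=completion_id,
    feedback={'freeplay_feedback': 'positive'}
)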

from freeplay import Freeplay, RecordPayload, CallInfo, ResponseInfo, SessionInfo
from uuid import uuid4
import time

## PROMPT FETCH ##
# set the prompt variables
prompt_vars = {"keyA": "valueA"}
# get a formatted prompt
formatted_prompt = fpClient.prompts.get_formatted(project_id=project_id,
                                                  template_name="template_name",
                                                  environment="latest",
                                                  variables=prompt_vars)

## LLM CALL ##
# Make an LLM call to your provider of choice
start = time.time()
chat_response = openaiClient.chat.completions.create(
    model=formatted_prompt.prompt_info.model,
    messages=formatted_prompt.llm_prompt,
    **formatted_prompt.prompt_info.model_parameters
)
end = time.time()

# add the response to your message set
all_messages = formatted_prompt.all_messages(
    {'role': chat_response.choices[0].message.role, 
     'content': chat_response.choices[0].message.content}
)

## RECORD ##
# create your Ids
session_id = uuid4()
completion_id = uuid4()

# build the record payload
payload = RecordPayload(
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=SessionInfo(session_id=session_id, custom_metadata={'keyA': 'valueA'}),
    completion_id=completion_id,
    prompt_info=formatted_prompt.prompt_info,
    call_info=CallInfo.from_prompt_info(formatted_prompt.prompt_info, start_time=start, end_time=end), 
    response_info=ResponseInfo(
        is_complete=chat_response.choices[0].finish_reason == 'stop'
    )
)
# record the LLM interaction
fpClient.recordings.create(payload)
import Freeplay, { getCallInfo, getSessionInfo } from "freeplay/thin";
import { v4 as uuidv4 } from "uuid";
import OpenAI from "openai";

/* PROMPT FETCH */
// set the prompt variables
let promptVars = {"pop_star": "Taylor Swift"};
// fetch a formatted prompt template
let formattedPrompt = await fpClient.prompts.getFormatted({
    projectId: projectID,
    templateName: "template_name",
    environment: "latest",
    variables: promptVars,
});

/* LLM CALL */
// make the llm call
const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

let start = new Date();
const chatCompletion = await openai.chat.completions.create({
    messages: formattedPrompt.llmPrompt,
    model: formattedPrompt.promptInfo.model,
    ...formattedPrompt.promptInfo.modelParameters
});
let end = new Date();
console.log(chatCompletion.choices[0].message);

// update the messages
let messages = formattedPrompt.allMessages({
    role: chatCompletion.choices[0].message.role,
    content: chatCompletion.choices[0].message.content,
});

/* RECORD */
// create your own Ids
const sessionId = uuidv4();
const completionId = uuidv4();

// record the LLM interaction with Freeplay
await fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: {sessionId: sessionId, customMetadata: {"keyA": "valueA"}},
    completionId: completionId,
    promptInfo: formattedPrompt.promptInfo,
    callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "stop"
    }
});
package ai.freeplay.example.kotlin

import ai.freeplay.client.thin.Freeplay
import ai.freeplay.client.thin.resources.prompts.ChatMessage
import ai.freeplay.client.thin.resources.recordings.CallInfo
import ai.freeplay.client.thin.resources.recordings.RecordInfo
import ai.freeplay.client.thin.resources.recordings.ResponseInfo
import ai.freeplay.example.java.ThinExampleUtils.callOpenAI
import com.fasterxml.jackson.databind.ObjectMapper
import java.util.UUID
import kotlinx.coroutines.future.await
import kotlinx.coroutines.runBlocking


private val objectMapper = ObjectMapper()

fun main(): Unit = runBlocking {
    // set your environment variables
    val openaiApiKey = System.getenv("OPENAI_API_KEY")
    val freeplayApiKey = System.getenv("FREEPLAY_API_KEY")
    val projectId = System.getenv("FREEPLAY_PROJECT_ID")
    val customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME")

    // create your url from your customer domain
    val baseUrl = String.format("https://%s.freeplay.ai/api", customerDomain)

    // create the freeplay client
    val fpClient = Freeplay(
            Freeplay.Config()
                    .freeplayAPIKey(freeplayApiKey)
                    .customerDomain(customerDomain)
    )

    // set the prompt variables
    val promptVars = mapOf("keyA" to "valueA")

    /* PROMPT FETCH */
    val formattedPrompt = fpClient.prompts().getFormatted<String>(
            projectId,
            "template-name",
            "latest",
            promptVars
    ).await()

    /* LLM CALL */
    // set timer to measure latency
    val startTime = System.currentTimeMillis()
    val llmResponse = callOpenAI(
            objectMapper,
            openaiApiKey,
            formattedPrompt.promptInfo.model, // get the model name
            formattedPrompt.promptInfo.modelParameters, // get the model params
            formattedPrompt.getBoundMessages()
    ).await()
    val endTime = System.currentTimeMillis()

    // add the LLM response to the message set
    val bodyNode = objectMapper.readTree(llmResponse.body())
    val role = bodyNode.path("choices").path(0).path("message").path("role").asText()
    val content = bodyNode.path("choices").path(0).path("message").path("content").asText()
    println("Completion: " + content)

    val allMessage: List<ChatMessage> = formattedPrompt.allMessages(
            ChatMessage(role, content)
    )

    /* RECORD */
    // construct the call info from the prompt object
    val callInfo = CallInfo.from(
            formattedPrompt.getPromptInfo(),
            startTime,
            endTime
    )

    // create the response info
    val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())

    // create the session and get the session info
    val sessionInfo = fpClient.sessions().create().sessionInfo
    // create your completion Id
    val completionId = UUID.randomUUID()

    // build the final record payload
    val recordResponse = fpClient.recordings().create(
            RecordInfo(
                    allMessage,
                    promptVars,
                    sessionInfo,
                    formattedPrompt.getPromptInfo(),
                    callInfo,
                    responseInfo,
                    completionId
            )
    ).await()
    println("Completion Record Succeeded with ${recordResponse.completionId}")

}

Customer Feedback & Events

Freeplay lets you log customer feedback, client events, and any other customer experience-related metadata associated with any LLM completion. This is useful for tying feedback from your application back to Freeplay, creating a feedback loop, whether for explicit signals like a feedback score or implicit signals like a request to regenerate a completion or edits to a draft.

Customer Feedback supports arbitrary key-value pairs and accepts any string, boolean, integer or float.

There is one special key-value pair to consider:

The key freeplay_feedback is a special case to capture your primary positive/negative user feedback signals from customers. It accepts only the following string values:

  • positive will render as a 👍 in the UI
  • negative will render as a 👎 in the UI

Methods Overview

Method Name | Parameters | Description
update | completion_id: string, feedback: dict | Log feedback associated with a specified completion

Log Customer Feedback

# create a session which will create a UID
session = fpClient.sessions.create()

# record the results
payload = RecordPayload(
    all_messages=all_messages,
    inputs=prompt_vars,
    session_info=session, 
    prompt_info=prompt_info,
    call_info=CallInfo.from_prompt_info(prompt_info, start_time=s, end_time=e),
    response_info=ResponseInfo(
        is_complete=chat_response.choices[0].finish_reason == 'stop'
    )
)
# this will create the completion id needed for the logging of customer feedback
completion_info = fpClient.recordings.create(payload)

# add some customer feedback
fpClient.customer_feedback.update(
    completion_id=completion_info.completion_id,
    feedback={'freeplay_feedback': 'positive',
              'link_clicked': True}
)
// create a session
let session = fpClient.sessions.create({});

// record the interaction with Freeplay
const completionInfo = await fpClient.recordings.create({
    allMessages: messages,
    inputs: promptVars,
    sessionInfo: getSessionInfo(session),
    promptInfo: formattedPrompt.promptInfo,
    callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
    responseInfo: {
        isComplete: chatCompletion.choices[0].finish_reason === "stop"
    }
});

// record customer feedback
await fpClient.customerFeedback.update({
    completionId: completionInfo.completionId,
    customerFeedback: {
        "freeplay_feedback": "positive",
        "link_click": true
    },
});
fpClient.recordings().create(
        new RecordInfo(
                allMessages,
                promptVars,
                sessionInfo,
                formattedPrompt.getPromptInfo(),
                callInfo,
                responseInfo
        )
).thenCompose(recordResponse ->
        fpClient.customerFeedback().update(
                recordResponse.getCompletionId(),
                Map.of("freeplay_feedback", "positive")
        ).thenApply(feedbackResponse -> recordResponse)
);
// build the final record payload
val recordResponse = fpClient.recordings().create(
  RecordInfo(
    allMessage,
    promptVars,
    sessionInfo,
    formattedPrompt.getPromptInfo(),
    callInfo,
    responseInfo
  )
).await()

// log customer feedback
val feedbackResponse = fpClient.customerFeedback().update(
  recordResponse.completionId,
  mapOf("freeplay_feedback" to "positive")
).await()

Test Runs

Test Runs in Freeplay provide a structured way for you to run batch tests of your LLM prompts and chains. All methods associated with the Test Runs concept in Freeplay are accessible via the client.test_runs namespace.
We will focus on code in this section; for more detail on the concept, see Test Runs.

Methods Overview

Method Name | Parameters | Description
create | project_id: string, testlist: string, name: string (optional), description: string (optional) | Instantiate a Test Run object server-side and get an Id to reference your Test Run instance.

Step by Step Usage

Create a new Test Run

from freeplay import Freeplay, RecordPayload, ResponseInfo, TestRunInfo
from openai import OpenAI

# create a new test run
test_run = fpClient.test_runs.create(
  project_id=project_id, testlist="test-list-name",
  name="mytestrun",
  description="this is a test test!"
)
import Freeplay, { getSessionInfo, getCallInfo, getTestRunInfo} from "freeplay/thin";

// create a test run
const testRun = await fpClient.testRuns.create({
    projectId: fpProjectId,
    testList: 'test-list-name',
    name: 'my test run',
    description: 'this is a test test'
});
// see full implementation in Iterate over each Test Case
val testRun = fpClient.testRuns().create(projectId, "test-list",
                                         "my test run", "this is a test test!"
                                        ).await()

Retrieve your Prompts

Retrieve the prompts needed for your Test Run

# get the prompt associated with the test run
template_prompt = fpClient.prompts.get(project_id=project_id,
                                       template_name="template-name",
                                       environment="latest"
                                       )
// fetch the prompt template for the test run
let templatePrompt = await fpClient.prompts.get({
    projectId: fpProjectId,
    templateName: "template-name",
    environment: "latest",
});
// see full implementation in Iterate over each Test Case
val templatePrompt = fpClient.prompts().get(projectId, "template-name", "latest").await()

Iterate over each Test Case

For the code you want to test: loop over each Test Case from the Test List, make an LLM call, and record the results with a link to your Test Run.

# iterate over each test case
for test_case in test_run.test_cases:
    # format the prompt with the test case variables
    formatted_prompt = template_prompt.bind(test_case.variables).format()

    # make your llm call
    s = time.time()
    openaiClient = OpenAI(api_key=openai_key)
    chat_response = openaiClient.chat.completions.create(
        model=formatted_prompt.prompt_info.model,
        messages=formatted_prompt.llm_prompt,
        **formatted_prompt.prompt_info.model_parameters
    )
    e = time.time()

    # append the results to the messages
    all_messages = formatted_prompt.all_messages(
        {'role': chat_response.choices[0].message.role,
         'content': chat_response.choices[0].message.content}
    )

    # create a session which will create a UID
    session = fpClient.sessions.create()
    # build the record payload
    payload = RecordPayload(
        all_messages=all_messages,
        inputs=test_case.variables, # the variables from the test case are the inputs
        session_info=session, # use the session object created above
        test_run_info=test_run.get_test_run_info(test_case.id), # link the record call to the test run and test case
        prompt_info=formatted_prompt.prompt_info, # log the prompt information 
        call_info=CallInfo.from_prompt_info(formatted_prompt.prompt_info, start_time=s, end_time=e), # log call information
        response_info=ResponseInfo(
            is_complete=chat_response.choices[0].finish_reason == 'stop'
        )
    )
    # record the results to freeplay
    fpClient.recordings.create(payload)
for (const testCase of testRun.testCases) {
    // create a formatted prompt from the test case
    const formattedPrompt = templatePrompt.bind(testCase.variables).format();

    // make the llm call
    let start = new Date();
    const chatCompletion = await openai.chat.completions.create({
        messages: formattedPrompt.llmPrompt,
        model: formattedPrompt.promptInfo.model,
        ...formattedPrompt.promptInfo.modelParameters
    });
    let end = new Date();
    console.log(chatCompletion.choices[0].message);

    // update the messages
    let messages = formattedPrompt.allMessages({
        role: chatCompletion.choices[0].message.role,
        content: chatCompletion.choices[0].message.content,
    });

    // create a session
    let session = fpClient.sessions.create({});

    // record the test case interaction with Freeplay
    await fpClient.recordings.create({
        allMessages: messages,
        inputs: testCase.variables,
        sessionInfo: getSessionInfo(session),
        promptInfo: formattedPrompt.promptInfo,
        callInfo: getCallInfo(formattedPrompt.promptInfo, start, end),
        responseInfo: {
            isComplete: chatCompletion.choices[0].finish_reason === "stop"
        },
        testRunInfo: getTestRunInfo(testRun, testCase.id)
    });
}
import ai.freeplay.client.thin.Freeplay;
import ai.freeplay.client.thin.resources.prompts.ChatMessage;
import ai.freeplay.client.thin.resources.prompts.FormattedPrompt;
import ai.freeplay.client.thin.resources.prompts.TemplatePrompt;
import ai.freeplay.client.thin.resources.recordings.CallInfo;
import ai.freeplay.client.thin.resources.recordings.RecordInfo;
import ai.freeplay.client.thin.resources.recordings.RecordResponse;
import ai.freeplay.client.thin.resources.recordings.ResponseInfo;
import ai.freeplay.client.thin.resources.sessions.SessionInfo;
import ai.freeplay.client.thin.resources.testruns.TestCase;
import ai.freeplay.client.thin.resources.testruns.TestRun;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

import static ai.freeplay.example.java.ThinExampleUtils.callOpenAI;
import static java.lang.String.format;
import static ai.freeplay.client.thin.Freeplay.Config;
import static java.util.stream.Collectors.toList;

public class TestClass {
    private static final ObjectMapper objectMapper = new ObjectMapper();

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // set your environment variables
        String openaiApiKey = System.getenv("OPENAI_API_KEY");
        String freeplayApiKey = System.getenv("FREEPLAY_API_KEY");
        String projectId = System.getenv("FREEPLAY_PROJECT_ID");
        String customerDomain = System.getenv("FREEPLAY_CUSTOMER_NAME");
        // create your url from your customer domain
        String baseUrl = format("https://%s.freeplay.ai/api", customerDomain);

        Freeplay fpClient = new Freeplay(Config()
                .freeplayAPIKey(freeplayApiKey)
                .customerDomain(customerDomain)
        );
        // create the test run
        List<RecordResponse> recordResponses = fpClient.testRuns().create(projectId, "test-list-name")
                .thenCompose(testRun ->
                        fpClient.prompts().get(projectId, "template-name", "latest")
                                .thenCompose(templatePrompt -> {

                                    var futures =
                                            testRun.getTestCases().stream()
                                                    .map(testCase ->
                                                            handleTestCase(
                                                                    fpClient,
                                                                    openaiApiKey,
                                                                    templatePrompt,
                                                                    testRun,
                                                                    testCase))
                                                    .collect(toList());

                                    return CompletableFuture.allOf(futures.toArray(CompletableFuture[]::new))
                                            .thenApply((Void v) -> futures.stream().map(CompletableFuture::join).collect(toList()));
                                })
                ).join();

        System.out.println("Record responses: " + recordResponses);
    }

    private static CompletableFuture<RecordResponse> handleTestCase(
            Freeplay fpClient,
            String openaiApiKey,
            TemplatePrompt templatePrompt,
            TestRun testRun,
            TestCase testCase
    ) {
        FormattedPrompt<String> formattedPrompt =
                templatePrompt.bind(testCase.getVariables()).format();

        long startTime = System.currentTimeMillis();
        return callOpenAI(
                objectMapper,
                openaiApiKey,
                formattedPrompt.getPromptInfo().getModel(),
                formattedPrompt.getPromptInfo().getModelParameters(),
                formattedPrompt.getBoundMessages()
        ).thenCompose((HttpResponse<String> response) ->
                recordOpenAI(fpClient, testRun, testCase, formattedPrompt, startTime, response)
        );
    }

    private static CompletableFuture<RecordResponse> recordOpenAI(
            Freeplay fpClient,
            TestRun testRun,
            TestCase testCase,
            FormattedPrompt<String> formattedPrompt,
            long startTime,
            HttpResponse<String> response
    ) {
        JsonNode bodyNode;
        try {
            bodyNode = objectMapper.readTree(response.body());
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Unable to parse response body.", e);
        }

        // add the returned message to the list of messages
        String role = bodyNode.path("choices").path(0).path("message").path("role").asText();
        String content = bodyNode.path("choices").path(0).path("message").path("content").asText();
        List<ChatMessage> allMessages = formattedPrompt.allMessages(
                new ChatMessage(role, content)
        );

        CallInfo callInfo = CallInfo.from(
                formattedPrompt.getPromptInfo(),
                startTime,
                System.currentTimeMillis()
        );
        ResponseInfo responseInfo = new ResponseInfo(
                "stop_sequence".equals(bodyNode.path("stop_reason").asText())
        );
        SessionInfo sessionInfo = fpClient.sessions().create().getSessionInfo();

        System.out.println("Completion: " + content);

        return fpClient.recordings().create(
                new RecordInfo(
                        allMessages,
                        testCase.getVariables(),
                        sessionInfo,
                        formattedPrompt.getPromptInfo(),
                        callInfo,
                        responseInfo
                ).testRunInfo(testRun.getTestRunInfo(testCase.getTestCaseId())));
    }
}
for (testCase in testRun.testCases) {
  // format the prompt with test case variables
  val formattedPrompt = templatePrompt.bind(testCase.variables).format<String>()
	
  // make your llm call
  val startTime = System.currentTimeMillis()
  val llmResponse = callOpenAI(
    objectMapper,
    openaiApiKey,
    formattedPrompt.promptInfo.model,
    formattedPrompt.promptInfo.modelParameters,
    formattedPrompt.getBoundMessages()
  ).await()

  val bodyNode = objectMapper.readTree(llmResponse.body())

  println("Recording the result")
  // append the LLM response to your message set
  val role = bodyNode.path("choices").path(0).path("message").path("role").asText()
  val content = bodyNode.path("choices").path(0).path("message").path("content").asText()
  val allMessages = formattedPrompt.allMessages(
    ChatMessage(role, content)
  )
  val callInfo = CallInfo.from(
    formattedPrompt.getPromptInfo(),
    startTime,
    System.currentTimeMillis()
  )
  val responseInfo = ResponseInfo("stop_sequence" == bodyNode.path("stop_reason").asText())
  // create a session
  val sessionInfo = fpClient.sessions().create().sessionInfo
  
  // record the test case results
  val recordResponse = fpClient.recordings().create(
    RecordInfo(
      allMessages,
      testCase.variables,
      sessionInfo,
      formattedPrompt.getPromptInfo(),
      callInfo,
      responseInfo
    ).testRunInfo(testRun.getTestRunInfo(testCase.testCaseId))
  ).await()
  println("Recorded with completionId ${recordResponse.completionId}")
}

Data Model

Below are details of the Freeplay data model, which can be helpful to understand for more advanced usage.
Note that parameter names are shown in snake_case below; some SDK languages use camelCase instead.

Record Payload

Parameter Name | Data Type | Description | Required
all_messages | List[dict[str, str]] | All messages in the conversation so far | Y
inputs | dict | The input variables | Y
session_info | SessionInfo | The Session with which the recording should be associated | Y
prompt_info | PromptInfo | Information about the prompt used to generate the completion | Y
call_info | CallInfo | Information associated with the LLM call | Y
response_info | ResponseInfo | Information associated with the LLM response | Y
test_run_info | TestRunInfo | Information associated with the Test Run, if this recording is part of a Test Run | N
from freeplay import RecordPayload
import { RecordPayload } from "freeplay/thin";
import ai.freeplay.client.thin.resources.recordings.RecordPayload

Call Info

Parameter Name | Data Type | Description | Required
provider | string | The name of your LLM provider | Y
model | string | The name of your model | Y
start_time | float | The start time of the LLM call, used to measure latency | Y
end_time | float | The end time of the LLM call, used to measure latency | Y
model_parameters | LLMParameters | The parameters associated with your LLM call | Y
from freeplay import CallInfo
import { getCallInfo } from "freeplay/thin";
import ai.freeplay.client.thin.resources.recordings.CallInfo
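
When the call info is not derived from a prompt via CallInfo.from_prompt_info, it can be built directly from the fields above. A minimal sketch, assuming the Python constructor accepts the parameter names as listed in the table:

import time
from freeplay import CallInfo

start = time.time()
# ... make your LLM call here ...
end = time.time()

call_info = CallInfo(
    provider="openai",              # the name of your LLM provider
    model="gpt-4",                  # the name of your model
    start_time=start,               # used to measure latency
    end_time=end,
    model_parameters={"temperature": 0.7}
)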

Response Info

Parameter Name | Data Type | Description | Required
is_complete | boolean | Indicates whether the LLM generation stopped normally | Y
function_call_response | OpenAIFunctionCall | The results of a function call | N
prompt_tokens | integer | The number of tokens in the prompt | N
response_tokens | integer | The number of tokens in the response | N
from freeplay import ResponseInfo
import { ResponseInfo } from "freeplay/thin";
import ai.freeplay.client.thin.resources.recordings.ResponseInfo
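
The optional fields let you record provider-reported usage alongside the completion. A minimal sketch using OpenAI-style usage fields, assuming the keyword names match the table above:

from freeplay import ResponseInfo

response_info = ResponseInfo(
    is_complete=chat_response.choices[0].finish_reason == 'stop',
    # optional usage metadata reported by the provider
    prompt_tokens=chat_response.usage.prompt_tokens,
    response_tokens=chat_response.usage.completion_tokens
)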

LLM Parameters

Parameter Name | Data Type | Description | Required
members | Dict[str, any] | Any parameters associated with your LLM call that you want recorded | Y
from freeplay.llm_parameters import LLMParameters
import { LLMParameters } from "freeplay";
import ai.freeplay.client.thin.resources.recordings.LLMParameters
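
LLMParameters is effectively a mapping of parameter names to values. A minimal sketch, assuming a dict-style constructor:

from freeplay.llm_parameters import LLMParameters

# any parameters you want recorded with the LLM call
model_params = LLMParameters({"temperature": 0.7, "max_tokens": 256})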

Test Run Info

Parameter Name | Data Type | Description | Required
test_run_id | string | The id of your Test Run | Y
test_case_id | string | The id of your Test Case | Y
from freeplay import TestRunInfo
import { getTestRunInfo } from "freeplay/thin";
import ai.freeplay.client.thin.resources.testruns.TestRun;

testRun.getTestRunInfo()

OpenAI Function Call

Parameter Name | Data Type | Description | Required
name | string | The name of the invoked function call | Y
arguments | string | The arguments for the invoked function call | Y
from freeplay.completions import OpenAIFunctionCall
import { OpenAIFunctionCall } from "freeplay";
import ai.freeplay.client.thin.resources.recordings.OpenAIFunctionCall
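
When a completion returns a function call rather than text, the call can be attached to the recording via response_info. A minimal sketch, assuming the constructor arguments match the table above:

from freeplay import ResponseInfo
from freeplay.completions import OpenAIFunctionCall

function_call = chat_response.choices[0].message.function_call
response_info = ResponseInfo(
    is_complete=chat_response.choices[0].finish_reason == 'function_call',
    function_call_response=OpenAIFunctionCall(
        name=function_call.name,
        arguments=function_call.arguments  # JSON-encoded argument string
    )
)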