Encapsulated SDK Reference
Freeplay Encapsulated SDK has been Deprecated
The original Freeplay SDK encapsulated prompt fetching, LLM requests, and recording into a single method. This made it simple to get started but limited developer control. Freeplay has deprecated this encapsulated approach in favor of a "thin" SDK that provides separate methods for each step, giving developers greater control. Those docs can be found here.
Installation
Freeplay integrates into your application via a simple developer SDK. Basic concepts include:
- Completions: Individual LLM requests. The Freeplay SDK fetches the right version of your prompt, constructs the HTTP request to the LLM, and then records the response.
- Sessions: Useful for grouping more than one Completion for observability and evaluation purposes.
- Test Runs: A special method to fetch test cases from Freeplay and run a batch of them through your code.
- Customer Feedback: Freeplay allows you to record feedback asynchronously for any Completion.
We offer SDKs in several popular programming languages:
npm install freeplay
pip install freeplay
<!-- Add the Freeplay SDK to your pom.xml -->
<dependency>
<groupId>ai.freeplay</groupId>
<artifactId>client</artifactId>
<version>x.x.xx</version>
</dependency>
Don't see an SDK you need? Contact us to request an SDK!
Quick Reference
The examples below use camelCase for method names. Other SDKs may use snake_case.
Method | Available Via | Description |
---|---|---|
createSession() | client | Create a new session |
restoreSession() | client | Restore a Session from a session ID |
getCompletion() | session, client | Issue a request to an LLM |
getCompletionStream() | session, client | Issue a request to an LLM and stream the results |
startChat() | client | Start a chat session |
startChatStream() | client | Start a chat session and stream the results |
continueChat() | session | Continue a chat session
continueChatStream() | session | Continue a chat session and stream the results
lastMessage() | session | Retrieve the last message from a chat session |
Authentication
Freeplay authenticates your API requests using your account's API key. API requests made without authentication or with an incorrect key will return a 401 error.
Set your API key when initializing the client:
const fpclient = new freeplay.Freeplay({
freeplayApiKey: "a9c...",
// ...
})
fpclient = Freeplay(
freeplay_api_key="a9c...",
# ...
)
Freeplay fpClient = new Freeplay(freeplayApiKey,
// ...
);
You can view and manage your API keys in the Freeplay Dashboard.
Errors
Freeplay uses standard HTTP response codes to indicate the success or failure of your API requests:
- 200 - Successful request
- 401 - Invalid API key
- 403 - Bad request, invalid parameters
- 404 - Resource not found
- 5xx - Internal server error
Client
The first step to using Freeplay is to create a client.
const fpclient = new freeplay.Freeplay({
freeplayApiKey: process.env["FREEPLAY_API_KEY"],
baseUrl: "https://acme.freeplay.ai/api",
providerConfig: {
openai: {
apiKey: process.env['OPENAI_API_KEY']
}
}
})
fpclient = Freeplay(
provider_config=ProviderConfig(openai=OpenAIConfig(os.environ["OPENAI_API_KEY"])),
freeplay_api_key=os.environ["FREEPLAY_API_KEY"],
    api_base='https://acme.freeplay.ai/api'
)
Freeplay fpClient = new Freeplay(
freeplayApiKey,
"https://acme.freeplay.ai/api",
new OpenAIProviderConfig(System.getenv("OPENAI_API_KEY"))
);
Projects
Projects are the top-level workspaces for a Prompt or a collection of Prompts used together for a single feature that you want to iterate on, test and evaluate. We recommend you create a separate Project per feature.
Once you create your first project via the Freeplay dashboard, you can reference it in your code via a UUID.
projectId: "c07011b7-a26c-45b2-a33d-25196ae19d73"
project_id = "c07011b7-a26c-45b2-a33d-25196ae19d73"
String projectId = "c07011b7-a26c-45b2-a33d-25196ae19d73"
Sessions
Sessions represent a grouping of one or more Completion requests (single requests to an LLM). In cases where you make a single Completion request, Freeplay will implicitly create a Session. For chains/multiple prompts, you can first create a Session, then log each subsequent Completion as part of the Session. In the continuous chat use case, a Session is also used to store the conversation history.
Create a Session
Create a Session to load your Prompts and to begin making Completion requests.
Request
const session = await fpclient.createSession({
  projectId: "c07...",
  templateName: "tmpl...",
  variables: {
    "variable1": "value1",
  }
});
session = fpclient.create_session(project_id="c07...")
first_completion = session.get_completion(template_name="templ...",
variables={"variable1": "value1"})
CompletionSession session = fpClient.createSession("c07...", "latest");
Map<String, Object> llmParameters = Collections.emptyMap();
CompletionResponse response = session.getCompletion(
"tmpl...",
Map.of("variabale1", "value1"),
llmParameters
);
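
For chains that span multiple prompts, the same Session can be reused so every step is recorded together. Below is a minimal Python sketch building on the example above; the template names, variables, and the .content attribute on the completion response are illustrative assumptions rather than documented API details.

```python
# Hedged sketch: record two chained completions against a single Session.
# Assumes the completion response exposes a .content attribute with the model
# output; check your SDK version for the exact field name.
raw_document = "..."     # placeholder input
user_question = "..."    # placeholder input

session = fpclient.create_session(project_id="c07...")

summary = session.get_completion(
    template_name="summarize",            # illustrative template name
    variables={"document": raw_document},
)

answer = session.get_completion(
    template_name="answer-question",      # illustrative template name
    variables={"summary": summary.content, "question": user_question},
)
```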
Implicit Session Creation
It can feel verbose to create a session before making a completion for a single prompt. For this case, we've included a convenience getCompletion method on the Freeplay client that will implicitly create a session for you.
const completion = await fpclient.getCompletion({
projectId: "c07...",
templateName: "tmpl...",
variables: {
"variable1": "value1",
}
})
completion = fpclient.get_completion(project_id="c07...",
template_name="templ...",
variables={"variable1": "value1"})
CompletionResponse completion = fpClient.getCompletion(
"c07...",
"templ...",
Map.of("variable1", "value1"),
"latest"
);
Restore a Session
In some cases, you may want to reload a session at a later time to append more completions. This is often used in the continuous chat use case.
Request
// Restore a completion session
const restoredSession = await fpclient.restoreSession({
projectId: "c07...",
sessionId: "641...",
// ...
});
// Restore a continuous chat session
const restoredChatSession = await fpclient.restoreChatSession({
projectId: "c07...",
sessionId: "641...",
templateName: "templ...",
messages: chatSession.messageHistory,
// ...
});
# Restore a normal session
restored_session = fpclient.restore_session(
project_id="c07...",
session_id="641...",
# ...
)
# Restore a continuous chat session
restored_chat_session = fpclient.restore_chat_session(
project_id="c07...",
session_id="641...",
messages=chat_session.message_history,
# ...
)
// This feature is not yet implemented in the Java SDK
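
After restoring, the session can be used like any other to append further completions. Below is a minimal Python sketch that assumes a restored session exposes the same get_completion method shown earlier; the template name and variables are illustrative.

```python
# Hedged sketch: restore an existing session and log one more completion to it.
restored_session = fpclient.restore_session(
    project_id="c07...",
    session_id="641...",
)

# Assumes restored sessions support get_completion just like new ones.
follow_up = restored_session.get_completion(
    template_name="templ...",
    variables={"variable1": "value1"},
)
```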
Completions
Single Completion
An individual request/response pair sent to an LLM for a single prompt.
Request
const completion = await fpclient.getCompletion({
projectId: "c07...",
templateName: "tmpl...",
variables: {
"variable1": "value1",
}
})
completion = fpclient.get_completion(project_id="c07...",
template_name="templ...",
variables={"variable1": "value1"})
CompletionResponse completion = fpClient.getCompletion(
"c07...",
"templ...",
Map.of("variable1", "value1"),
"latest"
);
Continuous Chat Completions
Continuous chat completions are a special type of completion in Freeplay used to power a chatbot-style UX. They are a sequence of one or more completions that together represent a conversation.
Start Chat
Request
const chatSession = await fpclient.startChat({
projectId: "c07...",
templateName: "tmpl...",
variables: {
"variable1": "value1",
}
})
session, first_completion = fpclient.start_chat(project_id="c07...",
template_name="templ...",
variables={"variable1": "value1"})
Map<String, Object> llmParameters = Collections.emptyMap();
ChatStart<IndexedChatMessage> chatStart = fpClient.startChat(
"c07...",
"tmpl...",
Map.of("variable1", "value1"),
llmParameters,
"latest"
);
In the case of a continuous chat, we recommend saving the sessionId from the response so that it can be used to restore the session at a later time.
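
A minimal Python sketch of that pattern is shown below. The session_id attribute on the Python session object is an assumption, so check your SDK version for the exact field name; the saved dict stands in for whatever persistence layer your application uses.

```python
# Hedged sketch: persist the session ID and message history after starting a
# chat, then restore the chat session later to continue the conversation.
session, first_completion = fpclient.start_chat(
    project_id="c07...",
    template_name="templ...",
    variables={"variable1": "value1"},
)

# Persist whatever your application needs to pick the conversation back up.
saved = {
    "session_id": session.session_id,            # assumed attribute name
    "messages": first_completion.message_history,
}

# ... later, e.g. when the user sends their next message ...
restored_chat_session = fpclient.restore_chat_session(
    project_id="c07...",
    session_id=saved["session_id"],
    messages=saved["messages"],
)
```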
Continue Chat - continueChat()
Request
const response2 = await chatSession.continueChat({newMessages: [{'role': 'user', 'content': 'content'}]});
response2 = session.continue_chat(first_completion.message_history,
new_messages=[{'role': 'user', 'content': 'content'}])
ChatCompletionResponse response = chatStart.getSession().continueChat(
new ChatMessage("user", "content"), llmParameters
);
Message History
The message history is the list of messages that have been sent to the LLM so far. It provides the context for the next completion.
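
For example, threading the history through successive turns of a chat might look like the Python sketch below. It assumes each chat response exposes a message_history attribute, as the start_chat example above shows for the first completion.

```python
# Hedged sketch: each turn passes the accumulated message history back in,
# so the LLM sees the full conversation as context for the next completion.
session, first_completion = fpclient.start_chat(
    project_id="c07...",
    template_name="templ...",
    variables={"variable1": "value1"},
)

second = session.continue_chat(
    first_completion.message_history,
    new_messages=[{"role": "user", "content": "First follow-up question"}],
)

# Assumes continue_chat responses also carry an updated message_history.
third = session.continue_chat(
    second.message_history,
    new_messages=[{"role": "user", "content": "Second follow-up question"}],
)
```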
Environments
Environments are a way to manage multiple versions of your Prompts. For example, you may have a development environment and a production environment. In the Freeplay Dashboard, you can create multiple versions of your Prompt and deploy them to different environments.
const completion = await fpclient.getCompletion({
tag: "development"
// ... other params
})
completion = fpclient.get_completion(tag="development",
# ... other params
)
CompletionResponse completion = fpClient.getCompletion(
projectID,
templateName,
inputVariables,
environment
);
latest Environment Behavior
If you do not specify an environment in your getCompletion request, the latest version of your Prompt will be used. This means that any changes you make to your Prompt in the Freeplay Dashboard will be used immediately by your code.
LLM Parameters
By default, any model and parameter settings are retrieved from the Prompt defined in the Freeplay Dashboard. You can always override these settings or define additional parameters (e.g. model, temperature) in your code. They are passed to the LLM via the llmParams field on the completion request.
LLM Parameter Override Behavior
LLM parameters are a powerful tool for testing your completions in your code. However, they always take precedence over what is defined in the Prompt in your Freeplay Dashboard: whatever you define in your code will be used regardless of what is set in the Dashboard. Because of this, we only recommend using them for testing purposes.
Model
The model parameter specifies the LLM model to use for the completion.
model: "gpt-3.5-turbo"
model="gpt-3.5-turbo"
String model = "gpt-3.5-turbo"
Temperature
The temperature parameter controls the randomness of the completions. Lower values result in more predictable completions while higher values result in more surprising completions.
temperature: 0.1
temperature=0.1
double temperature = 0.1;
Max Tokens
The max tokens parameter controls the length of the completions.
maxTokens: 256
max_tokens=256
int maxTokens = 256;
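
As a rough illustration, overriding these settings from code might look like the Python sketch below. The exact mechanism differs by SDK (the Java SDK takes an llmParameters map, and the text above refers to an llmParams field on the completion request); passing them as keyword arguments here is an assumption modeled on the functions example in the next section.

```python
# Hedged sketch: overriding model settings from code for testing purposes.
# Whatever is set here takes precedence over the Prompt's settings in the
# Freeplay Dashboard, so prefer Dashboard configuration outside of tests.
completion = fpclient.get_completion(
    project_id="c07...",
    template_name="templ...",
    variables={"variable1": "value1"},
    model="gpt-3.5-turbo",   # assumed keyword argument
    temperature=0.1,         # lower values -> more predictable output
    max_tokens=256,          # caps the length of the completion
)
```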
OpenAI Functions
Freeplay supports OpenAI API functions by passing them as an LLM Parameter. You can find more information about them in the OpenAI API documentation.
const functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
const completion = await fpclient.getCompletion({
// ... other params
functions
})
functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
completion = fpclient.get_completion(
# ... other params
functions=functions
)
// This functionality is not yet available in the Java SDK
OpenAI Function Behavior
Note that you can currently only define OpenAI Functions in your code. We plan to add the ability to define them in the Freeplay Dashboard in the future.
Streaming
Freeplay supports streaming completions via the getCompletionStream, startChatStream, and continueChatStream methods. This is useful for applications that require low-latency responses.
const completionStream = await fpclient.getCompletionStream({
//...
})
console.log("Chat response: ");
for await (const chunk of completionStream) {
if (chunk.choices[0].role) {
process.stdout.write(`${chunk.choices[0].role} message: `);
}
process.stdout.write(chunk.content);
}
console.log('\n')
completion_stream = fpclient.get_completion_stream(
# ...
)
print("Chat response: ")
for chunk in completion_stream:
print("Chunk: %s" % chunk.text.strip())
Stream<CompletionResponse> completion = session.getCompletionStream(
// ...
);
completion.forEach((CompletionResponse chunk) ->
System.out.printf("Completion text: '%s'%n", chunk.getContent())
);
Exceptions
Category | Retry-able? | Examples | Exception or Error Name |
---|---|---|---|
Client code configuration errors | No | Prompt template not found, cannot format template, LLM Provider not configured | FreeplayConfigurationException |
Client error calling Freeplay | No | No API key | FreeplayClientException |
Server error calling Freeplay | Yes | 500, 501, etc on any call to Freeplay service | FreeplayServerException |
Client error calling LLM | No | Model parameter not supplied, any 4xx we get back from the LLM | LLMClientException |
Server error calling LLM | Yes | No choices back from the LLM, invalid event definitions in a streaming response, any 5xx level error | LLMServerException |
Note that the Exception/Error name will vary across the SDKs to follow the conventions of the language. The Java SDK will use *Exception, while the Python and Node SDKs will use *Error.
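
As a sketch of how you might act on the "Retry-able?" column above, the Python example below retries only the server-side error categories. The error class names follow the *Error convention noted above, but the import path and module layout are assumptions, so check your SDK version.

```python
import time

# Hedged sketch: retry only the categories marked retry-able in the table.
# The import path (freeplay.errors) is an assumption.
from freeplay.errors import FreeplayServerError, LLMServerError


def get_completion_with_retry(fpclient, max_attempts=3, **completion_kwargs):
    for attempt in range(1, max_attempts + 1):
        try:
            return fpclient.get_completion(**completion_kwargs)
        except (FreeplayServerError, LLMServerError):
            # Server-side failures are retry-able; back off and try again.
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)
    # Configuration and client errors (FreeplayConfigurationError,
    # FreeplayClientError, LLMClientError) are not retry-able and are left
    # to propagate to the caller.
```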
Do Not Record Functionality
For scenarios where you do not wish to record session data in the Sessions table, the Freeplay SDK provides a DO_NOT_RECORD_PROCESSOR option. This can be useful for privacy reasons, or when you are handling sensitive information that should not be stored.
Using the Do Not Record Processor
When initializing your Freeplay client, you can specify the recordProcessor configuration as freeplay.DO_NOT_RECORD_PROCESSOR. Here's how you can set it up:
import * as freeplay from "freeplay";
// Initialize Freeplay client with Do Not Record Processor
const fpclientNoRecord = new freeplay.Freeplay({
// Other params ...
recordProcessor: freeplay.DO_NOT_RECORD_PROCESSOR
});
// Example request with Do Not Record functionality
const responseNoRecord = await fpclientNoRecord.getCompletion({
// Normal params ...
})
console.log(responseNoRecord)
from freeplay.record import no_op_recorder
freeplay_client_no_record = Freeplay(
# Other params ...
record_processor=no_op_recorder
)
chat_completion = freeplay_client_no_record.get_completion(
project_id=freeplay_project_id,
template_name="template_name",
variables={"variable_name": "variable_value"}
)
import static ai.freeplay.client.RecordProcessor.DO_NOT_RECORD_PROCESSOR;
Freeplay fpClient = new Freeplay(
freeplayApiKey,
baseUrl,
new ProviderConfigs(new OpenAIProviderConfig(openaiApiKey)),
DO_NOT_RECORD_PROCESSOR
);
Map<String, Object> llmParameters = Collections.emptyMap();
CompletionSession session = fpClient.createSession(projectId, "prod");
CompletionResponse response = session.getCompletion(
"template_name",
Map.of("variable_name", "variable_value"),
llmParameters
);
System.out.printf("Completion text: %s%n", response.getContent());
Recording Customer Feedback
Freeplay supports logging both explicit feedback (thumbs up/down, comments, etc.) and implicit feedback (client events like “draft_dismissed”), which you can incorporate into analysis and prompt optimization workflows. You can pass booleans, strings, or integers through our API or using our SDKs.
Using the recordCompletionFeedback Method
After initializing your Freeplay client and obtaining a response from a completion session, you can record feedback for that specific completion. Here's how to set it up:
import static ai.freeplay.client.CompletionFeedback.POSITIVE_FEEDBACK;
import static ai.freeplay.client.CompletionFeedback.NEGATIVE_FEEDBACK;
// Obtain a completion response
CompletionResponse response = session.getCompletion(
"template_name",
Map.of("variable_name", "variable_value"),
Collections.emptyMap()
);
// Record feedback for the completion
fpClient.recordCompletionFeedback(
response.getCompletionId(),
Map.of("feedback", POSITIVE_FEEDBACK,
"other feedback", NEGATIVE_FEEDBACK,
"draft_dismissed", true,
"rating", 3.14159
)
);
import * as freeplay from "freeplay";
// create client
const fpclient = new freeplay.Freeplay({ /* ... */ });
// run a completion
const completion = await fpclient.getCompletion({
  projectId: "project_id",
  templateName: "template_name",
  variables: {
    variable_name: "variable_value"
  }
})
// record the completion feedback
fpclient.recordCompletionFeedback(
completion.completionId,
{
customer_feedback: "freeplay-positive-feedback",
draft_dismissed: false,
rating: 3.5
}
)
# This feature is not yet implemented in the Python SDK
In this example:
- The recordCompletionFeedback method is used after obtaining a CompletionResponse.
- The method takes two parameters:
  - The ID of the completion (response.getCompletionId()).
  - A map containing the feedback details (e.g., Map.of("some_feedback", "some_value")).
- Note that there is a special case for the string values "freeplay-positive-feedback" and "freeplay-negative-feedback". For any field these values are passed to, Freeplay will render a thumbs 👍/👎 in the Freeplay dashboard. For convenience, these constants are available as statically defined strings on the CompletionFeedback class in the Java SDK: CompletionFeedback.POSITIVE_FEEDBACK and CompletionFeedback.NEGATIVE_FEEDBACK.
- This approach allows you to record specific feedback on individual completions, which is essential for tracking and improving the system's performance based on user inputs.
Custom Metadata
The Freeplay SDK offers the capability to log custom metadata for each session. This feature allows you to add additional contextual information like customer IDs, code versions, embedding chunk sizes, and more. This metadata is logged at the session level and is treated separately from customer feedback. In the Freeplay app, it is displayed alongside other session metadata such as cost, token count, and latency.
Using the Custom Metadata Feature
To use this feature, include a metadata field when making a completion or chat request. This field should contain a map of key-value pairs representing the metadata you wish to log.
const completion = await fpclient.getCompletion({
// ...
metadata: {
customer_id: '123-456',
gitSHA: 'd5afe656acfedad35ef75eb55c8a1b853fcd1cd2',
some_number: 123,
},
});
completion = fpclient.get_completion(
# ...
metadata={
"customer_id": 123456,
"gitSHA": "d5afe656acfedad35ef75eb55c8a1b853fcd1cd2",
}
)
CompletionResponse completion = fpClient.getCompletion(
projectId,
templateName,
inputVariables,
modelParameters,
environment,
ChatFlavor.DEFAULT,
ChatPromptProcessor.DEFAULT,
Map.of("customer_id", 123)
);