Rystem.OpenAi
.NET wrapper for OpenAI with dependency injection integration, factory integration (you may inject more than one endpoint), and Azure integration (you may swap between the OpenAI endpoint and any Azure endpoint quickly and easily). You can calculate tokens and cost for each request (before sending it) and for each response.
Unofficial Fluent C#/.NET SDK for accessing the OpenAI API (Easy swap among OpenAi and Azure OpenAi)
A simple C# .NET wrapper library to use with OpenAI's API.
Contribute: https://www.buymeacoffee.com/keyserdsoze
Contribute: https://patreon.com/Rystem
This library targets .NET 9 or above.
Check out my Rystem framework to build .NET web apps faster (easy integration with the repository pattern or CQRS for your Azure services).
Install the package Rystem.OpenAi from NuGet. Here's how via the command line:
Install-Package Rystem.OpenAi
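Or, equivalently, with the .NET CLI:
dotnet add package Rystem.OpenAi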
- Unofficial Fluent C#/.NET SDK for accessing the OpenAI API (Easy swap among OpenAi and Azure OpenAi)
- Last update with Cost and Tokens calculation
- Help the project
- Stars
- Requirements
- Setup
- Documentation
- Startup Setup
- Dependency Injection
- Dependency Injection With Azure
- Add to service collection the OpenAi service in your DI with Azure integration
- Add to service collection the OpenAi service in your DI with Azure integration and app registration
- Add to service collection the OpenAi service in your DI with Azure integration and system assigned managed identity
- Add to service collection the OpenAi service in your DI with Azure integration and user assigned managed identity
- Use different version
- Dependency Injection With Factory
- Without Dependency Injection
- Models
- Chat
- Images
- Embeddings
- Audio
- File
- Fine-Tunes
- Moderations
- Utilities
- Management
- How the OpenAI Assistant Works
- Assistant Features
- Assistant
- Thread
- Run
- VectorStore
- Examples
- Create a Simple Assistant
- Use the Assistant for Code Interpretation
- Create a Thread and Manage Messages
- Execute a Run and Retrieve Steps
- Stream Responses from the Assistant
- Work with VectorStore
📖 Back to summary
You may install one or more integrations at the same time with Dependency Injection. Furthermore, you don't need to use the Dependency Injection pattern: you may use a custom setup instead.
var apiKey = configuration["Azure:ApiKey"];
services.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
//add a default model for the chat client; you can configure everything in this way
//to prepare the client for the request as well as possible
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.WithModel(configuration["OpenAi2:ModelName"]!);
};
}, "custom integration name");
var openAiApi = serviceProvider.GetRequiredService<IFactory<IOpenAi>>();
var firstInstanceOfChatClient = openAiApi.Create("custom integration name").Chat;
var openAiChatApi = serviceProvider.GetRequiredService<IFactory<IOpenAiChat>>();
var anotherInstanceOfChatClient = openAiChatApi.Create("custom integration name");
When you want to use the integration with Azure:
builder.Services.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
settings.Azure.ResourceName = "AzureResourceName (Name of your deployed service on Azure)";
});
See how to create an app registration here.
var resourceName = builder.Configuration["Azure:ResourceName"];
var clientId = builder.Configuration["AzureAd:ClientId"];
var clientSecret = builder.Configuration["AzureAd:ClientSecret"];
var tenantId = builder.Configuration["AzureAd:TenantId"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
});
Add to service collection the OpenAi service in your DI with Azure integration and system assigned managed identity
See how to create a managed identity here.
System Assigned Managed Identity
var resourceName = builder.Configuration["Azure:ResourceName"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.ManagedIdentity.UseDefault = true;
});
Add to service collection the OpenAi service in your DI with Azure integration and user assigned managed identity
See how to create a managed identity here.
User Assigned Managed Identity
var resourceName = builder.Configuration["Azure:ResourceName"];
var managedIdentityId = builder.Configuration["ManagedIdentity:ClientId"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.ManagedIdentity.Id = managedIdentityId;
});
📖 Back to summary
You may set a different API version for each endpoint.
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
//default version for all endpoints
settings.Version = "2024-08-01-preview";
//different version for chat endpoint
settings
.UseVersionForChat("2023-03-15-preview");
});
In this example we add a different version only for chat; all the other endpoints will use the default version.
📖 Back to summary
You may install more than one OpenAi integration, using the name parameter in configuration.
In the next example we have two different configurations: one with OpenAi and the default name, and one with Azure OpenAi and the name "Azure".
var apiKey = context.Configuration["OpenAi:ApiKey"];
services
.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
});
var azureApiKey = context.Configuration["Azure:ApiKey"];
var resourceName = context.Configuration["Azure:ResourceName"];
var clientId = context.Configuration["AzureAd:ClientId"];
var clientSecret = context.Configuration["AzureAd:ClientSecret"];
var tenantId = context.Configuration["AzureAd:TenantId"];
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
settings
.UseVersionForChat("2023-03-15-preview");
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
}, "Azure");
You can retrieve the integration with the IFactory<> interface (from Rystem) and the name of the integration.
private readonly IFactory<IOpenAi> _openAiFactory;
public CompletionEndpointTests(IFactory<IOpenAi> openAiFactory)
{
_openAiFactory = openAiFactory;
}
public async ValueTask DoSomethingWithDefaultIntegrationAsync()
{
var openAiApi = _openAiFactory.Create();
openAiApi.Chat.........
}
public async ValueTask DoSomethingWithAzureIntegrationAsync()
{
var openAiApi = _openAiFactory.Create("Azure");
openAiApi.Chat.........
}
or get the more specific service
private readonly IFactory<IOpenAiChat> _chatFactory;
public Constructor(IFactory<IOpenAiChat> chatFactory)
{
_chatFactory = chatFactory;
}
public async ValueTask DoSomethingWithAzureIntegrationAsync()
{
var chat = _chatFactory.Create(name);
chat.ExecuteRequestAsync(....);
}
📖 Back to summary
You may configure your integration in a static constructor or during startup, without the dependency injection pattern.
OpenAiServiceLocator.Configuration.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
}, "NoDI");
and you can use it through the same static class OpenAiServiceLocator and its static Create method:
var openAiApi = OpenAiServiceLocator.Instance.Create(name);
openAiApi.Embedding......
or get the more specific service
var openAiEmbeddingApi = OpenAiServiceLocator.Instance.CreateEmbedding(name);
openAiEmbeddingApi.Request(....);
📖 Back to summary
List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
You may find more details here, and samples from unit tests here.
Lists the currently available models, and provides basic information about each one such as the owner and availability.
var openAiApi = _openAiFactory.Create(name);
var results = await openAiApi.Model.ListAsync();
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
var openAiApi = _openAiFactory.Create(name);
var result = await openAiApi.Model.RetrieveAsync("insert here the model name you need to retrieve");
Delete a fine-tuned model. You must have the Owner role in your organization.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.Model
.DeleteAsync(fineTuneModelId);
📖 Back to summary
Given a chat conversation, the model will return a chat completion response.
You may find more details here, and samples from unit tests here.
The IOpenAiChat interface provides a robust framework for interacting with OpenAI chat models. This documentation includes method details and usage explanations, followed by 20 distinct examples that demonstrate real-world applications.
- ExecuteAsync: Executes the configured request and retrieves the result in a single response. Best for one-off requests where the response can be processed at once.
- ExecuteAsStreamAsync: Streams the results progressively, enabling real-time processing. Ideal for scenarios where partial results need to be displayed or acted upon immediately.
- AddMessage(ChatMessageRequest message): Adds a message with detailed configuration (Role, Content).
- AddMessages(params ChatMessageRequest[] messages): Adds multiple messages at once.
- AddMessage(string content, ChatRole role = ChatRole.User): A simplified method to add a single message.
- AddUserMessage(string content): Adds a user-specific message.
- AddSystemMessage(string content): Adds a system-specific message for setting context.
- AddAssistantMessage(string content): Adds an assistant-specific message.
- GetCurrentMessages(): Retrieves all messages added to the current request.
- AddContent(ChatRole role = ChatRole.User): Adds content dynamically with a builder.
- AddUserContent(), AddSystemContent(), AddAssistantContent(): Builders for specific message roles.
- WithTemperature(double value): Adjusts randomness (range: 0 to 2).
- WithNucleusSampling(double value): Enables nucleus sampling (range: 0 to 1).
- WithPresencePenalty(double value): Penalizes repeated tokens (range: -2 to 2).
- WithFrequencyPenalty(double value): Penalizes frequent tokens (range: -2 to 2).
- SetMaxTokens(int value): Sets the maximum number of tokens for the response.
- WithNumberOfChoicesPerPrompt(int value): Sets how many response options to generate.
- WithStopSequence(params string[] values): Adds one or more stop sequences.
- AddStopSequence(string value): Adds a single stop sequence.
- WithBias(string key, int value), WithBias(Dictionary<string, int> bias): Adjusts the likelihood of specific tokens appearing.
- WithUser(string user): Adds a unique user identifier for tracking.
- WithSeed(int? seed): Sets a seed for deterministic responses.
- ForceResponseFormat(FunctionTool function), ForceResponseFormat(MethodInfo function): Forces responses to follow specific function-based formats.
- ForceResponseAsJsonFormat(), ForceResponseAsText(): Ensures responses are structured as JSON or plain text.
- AvoidCallingTools(), ForceCallTools(), CanCallTools(): Configures tool-calling behavior.
- ClearTools(), ForceCallFunction(string name): Manages specific tools and their calls.
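A minimal sketch combining several of the builder calls listed above in one request (the model choice and parameter values are arbitrary examples):
var chat = openAiApi.Chat
.AddSystemMessage("You are a concise assistant.")
.AddUserMessage("Summarize the plot of Hamlet.")
.WithModel(ChatModelName.Gpt4_o)
.WithTemperature(0.3)
.SetMaxTokens(200)
.WithSeed(42)
.WithUser("user-1234");
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);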
Description: A simple user message and response.
var chat = openAiApi.Chat
.AddUserMessage("Hello, how are you?")
.WithModel(ChatModelName.Gpt4_o);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Streaming a response progressively.
await foreach (var chunk in openAiApi.Chat
.AddUserMessage("Tell me a story.")
.WithModel(ChatModelName.Gpt4_o)
.ExecuteAsStreamAsync())
{
Console.Write(chunk.Choices?.FirstOrDefault()?.Delta?.Content);
}
Description: Adjusting response randomness.
var chat = openAiApi.Chat
.AddUserMessage("What is your opinion on AI?")
.WithTemperature(0.9);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Sending multiple messages to set context.
var chat = openAiApi.Chat
.AddSystemMessage("You are a helpful assistant.")
.AddUserMessage("Who won the soccer match yesterday?")
.AddUserMessage("What are the latest updates?");
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Limiting the response with stop sequences.
var chat = openAiApi.Chat
.AddUserMessage("Explain the theory of relativity.")
.WithStopSequence("end");
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Using functions for structured responses.
var functionTool = new FunctionTool
{
Name = "calculate_sum",
Description = "Adds two numbers",
Parameters = new FunctionToolMainProperty()
.AddPrimitive("number1", new FunctionToolPrimitiveProperty { Type = "integer" })
.AddPrimitive("number2", new FunctionToolPrimitiveProperty { Type = "integer" })
.AddRequired("number1")
.AddRequired("number2")
};
var chat = openAiApi.Chat
.AddUserMessage("Calculate the sum of 5 and 10.")
.AddFunctionTool(functionTool);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Streaming with an enforced stop condition.
await foreach (var chunk in openAiApi.Chat
.AddUserMessage("Describe the universe.")
.WithStopSequence("stop")
.ExecuteAsStreamAsync())
{
Console.Write(chunk.Choices?.FirstOrDefault()?.Delta?.Content);
}
Description: Encouraging diverse topics in the response.
var chat = openAiApi.Chat
.AddUserMessage("Tell me something new.")
.WithPresencePenalty(1.5);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Reducing repetitive phrases.
var chat = openAiApi.Chat
.AddUserMessage("What is recursion?")
.WithFrequencyPenalty(1.5);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Forcing the response to be in JSON format.
var chat = openAiApi.Chat
.AddUserMessage("Summarize the book '1984'.")
.ForceResponseAsJsonFormat();
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
You can use several JsonProperty-style attributes, such as:
- JsonPropertyName: the name of the property
- JsonPropertyDescription: a description of what the property is
- JsonRequired: marks the property as required for OpenAi
- JsonPropertyAllowedValues: restricts the property to a set of possible values
- JsonPropertyRange: constrains the property to a range of values
- JsonPropertyMaximum: sets a maximum value for the property
- JsonPropertyMinimum: sets a minimum value for the property
- JsonPropertyMultipleOf: requires the property to be a multiple of a value
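For illustration only, here is a sketch of a response model decorated with these attributes; the exact attribute constructors are assumptions, so verify them against the library's definitions before use:
public sealed class WeatherAnswer
{
//JsonPropertyName comes from System.Text.Json; the other attributes are from this library
[JsonPropertyName("city")]
[JsonPropertyDescription("Name of the city the answer refers to.")]
[JsonRequired]
public string? City { get; set; }
//assumed numeric bounds for a Celsius temperature
[JsonPropertyName("temperature")]
[JsonPropertyMinimum(-90)]
[JsonPropertyMaximum(60)]
public double Temperature { get; set; }
//assumed allowed-values constraint
[JsonPropertyName("unit")]
[JsonPropertyAllowedValues("celsius", "fahrenheit")]
public string? Unit { get; set; }
}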
After the configuration you can use this function framework in this way:
var openAiApi = _openAiFactory.Create(name);
var response = await openAiApi.Chat
.RequestWithUserMessage("What is the weather like in Boston?")
.WithModel(ChatModelType.Gpt35Turbo_Snapshot)
.WithFunction(WeatherFunction.NameLabel)
.ExecuteAndCalculateCostAsync(true);
var content = response.Result.Choices[0].Message.Content;
You may find the PlayFramework here
📖 Back to summary
Given a prompt and/or an input image, the model will generate a new image.
You may find more details here, and samples from unit tests here.
The IOpenAiImage interface provides functionality for generating, editing, and varying images using OpenAI's image models. This document covers each method with explanations and includes 20 distinct examples demonstrating their usage.
- GenerateAsync: Generates an image based on a textual prompt. Use it when a visual representation of an idea is required. Returns an ImageResult containing the generated image's details.
- GenerateAsBase64Async: Generates an image and returns it as a Base64 string. Ideal for embedding images directly into web or mobile applications without saving files. Returns an ImageResultForBase64 with the image encoded as Base64.
- EditAsync(string prompt, Stream file, string fileName = "image", CancellationToken cancellationToken = default): Edits an image using a text prompt and an image file. Modify existing images based on creative or functional requirements. Returns an ImageResult with the edited image's details.
- EditAsBase64Async(string prompt, Stream file, string fileName = "image", CancellationToken cancellationToken = default): Edits an image and returns it as a Base64 string. Enables editing workflows with direct Base64 output for web integration. Returns an ImageResultForBase64.
- VariateAsync: Creates variations of an existing image, generating alternate versions for creative exploration. Returns an ImageResult.
- VariateAsBase64Async(Stream file, string fileName = "image", CancellationToken cancellationToken = default): Creates variations of an image and returns them as Base64 strings. Useful for embedding variations in platforms that consume Base64 directly. Returns an ImageResultForBase64.
- Mask support: Adds a mask to guide image editing, defining specific areas of an image to be edited or preserved.
- WithNumberOfResults: Sets the number of images to generate (1 to 10), controlling how many images are returned in a single operation.
- WithSize: Specifies the size of generated images (e.g., 256x256, 512x512, 1024x1024). Select the resolution based on the intended use case.
- Quality setting: Sets the quality of generated images; choose between standard and high-quality outputs based on performance needs.
- Style setting: Specifies the artistic style of generated images, for specific aesthetic or thematic results.
- User identifier: Sets a unique identifier for tracking and abuse prevention, helping monitor usage and identify specific user requests.
Creates an image given a prompt.
var openAiApi = _openAiFactory.Create(name)!;
var response = await openAiApi.Image
.WithSize(ImageSize.Large)
.GenerateAsync("Create a captive logo with ice and fire, and thunder with the word Rystem. With a desolated futuristic landscape.");
var uri = response.Data?.FirstOrDefault();
Download the image directly and save it as a stream:
var openAiApi = _openAiFactory.Create(name)!;
var response = await openAiApi.Image
.WithSize(ImageSize.Large)
.GenerateAsBase64Async("Create a captive logo with ice and fire, and thunder with the word Rystem. With a desolated futuristic landscape.");
var image = response.Data?.FirstOrDefault();
var imageAsStream = image.ConvertToStream();
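You can then persist the stream to disk, a minimal sketch ("logo.png" is an arbitrary example path, and it assumes ConvertToStream returns a readable Stream):
//save the decoded image to a file
await using var output = File.Create("logo.png");
await imageAsStream.CopyToAsync(output);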
Creates an edited or extended image given an original image and a prompt.
var openAiApi = _openAiFactory.Create(name)!;
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\otter.png");
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var response = await openAiApi.Image
.WithSize(ImageSize.Small)
.WithNumberOfResults(2)
.EditAsync("A cute baby sea otter wearing a beret", editableFile, "otter.png");
var uri = response.Data?.FirstOrDefault();
Creates a variation of a given image.
var openAiApi = _openAiFactory.Create(name)!;
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\otter.png");
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var response = await openAiApi.Image
.WithSize(ImageSize.Small)
.WithNumberOfResults(1)
.VariateAsync(editableFile, "otter.png");
var uri = response.Data?.FirstOrDefault();
📖 Back to summary
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
You may find more details here, and samples from unit tests here.
The IOpenAiEmbedding interface provides methods to generate embeddings for text inputs, enabling downstream tasks such as similarity search, clustering, and machine learning model inputs. This documentation explains each method and includes 10 usage examples.
- WithInputs: Adds an array of strings to be processed for embeddings. Use it when multiple inputs are provided simultaneously.
- Clearing inputs: Removes all previously added inputs, resetting the input list; useful for reconfiguring the operation.
- AddPrompt: Adds a single input string for embedding. Use it when inputs are added incrementally or one at a time.
- User identifier: Adds a unique identifier for the user, aiding monitoring and abuse detection; helpful in multi-user applications or for logging purposes.
- WithDimensions: Sets the desired dimensionality of the output embeddings. Supported only in models where dimension configuration is allowed.
- Encoding format: Specifies the encoding format of the embeddings (e.g., Base64, Float). Define the format based on downstream processing needs.
- ExecuteAsync: Executes the embedding operation asynchronously and returns the result. Call it after configuring inputs and parameters.
Creates an embedding vector representing the input text.
var openAiApi = name == "NoDI" ? OpenAiServiceLocator.Instance.Create(name) : _openAiFactory.Create(name)!;
var results = await openAiApi.Embeddings
.WithInputs("A test text for embedding")
.ExecuteAsync();
var resultOfCosineSimilarity = _openAiUtility.CosineSimilarity(results.Data.First().Embedding!, results.Data.First().Embedding!);
Creates an embedding with custom dimensions vector representing the input text. Only supported in text-embedding-3 and later models.
var openAiApi = name == "NoDI" ? OpenAiServiceLocator.Instance.Create(name) : _openAiFactory.Create(name)!;
var results = await openAiApi.Embeddings
.AddPrompt("A test text for embedding")
.WithModel("text-embedding-3-large")
.WithDimensions(999)
.ExecuteAsync();
For searching over many vectors quickly, we recommend using a vector database. You can find examples of working with vector databases and the OpenAI API in our Cookbook on GitHub. Vector database options include:
- Pinecone, a fully managed vector database
- Weaviate, an open-source vector search engine
- Redis as a vector database
- Qdrant, a vector search engine
- Milvus, a vector database built for scalable similarity search
- Chroma, an open-source embeddings store
We recommend cosine similarity. The choice of distance function typically doesn't matter much. OpenAI embeddings are normalized to length 1, which means that:
- Cosine similarity can be computed slightly faster using just a dot product.
- Cosine similarity and Euclidean distance will result in identical rankings.
You may use the utility service in this repository to calculate the cosine-similarity distance in C#.
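As a minimal, library-independent sketch of the dot-product shortcut described above (valid because the vectors are unit length):
//for unit-length vectors, the dot product equals the cosine similarity
static double DotProduct(IReadOnlyList<float> a, IReadOnlyList<float> b)
{
var sum = 0d;
for (var i = 0; i < a.Count; i++)
sum += a[i] * b[i];
return sum;
}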
📖 Back to summary
You may find more details here, and samples from unit tests here.
The IOpenAiAudio interface provides methods to handle audio processing tasks such as transcription, translation, and customization of audio analysis. Below is a detailed breakdown of each method.
- WithFile(file, fileName): Adds an audio file as a byte array for processing (fileName defaults to "default"). Useful when the audio file is loaded into memory as bytes.
- Stream upload (async): Adds an audio file as a stream (fileName defaults to "default"). Ideal for large files streamed directly without loading them entirely into memory.
- TranscriptAsync: Transcribes the audio into the input language. Returns an AudioResult containing the transcription details; use it to extract text content from audio in its original language.
- VerboseTranscriptAsSegmentsAsync: Transcribes the audio into a verbose, segment-level representation in the input language. Returns a VerboseSegmentAudioResult with detailed transcription data; suitable for scenarios requiring additional context or metadata.
- Word-level verbose transcription: Transcribes the audio into a verbose, word-level representation in the input language. Returns a VerboseWordAudioResult with detailed transcription data.
- TranslateAsync: Translates audio content into English. Returns an AudioResult containing the translated text; use it to convert audio from any supported language to English.
- VerboseTranslateAsSegmentsAsync: Translates audio into a verbose, segment-level English representation. Returns a VerboseSegmentAudioResult with detailed translation data and metadata.
- Word-level verbose translation: Translates audio into a verbose, word-level English representation. Returns a VerboseWordAudioResult with detailed translation data.
- WithPrompt(prompt): Adds a text prompt to guide the model's transcription or translation style, or to continue a previous segment. Helps maintain consistency or tailor the model's output style.
- WithTemperature(temperature): Sets the sampling temperature (range: 0 to 1). Higher values increase randomness, while lower values make output more deterministic.
- WithLanguage(language): Specifies the input audio's language using ISO-639-1 codes. Improves transcription/translation accuracy and reduces latency.
- Transcription minutes: Sets the number of minutes allocated for transcription tasks.
- Translation minutes: Sets the number of minutes allocated for translation tasks.
Transcribes audio into the input language.
var openAiApi = _openAiFactory.Create(name)!;
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\test.mp3");
var editableFile = new MemoryStream();
readableStream.CopyTo(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithLanguage(Language.Italian)
.WithPrompt("Incidente")
.TranscriptAsync();
Example of verbose transcription in segments:
var openAiApi = _openAiFactory.Create(name)!;
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\test.mp3");
var editableFile = new MemoryStream();
readableStream.CopyTo(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithLanguage(Language.Italian)
.WithPrompt("Incidente")
.VerboseTranscriptAsSegmentsAsync();
Assert.NotNull(results);
Assert.True(results.Text?.Length > 100);
Assert.StartsWith("Incidente tra due aerei di addestramento", results.Text);
Assert.NotEmpty(results.Segments ?? []);
Translates audio into English.
var openAiApi = _openAiFactory.Create(name)!;
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\test.mp3");
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithTemperature(1)
.WithPrompt("sample")
.WithFile(editableFile.ToArray(), "default.mp3")
.TranslateAsync();
Example of verbose translation in segments:
var openAiApi = _openAiFactory.Create(name)!;
Assert.NotNull(openAiApi.Audio);
var location = Assembly.GetExecutingAssembly().Location;
location = string.Join('\\', location.Split('\\').Take(location.Split('\\').Length - 1));
using var readableStream = File.OpenRead($"{location}\\Files\\test.mp3");
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithPrompt("sample")
.VerboseTranslateAsSegmentsAsync();
Assert.NotNull(results);
Assert.True(results.Text?.Length > 100);
Assert.NotEmpty(results.Segments ?? []);
The IOpenAiSpeech interface enables text-to-speech synthesis by providing methods to generate audio in various formats, along with options for controlling voice style and playback speed. Below is a detailed description of each method.
- Mp3Async(input, cancellationToken): Converts the given text input into an MP3 audio stream. Returns a Stream containing the MP3 audio data.
- Opus output: Converts the given text input into an Opus audio stream. Returns a Stream containing the Opus audio data.
- AAC output: Converts the given text input into an AAC audio stream. Returns a Stream containing the AAC audio data.
- FLAC output: Converts the given text input into a FLAC audio stream. Returns a Stream containing the FLAC audio data.
- WAV output: Converts the given text input into a WAV audio stream. Returns a Stream containing the WAV audio data.
- PCM output: Converts the given text input into a PCM audio stream. Returns a Stream containing the PCM audio data.
In each case, input is the text to be synthesized into audio and cancellationToken is an optional token for cancelling the operation.
- WithSpeed(speed): Adjusts the speed of the generated audio. The default speed is 1.0, and the value must be between 0.25 and 4.0; an ArgumentException is thrown if the speed is out of the valid range. Returns the current IOpenAiSpeech instance for method chaining.
- WithVoice(audioVoice): Specifies the voice to use for generating audio. Supported values are alloy, echo, fable, onyx, nova, and shimmer. Returns the current instance for method chaining.
Notes:
- The IOpenAiSpeech interface allows generating audio in multiple high-quality formats suitable for various applications, such as podcasts, presentations, and accessibility tools.
- You can customize the playback speed and voice style to suit your needs.
- It supports seamless integration with asynchronous workflows via ValueTask<Stream>.
This interface provides powerful capabilities for creating dynamic audio content from text, offering flexibility in format, speed, and voice customization.
var openAiApi = _openAiFactory.Create(name)!;
var result = await openAiApi.Speech
.WithVoice(AudioVoice.Fable)
.WithSpeed(1.3d)
.Mp3Async(text);
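The returned stream can then be written to a file, a minimal sketch ("speech.mp3" is an arbitrary example path):
//save the synthesized audio to disk
await using var fileStream = File.Create("speech.mp3");
await result.CopyToAsync(fileStream);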
📖 Back to summary
Files are used to upload documents that can be used with features like Fine-tuning.
You may find more details here, and samples from unit tests here.
The IOpenAiFile interface provides functionality for managing files within the OpenAI platform. These files are typically used for tasks such as fine-tuning models or storing custom datasets. Below is a detailed explanation of each method in the interface.
- AllAsync(cancellationToken): Retrieves a list of all files that belong to the user's organization. Returns a ValueTask<FilesDataResult> containing metadata for all uploaded files; use it to get an overview of available files and their statuses. Throws HttpRequestException if the request fails.
- RetrieveAsync(fileId, cancellationToken): Fetches metadata about a specific file by its ID. Returns a ValueTask<FileResult> containing details about the specified file, such as its size, type, and upload time.
- RetrieveFileContentAsStringAsync(fileId, cancellationToken): Retrieves the content of a specified file as a string. Returns a Task<string> with the file content; use it to read smaller text-based files, such as JSON or CSV files.
- Stream-based content retrieval (fileId, cancellationToken): Retrieves the content of a specified file as a stream. Returns a Task<Stream>; useful for reading large files or binary content incrementally.
- DeleteAsync(fileId, cancellationToken): Deletes a specified file by its ID. Returns a ValueTask<FileResult> indicating the result of the deletion; use it to remove files that are no longer needed, freeing up storage space.
- UploadFileAsync(Stream file, string fileName, string contentType = "application/json", PurposeFileUpload purpose = PurposeFileUpload.FineTune, CancellationToken cancellationToken = default): Uploads a file to the OpenAI platform for use with features such as fine-tuning. Returns a ValueTask<FileResult> containing details about the uploaded file. Note that the total size of files for an organization is limited to 1 GB by default; contact OpenAI support to increase the limit if necessary.
Notes:
- This interface is designed to handle all file-related operations within the OpenAI ecosystem, including uploading, retrieving, and deleting files.
- It supports asynchronous operations to ensure scalability and responsiveness, especially when handling large files or network delays.
- The methods provide flexible options for retrieving file content, either as strings or streams, depending on the use case.
This interface is essential for managing files effectively in projects requiring fine-tuning or custom dataset handling.
Returns a list of files that belong to the user's organization.
var openAiApi = _openAiFactory.Create(name);
var results = await openAiApi.File
.AllAsync();
Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.
var openAiApi = _openAiFactory.Create(name);
var uploadResult = await openAiApi.File
.UploadFileAsync(editableFile, name);
Delete a file.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.File
.DeleteAsync(uploadResult.Id);
Returns information about a specific file.
var openAiApi = _openAiFactory.Create(name);
var retrieve = await openAiApi.File
.RetrieveAsync(uploadResult.Id);
Returns the contents of the specified file
var openAiApi = _openAiFactory.Create(name);
var contentRetrieve = await openAiApi.File
.RetrieveFileContentAsStringAsync(uploadResult.Id);
You can upload large files by splitting them into parts. Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and the size of all files uploaded by one organization can be up to 100 GB.
- The Assistants API supports files up to 2 million tokens and of specific file types; see the Assistants Tools guide for details.
- The Fine-tuning API only supports .jsonl files, and the input has certain required formats for fine-tuning chat or completions models.
- The Batch API only supports .jsonl files up to 200 MB in size, and the input has a specific required format.
var upload = openAiApi.File
.CreateUpload(fileName)
.WithPurpose(PurposeFileUpload.FineTune)
.WithContentType("application/json")
.WithSize(editableFile.Length);
var execution = await upload.ExecuteAsync();
var partResult = await execution.AddPartAsync(editableFile);
Assert.True(partResult.Id?.Length > 7);
var completeResult = await execution.CompleteAsync();
📖 Back to summary
Manage fine-tuning jobs to tailor a model to your specific training data.
You may find more details here, and samples from unit tests here.
The IOpenAiFineTune interface provides methods to manage fine-tuning operations, allowing customization of models with specific training data. Fine-tuning is useful for tailoring models to specialized tasks or datasets. Below is a detailed breakdown of the methods provided; unless noted otherwise, the configuration methods return the current IOpenAiFineTune instance for method chaining.
- Training file: Specifies the ID of the training file (trainingFileId) to be used for fine-tuning; in the examples below this is set through Create(fileId). Set it before starting a fine-tune operation.
- Validation file: Sets the ID of a validation file (validationFileId) to evaluate fine-tuning performance.
- Hyperparameters: Configures hyperparameters for the fine-tuning operation through a delegate (hyperParametersSettings).
- Suffix: Adds a suffix (value) to the name of the fine-tuned model.
- Seed: Sets a seed value to ensure consistent, reproducible results during fine-tuning.
- Weights and Biases: Configures integration with Weights and Biases for fine-tuning tracking through a delegate (integration).
- Clear integrations: Removes all integrations associated with the fine-tuning operation.
- ExecuteAsync(cancellationToken): Starts the fine-tuning operation with the configured parameters. Returns a ValueTask<FineTuneResult> representing the result of the operation.
- ListAsync(int take = 20, int skip = 0, CancellationToken cancellationToken = default): Lists all fine-tuning jobs with pagination. Returns a ValueTask<FineTuneResults> containing a list of fine-tune jobs.
- RetrieveAsync(string fineTuneId, CancellationToken cancellationToken = default): Retrieves details of a specific fine-tune operation by its ID. Returns a ValueTask<FineTuneResult> containing the fine-tune job details.
- CancelAsync(string fineTuneId, CancellationToken cancellationToken = default): Cancels a fine-tune operation by its ID. Returns a ValueTask<FineTuneResult> indicating the cancellation status.
- CheckPointEventsAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): Lists checkpoint events for a fine-tune operation with pagination. Returns a ValueTask<FineTuneCheckPointEventsResult> containing the checkpoint events.
- ListEventsAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): Retrieves a list of events related to a fine-tune operation. Returns a ValueTask<FineTuneEventsResult> containing the event details.
- Streaming job list (take, skip, cancellationToken): Streams fine-tune job results asynchronously. Returns an IAsyncEnumerable<FineTuneResult> for processing results incrementally.
- ListEventsAsStreamAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): Streams events for a fine-tune operation asynchronously. Returns an IAsyncEnumerable<FineTuneEvent> for processing events incrementally.
Notes:
- The IOpenAiFineTune interface provides a comprehensive API for managing fine-tuning operations, from configuration to execution and result retrieval.
- Hyperparameter configuration and event tracking enable fine-grained control and monitoring of the fine-tuning process.
- Asynchronous and streaming methods allow efficient handling of large datasets and operations.
Creates a job that fine-tunes a specified model from a given dataset. Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
var openAiApi = _openAiFactory.Create(name);
var createResult = await openAiApi.FineTune
.Create(fileId)
.ExecuteAsync();
List your organization's fine-tuning jobs
var openAiApi = _openAiFactory.Create(name);
var allFineTunes = await openAiApi.FineTune
.ListAsync();
Gets info about the fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var retrieveFineTune = await openAiApi.FineTune
.RetrieveAsync(fineTuneId);
Immediately cancel a fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var cancelResult = await openAiApi.FineTune
.CancelAsync(fineTuneId);
Get fine-grained status updates for a fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var events = await openAiApi.FineTune
.ListEventsAsync(fineTuneId);
Stream fine-grained status updates for a fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var events = await openAiApi.FineTune
.ListEventsAsStreamAsync(fineTuneId);
Delete a fine-tuned model. You must have the Owner role in your organization.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.FineTune
.DeleteAsync(fineTuneModelId);
📖 Back to summary
Given an input text, the model outputs whether it classifies the text as violating OpenAI's content policy.
You may find more details here, and samples from unit tests here.
The IOpenAiModeration interface provides functionality for evaluating text against OpenAI's Content Policy, determining whether the input violates any predefined guidelines. This interface is particularly useful for applications requiring automated content moderation to ensure safety and compliance.
- ExecuteAsync(input, cancellationToken): Evaluates the provided text (input) to determine if it violates OpenAI's Content Policy; cancellationToken is an optional token for cancelling the operation. Returns a ValueTask<ModerationResult> containing the moderation outcome. The method classifies text for categories such as hate speech, violence, self-harm, or other potentially harmful content, and is useful for integrating automated moderation into platforms such as user-generated content systems, messaging apps, or forums.
Notes:
- The ExecuteAsync method processes the input text and provides a ModerationResult object that contains detailed classification results. It is an asynchronous operation, making it suitable for applications requiring high concurrency and responsiveness.
- This interface focuses solely on moderation tasks and is designed to integrate seamlessly with other OpenAI API functionalities.
The IOpenAiModeration interface is a vital tool for developers aiming to build applications that enforce content guidelines and promote a safe user environment.
Classifies whether text violates OpenAI's Content Policy:
var openAiApi = _openAiFactory.Create(name)!;
var results = await openAiApi.Moderation
.WithModel("testModel")
.WithModel(ModerationModelName.OmniLatest)
.ExecuteAsync("I want to kill them and everyone else.");
var categories = results.Results?.FirstOrDefault()?.Categories;
📖 Back to summary
Utilities for OpenAi: you can inject the IOpenAiUtility interface anywhere you need it.
In IOpenAiUtility you can find the services described below.
📖 Back to embeddings
In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [−1,1]. For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of -1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in [0,1].
Here is an example from a unit test.
IOpenAiUtility _openAiUtility;
var resultOfCosineSimilarity = _openAiUtility.CosineSimilarity(results.Data.First().Embedding, results.Data.First().Embedding);
var resultOfEuclideanDistance = _openAiUtility.EuclideanDistance(results.Data.First().Embedding, results.Data.First().Embedding);
Assert.True(resultOfCosineSimilarity >= 1);
Without DI, you need to set up the OpenAiServiceLocator (as shown in the section without Dependency Injection), and after that you can use:
IOpenAiUtility openAiUtility = OpenAiServiceLocator.Instance.Utility();
📖 Back to summary
You can think of tokens as pieces of words, where 1,000 tokens is about 750 words. You can calculate your request tokens with the Tokenizer service in Utility.
IOpenAiUtility _openAiUtility;
var encoded = _openAiUtility.Tokenizer
.WithChatModel(ChatModelType.Gpt4)
.Encode(value);
Assert.Equal(numberOfTokens, encoded.NumberOfTokens);
var decoded = _openAiUtility.Tokenizer.Decode(encoded.EncodedTokens);
Assert.Equal(value, decoded);
📖 Back to summary
After a request completes, you can calculate its cost. Note that cost calculation works only if you added a price list during setup.
var openAiApi = _openAiFactory.Create(name)!;
var results = await openAiApi.Chat
.AddMessage(new ChatMessageRequest { Role = ChatRole.User, Content = "Hello!! How are you?" })
.WithModel(ChatModelName.Gpt4_o)
.WithTemperature(1)
.ExecuteAsync();
//calculate cost works only if you added the price during setup.
var cost = openAiApi.Chat.CalculateCost();
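Alternatively, mirroring the chat example shown earlier, you can execute the request and calculate the cost in a single call (a sketch based on the ExecuteAndCalculateCostAsync usage from the chat section):
var responseWithCost = await openAiApi.Chat
.AddUserMessage("Hello!! How are you?")
.WithModel(ChatModelName.Gpt4_o)
.ExecuteAndCalculateCostAsync(true);
//responseWithCost.Result holds the chat response, as in the chat section example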
📖 Back to summary
During setup of your OpenAi service you may add a custom price table through the settings.PriceBuilder property.
services.AddOpenAi(settings =>
{
//custom version for chat endpoint
settings
.UseVersionForChat("2024-08-01-preview");
//resource name for Azure
settings.Azure.ResourceName = resourceName;
//app registration configuration for Azure authentication
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
//map an Azure deployment to every request to the chat endpoint that uses the gpt-4 model
settings
.MapDeploymentForEveryRequests(OpenAiType.Chat, "gpt-4");
//default request configuration for the chat endpoint; this method is run when the chat service is created
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.ForceModel("gpt-4");
};
//add a price for each kind of cost for the model you want to configure; here is an example with the gpt-4 model
settings.PriceBuilder
.AddModel("gpt-4",
new OpenAiCost { Kind = KindOfCost.Input, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.0000025m },
new OpenAiCost { Kind = KindOfCost.CachedInput, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.00000125m },
new OpenAiCost { Kind = KindOfCost.Output, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.00001m });
}, "Azure");
📖 Back to summary
In your OpenAi dashboard you can see billing usage, users, taxes, and similar data. Here you have an API to retrieve that kind of data.
📖 Back to summary
You may use the management endpoint to retrieve your usage data. Here is an example of how to get the usage for the month of April.
var management = _openAiFactory.CreateManagement(integrationName);
var usages = await management
.Billing
.From(new DateTime(2023, 4, 1))
.To(new DateTime(2023, 4, 30))
.GetUsageAsync();
Assert.NotEmpty(usages.DailyCosts);
📖 Back to summary
For Azure only, you have to deploy a model before using it in your application. You can configure deployments during the startup of your application.
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
settings
.UseVersionForChat("2023-03-15-preview");
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
settings.Azure
.MapDeploymentTextModel("text-curie-001", TextModelType.CurieText)
.MapDeploymentTextModel("text-davinci-003", TextModelType.DavinciText3)
.MapDeploymentEmbeddingModel("OpenAiDemoModel", EmbeddingModelType.AdaTextEmbedding)
.MapDeploymentChatModel("gpt35turbo", ChatModelType.Gpt35Turbo0301)
.MapDeploymentCustomModel("ada001", "text-ada-001");
settings.Price
.SetFineTuneForAda(0.0004M, 0.0016M)
.SetAudioForTranslation(0.006M);
}, "Azure");
During startup you can configure other deployments on your application or on Azure.
var app = builder.Build();
await app.Services.MapDeploymentsAutomaticallyAsync(true);
or for a specific integration, or a list of integrations, that you set up previously.
await app.Services.MapDeploymentsAutomaticallyAsync(true, "Azure", "Azure2");
You can do this step without the dependency injection integration too.
MapDeploymentsAutomaticallyAsync is an extension method for IServiceProvider; passing true automatically installs on Azure the deployments you set up in your application. The remaining parameters let you choose which integrations run this automatic update; in the first example it runs for the default integration. With the Management endpoint you can programmatically configure or manage deployments on Azure.
You can create a new deployment
var createResponse = await openAiApi.Management.Deployment
.Create(deploymentId)
.WithCapacity(2)
.WithDeploymentTextModel("ada", TextModelType.AdaText)
.WithScaling(Management.DeploymentScaleType.Standard)
.ExecuteAsync();
Get a deployment by Id
var deploymentResult = await openAiApi.Management.Deployment.RetrieveAsync(createResponse.Id);
List of all deployments by status
var listResponse = await openAiApi.Management.Deployment.ListAsync();
Update a deployment
var updateResponse = await openAiApi.Management.Deployment
.Update(createResponse.Id)
.WithCapacity(1)
.WithDeploymentTextModel("ada", TextModelType.AdaText)
.WithScaling(Management.DeploymentScaleType.Standard)
.ExecuteAsync();
Delete a deployment by Id
var deleteResponse = await openAiApi.Management.Deployment
.DeleteAsync(createResponse.Id);
The OpenAI Assistant is a conversational agent powered by OpenAI's GPT models (e.g., GPT-4). Here's a breakdown of its key concepts:
- Instructions: Define the assistant's behavior (e.g., "You are a math tutor").
- Model Selection: Choose a model such as gpt-4 or gpt-3.5.
- Temperature: Adjust the randomness of the assistant's responses (e.g., 0.7 for more creative answers, 0.2 for deterministic answers).
- Code Interpreter: Enable the assistant to execute Python code to solve complex problems.
- File Search: Attach files and let the assistant search within them for context.
The assistant interacts with threads and runs:
- Thread: A conversation that persists messages for a context-aware dialogue.
- Run: An execution instance where tasks are performed and steps are tracked.
- Assistant: Create and manage AI assistants with configurable instructions, temperature, and capabilities (e.g., file search, code interpretation).
- Thread: Manage conversations by creating threads and exchanging messages with context.
- Run: Execute tasks asynchronously, allowing for step-by-step operations and status monitoring.
- VectorStore: Store and manage vectorized data or files for advanced AI integrations, such as semantic search.
Define an assistant with specific instructions and model:
var assistant = openAiApi.Assistant;
var created = await assistant
.WithTemperature(0.7)
.WithInstructions("You are a personal assistant. Respond professionally to all queries.")
.WithModel("gpt-4")
.CreateAsync();
Console.WriteLine($"Assistant created with ID: {created.Id}");
Retrieve the assistant details:
var retrievedAssistant = await assistant.RetrieveAsync(created.Id);
Console.WriteLine($"Retrieved Assistant ID: {retrievedAssistant.Id}");
Enable the assistant to write and execute Python code:
var assistant = openAiApi.Assistant;
var created = await assistant
.WithInstructions("You are a Python code interpreter. Solve math problems by running Python code.")
.WithCodeInterpreter()
.WithModel("gpt-4")
.CreateAsync();
Console.WriteLine($"Assistant created for code interpretation with ID: {created.Id}");
Start a conversation thread:
var threadClient = openAiApi.Thread;
var response = await threadClient
.WithMessage()
.AddText(Chat.ChatRole.User, "What is the capital of France?")
.CreateAsync();
Console.WriteLine($"Thread created with ID: {response.Id}");
Add more messages to the thread:
var responseMessages = await threadClient.WithId(response.Id)
.WithMessage()
.AddText(Chat.ChatRole.Assistant, "Please explain the Nexus.")
.AddMessagesAsync()
.ToListAsync();
Start a run and retrieve its status:
var runClient = openAiApi.Run;
var runResponse = await runClient
.WithThread(threadId)
.AddText(Chat.ChatRole.Assistant, "Let me calculate that for you.")
.StartAsync(assistantId);
Console.WriteLine($"Run started with ID: {runResponse.Id}");
var steps = await runClient.ListStepsAsync(runResponse.Id);
foreach (var step in steps.Data)
{
Console.WriteLine($"Step: {step.Content}");
}
Stream responses for real-time feedback:
var runClient = openAiApi.Run;
string? runResponseId = null;
var message = new StringBuilder();
await foreach (var value in runClient
.WithThread(response.Id)
.AddText(Chat.ChatRole.Assistant, "Please explain the Nexus.")
.StreamAsync(created.Id))
{
if (value.Is<RunResult>())
runResponseId = value.AsT0?.Id;
else if (value.Is<ThreadChunkMessageResponse>())
{
var content = value.AsT2?.Delta?.Content;
if (content != null)
{
if (content.Is<string>())
message.Append(content.AsT0);
else
foreach (var c in content.CastT1)
{
if (c.Text != null)
message.Append(c.Text?.Value);
}
}
}
}
Console.WriteLine($"Streamed Response: {message.ToString()}");
Upload files to a vector store for advanced integrations:
var fileApi = openAiApi.File;
var fileId = await fileApi.UploadFileAsync(fileBytes, "document.txt", "application/text");
var vectorStore = await openAiApi.VectorStore
.WithName("KnowledgeBase")
.AddFiles(new[] { fileId })
.AddMetadata("Category", "Documentation")
.CreateAsync();
Console.WriteLine($"VectorStore created with ID: {vectorStore.Id}");
Retrieve the vector store and read its metadata:
var retrievedStore = await openAiApi.VectorStore.WithId(vectorStore.Id).RetrieveAsync();
Console.WriteLine($"Metadata: {retrievedStore.Metadata["Category"]}");
Alternative AI tools for Rystem.OpenAi
Similar Open Source Tools
polyfire-js
Polyfire is an all-in-one managed backend for AI apps that allows users to build AI apps directly from the frontend, eliminating the need for a separate backend. It simplifies the process by providing most backend services in just a few lines of code. With Polyfire, users can easily create chatbots, transcribe audio files to text, generate simple text, create a long-term memory, and generate images with Dall-E. The tool also offers starter guides and tutorials to help users get started quickly and efficiently.
farfalle
Farfalle is an open-source AI-powered search engine that allows users to run their own local LLM or utilize the cloud. It provides a tech stack including Next.js for frontend, FastAPI for backend, Tavily for search API, Logfire for logging, and Redis for rate limiting. Users can get started by setting up prerequisites like Docker and Ollama, and obtaining API keys for Tavily, OpenAI, and Groq. The tool supports models like llama3, mistral, and gemma. Users can clone the repository, set environment variables, run containers using Docker Compose, and deploy the backend and frontend using services like Render and Vercel.
swift-ocr-llm-powered-pdf-to-markdown
Swift OCR is a powerful tool for extracting text from PDF files using OpenAI's GPT-4 Turbo with Vision model. It offers flexible input options, advanced OCR processing, performance optimizations, structured output, robust error handling, and scalable architecture. The tool ensures accurate text extraction, resilience against failures, and efficient handling of multiple requests.
WebAI-to-API
This project implements a web API that offers a unified interface to Google Gemini and Claude 3. It provides a self-hosted, lightweight, and scalable solution for accessing these AI models through a streaming API. The API supports both Claude and Gemini models, allowing users to interact with them in real-time. The project includes a user-friendly web UI for configuration and documentation, making it easy to get started and explore the capabilities of the API.
VITA
VITA is an open-source interactive omni multimodal Large Language Model (LLM) capable of processing video, image, text, and audio inputs simultaneously. It stands out with features like Omni Multimodal Understanding, Non-awakening Interaction, and Audio Interrupt Interaction. VITA can respond to user queries without a wake-up word, track and filter external queries in real-time, and handle various query inputs effectively. The model utilizes state tokens and a duplex scheme to enhance the multimodal interactive experience.
VLMEvalKit
VLMEvalKit is an open-source evaluation toolkit of large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of data preparation under multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide the evaluation results obtained with both exact matching and LLM-based answer extraction.
solana-agent-kit
Solana Agent Kit is an open-source toolkit designed for connecting AI agents to Solana protocols. It enables agents, regardless of the model used, to autonomously perform various Solana actions such as trading tokens, launching new tokens, lending assets, sending compressed airdrops, executing blinks, and more. The toolkit integrates core blockchain features like token operations, NFT management via Metaplex, DeFi integration, Solana blinks, AI integration features with LangChain, autonomous modes, and AI tools. It provides ready-to-use tools for blockchain operations, supports autonomous agent actions, and offers features like memory management, real-time feedback, and error handling. Solana Agent Kit facilitates tasks such as deploying tokens, creating NFT collections, swapping tokens, lending tokens, staking SOL, and sending SPL token airdrops via ZK compression. It also includes functionalities for fetching price data from Pyth and relies on key Solana and Metaplex libraries for its operations.
acte
Acte is a framework designed to build GUI-like tools for AI Agents. It aims to address the issues of cognitive load and freedom degrees when interacting with multiple APIs in complex scenarios. By providing a graphical user interface (GUI) for Agents, Acte helps reduce cognitive load and constraints interaction, similar to how humans interact with computers through GUIs. The tool offers APIs for starting new sessions, executing actions, and displaying screens, accessible via HTTP requests or the SessionManager class.
human
AI-powered 3D Face Detection & Rotation Tracking, Face Description & Recognition, Body Pose Tracking, 3D Hand & Finger Tracking, Iris Analysis, Age & Gender & Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation
obsei
Obsei is an open-source, low-code, AI powered automation tool that consists of an Observer to collect unstructured data from various sources, an Analyzer to analyze the collected data with various AI tasks, and an Informer to send analyzed data to various destinations. The tool is suitable for scheduled jobs or serverless applications as all Observers can store their state in databases. Obsei is still in alpha stage, so caution is advised when using it in production. The tool can be used for social listening, alerting/notification, automatic customer issue creation, extraction of deeper insights from feedbacks, market research, dataset creation for various AI tasks, and more based on creativity.
aio-pika
Aio-pika is a wrapper around aiormq for asyncio and humans. It provides a completely asynchronous API, object-oriented API, transparent auto-reconnects with complete state recovery, Python 3.7+ compatibility, transparent publisher confirms support, transactions support, and complete type-hints coverage.
LLMTSCS
LLMLight is a novel framework that employs Large Language Models (LLMs) as decision-making agents for Traffic Signal Control (TSC). The framework leverages the advanced generalization capabilities of LLMs to engage in a reasoning and decision-making process akin to human intuition for effective traffic control. LLMLight has been demonstrated to be remarkably effective, generalizable, and interpretable against various transportation-based and RL-based baselines on nine real-world and synthetic datasets.
cog
Cog is an open-source tool that lets you package machine learning models in a standard, production-ready container. You can deploy your packaged model to your own infrastructure, or to Replicate.
openai-edge-tts
This project provides a local, OpenAI-compatible text-to-speech (TTS) API using `edge-tts`. It emulates the OpenAI TTS endpoint (`/v1/audio/speech`), enabling users to generate speech from text with various voice options and playback speeds, just like the OpenAI API. `edge-tts` uses Microsoft Edge's online text-to-speech service, making it completely free. The project supports multiple audio formats, adjustable playback speed, and voice selection options, providing a flexible and customizable TTS solution for users.
ort
Ort is an unofficial ONNX Runtime 1.17 wrapper for Rust based on the now inactive onnxruntime-rs. ONNX Runtime accelerates ML inference on both CPU and GPU.