
LlmTornado
The .NET library to consume 100+ APIs: OpenAI, Anthropic, Google, DeepSeek, Cohere, Mistral, Azure, xAI, Perplexity, Groq, Ollama, LocalAi, and many more!
Stars: 90

LLM Tornado is a .NET library designed to simplify the consumption of various large language models (LLMs) from providers like OpenAI, Anthropic, Cohere, Google, Azure, Groq, and self-hosted APIs. It acts as an aggregator, allowing users to easily switch between different LLM providers with just a change in argument. Users can perform tasks such as chatting with documents, voice calling with AI, orchestrating assistants, generating images, and more. The library exposes capabilities through vendor extensions, making it easy to integrate and use multiple LLM providers simultaneously.
README:
🌪️ LLM Tornado - The .NET library to consume 100+ APIs, including OpenAI, Anthropic, Google, DeepSeek, Cohere, Mistral, Azure, xAI, Perplexity, Groq, and self-hosted APIs.
At least one new large language model is released each month. Wouldn't it be awesome if using the latest, shiny model was as easy as switching one argument? LLM Tornado is a framework for building AI- and RAG/agent-enabled applications that lets you do just that.
Features:
- 100+ supported providers: OpenAI, Anthropic, Google, DeepSeek, Cohere, Mistral, Azure, xAI, Perplexity, Groq, and any (self-hosted) OpenAI-compatible inference servers, such as Ollama. Check the full Feature Matrix here.
- API harmonization across providers. For example, suppose a request accidentally passes temperature to a reasoning model, where such an argument is not supported. We take care of that, to maximize the probability of the call succeeding (see the sketch after this list). This applies to various whims of the providers, such as developer_message vs system_prompt (in Tornado there is just a System role for messages), Google having completely different endpoints for embedding multiple texts at once, and many other annoyances.
- Powerful, strongly typed Vendor Extensions for each provider offering something unique. Enables rich, resilient applications while minimizing vendor lock-in.
- Easy-to-grasp primitives for building agentic systems, chatbots, and RAG-based applications. Less complex than Semantic Kernel, and more powerful than the raw APIs. Easy to extend.
- Focused on minimizing breaking changes. We take these seriously and think ahead. Updating Tornado typically requires no action on your side, even when a new major version is released.
- Actively maintained for over two years, often with day 1 support for new features.
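For instance, here is a minimal sketch of the harmonization described above, given a TornadoApi instance api constructed as shown later in this README; the Temperature property on ChatRequest is assumed to follow the usual chat-API convention:

// Temperature is not supported by reasoning models such as o3-mini; per the
// harmonization feature above, Tornado strips it instead of failing the call
Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.O3.Mini,
    Temperature = 0.7 // silently dropped for models that reject it
});

chat.AppendUserInput("What will my future bring?");
Console.WriteLine(await chat.GetResponse());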
⭐ Awesome things you can do with Tornado:
- Chat with your documents
- Voice call with AI using your microphone
- Orchestrate Assistants
- Generate images
- Summarize a video (local file / YouTube)
- Turn text & images into high quality embeddings
- Transcribe audio in real time
- Create Chatbots utilizing multiple Agents:
https://github.com/lofcz/LlmTornado/assets/10260230/05c27b37-397d-4b4c-96a4-4138ade48dbe
... and a lot more! Now, instead of relying on one LLM provider, you can combine the unique strengths of many.
Install LLM Tornado via NuGet:
dotnet add package LlmTornado
Optional: extra features and quality-of-life extension methods are distributed in the Contrib addon:
dotnet add package LlmTornado.Contrib
Inferencing across multiple providers is as easy as changing the ChatModel argument. A Tornado instance can be constructed with multiple API keys; the correct key is then used automatically, based on the model:
TornadoApi api = new TornadoApi([
    new (LLmProviders.OpenAi, "OPEN_AI_KEY"),
    new (LLmProviders.Anthropic, "ANTHROPIC_KEY"),
    new (LLmProviders.Cohere, "COHERE_KEY"),
    new (LLmProviders.Google, "GOOGLE_KEY"),
    new (LLmProviders.Groq, "GROQ_KEY"),
    new (LLmProviders.DeepSeek, "DEEP_SEEK_KEY"),
    new (LLmProviders.Mistral, "MISTRAL_KEY"),
    new (LLmProviders.XAi, "XAI_KEY"),
    new (LLmProviders.Perplexity, "PERPLEXITY_KEY")
]);
List<ChatModel> models = [
    ChatModel.OpenAi.O3.Mini, ChatModel.Anthropic.Claude37.Sonnet,
    ChatModel.Cohere.Command.RPlus, ChatModel.Google.Gemini.Gemini2Flash001,
    ChatModel.Groq.Meta.Llama370B, ChatModel.DeepSeek.Models.Chat,
    ChatModel.Mistral.Premier.MistralLarge, ChatModel.XAi.Grok.Grok2241212,
    ChatModel.Perplexity.Sonar.Default
];

foreach (ChatModel model in models)
{
    string? response = await api.Chat.CreateConversation(model)
        .AppendSystemMessage("You are a fortune teller.")
        .AppendUserInput("What will my future bring?")
        .GetResponse();

    Console.WriteLine(response);
}
💡 Instead of passing in a strongly typed model, you can pass a string: await api.Chat.CreateConversation("gpt-4o"); Tornado will automatically resolve the provider.
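A minimal sketch of the string form, reusing the fluent chain from the examples above (the model name is illustrative):

// the provider is inferred from the model name; no LLmProviders argument needed
string? answer = await api.Chat.CreateConversation("gpt-4o")
    .AppendUserInput("Say hi in one word.")
    .GetResponse();

Console.WriteLine(answer);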
Tornado has a powerful concept of VendorExtensions, which can be applied to various endpoints and are strongly typed. Many providers offer unique/niche APIs, often enabling use cases otherwise unavailable. For example, let's set a reasoning budget for Anthropic's Claude 3.7:
public static async Task AnthropicSonnet37Thinking()
{
    Conversation chat = Program.Connect(LLmProviders.Anthropic).Chat.CreateConversation(new ChatRequest
    {
        Model = ChatModel.Anthropic.Claude37.Sonnet,
        VendorExtensions = new ChatRequestVendorExtensions(new ChatRequestVendorAnthropicExtensions
        {
            Thinking = new AnthropicThinkingSettings
            {
                BudgetTokens = 2_000,
                Enabled = true
            }
        })
    });

    chat.AppendUserInput("Explain how to solve differential equations.");

    ChatRichResponse blocks = await chat.GetResponseRich();

    if (blocks.Blocks is not null)
    {
        // reasoning blocks are printed dimmed, message blocks normally
        foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Reasoning))
        {
            Console.ForegroundColor = ConsoleColor.DarkGray;
            Console.WriteLine(reasoning.Reasoning?.Content);
            Console.ResetColor();
        }

        foreach (ChatRichResponseBlock message in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Message))
        {
            Console.WriteLine(message.Message);
        }
    }
}
Instead of consuming commercial APIs, you can easily roll your own inference server with one of the myriad tools available. Here is a simple demo of streaming a response with Ollama, but the same approach can be used for any custom provider:
public static async Task OllamaStreaming()
{
    TornadoApi api = new TornadoApi(new Uri("http://localhost:11434")); // default Ollama port

    await api.Chat.CreateConversation(new ChatModel("falcon3:1b")) // <-- replace with your model
        .AppendUserInput("Why is the sky blue?")
        .StreamResponse(Console.Write);
}
https://github.com/user-attachments/assets/de62f0fe-93e0-448c-81d0-8ab7447ad780
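The same Uri-based constructor can target any other OpenAI-compatible server. A minimal sketch, where the port and model name are placeholders for your own setup (e.g. vLLM or LM Studio):

// any OpenAI-compatible inference server can be targeted via its base address
TornadoApi localApi = new TornadoApi(new Uri("http://localhost:1234")); // your server's port

await localApi.Chat.CreateConversation(new ChatModel("my-local-model")) // any model your server exposes
    .AppendUserInput("Ping?")
    .StreamResponse(Console.Write);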
Tornado offers several levels of abstraction: the higher levels hide detail, while the lower levels expose more control at the cost of complexity. Simple use cases where only plaintext is needed can be expressed in a terse format:
await api.Chat.CreateConversation(ChatModel.Anthropic.Claude3.Sonnet)
.AppendSystemMessage("You are a fortune teller.")
.AppendUserInput("What will my future bring?")
.StreamResponse(Console.Write);
When plaintext is insufficient, switch to the StreamResponseRich or GetResponseRich() APIs. Tools requested by the model can be resolved later and never returned to the model. This is useful in scenarios where we use the tools without intending to continue the conversation:
// Ask the model to generate two images, and stream the result:
public static async Task GoogleStreamImages()
{
    Conversation chat = api.Chat.CreateConversation(new ChatRequest
    {
        Model = ChatModel.Google.GeminiExperimental.Gemini2FlashImageGeneration,
        Modalities = [ ChatModelModalities.Text, ChatModelModalities.Image ]
    });

    chat.AppendUserInput([
        new ChatMessagePart("Generate two images: a lion and a squirrel")
    ]);

    await chat.StreamResponseRich(new ChatStreamEventHandler
    {
        MessagePartHandler = async (part) =>
        {
            if (part.Text is not null)
            {
                Console.Write(part.Text);
                return;
            }

            if (part.Image is not null)
            {
                // in our tests this executes Chafa to turn the raw base64 data into Sixels
                await DisplayImage(part.Image.Url);
            }
        },
        BlockFinishedHandler = (block) =>
        {
            Console.WriteLine();
            return ValueTask.CompletedTask;
        },
        OnUsageReceived = (usage) =>
        {
            Console.WriteLine();
            Console.WriteLine(usage);
            return ValueTask.CompletedTask;
        }
    });
}
Tools requested by the model can be resolved and the results returned immediately. This has the benefit of automatically continuing the conversation:
Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.O,
    Tools =
    [
        new Tool(new ToolFunction("get_weather", "gets the current weather", new
        {
            type = "object",
            properties = new
            {
                location = new
                {
                    type = "string",
                    description = "The location for which the weather information is required."
                }
            },
            required = new List<string> { "location" }
        }))
    ]
})
.AppendSystemMessage("You are a helpful assistant")
.AppendUserInput("What is the weather like today in Prague?");

ChatStreamEventHandler handler = new ChatStreamEventHandler
{
    MessageTokenHandler = (x) =>
    {
        Console.Write(x);
        return Task.CompletedTask;
    },
    FunctionCallHandler = (calls) =>
    {
        calls.ForEach(x => x.Result = new FunctionResult(x, "A mild rain is expected around noon.", null));
        return Task.CompletedTask;
    },
    AfterFunctionCallsResolvedHandler = async (results, handler) => { await chat.StreamResponseRich(handler); }
};

await chat.StreamResponseRich(handler);
Instead of resolving the tool call, we can postpone/quit the conversation. This is useful for extractive tasks, where we care only for the tool call:
Conversation chat = api.Chat.CreateConversation(new ChatRequest
{
    Model = ChatModel.OpenAi.Gpt4.Turbo,
    Tools = new List<Tool>
    {
        new Tool
        {
            Function = new ToolFunction("get_weather", "gets the current weather")
        }
    },
    ToolChoice = new OutboundToolChoice(OutboundToolChoiceModes.Required)
});

chat.AppendUserInput("Who are you?"); // user asks something unrelated, but we force the model to use the tool

ChatRichResponse response = await chat.GetResponseRich(); // the response contains one block of type Function
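A minimal sketch of reading the forced call from the rich response; note that ChatRichResponseBlockTypes.Function and the FunctionCall member are assumptions modeled on the Reasoning/Message block types shown earlier, not confirmed API:

// sketch only: block.FunctionCall and its members are assumed for illustration
if (response.Blocks is not null)
{
    foreach (ChatRichResponseBlock block in response.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Function))
    {
        Console.WriteLine(block.FunctionCall?.Name);      // e.g. "get_weather"
        Console.WriteLine(block.FunctionCall?.Arguments); // raw JSON arguments
    }
}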
A GetResponseRichSafe() API is also available, which is guaranteed not to throw on the network level. The response is wrapped in a network-level wrapper containing additional information. For production use cases, either use try {} catch {} on all the HTTP request-producing Tornado APIs, or use the safe APIs.
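For example, a minimal try/catch sketch around the non-safe API (the error handling shown is illustrative):

try
{
    string? text = await chat.GetResponse();
    Console.WriteLine(text);
}
catch (Exception ex)
{
    // network-level failures surface here when the non-safe APIs are used
    Console.WriteLine($"Request failed: {ex.Message}");
}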
This interactive demo can be expanded into an end-user-facing interface in the style of ChatGPT. We show how to use strongly typed tools together with streaming and resolving parallel tool calls. ChatStreamEventHandler is a convenient class with a subscription interface for listening to the various streaming events:
public static async Task OpenAiFunctionsStreamingInteractive()
{
    // 1. set up a sample tool using a strongly typed model
    ChatPluginCompiler compiler = new ChatPluginCompiler();
    compiler.SetFunctions([
        new ChatPluginFunction("get_weather", "gets the current weather in a given city", [
            new ChatFunctionParam("city_name", "name of the city", ChatPluginFunctionAtomicParamTypes.String)
        ])
    ]);

    // 2. in this scenario, the conversation starts with the user asking for the current weather in two of the supported cities.
    // we can try asking for the weather in the third supported city (Paris) later.
    Conversation chat = api.Chat.CreateConversation(new ChatRequest
    {
        Model = ChatModel.OpenAi.Gpt4.Turbo,
        Tools = compiler.GetFunctions()
    }).AppendUserInput("Please call functions get_weather for Prague and Bratislava (two function calls).");

    // 3. repl
    while (true)
    {
        // 3.1 stream the response from the llm
        await StreamResponse();

        // 3.2 read input
        while (true)
        {
            Console.WriteLine();
            Console.Write("> ");
            string? input = Console.ReadLine();

            if (input?.ToLowerInvariant() is "q" or "quit")
            {
                return;
            }

            if (!string.IsNullOrWhiteSpace(input))
            {
                chat.AppendUserInput(input);
                break;
            }
        }
    }

    async Task StreamResponse()
    {
        await chat.StreamResponseRich(new ChatStreamEventHandler
        {
            MessageTokenHandler = async (token) =>
            {
                Console.Write(token);
            },
            FunctionCallHandler = async (fnCalls) =>
            {
                foreach (FunctionCall x in fnCalls)
                {
                    if (!x.TryGetArgument("city_name", out string? cityName))
                    {
                        x.Result = new FunctionResult(x, new
                        {
                            result = "error",
                            message = "expected city_name argument"
                        }, null, true);

                        continue;
                    }

                    x.Result = new FunctionResult(x, new
                    {
                        result = "ok",
                        weather = cityName.ToLowerInvariant() is "prague" ? "A mild rain" : cityName.ToLowerInvariant() is "paris" ? "Foggy, cloudy" : "A sunny day"
                    }, null, true);
                }
            },
            AfterFunctionCallsResolvedHandler = async (fnResults, handler) =>
            {
                await chat.StreamResponseRich(handler);
            }
        });
    }
}
Other endpoints such as Images, Embedding, Speech, Assistants, Threads, and Vision are also supported! Check the links for simple-to-understand examples!
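As a taste of the embeddings endpoint, a minimal sketch; the method name and model identifier below are assumptions patterned on the Chat examples above, so check the library for the exact signatures:

// assumed shape: an Embeddings endpoint mirroring the Chat API conventions
var embedding = await api.Embeddings.CreateEmbedding(
    EmbeddingModel.OpenAi.Gen3.Small, "Turn this text into a vector.");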
- 50,000+ installs on NuGet (previous names Lofcz.Forks.OpenAI, OpenAiNg).
- Used in commercial projects incurring charges of thousands of dollars monthly.
- The license will never change. Looking at you HashiCorp and Tiny.
- Supports streaming, functions/tools, modalities (images, audio), and strongly typed LLM plugins/connectors.
- Great performance, nullability annotations.
- Extensive test suite.
- Maintained actively for two years, often with day 1 support for new features.
Most public classes, methods, and properties (90%+) are extensively XML documented. Feel free to open an issue here if you have any questions.
PRs are welcome!
This library is licensed under the MIT license.
Similar Open Source Tools

azure-functions-openai-extension
Azure Functions OpenAI Extension is a project that adds support for OpenAI LLM (GPT-3.5-turbo, GPT-4) bindings in Azure Functions. It provides NuGet packages for various functionalities like text completions, chat completions, assistants, embeddings generators, and semantic search. The project requires .NET 6 SDK or greater, Azure Functions Core Tools v4.x, and specific settings in Azure Function or local settings for development. It offers features like text completions, chat completion, assistants with custom skills, embeddings generators for text relatedness, and semantic search using vector databases. The project also includes examples in C# and Python for different functionalities.

whetstone.chatgpt
Whetstone.ChatGPT is a simple, lightweight library that wraps the OpenAI API with support for dependency injection. It supports features like GPT-4, GPT-3.5 Turbo, chat completions, audio transcription and translation, vision completions, files, fine-tunes, images, embeddings, moderations, and response streaming. The library provides a video walkthrough of a Blazor web app built on it and includes examples such as a command line bot. It offers quickstarts for dependency injection, chat completions, completions, file handling, fine-tuning, image generation, and audio transcription.

mcpdotnet
mcpdotnet is a .NET implementation of the Model Context Protocol (MCP), facilitating connections and interactions between .NET applications and MCP clients and servers. It aims to provide a clean, specification-compliant implementation with support for various MCP capabilities and transport types. The library includes features such as async/await pattern, logging support, and compatibility with .NET 8.0 and later. Users can create clients to use tools from configured servers and also create servers to register tools and interact with clients. The project roadmap includes expanding documentation, increasing test coverage, adding samples, performance optimization, SSE server support, and authentication.

instructor-js
Instructor is a TypeScript library for structured extraction, powered by LLMs and designed for simplicity, transparency, and control. Whether you're a seasoned developer or just starting out, you'll find Instructor's approach intuitive and steerable.

Jlama
Jlama is a modern Java inference engine designed for large language models. It supports various model types such as Gemma, Llama, Mistral, GPT-2, BERT, and more. The tool implements features like Flash Attention, Mixture of Experts, and supports different model quantization formats. Built with Java 21 and utilizing the new Vector API for faster inference, Jlama allows users to add LLM inference directly to their Java applications. The tool includes a CLI for running models, a simple UI for chatting with LLMs, and examples for different model types.

LightRAG
LightRAG is a PyTorch library designed for building and optimizing Retriever-Agent-Generator (RAG) pipelines. It follows principles of simplicity, quality, and optimization, offering developers maximum customizability with minimal abstraction. The library includes components for model interaction, output parsing, and structured data generation. LightRAG facilitates tasks like providing explanations and examples for concepts through a question-answering pipeline.

axar
AXAR AI is a lightweight framework designed for building production-ready agentic applications using TypeScript. It aims to simplify the process of creating robust, production-grade LLM-powered apps by focusing on familiar coding practices without unnecessary abstractions or steep learning curves. The framework provides structured, typed inputs and outputs, familiar and intuitive patterns like dependency injection and decorators, explicit control over agent behavior, real-time logging and monitoring tools, minimalistic design with little overhead, model agnostic compatibility with various AI models, and streamed outputs for fast and accurate results. AXAR AI is ideal for developers working on real-world AI applications who want a tool that gets out of the way and allows them to focus on shipping reliable software.

generative-ai
The 'Generative AI' repository provides a C# library for interacting with Google's Generative AI models, specifically the Gemini models. It allows users to access and integrate the Gemini API into .NET applications, supporting functionalities such as listing available models, generating content, creating tuned models, working with large files, starting chat sessions, and more. The repository also includes helper classes and enums for Gemini API aspects. Authentication methods include API key, OAuth, and various authentication modes for Google AI and Vertex AI. The package offers features for both Google AI Studio and Google Cloud Vertex AI, with detailed instructions on installation, usage, and troubleshooting.

mcp-go
MCP Go is a Go implementation of the Model Context Protocol (MCP), facilitating seamless integration between LLM applications and external data sources and tools. It handles complex protocol details and server management, allowing developers to focus on building tools. The tool is designed to be fast, simple, and complete, aiming to provide a high-level and easy-to-use interface for developing MCP servers. MCP Go is currently under active development, with core features working and advanced capabilities in progress.

experts
Experts.js is a tool that simplifies the creation and deployment of OpenAI's Assistants, allowing users to link them together as Tools to create a Panel of Experts system with expanded memory and attention to detail. It leverages the new Assistants API from OpenAI, which offers advanced features such as referencing attached files & images as knowledge sources, supporting instructions up to 256,000 characters, integrating with 128 tools, and utilizing the Vector Store API for efficient file search. Experts.js introduces Assistants as Tools, enabling the creation of Multi AI Agent Systems where each Tool is an LLM-backed Assistant that can take on specialized roles or fulfill complex tasks.

create-million-parameter-llm-from-scratch
The 'create-million-parameter-llm-from-scratch' repository provides a detailed guide on creating a Large Language Model (LLM) with 2.3 million parameters from scratch. The blog replicates the LLaMA approach, incorporating concepts like RMSNorm for pre-normalization, SwiGLU activation function, and Rotary Embeddings. The model is trained on a basic dataset to demonstrate the ease of creating a million-parameter LLM without the need for a high-end GPU.

Arcade-Learning-Environment
The Arcade Learning Environment (ALE) is a simple framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design. The ALE currently supports three different interfaces: C++, Python, and OpenAI Gym.

gritlm
The 'gritlm' repository provides all materials for the paper Generative Representational Instruction Tuning. It includes code for inference, training, evaluation, and known issues related to the GritLM model. The repository also offers models for embedding and generation tasks, along with instructions on how to train and evaluate the models. Additionally, it contains visualizations, acknowledgements, and a citation for referencing the work.

crb
CRB (Composable Runtime Blocks) is a unique framework that implements hybrid workloads by seamlessly combining synchronous and asynchronous activities, state machines, routines, the actor model, and supervisors. It is ideal for building massive applications and serves as a low-level framework for creating custom frameworks, such as AI-agents. The core idea is to ensure high compatibility among all blocks, enabling significant code reuse. The framework allows for the implementation of algorithms with complex branching, making it suitable for building large-scale applications or implementing complex workflows, such as AI pipelines. It provides flexibility in defining structures, implementing traits, and managing execution flow, allowing users to create robust and nonlinear algorithms easily.
For similar tasks

anything-llm
AnythingLLM is a full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions.

DistiLlama
DistiLlama is a Chrome extension that leverages a locally running Large Language Model (LLM) to perform various tasks, including text summarization, chat, and document analysis. It utilizes Ollama as the locally running LLM instance and LangChain for text summarization. DistiLlama provides a user-friendly interface for interacting with the LLM, allowing users to summarize web pages, chat with documents (including PDFs), and engage in text-based conversations. The extension is easy to install and use, requiring only the installation of Ollama and a few simple steps to set up the environment. DistiLlama offers a range of customization options, including the choice of LLM model and the ability to configure the summarization chain. It also supports multimodal capabilities, allowing users to interact with the LLM through text, voice, and images. DistiLlama is a valuable tool for researchers, students, and professionals who seek to leverage the power of LLMs for various tasks without compromising data privacy.

SecureAI-Tools
SecureAI Tools is a private and secure AI tool that allows users to chat with AI models, chat with documents (PDFs), and run AI models locally. It comes with built-in authentication and user management, making it suitable for family members or coworkers. The tool is self-hosting optimized and provides necessary scripts and docker-compose files for easy setup in under 5 minutes. Users can customize the tool by editing the .env file and enabling GPU support for faster inference. SecureAI Tools also supports remote OpenAI-compatible APIs, with lower hardware requirements for using remote APIs only. The tool's features wishlist includes chat sharing, mobile-friendly UI, and support for more file types and markdown rendering.

serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting text to vectors, and using a LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.

chat-your-doc
Chat Your Doc is an experimental project exploring various applications based on LLM technology. It goes beyond being just a chatbot project, focusing on researching LLM applications using tools like LangChain and LlamaIndex. The project delves into UX, computer vision, and offers a range of examples in the 'Lab Apps' section. It includes links to different apps, descriptions, launch commands, and demos, aiming to showcase the versatility and potential of LLM applications.

witsy
Witsy is a generative AI desktop application that supports various models like OpenAI, Ollama, Anthropic, MistralAI, Google, Groq, and Cerebras. It offers features such as chat completion, image generation, scratchpad for content creation, prompt anywhere functionality, AI commands for productivity, expert prompts for specialization, LLM plugins for additional functionalities, read aloud capabilities, chat with local files, transcription/dictation, Anthropic Computer Use support, local history of conversations, code formatting, image copy/download, and more. Users can interact with the application to generate content, boost productivity, and perform various AI-related tasks.

docs-ai
Docs AI is a platform that allows users to train their documents, chat with their documents, and create chatbots to solve queries. It is built using NextJS, Tailwind, tRPC, ShadcnUI, Prisma, Postgres, NextAuth, Pinecone, and Cloudflare R2. The platform requires Node.js (Version: >=18.x), PostgreSQL, and Redis for setup. Users can utilize Docker for development by using the provided `docker-compose.yml` file in the `/app` directory.

For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.