openapi
OpenAPI definitions, converters and LLM function calling schema composer.
Stars: 51
The `@samchon/openapi` repository is a collection of OpenAPI types and converters for various versions of OpenAPI specifications. It includes an 'emended' OpenAPI v3.1 specification that enhances clarity by removing ambiguous and duplicated expressions. The repository also provides an application composer for LLM (Large Language Model) function calling from OpenAPI documents, allowing users to easily perform LLM function calls based on the Swagger document. Conversions to different versions of OpenAPI documents are also supported, all based on the emended OpenAPI v3.1 specification. Users can validate their OpenAPI documents using the `typia` library with `@samchon/openapi` types, ensuring compliance with standard specifications.
README:
```mermaid
flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Meta"--> llama("Llama")
  end
```
OpenAPI definitions, converters and LLM function calling application composer.

`@samchon/openapi` is a collection of OpenAPI types for every version, and converters between them. Among the OpenAPI types, there is an "emended" OpenAPI v3.1 specification, which removes ambiguous and duplicated expressions for clarity. Every conversion is based on this emended OpenAPI v3.1 specification.

`@samchon/openapi` also provides an LLM (Large Language Model) function calling application composer from the OpenAPI document, with many strategies. With the `HttpLlm` module, you can perform LLM function calling extremely easily, just by delivering the OpenAPI (Swagger) document.
- `HttpLlm.application()`
- `IHttpLlmApplication<Model>`
- `IHttpLlmFunction<Model>`
- Supported schemas
  - `IChatGptSchema`: OpenAI ChatGPT
  - `IClaudeSchema`: Anthropic Claude
  - `IGeminiSchema`: Google Gemini
  - `ILlamaSchema`: Meta Llama
- Middle layer schemas
  - `ILlmSchemaV3`: middle layer based on OpenAPI v3.0 specification
  - `ILlmSchemaV3_1`: middle layer based on OpenAPI v3.1 specification
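The `Model` type argument of `IHttpLlmApplication<Model>` and `IHttpLlmFunction<Model>` selects which of the schema dialects above the composed functions carry. As a minimal sketch (hypothetical usage, assuming an emended `document` has already been composed as in the full example below):

```typescript
import { HttpLlm, IHttpLlmApplication, OpenApi } from "@samchon/openapi";

// assumption: an emended OpenAPI v3.1 document composed elsewhere
declare const document: OpenApi.IDocument;

// the "gemini" model literal makes every composed function
// describe its parameters with IGeminiSchema
const application: IHttpLlmApplication<"gemini"> = HttpLlm.application({
  model: "gemini",
  document,
});
```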
```bash
npm install @samchon/openapi
```

Just install it via the `npm i @samchon/openapi` command.
Here is an example utilizing `@samchon/openapi` for the LLM function calling purpose:
```typescript
import {
  HttpLlm,
  IChatGptSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import fs from "fs";
import typia from "typia";

const main = async (): Promise<void> => {
  // read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fs.promises.readFile("swagger.json", "utf8"),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/bbs/articles" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // actual execution is by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:3000",
    },
    application,
    function: func,
    input: {
      // arguments composed by LLM
      body: {
        title: "Hello, world!",
        body: "Let's imagine that this argument is composed by LLM.",
        thumbnail: null,
      },
    },
  });
  console.log("article", article);
};
main().catch(console.error);
```
```mermaid
flowchart
  v20("Swagger v2.0") --upgrades--> emended[["<b><u>OpenAPI v3.1 (emended)</u></b>"]]
  v30("OpenAPI v3.0") --upgrades--> emended
  v31("OpenAPI v3.1") --emends--> emended
  emended --downgrades--> v20d("Swagger v2.0")
  emended --downgrades--> v30d("OpenAPI v3.0")
```
`@samchon/openapi` supports every version of the OpenAPI specification with detailed TypeScript types.

Also, `@samchon/openapi` provides the "emended OpenAPI v3.1 definition", which removes ambiguous and duplicated expressions for clarity. It emends the original OpenAPI v3.1 specification as shown in the diagram above. You can compose the "emended OpenAPI v3.1 document" by calling the `OpenApi.convert()` function.
- Operation
  - Merge `OpenApiV3_1.IPathItem.parameters` to `OpenApi.IOperation.parameters`
  - Resolve references of `OpenApiV3_1.IOperation` members
  - Escape references of `OpenApiV3_1.IComponents.examples`
- JSON Schema
  - Decompose mixed type: `OpenApiV3_1.IJsonSchema.IMixed`
  - Resolve nullable property: `OpenApiV3_1.IJsonSchema.__ISignificant.nullable`
  - Array type utilizes only a single `OpenApi.IJsonSchema.IArray.items`
  - Tuple type utilizes only `OpenApi.IJsonSchema.ITuple.prefixItems`
  - Merge `OpenApiV3_1.IJsonSchema.IAnyOf` to `OpenApi.IJsonSchema.IOneOf`
  - Merge `OpenApiV3_1.IJsonSchema.IRecursiveReference` to `OpenApi.IJsonSchema.IReference`
  - Merge `OpenApiV3_1.IJsonSchema.IAllOf` to `OpenApi.IJsonSchema.IObject`
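For instance, here is a hedged sketch of the "resolve nullable property" rule above: a `nullable`-flagged schema is assumed to be decomposed into a `oneOf` union that includes the dedicated `null` type (the literal values are illustrative, not taken from the library's test suite):

```typescript
import { OpenApi, OpenApiV3_1 } from "@samchon/openapi";

// the original specification expresses nullability with the `nullable` flag...
const legacy: OpenApiV3_1.IJsonSchema = { type: "string", nullable: true };

// ...while the emended v3.1 document is assumed to represent the same
// schema as a oneOf union containing the dedicated null type
const emended: OpenApi.IJsonSchema = {
  oneOf: [{ type: "string" }, { type: "null" }],
};
```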
Conversion to another version's OpenAPI document is also based on the "emended OpenAPI v3.1 specification", as in the diagram above. You can do it through the `OpenApi.downgrade()` function. Therefore, if you want to convert a Swagger v2.0 document to an OpenAPI v3.0 document, you have to call two functions: `OpenApi.convert()` first, and then `OpenApi.downgrade()`.
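As a minimal sketch of that two-step conversion (the target version literal passed to `OpenApi.downgrade()` is an assumption from the library's typings, and the `swagger` variable is assumed to hold an already validated Swagger v2.0 document):

```typescript
import { OpenApi, OpenApiV3, SwaggerV2 } from "@samchon/openapi";

// assumption: a validated Swagger v2.0 document loaded elsewhere
declare const swagger: SwaggerV2.IDocument;

// step 1: upgrade/emend into the emended OpenAPI v3.1 document
const emended: OpenApi.IDocument = OpenApi.convert(swagger);

// step 2: downgrade the emended document to OpenAPI v3.0
const v30: OpenApiV3.IDocument = OpenApi.downgrade(emended, "3.0");
```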
At last, if you utilize the `typia` library with `@samchon/openapi` types, you can validate whether your OpenAPI document follows the standard specification or not. Just visit one of the playground links below, and paste your OpenAPI document's URL address. This validation strategy would be superior to any other OpenAPI validator library.
- Playground Links
```typescript
import { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // GET YOUR OPENAPI DOCUMENT
  const response: Response = await fetch(
    "https://raw.githubusercontent.com/samchon/openapi/master/examples/v3.0/openai.json",
  );
  const document: any = await response.json();

  // TYPE VALIDATION
  const result = typia.validate<
    | OpenApiV3_1.IDocument
    | OpenApiV3.IDocument
    | SwaggerV2.IDocument
  >(document);
  if (result.success === false) {
    console.error(result.errors);
    return;
  }

  // CONVERT TO EMENDED
  const emended: OpenApi.IDocument = OpenApi.convert(document);
  console.info(emended);
};
main().catch(console.error);
```
```mermaid
flowchart TD
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"<b><u>LLM Function Calling</u></b>"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Meta"--> llama("Llama")
  end
```
LLM function calling application from OpenAPI document.

`@samchon/openapi` provides an LLM (Large Language Model) function calling application composer from the "emended OpenAPI v3.1 document". Therefore, if you have any HTTP backend server and have built an OpenAPI document for it, you can easily make an A.I. chatbot application.

In the A.I. chatbot, the LLM selects the proper function to call remotely from its conversation with the user, and fills the function's arguments automatically. If you actually execute the function call through the `HttpLlm.execute()` function, that is "LLM function calling."

Let's enjoy the fantastic LLM function calling feature very easily with `@samchon/openapi`.
- Application
- Schemas
  - `IChatGptSchema`: OpenAI ChatGPT
  - `IClaudeSchema`: Anthropic Claude
  - `IGeminiSchema`: Google Gemini
  - `ILlamaSchema`: Meta Llama
  - `ILlmSchemaV3`: middle layer based on OpenAPI v3.0 specification
  - `ILlmSchemaV3_1`: middle layer based on OpenAPI v3.1 specification
- Type Checkers
> [!NOTE]
> You can also compose an `ILlmApplication` from a class type with `typia`.
>
> https://typia.io/docs/llm/application
>
> ```typescript
> import { ILlmApplication } from "@samchon/openapi";
> import typia from "typia";
>
> const app: ILlmApplication<"chatgpt"> =
>   typia.llm.application<YourClassType, "chatgpt">();
> ```
> [!TIP]
> **LLM selects the proper function and fills its arguments.**
>
> Nowadays, most LLMs (Large Language Models) like OpenAI's support the "function calling" feature. "LLM function calling" means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (maybe via chat text).
>
> **Actual function call execution is up to you.**
>
> LLM providers like OpenAI select a proper function to call from the conversation with users, and fill its arguments. However, the function calling feature supported by LLM providers does not perform the function call itself. The actual execution responsibility is on you.
In `@samchon/openapi`, you can execute the LLM function calling through the `HttpLlm.execute()` (or `HttpLlm.propagate()`) function. As you can see in the example below, to execute the LLM function call, you have to deliver the following information:
- Connection info to the HTTP server
- Application of the LLM function calling
- LLM function schema to call
- Arguments for the function call (maybe composed by LLM)
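The full example below uses `HttpLlm.execute()`. As a hedged sketch of the alternative, `HttpLlm.propagate()` is assumed to take the same props but to resolve with the raw HTTP response (status code, headers, body) instead of throwing on a non-2xx status:

```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
} from "@samchon/openapi";

// assumptions: `application` and `func` are composed as in the examples here
declare const application: IHttpLlmApplication<"chatgpt">;
declare const func: IHttpLlmFunction<"chatgpt">;

// assumption: propagate() resolves with the raw response (status, headers,
// body) rather than throwing when the server answers with an error status
const response = await HttpLlm.propagate({
  connection: { host: "http://localhost:3000" },
  application,
  function: func,
  input: { body: { title: "...", body: "...", thumbnail: null } },
});
if (response.status !== 201)
  console.error("HTTP error", response.status, response.body);
```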
Here is the example code executing the LLM function call with `@samchon/openapi`:
- Example Code: test/examples/chatgpt-function-call-to-sale-create.ts
- Prompt describing the product to create: Microsoft Surface Pro 9
- Result of the Function Calling: examples/arguments/chatgpt.microsoft-surface-pro-9.input.json
```typescript
import {
  HttpLlm,
  IChatGptSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import OpenAI from "openai";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fetch(
      "https://github.com/samchon/shopping-backend/blob/master/packages/api/swagger.json",
    ).then((r) => r.text()),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by ChatGPT function calling
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "assistant",
          content:
            "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
        },
        {
          role: "user",
          content: "<DESCRIPTION ABOUT THE SALE>",
          // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
        },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: func.name,
            description: func.description,
            parameters: func.parameters as Record<string, any>,
          },
        },
      ],
    });
  const toolCall: OpenAI.ChatCompletionMessageToolCall =
    completion.choices[0].message.tool_calls![0];

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: JSON.parse(toolCall.function.arguments),
  });
  console.log("article", article);
};
main().catch(console.error);
```
Arguments from both Human and LLM sides.

When composing parameter arguments through LLM (Large Language Model) function calling, there can be cases where some parameters (or nested properties) must be composed by a human, not by the LLM. File uploading features, or sensitive information like secret keys (passwords), are representative examples.

In such cases, you can configure the LLM function calling schemas to exclude those human-side parameters (or nested properties) through the `IHttpLlmApplication.options.separate` property. You then have to merge the human-composed and LLM-composed parameters into one by calling the `HttpLlm.mergeParameters()` function before executing the LLM function call with `HttpLlm.execute()`.

Here is the example code separating the file uploading feature from the LLM function calling schema, and combining the human-composed and LLM-composed parameters into one before the LLM function call execution.
- Example Code: test/examples/claude-function-call-separate-to-sale-create.ts
- Prompt describing the product to create: Microsoft Surface Pro 9
- Result of the Function Calling: examples/arguments/claude.microsoft-surface-pro-9.input.json
```typescript
import Anthropic from "@anthropic-ai/sdk";
import {
  ClaudeTypeChecker,
  HttpLlm,
  IClaudeSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fetch(
      "https://github.com/samchon/shopping-backend/blob/master/packages/api/swagger.json",
    ).then((r) => r.text()),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"claude"> = HttpLlm.application({
    model: "claude",
    document,
    options: {
      reference: true,
      separate: (schema) =>
        ClaudeTypeChecker.isString(schema) &&
        !!schema.contentMediaType?.startsWith("image"),
    },
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"claude"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by Claude function calling
  const client: Anthropic = new Anthropic({
    apiKey: "<YOUR_ANTHROPIC_API_KEY>",
  });
  const completion: Anthropic.Message = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 8_192,
    system:
      "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
    messages: [
      {
        role: "user",
        content: "<DESCRIPTION ABOUT THE SALE>",
        // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
      },
    ],
    tools: [
      {
        name: func.name,
        description: func.description,
        input_schema: func.separated!.llm as any,
      },
    ],
  });
  const toolCall: Anthropic.ToolUseBlock = completion.content.filter(
    (c) => c.type === "tool_use",
  )[0]!;

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: HttpLlm.mergeParameters({
      function: func,
      llm: toolCall.input as any,
      human: {
        // Human composed parameter values
        content: {
          files: [],
          thumbnails: [
            {
              name: "thumbnail",
              extension: "jpeg",
              url: "https://serpapi.com/searches/673d3a37e45f3316ecd8ab3e/images/1be25e6e2b1fb7509f1af89c326cb41749301b94375eb5680b9bddcdf88fabcb.jpeg",
            },
            // ...
          ],
        },
      },
    }),
  });
  console.log("article", article);
};
main().catch(console.error);
```
Similar Open Source Tools
llm.nvim
llm.nvim is a universal plugin for a large language model (LLM) designed to enable users to interact with LLM within neovim. Users can customize various LLMs such as gpt, glm, kimi, and local LLM. The plugin provides tools for optimizing code, comparing code, translating text, and more. It also supports integration with free models from Cloudflare, Github models, siliconflow, and others. Users can customize tools, chat with LLM, quickly translate text, and explain code snippets. The plugin offers a flexible window interface for easy interaction and customization.
orch
orch is a library for building language model powered applications and agents for the Rust programming language. It can be used for tasks such as text generation, streaming text generation, structured data generation, and embedding generation. The library provides functionalities for executing various language model tasks and can be integrated into different applications and contexts. It offers flexibility for developers to create language model-powered features and applications in Rust.
Webscout
WebScout is a versatile tool that allows users to search for anything using Google, DuckDuckGo, and phind.com. It contains AI models, can transcribe YouTube videos, generate temporary email and phone numbers, has TTS support, webai (terminal GPT and open interpreter), and offline LLMs. It also supports features like weather forecasting, YT video downloading, temp mail and number generation, text-to-speech, advanced web searches, and more.
dashscope-sdk
DashScope SDK for .NET is an unofficial SDK maintained by Cnblogs, providing various APIs for text embedding, generation, multimodal generation, image synthesis, and more. Users can interact with the SDK to perform tasks such as text completion, chat generation, function calls, file operations, and more. The project is under active development, and users are advised to check the Release Notes before upgrading.
funcchain
Funcchain is a Python library that allows you to easily write cognitive systems by leveraging Pydantic models as output schemas and LangChain in the backend. It provides a seamless integration of LLMs into your apps, utilizing OpenAI Functions or LlamaCpp grammars (json-schema-mode) for efficient structured output. Funcchain compiles the Funcchain syntax into LangChain runnables, enabling you to invoke, stream, or batch process your pipelines effortlessly.
rust-genai
genai is a multi-AI providers library for Rust that aims to provide a common and ergonomic single API to various generative AI providers such as OpenAI, Anthropic, Cohere, Ollama, and Gemini. It focuses on standardizing chat completion APIs across major AI services, prioritizing ergonomics and commonality. The library initially focuses on text chat APIs and plans to expand to support images, function calling, and more in the future versions. Version 0.1.x will have breaking changes in patches, while version 0.2.x will follow semver more strictly. genai does not provide a full representation of a given AI provider but aims to simplify the differences at a lower layer for ease of use.
venom
Venom is a high-performance system developed with JavaScript to create a bot for WhatsApp, support for creating any interaction, such as customer service, media sending, sentence recognition based on artificial intelligence and all types of design architecture for WhatsApp.
instructor
Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs). Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows!
cellseg_models.pytorch
cellseg-models.pytorch is a Python library built upon PyTorch for 2D cell/nuclei instance segmentation models. It provides multi-task encoder-decoder architectures and post-processing methods for segmenting cell/nuclei instances. The library offers high-level API to define segmentation models, open-source datasets for training, flexibility to modify model components, sliding window inference, multi-GPU inference, benchmarking utilities, regularization techniques, and example notebooks for training and finetuning models with different backbones.
ai21-python
The AI21 Labs Python SDK is a comprehensive tool for interacting with the AI21 API. It provides functionalities for chat completions, conversational RAG, token counting, error handling, and support for various cloud providers like AWS, Azure, and Vertex. The SDK offers both synchronous and asynchronous usage, along with detailed examples and documentation. Users can quickly get started with the SDK to leverage AI21's powerful models for various natural language processing tasks.
candle-vllm
Candle-vllm is an efficient and easy-to-use platform designed for inference and serving local LLMs, featuring an OpenAI compatible API server. It offers a highly extensible trait-based system for rapid implementation of new module pipelines, streaming support in generation, efficient management of key-value cache with PagedAttention, and continuous batching. The tool supports chat serving for various models and provides a seamless experience for users to interact with LLMs through different interfaces.
parrot.nvim
Parrot.nvim is a Neovim plugin that prioritizes a seamless out-of-the-box experience for text generation. It simplifies functionality and focuses solely on text generation, excluding integration of DALLE and Whisper. It supports persistent conversations as markdown files, custom hooks for inline text editing, multiple providers like Anthropic API, perplexity.ai API, OpenAI API, Mistral API, and local/offline serving via ollama. It allows custom agent definitions, flexible API credential support, and repository-specific instructions with a `.parrot.md` file. It does not have autocompletion or hidden requests in the background to analyze files.
ax
Ax is a Typescript library that allows users to build intelligent agents inspired by agentic workflows and the Stanford DSP paper. It seamlessly integrates with multiple Large Language Models (LLMs) and VectorDBs to create RAG pipelines or collaborative agents capable of solving complex problems. The library offers advanced features such as streaming validation, multi-modal DSP, and automatic prompt tuning using optimizers. Users can easily convert documents of any format to text, perform smart chunking, embedding, and querying, and ensure output validation while streaming. Ax is production-ready, written in Typescript, and has zero dependencies.
aiotdlib
aiotdlib is a Python asyncio Telegram client based on TDLib. It provides automatic generation of types and functions from tl schema, validation, good IDE type hinting, and high-level API methods for simpler work with tdlib. The package includes prebuilt TDLib binaries for macOS (arm64) and Debian Bullseye (amd64). Users can use their own binary by passing `library_path` argument to `Client` class constructor. Compatibility with other versions of the library is not guaranteed. The tool requires Python 3.9+ and users need to get their `api_id` and `api_hash` from Telegram docs for installation and usage.
For similar jobs
Awesome-LLM-RAG-Application
Awesome-LLM-RAG-Application is a repository that provides resources and information about applications based on Large Language Models (LLM) with Retrieval-Augmented Generation (RAG) pattern. It includes a survey paper, GitHub repo, and guides on advanced RAG techniques. The repository covers various aspects of RAG, including academic papers, evaluation benchmarks, downstream tasks, tools, and technologies. It also explores different frameworks, preprocessing tools, routing mechanisms, evaluation frameworks, embeddings, security guardrails, prompting tools, SQL enhancements, LLM deployment, observability tools, and more. The repository aims to offer comprehensive knowledge on RAG for readers interested in exploring and implementing LLM-based systems and products.
ChatGPT-On-CS
ChatGPT-On-CS is an intelligent chatbot tool based on large models, supporting various platforms like WeChat, Taobao, Bilibili, Douyin, Weibo, and more. It can handle text, voice, and image inputs, access external resources through plugins, and customize enterprise AI applications based on proprietary knowledge bases. Users can set custom replies, utilize ChatGPT interface for intelligent responses, send images and binary files, and create personalized chatbots using knowledge base files. The tool also features platform-specific plugin systems for accessing external resources and supports enterprise AI applications customization.
call-gpt
Call GPT is a voice application that utilizes Deepgram for Speech to Text, elevenlabs for Text to Speech, and OpenAI for GPT prompt completion. It allows users to chat with ChatGPT on the phone, providing better transcription, understanding, and speaking capabilities than traditional IVR systems. The app returns responses with low latency, allows user interruptions, maintains chat history, and enables GPT to call external tools. It coordinates data flow between Deepgram, OpenAI, ElevenLabs, and Twilio Media Streams, enhancing voice interactions.
awesome-LLM-resourses
A comprehensive repository of resources for Chinese large language models (LLMs), including data processing tools, fine-tuning frameworks, inference libraries, evaluation platforms, RAG engines, agent frameworks, books, courses, tutorials, and tips. The repository covers a wide range of tools and resources for working with LLMs, from data labeling and processing to model fine-tuning, inference, evaluation, and application development. It also includes resources for learning about LLMs through books, courses, and tutorials, as well as insights and strategies from building with LLMs.
tappas
Hailo TAPPAS is a set of full application examples that implement pipeline elements and pre-trained AI tasks. It demonstrates Hailo's system integration scenarios on predefined systems, aiming to accelerate time to market, simplify integration with Hailo's runtime SW stack, and provide a starting point for customers to fine-tune their applications. The tool supports both Hailo-15 and Hailo-8, offering various example applications optimized for different common hosts. TAPPAS includes pipelines for single network, two network, and multi-stream processing, as well as high-resolution processing via tiling. It also provides example use case pipelines like License Plate Recognition and Multi-Person Multi-Camera Tracking. The tool is regularly updated with new features, bug fixes, and platform support.
cloudflare-rag
This repository provides a fullstack example of building a Retrieval Augmented Generation (RAG) app with Cloudflare. It utilizes Cloudflare Workers, Pages, D1, KV, R2, AI Gateway, and Workers AI. The app features streaming interactions to the UI, hybrid RAG with Full-Text Search and Vector Search, switchable providers using AI Gateway, per-IP rate limiting with Cloudflare's KV, OCR within Cloudflare Worker, and Smart Placement for workload optimization. The development setup requires Node, pnpm, and wrangler CLI, along with setting up necessary primitives and API keys. Deployment involves setting up secrets and deploying the app to Cloudflare Pages. The project implements a Hybrid Search RAG approach combining Full Text Search against D1 and Hybrid Search with embeddings against Vectorize to enhance context for the LLM.
pixeltable
Pixeltable is a Python library designed for ML Engineers and Data Scientists to focus on exploration, modeling, and app development without the need to handle data plumbing. It provides a declarative interface for working with text, images, embeddings, and video, enabling users to store, transform, index, and iterate on data within a single table interface. Pixeltable is persistent, acting as a database unlike in-memory Python libraries such as Pandas. It offers features like data storage and versioning, combined data and model lineage, indexing, orchestration of multimodal workloads, incremental updates, and automatic production-ready code generation. The tool emphasizes transparency, reproducibility, cost-saving through incremental data changes, and seamless integration with existing Python code and libraries.
wave-apps
Wave Apps is a directory of sample applications built on H2O Wave, allowing users to build AI apps faster. The apps cover various use cases such as explainable hotel ratings, human-in-the-loop credit risk assessment, mitigating churn risk, online shopping recommendations, and sales forecasting EDA. Users can download, modify, and integrate these sample apps into their own projects to learn about app development and AI model deployment.