
openapi
OpenAPI definitions, converters and LLM function calling schema composer.
Stars: 82

The `@samchon/openapi` repository is a collection of OpenAPI types and converters for various versions of OpenAPI specifications. It includes an 'emended' OpenAPI v3.1 specification that enhances clarity by removing ambiguous and duplicated expressions. The repository also provides an application composer for LLM (Large Language Model) function calling from OpenAPI documents, allowing users to easily perform LLM function calls based on the Swagger document. Conversions to different versions of OpenAPI documents are also supported, all based on the emended OpenAPI v3.1 specification. Users can validate their OpenAPI documents using the `typia` library with `@samchon/openapi` types, ensuring compliance with standard specifications.
README:
```mermaid
flowchart
subgraph "OpenAPI Specification"
  v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
  v30("OpenAPI v3.0") --upgrades--> emended
  v31("OpenAPI v3.1") --emends--> emended
end
subgraph "OpenAPI Generator"
  emended --normalizes--> migration[["Migration Schema"]]
  migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
  lfc --"OpenAI"--> chatgpt("ChatGPT")
  lfc --"Anthropic"--> claude("Claude")
  lfc --"Google"--> gemini("Gemini")
  lfc --"Meta"--> llama("Llama")
end
```
OpenAPI definitions, converters and LLM function calling application composer.

`@samchon/openapi` is a collection of OpenAPI types for every version of the specification, plus converters between them. Among the OpenAPI types there is an "emended" OpenAPI v3.1 specification, which has removed ambiguous and duplicated expressions for clarity. Every conversion is based on this emended OpenAPI v3.1 specification.

`@samchon/openapi` also provides an LLM (Large Language Model) function calling application composer from the OpenAPI document, with many strategies. With the `HttpLlm` module, you can perform LLM function calling extremely easily, just by delivering the OpenAPI (Swagger) document; a minimal model-selection sketch follows the list below.

- `HttpLlm.application()`
- `IHttpLlmApplication<Model>`
- `IHttpLlmFunction<Model>`
- Supported schemas
  - `IChatGptSchema`: OpenAI ChatGPT
  - `IClaudeSchema`: Anthropic Claude
  - `IGeminiSchema`: Google Gemini
  - `ILlamaSchema`: Meta Llama
- Middle layer schemas
  - `ILlmSchemaV3`: middle layer based on the OpenAPI v3.0 specification
  - `ILlmSchemaV3_1`: middle layer based on the OpenAPI v3.1 specification
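Below is a minimal sketch of how the `Model` type parameter selects the schema dialect. It is an illustration, not from the original README: the `"chatgpt"` literal follows the examples later in this document, while the `"3.1"` literal for the middle layer schema is an assumption.

```typescript
import { HttpLlm, IHttpLlmApplication, OpenApi } from "@samchon/openapi";

// compose applications of different schema dialects from the same
// emended document, just by switching the model literal
const compose = (document: OpenApi.IDocument) => {
  // vendor-specific dialect: IChatGptSchema
  const chatgpt: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });
  // middle layer dialect: ILlmSchemaV3_1 (literal assumed)
  const middle: IHttpLlmApplication<"3.1"> = HttpLlm.application({
    model: "3.1",
    document,
  });
  return { chatgpt, middle };
};
```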
https://github.com/user-attachments/assets/01604b53-aca4-41cb-91aa-3faf63549ea6
Demonstration video composing an A.I. chatbot with `@samchon/openapi` and `agentica`:

- Shopping A.I. Chatbot Application: https://nestia.io/chat/shopping
- Shopping Backend Repository: https://github.com/samchon/shopping-backend
- Shopping Swagger Document (`@nestia/editor`): https://nestia.io/editor/?url=...
```bash
npm install @samchon/openapi
```

Just install it by the `npm i @samchon/openapi` command.

Here is example code utilizing `@samchon/openapi` for the LLM function calling purpose.
```typescript
import {
  HttpLlm,
  IChatGptSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import fs from "fs";
import typia from "typia";

const main = async (): Promise<void> => {
  // read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fs.promises.readFile("swagger.json", "utf8"),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/bbs/articles" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // actual execution is by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:3000",
    },
    application,
    function: func,
    input: {
      // arguments composed by LLM
      body: {
        title: "Hello, world!",
        body: "Let's imagine that this argument is composed by LLM.",
        thumbnail: null,
      },
    },
  });
  console.log("article", article);
};
main().catch(console.error);
```
```mermaid
flowchart
v20(Swagger v2.0) --upgrades--> emended[["<b><u>OpenAPI v3.1 (emended)</u></b>"]]
v30(OpenAPI v3.0) --upgrades--> emended
v31(OpenAPI v3.1) --emends--> emended
emended --downgrades--> v20d(Swagger v2.0)
emended --downgrades--> v30d(OpenAPI v3.0)
```
`@samchon/openapi` supports every version of the OpenAPI specification with detailed TypeScript types.

It also provides an "emended OpenAPI v3.1 definition" which has removed ambiguous and duplicated expressions for clarity, emending the original OpenAPI v3.1 specification as listed below. You can compose such an "emended OpenAPI v3.1 document" by calling the `OpenApi.convert()` function.
- Operation
  - Merge `OpenApiV3_1.IPathItem.parameters` into `OpenApi.IOperation.parameters`
  - Resolve references of `OpenApiV3_1.IOperation` members
  - Escape references of `OpenApiV3_1.IComponents.examples`
- JSON Schema
  - Decompose mixed type: `OpenApiV3_1.IJsonSchema.IMixed`
  - Resolve nullable property: `OpenApiV3_1.IJsonSchema.__ISignificant.nullable` (sketched below)
  - Array type utilizes only a single `OpenApi.IJsonSchema.IArray.items`
  - Tuple type utilizes only `OpenApi.IJsonSchema.ITuple.prefixItems`
  - Merge `OpenApiV3_1.IJsonSchema.IAnyOf` into `OpenApi.IJsonSchema.IOneOf`
  - Merge `OpenApiV3_1.IJsonSchema.IAllOf` into `OpenApi.IJsonSchema.IObject`
  - Merge `OpenApiV3_1.IJsonSchema.IRecursiveReference` into `OpenApi.IJsonSchema.IReference`
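As an illustration of the nullable emendation above, here is a hedged sketch of the correspondence, assuming the JSON schema field names of the `OpenApiV3` and emended `OpenApi` types:

```typescript
import { OpenApi, OpenApiV3 } from "@samchon/openapi";

// OpenAPI v3.0 style: nullability expressed by the `nullable` flag
const legacy: OpenApiV3.IJsonSchema = {
  type: "string",
  nullable: true,
};

// emended OpenAPI v3.1 style: nullability expressed as an explicit
// oneOf union with the "null" type
const emended: OpenApi.IJsonSchema = {
  oneOf: [{ type: "string" }, { type: "null" }],
};
```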
Conversion to another version's OpenAPI document is also based on the "emended OpenAPI v3.1 specification", as in the diagram above. You can do it through the `OpenApi.downgrade()` function. Therefore, if you want to convert a Swagger v2.0 document into an OpenAPI v3.0 document, you have to call two functions: `OpenApi.convert()` and then `OpenApi.downgrade()`, as sketched below.
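A minimal sketch of that two-step conversion; the `"3.0"` version literal passed to `OpenApi.downgrade()` and the input file path are assumptions for illustration:

```typescript
import { OpenApi, OpenApiV3, SwaggerV2 } from "@samchon/openapi";
import fs from "fs";

const convertTwice = async (): Promise<void> => {
  // Swagger v2.0 input document (illustrative path)
  const swagger: SwaggerV2.IDocument = JSON.parse(
    await fs.promises.readFile("swagger-v2.json", "utf8"),
  );
  // step 1: upgrade to the emended OpenAPI v3.1 document
  const emended: OpenApi.IDocument = OpenApi.convert(swagger);
  // step 2: downgrade to OpenAPI v3.0
  const downgraded: OpenApiV3.IDocument = OpenApi.downgrade(emended, "3.0");
  console.info(downgraded.openapi);
};
convertTwice().catch(console.error);
```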
At last, if you utilize the `typia` library with `@samchon/openapi` types, you can validate whether your OpenAPI document follows the standard specification or not. Just visit one of the playground links below and paste your OpenAPI document's URL address. This validation strategy would arguably be superior to any other OpenAPI validator library.

- Playground Links
```typescript
import { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // GET YOUR OPENAPI DOCUMENT
  const response: Response = await fetch(
    "https://raw.githubusercontent.com/samchon/openapi/master/examples/v3.0/openai.json"
  );
  const document: any = await response.json();

  // TYPE VALIDATION
  const result = typia.validate<
    | OpenApiV3_1.IDocument
    | OpenApiV3.IDocument
    | SwaggerV2.IDocument
  >(document);
  if (result.success === false) {
    console.error(result.errors);
    return;
  }

  // CONVERT TO EMENDED
  const emended: OpenApi.IDocument = OpenApi.convert(document);
  console.info(emended);
};
main().catch(console.error);
```
```mermaid
flowchart TD
subgraph "OpenAPI Specification"
  v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
  v30("OpenAPI v3.0") --upgrades--> emended
  v31("OpenAPI v3.1") --emends--> emended
end
subgraph "OpenAPI Generator"
  emended --normalizes--> migration[["Migration Schema"]]
  migration --"Artificial Intelligence"--> lfc{{"<b><u>LLM Function Calling</u></b>"}}
  lfc --"OpenAI"--> chatgpt("ChatGPT")
  lfc --"Anthropic"--> claude("Claude")
  lfc --"Google"--> gemini("Gemini")
  lfc --"Meta"--> llama("Llama")
end
```
LLM function calling application from an OpenAPI document.

`@samchon/openapi` composes an LLM (Large Language Model) function calling application from the "emended OpenAPI v3.1 document". Therefore, if you have any HTTP backend server and have built an OpenAPI document for it, you can easily make an A.I. chatbot application.

In the A.I. chatbot, the LLM selects the proper function to call remotely from its conversation with the user, and fills the function's arguments automatically. Actually executing that function call through the `HttpLlm.execute()` function is the "LLM function call."

Let's enjoy this fantastic LLM function calling feature with `@samchon/openapi`.
- Application
- Schemas
  - `IChatGptSchema`: OpenAI ChatGPT
  - `IClaudeSchema`: Anthropic Claude
  - `IGeminiSchema`: Google Gemini
  - `ILlamaSchema`: Meta Llama
  - `ILlmSchemaV3`: middle layer based on the OpenAPI v3.0 specification
  - `ILlmSchemaV3_1`: middle layer based on the OpenAPI v3.1 specification
- Type Checkers

> [!NOTE]
> You can also compose an `ILlmApplication` from a class type with `typia`:
> https://typia.io/docs/llm/application
>
> ```typescript
> import { ILlmApplication } from "@samchon/openapi";
> import typia from "typia";
>
> const app: ILlmApplication<"chatgpt"> =
>   typia.llm.application<YourClassType, "chatgpt">();
> ```
> [!TIP]
> LLM selects the proper function and fills its arguments.
>
> Nowadays, most LLMs (Large Language Models), such as OpenAI's, support the "function calling" feature. "LLM function calling" means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (mainly via chat text).

> [!TIP]
> Actual function call execution is up to you.
>
> LLM providers like OpenAI select a proper function to call from the conversation with the user and fill its arguments. However, the function calling feature supported by LLM providers does not perform the actual function call execution. The execution responsibility is on you.
In `@samchon/openapi`, you can execute the LLM function call with the `HttpLlm.execute()` (or `HttpLlm.propagate()`) function. Here is example code executing the LLM function call through `HttpLlm.execute()`. As you can see, to execute the LLM function call, you have to deliver this information:

- Connection info to the HTTP server
- Application of the LLM function calling
- LLM function schema to call
- Arguments for the function call (maybe composed by the LLM)

Reference materials:

- Example Code: test/examples/chatgpt-function-call-to-sale-create.ts
- Prompt describing the product to create: Microsoft Surface Pro 9
- Result of the Function Calling: examples/arguments/chatgpt.microsoft-surface-pro-9.input.json
```typescript
import {
  HttpLlm,
  IChatGptSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import OpenAI from "openai";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fetch(
      "https://github.com/samchon/shopping-backend/blob/master/packages/api/swagger.json",
    ).then((r) => r.text()),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by ChatGPT function calling
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
        },
        {
          role: "user",
          content: "<DESCRIPTION ABOUT THE SALE>",
          // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
        },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: func.name,
            description: func.description,
            parameters: func.parameters as Record<string, any>,
          },
        },
      ],
    });
  const toolCall: OpenAI.ChatCompletionMessageToolCall =
    completion.choices[0].message.tool_calls![0];

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: JSON.parse(toolCall.function.arguments),
  });
  console.log("article", article);
};
main().catch(console.error);
```
The validation feedback strategy described below can be implemented like this:

```typescript
import { IHttpLlmFunction, IValidation } from "@samchon/openapi";
import { FunctionCall } from "pseudo";

export const correctFunctionCall = (p: {
  call: FunctionCall;
  functions: Array<IHttpLlmFunction<"chatgpt">>;
  retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
}): Promise<unknown> => {
  // FIND FUNCTION
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    p.functions.find((f) => f.name === p.call.name);
  if (func === undefined) {
    // never happened in my experience
    return p.retry("Unable to find the matched function name. Try it again.");
  }

  // VALIDATE
  const result: IValidation<unknown> = func.validate(p.call.arguments);
  if (result.success === false) {
    // 1st trial: 30% success rate (gpt-4o-mini in shopping mall chatbot)
    // 2nd trial with validation feedback: 99%
    // 3rd trial with validation feedback again: never have failed
    return p.retry(
      "Type errors are detected. Correct them through the validation errors.",
      result.errors,
    );
  }
  return Promise.resolve(result.data);
};
```
Is LLM function calling perfect? No, absolutely not.

LLM (Large Language Model) vendors like OpenAI make a lot of type-level mistakes when composing the arguments of function calling or structured output. Even when the target schema is super simple, like an `Array<string>` type, the LLM often fills it with just a `string` typed value.

In my experience, OpenAI `gpt-4o-mini` (`8b` parameters) makes type-level mistakes in about 70% of function calling attempts against the shopping mall service. To overcome this imperfection of LLM function calling, `@samchon/openapi` supports a validation feedback strategy.

The key concept of the validation feedback strategy is: let the LLM function calling construct invalidly typed arguments first, then inform the LLM of the detailed type errors through the `IHttpLlmFunction<Model>.validate()` function, so that it emends the wrongly typed arguments on the next turn.

The validator embedded in `IHttpLlmFunction<Model>.validate()` is exactly the same as the `typia.validate<T>()` function, so it is more detailed and accurate than any other validator, as the table below shows. With this validation feedback strategy, the 30% success rate of the 1st function calling trial increased to 99% on the 2nd trial, and it has never failed from the 3rd trial onward; a driver loop for this strategy is sketched after the table.
| Components | `typia` | `TypeBox` | `ajv` | `io-ts` | `zod` | `C.V.` |
|---|---|---|---|---|---|---|
| Easy to use | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (recursive) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔ |
| Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Object (additional tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (template literal types) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Object (dynamic properties) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Array (rest tuple) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Array (recursive) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Array (recursive, union) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌ |
| Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |

`C.V.` means `class-validator`
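As promised above, here is a minimal, hypothetical driver loop for the validation feedback strategy. The `askLlm` callback stands in for whatever chat-completion call composes the arguments; it is not part of `@samchon/openapi`:

```typescript
import { IHttpLlmFunction, IValidation } from "@samchon/openapi";

const composeWithFeedback = async (p: {
  func: IHttpLlmFunction<"chatgpt">;
  // hypothetical helper: asks the LLM for arguments, optionally with
  // the type errors of the previous trial as feedback
  askLlm: (errors?: IValidation.IError[]) => Promise<unknown>;
  maxTrials?: number;
}): Promise<unknown> => {
  let errors: IValidation.IError[] | undefined;
  for (let i = 0; i < (p.maxTrials ?? 3); ++i) {
    const args: unknown = await p.askLlm(errors);
    const result: IValidation<unknown> = p.func.validate(args);
    if (result.success === true) return result.data; // valid arguments
    errors = result.errors; // feed detailed type errors back to the LLM
  }
  throw new Error("LLM failed to compose valid arguments.");
};
```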
Arguments from both human and LLM sides.

When composing parameter arguments through LLM function calling, there can be cases where some parameters (or nested properties) must be composed not by the LLM, but by a human. File uploading features, or sensitive information like secret keys (passwords), are representative examples.

In that case, you can configure the LLM function calling schemas to exclude such human-side parameters (or nested properties) through the `IHttpLlmApplication.options.separate` property. You then have to merge the human-composed and LLM-composed parameters into one by calling `HttpLlm.mergeParameters()` before executing the LLM function call with the `HttpLlm.execute()` function.

Here is example code separating the file uploading feature from the LLM function calling schema, and combining the human-composed and LLM-composed parameters into one before the LLM function call execution.

- Example Code: test/examples/claude-function-call-separate-to-sale-create.ts
- Prompt describing the product to create: Microsoft Surface Pro 9
- Result of the Function Calling: examples/arguments/claude.microsoft-surface-pro-9.input.json
```typescript
import Anthropic from "@anthropic-ai/sdk";
import {
  ClaudeTypeChecker,
  HttpLlm,
  IClaudeSchema,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fetch(
      "https://github.com/samchon/shopping-backend/blob/master/packages/api/swagger.json",
    ).then((r) => r.text()),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"claude"> = HttpLlm.application({
    model: "claude",
    document,
    options: {
      reference: true,
      separate: (schema) =>
        ClaudeTypeChecker.isString(schema) &&
        !!schema.contentMediaType?.startsWith("image"),
    },
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"claude"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by Claude function calling
  const client: Anthropic = new Anthropic({
    apiKey: "<YOUR_ANTHROPIC_API_KEY>",
  });
  const completion: Anthropic.Message = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 8_192,
    // Anthropic takes the system prompt as a top-level parameter
    system:
      "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
    messages: [
      {
        role: "user",
        content: "<DESCRIPTION ABOUT THE SALE>",
        // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
      },
    ],
    tools: [
      {
        name: func.name,
        description: func.description,
        input_schema: func.separated!.llm as any,
      },
    ],
  });
  const toolCall: Anthropic.ToolUseBlock = completion.content.filter(
    (c) => c.type === "tool_use",
  )[0]!;

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: HttpLlm.mergeParameters({
      function: func,
      llm: toolCall.input as any,
      human: {
        // Human composed parameter values
        content: {
          files: [],
          thumbnails: [
            {
              name: "thumbnail",
              extension: "jpeg",
              url: "https://serpapi.com/searches/673d3a37e45f3316ecd8ab3e/images/1be25e6e2b1fb7509f1af89c326cb41749301b94375eb5680b9bddcdf88fabcb.jpeg",
            },
            // ...
          ],
        },
      },
    }),
  });
  console.log("article", article);
};
main().catch(console.error);
```
https://github.com/wrtnlabs/agentica

`agentica` is the simplest Agentic AI library, specialized in LLM function calling with `@samchon/openapi`.

With it, you don't need to compose a complicated agent graph or workflow. Instead, just deliver Swagger/OpenAPI documents or TypeScript class types to `agentica`. Then `agentica` will do everything through function calling.

Look at the demonstration below, and feel how easy and powerful `agentica` is when combined with `@samchon/openapi`.
```typescript
import { Agentica } from "@agentica/core";
import typia from "typia";

const agent = new Agentica({
  controllers: [
    await fetch(
      "https://shopping-be.wrtn.ai/editor/swagger.json",
    ).then((r) => r.json()),
    typia.llm.application<ShoppingCounselor>(),
    typia.llm.application<ShoppingPolicy>(),
    typia.llm.application<ShoppingSearchRag>(),
  ],
});
await agent.conversate("I wanna buy MacBook Pro");
```
Similar Open Source Tools


rust-genai
genai is a multi-AI providers library for Rust that aims to provide a common and ergonomic single API to various generative AI providers such as OpenAI, Anthropic, Cohere, Ollama, and Gemini. It focuses on standardizing chat completion APIs across major AI services, prioritizing ergonomics and commonality. The library initially focuses on text chat APIs and plans to expand to support images, function calling, and more in the future versions. Version 0.1.x will have breaking changes in patches, while version 0.2.x will follow semver more strictly. genai does not provide a full representation of a given AI provider but aims to simplify the differences at a lower layer for ease of use.

client-ts
Mistral Typescript Client is an SDK for Mistral AI API, providing Chat Completion and Embeddings APIs. It allows users to create chat completions, upload files, create agent completions, create embedding requests, and more. The SDK supports various JavaScript runtimes and provides detailed documentation on installation, requirements, API key setup, example usage, error handling, server selection, custom HTTP client, authentication, providers support, standalone functions, debugging, and contributions.

venom
Venom is a high-performance system developed with JavaScript to create a bot for WhatsApp. It supports creating any kind of interaction, such as customer service, media sending, sentence recognition based on artificial intelligence, and all types of design architecture for WhatsApp.

agentops
AgentOps is a toolkit for evaluating and developing robust and reliable AI agents. It provides benchmarks, observability, and replay analytics to help developers build better agents, and is currently in open beta. Key features include session replays in 3 lines of code (initialize the AgentOps client and automatically get analytics on every LLM call), time travel debugging (coming soon), Agent Arena (coming soon), and callback handlers that work seamlessly with applications built using Langchain and LlamaIndex.

BetaML.jl
The Beta Machine Learning Toolkit is a package containing various algorithms and utilities for implementing machine learning workflows in multiple languages, including Julia, Python, and R. It offers a range of supervised and unsupervised models, data transformers, and assessment tools. The models are implemented entirely in Julia and are not wrappers for third-party models. Users can easily contribute new models or request implementations. The focus is on user-friendliness rather than computational efficiency, making it suitable for educational and research purposes.

freeGPT
freeGPT provides free access to text and image generation models. It supports various models, including gpt3, gpt4, alpaca_7b, falcon_40b, prodia, and pollinations. The tool offers both asynchronous and non-asynchronous interfaces for text completion and image generation. It also features an interactive Discord bot that provides access to all the models in the repository. The tool is easy to use and can be integrated into various applications.

cellseg_models.pytorch
cellseg-models.pytorch is a Python library built upon PyTorch for 2D cell/nuclei instance segmentation models. It provides multi-task encoder-decoder architectures and post-processing methods for segmenting cell/nuclei instances. The library offers high-level API to define segmentation models, open-source datasets for training, flexibility to modify model components, sliding window inference, multi-GPU inference, benchmarking utilities, regularization techniques, and example notebooks for training and finetuning models with different backbones.

orch
orch is a library for building language model powered applications and agents for the Rust programming language. It can be used for tasks such as text generation, streaming text generation, structured data generation, and embedding generation. The library provides functionalities for executing various language model tasks and can be integrated into different applications and contexts. It offers flexibility for developers to create language model-powered features and applications in Rust.

island-ai
island-ai is a TypeScript toolkit tailored for developers engaging with structured outputs from Large Language Models. It offers streamlined processes for handling, parsing, streaming, and leveraging AI-generated data across various applications. The toolkit includes packages like zod-stream for interfacing with LLM streams, stream-hooks for integrating streaming JSON data into React applications, and schema-stream for JSON streaming parsing based on Zod schemas. Additionally, related packages like @instructor-ai/instructor-js focus on data validation and retry mechanisms, enhancing the reliability of data processing workflows.

dashscope-sdk
DashScope SDK for .NET is an unofficial SDK maintained by Cnblogs, providing various APIs for text embedding, generation, multimodal generation, image synthesis, and more. Users can interact with the SDK to perform tasks such as text completion, chat generation, function calls, file operations, and more. The project is under active development, and users are advised to check the Release Notes before upgrading.

aioshelly
Aioshelly is an asynchronous library designed to control Shelly devices. It is currently under development and requires Python version 3.11 or higher, along with dependencies like bluetooth-data-tools, aiohttp, and orjson. The library provides examples for interacting with Gen1 devices using CoAP protocol and Gen2/Gen3 devices using RPC and WebSocket protocols. Users can easily connect to Shelly devices, retrieve status information, and perform various actions through the provided APIs. The repository also includes example scripts for quick testing and usage guidelines for contributors to maintain consistency with the Shelly API.

mediapipe-rs
MediaPipe-rs is a Rust library designed for MediaPipe tasks on WasmEdge WASI-NN. It offers easy-to-use low-code APIs similar to mediapipe-python, with low overhead and flexibility for custom media input. The library supports various tasks like object detection, image classification, gesture recognition, and more, including TfLite models, TF Hub models, and custom models. Users can create task instances, run sessions for pre-processing, inference, and post-processing, and speed up processing by reusing sessions. The library also provides support for audio tasks using audio data from symphonia, ffmpeg, or raw audio. Users can choose between CPU, GPU, or TPU devices for processing.

llama_ros
This repository provides a set of ROS 2 packages to integrate llama.cpp into ROS 2. By using the llama_ros packages, you can easily incorporate the powerful optimization capabilities of llama.cpp into your ROS 2 projects by running GGUF-based LLMs and VLMs.

json-translator
The json-translator repository provides a free tool to translate JSON/YAML files or JSON objects into different languages using various translation modules. It supports CLI usage and package support, allowing users to translate words, sentences, JSON objects, and JSON files. The tool also offers multi-language translation, ignoring specific words, and safe translation practices. Users can contribute to the project by updating CLI, translation functions, JSON operations, and more. The roadmap includes features like Libre Translate option, Argos Translate option, Bing Translate option, and support for additional translation modules.
For similar jobs

Awesome-LLM-RAG-Application
Awesome-LLM-RAG-Application is a repository that provides resources and information about applications based on Large Language Models (LLM) with Retrieval-Augmented Generation (RAG) pattern. It includes a survey paper, GitHub repo, and guides on advanced RAG techniques. The repository covers various aspects of RAG, including academic papers, evaluation benchmarks, downstream tasks, tools, and technologies. It also explores different frameworks, preprocessing tools, routing mechanisms, evaluation frameworks, embeddings, security guardrails, prompting tools, SQL enhancements, LLM deployment, observability tools, and more. The repository aims to offer comprehensive knowledge on RAG for readers interested in exploring and implementing LLM-based systems and products.

ChatGPT-On-CS
ChatGPT-On-CS is an intelligent chatbot tool based on large models, supporting various platforms like WeChat, Taobao, Bilibili, Douyin, Weibo, and more. It can handle text, voice, and image inputs, access external resources through plugins, and customize enterprise AI applications based on proprietary knowledge bases. Users can set custom replies, utilize ChatGPT interface for intelligent responses, send images and binary files, and create personalized chatbots using knowledge base files. The tool also features platform-specific plugin systems for accessing external resources and supports enterprise AI applications customization.

call-gpt
Call GPT is a voice application that utilizes Deepgram for Speech to Text, elevenlabs for Text to Speech, and OpenAI for GPT prompt completion. It allows users to chat with ChatGPT on the phone, providing better transcription, understanding, and speaking capabilities than traditional IVR systems. The app returns responses with low latency, allows user interruptions, maintains chat history, and enables GPT to call external tools. It coordinates data flow between Deepgram, OpenAI, ElevenLabs, and Twilio Media Streams, enhancing voice interactions.

awesome-LLM-resourses
A comprehensive repository of resources for Chinese large language models (LLMs), including data processing tools, fine-tuning frameworks, inference libraries, evaluation platforms, RAG engines, agent frameworks, books, courses, tutorials, and tips. The repository covers a wide range of tools and resources for working with LLMs, from data labeling and processing to model fine-tuning, inference, evaluation, and application development. It also includes resources for learning about LLMs through books, courses, and tutorials, as well as insights and strategies from building with LLMs.

tappas
Hailo TAPPAS is a set of full application examples that implement pipeline elements and pre-trained AI tasks. It demonstrates Hailo's system integration scenarios on predefined systems, aiming to accelerate time to market, simplify integration with Hailo's runtime SW stack, and provide a starting point for customers to fine-tune their applications. The tool supports both Hailo-15 and Hailo-8, offering various example applications optimized for different common hosts. TAPPAS includes pipelines for single network, two network, and multi-stream processing, as well as high-resolution processing via tiling. It also provides example use case pipelines like License Plate Recognition and Multi-Person Multi-Camera Tracking. The tool is regularly updated with new features, bug fixes, and platform support.

cloudflare-rag
This repository provides a fullstack example of building a Retrieval Augmented Generation (RAG) app with Cloudflare. It utilizes Cloudflare Workers, Pages, D1, KV, R2, AI Gateway, and Workers AI. The app features streaming interactions to the UI, hybrid RAG with Full-Text Search and Vector Search, switchable providers using AI Gateway, per-IP rate limiting with Cloudflare's KV, OCR within Cloudflare Worker, and Smart Placement for workload optimization. The development setup requires Node, pnpm, and wrangler CLI, along with setting up necessary primitives and API keys. Deployment involves setting up secrets and deploying the app to Cloudflare Pages. The project implements a Hybrid Search RAG approach combining Full Text Search against D1 and Hybrid Search with embeddings against Vectorize to enhance context for the LLM.

pixeltable
Pixeltable is a Python library designed for ML Engineers and Data Scientists to focus on exploration, modeling, and app development without the need to handle data plumbing. It provides a declarative interface for working with text, images, embeddings, and video, enabling users to store, transform, index, and iterate on data within a single table interface. Pixeltable is persistent, acting as a database unlike in-memory Python libraries such as Pandas. It offers features like data storage and versioning, combined data and model lineage, indexing, orchestration of multimodal workloads, incremental updates, and automatic production-ready code generation. The tool emphasizes transparency, reproducibility, cost-saving through incremental data changes, and seamless integration with existing Python code and libraries.

wave-apps
Wave Apps is a directory of sample applications built on H2O Wave, allowing users to build AI apps faster. The apps cover various use cases such as explainable hotel ratings, human-in-the-loop credit risk assessment, mitigating churn risk, online shopping recommendations, and sales forecasting EDA. Users can download, modify, and integrate these sample apps into their own projects to learn about app development and AI model deployment.