wingman
Inference Hub for AI at Scale
Stars: 70
The LLM Platform, also known as Inference Hub, is an open-source tool designed to simplify the development and deployment of large language model applications at scale. It provides a unified framework for integrating and managing multiple LLM vendors, models, and related services through a standardized yet flexible approach. The platform supports various LLM providers, document processing, RAG, advanced AI workflows, infrastructure operations, and flexible configuration using YAML files. Its modular and extensible architecture allows developers to plug in different providers and services as needed. Key components include completers, embedders, renderers, synthesizers, transcribers, document processors, segmenters, retrievers, summarizers, translators, AI workflows, tools, and infrastructure components. Use cases range from enterprise AI applications to scalable LLM deployment and custom AI pipelines. Integrations with LLM providers like OpenAI, Azure OpenAI, Anthropic, Google Gemini, AWS Bedrock, Groq, Mistral AI, xAI, Hugging Face, and more are supported.
README:
The LLM Platform or Inference Hub is an open-source product designed to simplify the development and deployment of large language model (LLM) applications at scale. It provides a unified framework that allows developers to integrate and manage multiple LLM vendors, models, and related services through a standardized but highly flexible approach.
The platform integrates with a wide range of LLM providers:
Chat/Completion Models:
- OpenAI Platform and Azure OpenAI Service (GPT models)
- Anthropic (Claude models)
- Google Gemini
- AWS Bedrock
- Mistral AI
- Hugging Face
- Local deployments: Ollama, LLAMA.CPP
- Custom models via gRPC plugins
Embedding Models:
- OpenAI, Azure OpenAI, Jina, Hugging Face, Google Gemini
- Local: Ollama, LLAMA.CPP
- Custom embedders via gRPC
Media Processing:
- Image generation: OpenAI DALL-E, Replicate
- Speech-to-text: OpenAI Whisper, Mistral
- Text-to-speech: OpenAI TTS
- Reranking: Jina
Document Extractors:
- Apache Tika for various document formats
- Unstructured.io for advanced document parsing
- Azure Document Intelligence
- Docling for document conversion
- Kreuzberg for document parsing
- Mistral document extraction
- Text extraction from plain files
- Custom extractors via gRPC
Text Segmentation:
- Jina segmenter for semantic chunking
- Kreuzberg segmenter
- Text-based chunking with configurable sizes
- Unstructured.io segmentation
- Custom segmenters via gRPC
Information Retrieval:
- Web search: DuckDuckGo, Exa, Tavily
- Custom retrievers via gRPC plugins
Chains & Agents:
- Agent/Assistant chains with tool calling capabilities
- Custom conversation flows
- Multi-step reasoning workflows
- Tool integration and function calling
Tools & Function Calling:
- Built-in tools: search, extract, retrieve, render, synthesize, translate
- Model Context Protocol (MCP) support: Full server and client implementation
- Connect to external MCP servers as tool providers
- Built-in MCP server exposing platform capabilities
- Multiple transport methods (HTTP streaming, SSE, command execution)
- Custom tools via gRPC plugins
Additional Capabilities:
- Text summarization (via chat models)
- Language translation
- Content rendering and formatting
Routing & Load Balancing:
- Round-robin load balancer for distributing requests
- Model fallback strategies
- Request routing across multiple providers
Rate Limiting & Control:
- Per-provider and per-model rate limiting
- Request throttling and queuing
- Resource usage controls
Authentication & Security:
- Static token authentication
- OpenID Connect (OIDC) integration
- Secure credential management
API Compatibility:
- OpenAI-compatible API endpoints (see the example request after this list)
- Custom API configurations
- Multiple API versions support
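Because the endpoints are OpenAI-compatible, a standard OpenAI-style request can be sent directly to the platform. This is a minimal sketch: the listen address, port, and token are assumptions and depend on how you deploy and configure the server.

# Hypothetical request against a locally running instance on port 8080,
# authorized with a static token as configured under authorizers below
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-token" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'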
Observability & Monitoring:
- Full OpenTelemetry integration (see the environment sketch after this list)
- Request tracing across all components
- Comprehensive metrics and logging
- Performance monitoring and debugging
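Since the integration is built on OpenTelemetry, the standard OpenTelemetry SDK environment variables are the natural way to point traces and metrics at a collector. A minimal sketch, assuming the platform honors the standard OTel exporter variables (this README does not confirm the exact variables):

# Hypothetical: standard OpenTelemetry SDK environment variables
export OTEL_SERVICE_NAME=wingman
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317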
Developers can define providers, models, credentials, document processing pipelines, tools, and advanced AI workflows using YAML configuration files. This approach streamlines integration and makes it easy to manage complex AI applications.
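For illustration, here is a minimal sketch of a single configuration combining several of the sections documented below. The combination into one file is an assumption; each individual section follows the examples shown later in this README.

providers:
  - type: openai
    token: ${OPENAI_API_KEY}
    models:
      - gpt-4o
      - text-embedding-3-small

retrievers:
  web:
    type: duckduckgo

tools:
  search:
    type: search
    retriever: web

chains:
  assistant:
    type: agent
    model: gpt-4o
    tools:
      - search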
The architecture is designed to be modular and extensible, allowing developers to plug in different providers and services as needed. It consists of key components:
Core Providers:
- Completers: Chat/completion models for text generation and reasoning
- Embedders: Vector embedding models for semantic understanding
- Renderers: Image generation and visual content creation
- Synthesizers: Text-to-speech and audio generation
- Transcribers: Speech-to-text and audio processing
- Rerankers: Result ranking and relevance scoring
Document & Data Processing:
- Extractors: Document parsing and content extraction from various formats
- Segmenters: Text chunking and semantic segmentation for RAG
- Retrievers: Web search and information retrieval
- Summarizers: Content compression and summarization
- Translators: Multi-language text translation
AI Workflows & Tools:
- Chains: Multi-step AI workflows and agent-based reasoning
- Tools: Function calling, web search, document processing, and custom capabilities
- APIs: Multiple API formats and compatibility layers
Infrastructure:
- Routers: Load balancing and request distribution
- Rate Limiters: Resource control and throttling
- Authorizers: Authentication and access control
- Observability: OpenTelemetry tracing and monitoring
Use Cases:
- Enterprise AI Applications: Unified platform for multiple AI services and models
- RAG (Retrieval-Augmented Generation): Document processing, semantic search, and knowledge retrieval
- AI Agents & Workflows: Multi-step reasoning, tool integration, and autonomous task execution
- Scalable LLM Deployment: High-volume applications with load balancing and failover
- Multi-Modal AI: Combining text, image, and audio processing capabilities
- Custom AI Pipelines: Flexible workflows using custom tools and chains
https://platform.openai.com/docs/api-reference

providers:
  - type: openai
    token: sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    models:
      - gpt-4o
      - gpt-4o-mini
      - text-embedding-3-small
      - text-embedding-3-large
      - whisper-1
      - dall-e-3
      - tts-1
      - tts-1-hd

https://azure.microsoft.com/en-us/products/ai-services/openai-service
providers:
  - type: openai
    url: https://xxxxxxxx.openai.azure.com
    token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    models:
      # {alias}:
      # - id: {azure oai deployment name}
      gpt-3.5-turbo:
        id: gpt-35-turbo-16k
      gpt-4:
        id: gpt-4-32k
      text-embedding-ada-002:
        id: text-embedding-ada-002
providers:
  - type: anthropic
    token: sk-ant-apixx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    # https://docs.anthropic.com/en/docs/models-overview
    #
    # {alias}:
    # - id: {anthropic api model name}
    models:
      claude-3.5-sonnet:
        id: claude-3-5-sonnet-20240620
providers:
  - type: gemini
    token: ${GOOGLE_API_KEY}
    # https://ai.google.dev/gemini-api/docs/models/gemini
    #
    # {alias}:
    # - id: {gemini api model name}
    models:
      gemini-1.5-pro:
        id: gemini-1.5-pro-latest
      gemini-1.5-flash:
        id: gemini-1.5-flash-latest
providers:
  - type: bedrock
    # AWS credentials configured via environment or IAM roles
    models:
      claude-3-sonnet:
        id: anthropic.claude-3-sonnet-20240229-v1:0
providers:
  - type: mistral
    token: ${MISTRAL_API_KEY}
    # https://docs.mistral.ai/getting-started/models/
    #
    # {alias}:
    # - id: {mistral api model name}
    models:
      mistral-large:
        id: mistral-large-latest
providers:
  - type: replicate
    token: ${REPLICATE_API_KEY}
    #
    # {alias}:
    # - id: {replicate model name}
    models:
      replicate-flux-pro:
        id: black-forest-labs/flux-pro
$ ollama start
$ ollama run mistral

providers:
  - type: ollama
    url: http://localhost:11434
    # https://ollama.com/library
    #
    # {alias}:
    # - id: {ollama model name with optional version}
    models:
      mistral-7b-instruct:
        id: mistral:latest
https://github.com/ggerganov/llama.cpp/tree/master/examples/server

$ llama-server --port 9081 --log-disable --model ./models/mistral-7b-instruct-v0.2.Q4_K_M.gguf

providers:
  - type: llama
    url: http://localhost:9081
    models:
      - mistral-7b-instruct
providers:
  - type: huggingface
    token: hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    models:
      mistral-7B-instruct:
        id: mistralai/Mistral-7B-Instruct-v0.1
      huggingface-minilm-l6-2:
        id: sentence-transformers/all-MiniLM-L6-v2
routers:
  llama-lb:
    type: roundrobin
    models:
      - llama-3-8b
      - groq-llama-3-8b
      - huggingface-llama-3-8b

retrievers:
  web:
    type: duckduckgo

retrievers:
  exa:
    type: exa
    token: ${EXA_API_KEY}

retrievers:
  tavily:
    type: tavily
    token: ${TAVILY_API_KEY}

retrievers:
  custom:
    type: custom
    url: http://localhost:8080
# using Docker
docker run -it --rm -p 9998:9998 apache/tika:3.0.0.0-BETA2-full

extractors:
  tika:
    type: tika
    url: http://localhost:9998
    chunkSize: 4000
    chunkOverlap: 200

docker run -it --rm -p 9085:8000 quay.io/unstructured-io/unstructured-api:0.0.80 --port 8000 --host 0.0.0.0

extractors:
  unstructured:
    type: unstructured
    url: http://localhost:9085/general/v0/general
extractors:
  azure:
    type: azure
    url: https://YOUR_INSTANCE.cognitiveservices.azure.com
    token: ${AZURE_API_KEY}

https://github.com/DS4SD/docling

extractors:
  docling:
    type: docling
    url: http://localhost:5000

https://github.com/lenskit/kreuzberg

extractors:
  kreuzberg:
    type: kreuzberg
    url: http://localhost:8000

extractors:
  mistral:
    type: mistral
    token: ${MISTRAL_API_KEY}

extractors:
  text:
    type: text

extractors:
  custom:
    type: custom
    url: http://localhost:8080
segmenters:
  jina:
    type: jina
    token: ${JINA_API_KEY}

segmenters:
  kreuzberg:
    type: kreuzberg
    url: http://localhost:8000

segmenters:
  text:
    type: text
    chunkSize: 1000
    chunkOverlap: 200

segmenters:
  unstructured:
    type: unstructured
    url: http://localhost:9085/general/v0/general

segmenters:
  custom:
    type: custom
    url: http://localhost:8080
chains:
  assistant:
    type: agent
    model: gpt-4o
    tools:
      - search
      - extract
    messages:
      - role: system
        content: "You are a helpful AI assistant."

The platform provides comprehensive support for the Model Context Protocol (MCP), enabling integration with MCP-compatible tools and services.
MCP Server Support:
- Built-in MCP server that exposes platform tools to MCP clients
- Automatic tool discovery and schema generation
- Multiple transport methods (HTTP streaming, SSE, command-line)
MCP Client Support:
- Connect to external MCP servers as tool providers
- Support for various MCP transport methods
- Automatic tool registration and execution
MCP Tool Configuration:
tools:
  # MCP server via HTTP streaming
  mcp-streamable:
    type: mcp
    url: http://localhost:8080/mcp

  # MCP server via Server-Sent Events
  mcp-sse:
    type: mcp
    url: http://localhost:8080/sse
    vars:
      api-key: ${API_KEY}

  # MCP server via command execution
  mcp-command:
    type: mcp
    command: /path/to/mcp-server
    args:
      - --config
      - /path/to/config.json
    vars:
      ENV_VAR: value

Built-in MCP Server:
The platform automatically exposes its tools via the MCP protocol at the /mcp endpoint, allowing other MCP clients to discover and use platform capabilities.
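Because the built-in server speaks the same protocol, any MCP client, including another instance of the platform, can consume it with the tool configuration format shown above. A minimal sketch (the host and port are assumptions):

tools:
  # Hypothetical: connect to another instance's built-in MCP server
  platform-tools:
    type: mcp
    url: http://other-instance:8080/mcp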
tools:
  search:
    type: search
    retriever: web

  extract:
    type: extract
    extractor: tika

  translate:
    type: translate
    translator: default

  render:
    type: render
    renderer: dalle-3

  synthesize:
    type: synthesize
    synthesizer: tts-1

tools:
  custom-tool:
    type: custom
    url: http://localhost:8080

authorizers:
  - type: static
    tokens:
      - "your-secret-token"
authorizers:
  - type: oidc
    url: https://your-oidc-provider.com
    audience: your-audience

routers:
  llama-lb:
    type: roundrobin
    models:
      - llama-3-8b
      - groq-llama-3-8b
      - huggingface-llama-3-8b

Add rate limiting to any provider:

providers:
  - type: openai
    token: sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    limit: 10 # requests per second
    models:
      gpt-4o:
        limit: 5 # override for specific model

Summarization is automatically available for any chat model:

# Use any completer model for summarization
# The platform automatically adapts chat models for summarization tasks

translators:
  default:
    type: default
    # Uses configured chat models for translation
Alternative AI tools for wingman
Similar Open Source Tools
simba
Simba is an open source, portable Knowledge Management System (KMS) designed to seamlessly integrate with any Retrieval-Augmented Generation (RAG) system. It features a modern UI and modular architecture, allowing developers to focus on building advanced AI solutions without the complexities of knowledge management. Simba offers a user-friendly interface to visualize and modify document chunks, supports various vector stores and embedding models, and simplifies knowledge management for developers. It is community-driven, extensible, and aims to enhance AI functionality by providing a seamless integration with RAG-based systems.
z-ai-sdk-python
Z.ai Open Platform Python SDK is the official Python SDK for Z.ai's large model open interface, providing developers with easy access to Z.ai's open APIs. The SDK offers core features like chat completions, embeddings, video generation, audio processing, assistant API, and advanced tools. It supports various functionalities such as speech transcription, text-to-video generation, image understanding, and structured conversation handling. Developers can customize client behavior, configure API keys, and handle errors efficiently. The SDK is designed to simplify AI interactions and enhance AI capabilities for developers.
FDAbench
FDABench is a benchmark tool designed for evaluating data agents' reasoning ability over heterogeneous data in analytical scenarios. It offers 2,007 tasks across various data sources, domains, difficulty levels, and task types. The tool provides ready-to-use data agent implementations, a DAG-based evaluation system, and a framework for agent-expert collaboration in dataset generation. Key features include data agent implementations, comprehensive evaluation metrics, multi-database support, different task types, extensible framework for custom agent integration, and cost tracking. Users can set up the environment using Python 3.10+ on Linux, macOS, or Windows. FDABench can be installed with a one-command setup or manually. The tool supports API configuration for LLM access and offers quick start guides for database download, dataset loading, and running examples. It also includes features like dataset generation using the PUDDING framework, custom agent integration, evaluation metrics like accuracy and rubric score, and a directory structure for easy navigation.
llm-context.py
LLM Context is a tool designed to assist developers in quickly injecting relevant content from code/text projects into Large Language Model chat interfaces. It leverages `.gitignore` patterns for smart file selection and offers a streamlined clipboard workflow using the command line. The tool also provides direct integration with Large Language Models through the Model Context Protocol (MCP). LLM Context is optimized for code repositories and collections of text/markdown/html documents, making it suitable for developers working on projects that fit within an LLM's context window. The tool is under active development and aims to enhance AI-assisted development workflows by harnessing the power of Large Language Models.
dexto
Dexto is a lightweight runtime for creating and running AI agents that turn natural language into real-world actions. It serves as the missing intelligence layer for building AI applications, standalone chatbots, or as the reasoning engine inside larger products. Dexto features a powerful CLI and Web UI for running AI agents, supports multiple interfaces, allows hot-swapping of LLMs from various providers, connects to remote tool servers via the Model Context Protocol, is config-driven with version-controlled YAML, offers production-ready core features, extensibility for custom services, and enables multi-agent collaboration via MCP and A2A.
BrowserAI
BrowserAI is a production-ready tool that allows users to run AI models directly in the browser, offering simplicity, speed, privacy, and open-source capabilities. It provides WebGPU acceleration for fast inference, zero server costs, offline capability, and developer-friendly features. Perfect for web developers, companies seeking privacy-conscious AI solutions, researchers experimenting with browser-based AI, and hobbyists exploring AI without infrastructure overhead. The tool supports various AI tasks like text generation, speech recognition, and text-to-speech, with pre-configured popular models ready to use. It offers a simple SDK with multiple engine support and seamless switching between MLC and Transformers engines.
llama-api-server
This project aims to create a RESTful API server compatible with the OpenAI API using open-source backends like llama/llama2. With this project, various GPT tools/frameworks can be compatible with your own model. Key features include compatibility with the OpenAI API (the server follows the OpenAI API structure, allowing seamless integration with existing tools and frameworks), support for multiple backends (both llama.cpp and pyllama, providing flexibility in model selection), customization options (model parameters such as temperature, top_p, and top_k to fine-tune the model's behavior), batch processing for embeddings (enabling efficient handling of multiple inputs), and token authentication to secure access to the API. This tool is particularly useful for developers and researchers who want to integrate large language models into their applications or explore custom models without relying on proprietary APIs.
ChatGPT-Next-Web
ChatGPT Next Web is a well-designed cross-platform ChatGPT web UI tool that supports Claude, GPT4, and Gemini Pro models. It allows users to deploy their private ChatGPT applications with ease. The tool offers features like one-click deployment, compact client for Linux/Windows/MacOS, compatibility with self-deployed LLMs, privacy-first approach with local data storage, markdown support, responsive design, fast loading speed, prompt templates, awesome prompts, chat history compression, multilingual support, and more.
Neosgenesis
Neogenesis System is an advanced AI decision-making framework that enables agents to 'think about how to think'. It implements a metacognitive approach with real-time learning, tool integration, and multi-LLM support, allowing AI to make expert-level decisions in complex environments. Key features include metacognitive intelligence, tool-enhanced decisions, real-time learning, aha-moment breakthroughs, experience accumulation, and multi-LLM support.
chatgpt-adapter
ChatGPT-Adapter is an interface service that integrates various free services together. It provides a unified interface specification and integrates services like Bing, Claude-2, Gemini. Users can start the service by running the linux-server script and set proxies if needed. The tool offers model lists for different adapters, completion dialogues, authorization methods for different services like Claude, Bing, Gemini, Coze, and Lmsys. Additionally, it provides a free drawing interface with options like coze.dall-e-3, sd.dall-e-3, xl.dall-e-3, pg.dall-e-3 based on user-provided Authorization keys. The tool also supports special flags for enhanced functionality.
browser4
Browser4 is a lightning-fast, coroutine-safe browser designed for AI integration with large language models. It offers ultra-fast automation, deep web understanding, and powerful data extraction APIs. Users can automate the browser, extract data at scale, and perform tasks like summarizing products, extracting product details, and finding specific links. The tool is developer-friendly, supports AI-powered automation, and provides advanced features like X-SQL for precise data extraction. It also offers RPA capabilities, browser control, and complex data extraction with X-SQL. Browser4 is suitable for web scraping, data extraction, automation, and AI integration tasks.
ai-gradio
ai-gradio is a Python package that simplifies the creation of machine learning apps using various models like OpenAI, Google's Gemini, Anthropic's Claude, LumaAI, CrewAI, XAI's Grok, and Hyperbolic. It provides easy installation with support for different providers and offers features like text chat, voice chat, video chat, code generation interfaces, and AI agent teams. Users can set API keys for different providers and customize interfaces for specific tasks.
sgr-deep-research
This repository contains a deep learning research project focused on natural language processing tasks. It includes implementations of various state-of-the-art models and algorithms for text classification, sentiment analysis, named entity recognition, and more. The project aims to provide a comprehensive resource for researchers and developers interested in exploring deep learning techniques for NLP applications.
ai-counsel
AI Counsel is a true deliberative consensus MCP server where AI models engage in actual debate, refine positions across multiple rounds, and converge with voting and confidence levels. It features two modes (quick and conference), mixed adapters (CLI tools and HTTP services), auto-convergence, structured voting, semantic grouping, model-controlled stopping, evidence-based deliberation, local model support, data privacy, context injection, semantic search, fault tolerance, and full transcripts. Users can run local and cloud models to deliberate on various questions, ground decisions in reality by querying code and files, and query past decisions for analysis. The tool is designed for critical technical decisions requiring multi-model deliberation and consensus building.
azooKey-Desktop
azooKey-Desktop is an open-source Japanese input system for macOS that incorporates the high-precision neural kana-kanji conversion engine 'Zenzai'. It offers features such as neural kana-kanji conversion, profile prompt, history learning, user dictionary, integration with personal optimization system 'Tuner', 'nice feeling conversion' with LLM, live conversion, and native support for AZIK. The tool is currently in alpha version, and its operation is not guaranteed. Users can install it via `.pkg` file or Homebrew. Development contributions are welcome, and the project has received support from the Information-technology Promotion Agency, Japan (IPA) for the 2024 fiscal year's untapped IT human resources discovery and nurturing project.
For similar tasks
comfyui_LLM_party
COMFYUI LLM PARTY is a node library designed for LLM workflow development in ComfyUI, an extremely minimalist UI primarily used for AI drawing and SD model-based workflows. The project aims to provide a complete set of nodes for constructing LLM workflows, enabling users to easily integrate them into existing SD workflows. It features various functionalities such as API integration, local large model integration, RAG support, code interpreters, online queries, conditional statements, looping links for large models, persona mask attachment, and tool invocations for weather lookup, time lookup, knowledge base, code execution, web search, and single-page search. Users can rapidly develop web applications using API + Streamlit and utilize LLM as a tool node. Additionally, the project includes an omnipotent interpreter node that allows the large model to perform any task, with recommendations to use the 'show_text' node for display output.
n8n
n8n is a workflow automation platform that combines the flexibility of code with the speed of no-code. It offers 400+ integrations, native AI capabilities, and a fair-code license, empowering users to create powerful automations while maintaining control over data and deployments. With features like code customization, AI agent workflows, self-hosting options, enterprise-ready functionalities, and an active community, n8n provides a comprehensive solution for technical teams seeking efficient workflow automation.
xtuner
XTuner is an efficient, flexible, and full-featured toolkit for fine-tuning large models. It supports various LLMs (InternLM, Mixtral-8x7B, Llama 2, ChatGLM, Qwen, Baichuan, ...), VLMs (LLaVA), and various training algorithms (QLoRA, LoRA, full-parameter fine-tune). XTuner also provides tools for chatting with pretrained / fine-tuned LLMs and deploying fine-tuned LLMs with any other framework, such as LMDeploy.
llm-hosting-container
The LLM Hosting Container repository provides Dockerfile and associated resources for building and hosting containers for large language models, specifically the HuggingFace Text Generation Inference (TGI) container. This tool allows users to easily deploy and manage large language models in a containerized environment, enabling efficient inference and deployment of language-based applications.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Use cases for BricksLLM include setting LLM usage limits for users on different pricing tiers, tracking LLM usage on a per-user and per-organization basis, blocking or redacting requests containing PII, improving LLM reliability with failovers, retries, and caching, and distributing API keys with rate limits and cost limits for internal development/production use cases or for students.
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.
