
hayhooks
Deploy Haystack pipelines behind a REST API.
Stars: 51

Hayhooks is a tool that simplifies the deployment and serving of Haystack pipelines as REST APIs. It allows users to wrap their pipelines with custom logic and expose them via HTTP endpoints, including OpenAI-compatible chat completion endpoints. With Hayhooks, users can easily convert their Haystack pipelines into API services with minimal boilerplate code.
README:
Hayhooks makes it easy to deploy and serve Haystack pipelines as REST APIs.
It provides a simple way to wrap your Haystack pipelines with custom logic and expose them via HTTP endpoints, including OpenAI-compatible chat completion endpoints. With Hayhooks, you can quickly turn your Haystack pipelines into API services with minimal boilerplate code.
Table of Contents
- Quick Start
- Install the package
- Configuration
- CLI Commands
- Start hayhooks
- Deploy a pipeline
- OpenAI Compatibility
- Advanced Usage
- Deployment Guidelines
- Legacy Features
- License
Start by installing the package:
pip install hayhooks
Currently, you can configure Hayhooks by:
- Setting the environment variables in an .env file in the root of your project.
- Passing the supported arguments and options to the hayhooks run command.
- Passing the environment variables to the hayhooks command.
The following environment variables are supported:
- HAYHOOKS_HOST: The host on which the server will listen.
- HAYHOOKS_PORT: The port on which the server will listen.
- HAYHOOKS_PIPELINES_DIR: The path to the directory containing the pipelines.
- HAYHOOKS_ROOT_PATH: The root path of the server.
- HAYHOOKS_ADDITIONAL_PYTHONPATH: Additional paths to be added to the Python path.
- HAYHOOKS_DISABLE_SSL: Boolean flag to disable SSL verification when making requests from the CLI.
- HAYHOOKS_SHOW_TRACEBACKS: Boolean flag to show tracebacks on errors during pipeline execution and deployment.
- HAYHOOKS_CORS_ALLOW_ORIGINS: List of allowed origins (default: ["*"]).
- HAYHOOKS_CORS_ALLOW_METHODS: List of allowed HTTP methods (default: ["*"]).
- HAYHOOKS_CORS_ALLOW_HEADERS: List of allowed headers (default: ["*"]).
- HAYHOOKS_CORS_ALLOW_CREDENTIALS: Allow credentials (default: false).
- HAYHOOKS_CORS_ALLOW_ORIGIN_REGEX: Regex pattern for allowed origins (default: null).
- HAYHOOKS_CORS_EXPOSE_HEADERS: Headers to expose in the response (default: []).
- HAYHOOKS_CORS_MAX_AGE: Maximum age for CORS preflight responses in seconds (default: 600).
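For example, a minimal .env file could look like this (the values are illustrative, not defaults):

HAYHOOKS_HOST=0.0.0.0
HAYHOOKS_PORT=1416
HAYHOOKS_PIPELINES_DIR=./pipelines
HAYHOOKS_SHOW_TRACEBACKS=true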
The hayhooks package provides a CLI to manage the server and the pipelines.
Any command can be run with hayhooks <command> --help to get more information.
CLI commands are basically wrappers around the HTTP API of the server. The full API reference is available at http://HAYHOOKS_HOST:HAYHOOKS_PORT/docs or http://HAYHOOKS_HOST:HAYHOOKS_PORT/redoc.
hayhooks run # Start the server
hayhooks status # Check the status of the server and show deployed pipelines
hayhooks pipeline deploy-files <path_to_dir> # Deploy a pipeline using PipelineWrapper
hayhooks pipeline deploy <pipeline_name> # Deploy a pipeline from a YAML file
hayhooks pipeline undeploy <pipeline_name> # Undeploy a pipeline
Let's start Hayhooks:
hayhooks run
This will start the Hayhooks server on HAYHOOKS_HOST:HAYHOOKS_PORT.
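If you prefer command-line options over environment variables, the run command accepts equivalent flags; the flag names below are assumptions mirroring the settings listed in the Configuration section:

hayhooks run --host 0.0.0.0 --port 1416 --pipelines-dir ./pipelines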
Now, we will deploy a pipeline to chat with a website. We have created an example in the examples/chat_with_website_streaming folder.
In the example folder, we have two files:
- chat_with_website.yml: The pipeline definition in YAML format.
- pipeline_wrapper.py (mandatory): A pipeline wrapper that uses the pipeline definition.
The pipeline wrapper provides a flexible foundation for deploying Haystack pipelines by allowing users to:
- Choose their preferred pipeline initialization method (YAML files, Haystack templates, or inline code)
- Define custom pipeline execution logic with configurable inputs and outputs
- Optionally expose OpenAI-compatible chat endpoints with streaming support for integration with interfaces like open-webui
The pipeline_wrapper.py file must contain an implementation of the BasePipelineWrapper class (see here for more details).
A minimal PipelineWrapper looks like this:
from pathlib import Path
from typing import List

from haystack import Pipeline
from hayhooks import BasePipelineWrapper


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        pipeline_yaml = (Path(__file__).parent / "chat_with_website.yml").read_text()
        self.pipeline = Pipeline.loads(pipeline_yaml)

    def run_api(self, urls: List[str], question: str) -> str:
        result = self.pipeline.run({"fetcher": {"urls": urls}, "prompt": {"query": question}})
        return result["llm"]["replies"][0]
It contains two methods: setup() and run_api().

setup()
This method will be called when the pipeline is deployed. It should initialize the self.pipeline attribute as a Haystack pipeline.
You can initialize the pipeline in many ways:
- Load it from a YAML file.
- Define it inline as Haystack pipeline code (see the sketch after this list).
- Load it from a Haystack pipeline template.
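For illustration, here's a minimal sketch of a setup() that builds the pipeline inline in code instead of loading a YAML file; the components and connections are assumptions for a hypothetical pipeline, not the chat_with_website definition:

from haystack import Pipeline
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
from hayhooks import BasePipelineWrapper

class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        # Build the pipeline in code instead of loading it from YAML
        pipeline = Pipeline()
        pipeline.add_component("fetcher", LinkContentFetcher())
        pipeline.add_component("converter", HTMLToDocument())
        pipeline.connect("fetcher.streams", "converter.sources")
        self.pipeline = pipeline
    # run_api(...) omitted for brevity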
run_api()
This method will be used to run the pipeline in API mode, when you call the {pipeline_name}/run endpoint.
You can define the input arguments of the method according to your needs. The input arguments will be used to generate a Pydantic model that will be used to validate the request body. The same will be done for the response type.
NOTE: Since Hayhooks will dynamically create the Pydantic models, you need to make sure that the input arguments are JSON-serializable.
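For example, once the wrapper above is deployed as chat_with_website (next step), the generated endpoint can be called with a JSON body whose fields match the run_api arguments; host and port are illustrative:

curl -X POST http://localhost:1416/chat_with_website/run \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://haystack.deepset.ai"], "question": "What is Haystack?"}'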
To deploy the pipeline, run:
hayhooks pipeline deploy-files -n chat_with_website examples/chat_with_website
This will deploy the pipeline with the name chat_with_website. Any error encountered during development will be printed to the console and shown in the server logs.
During development, you can use the --overwrite flag to redeploy your pipeline without restarting the Hayhooks server:
hayhooks pipeline deploy-files -n {pipeline_name} --overwrite {pipeline_dir}
This is particularly useful when:
- Iterating on your pipeline wrapper implementation
- Debugging pipeline setup issues
- Testing different pipeline configurations
The --overwrite flag will:
- Remove the existing pipeline from the registry
- Delete the pipeline files from disk
- Deploy the new version of your pipeline
For even faster development iterations, you can combine --overwrite with --skip-saving-files to avoid writing files to disk:
hayhooks pipeline deploy-files -n {pipeline_name} --overwrite --skip-saving-files {pipeline_dir}
This is useful when:
- You're making frequent changes during development
- You want to test a pipeline without persisting it
- You're running in an environment with limited disk access
After installing the Hayhooks package, it might happen that during pipeline deployment you need to install additional dependencies in order to correctly initialize the pipeline instance when calling the wrapper's setup() method. For instance, the chat_with_website pipeline requires the trafilatura package, which is not installed by default.
If deployment fails because of a missing dependency, you can see the full error by setting the HAYHOOKS_SHOW_TRACEBACKS environment variable to true or 1.
Then, assuming you've installed the Hayhooks package in a virtual environment, you will need to install the additional required dependencies yourself by running:
pip install trafilatura
Hayhooks can now automatically generate OpenAI-compatible endpoints if you implement the run_chat_completion method in your pipeline wrapper.
This makes Hayhooks compatible with fully-featured chat interfaces like open-webui, so you can use it as a backend for your chat interface.
Requirements:
- Ensure you have open-webui up and running (you can do it easily using docker; check their quick start guide).
- Ensure you have a Hayhooks server running somewhere. We will run it locally on http://localhost:1416.
First, you need to turn off tags and title generation from Admin settings -> Interface:
Then you have two options to connect Hayhooks as a backend.
Add a Direct Connection from Settings -> Connections:
NOTE: Enter a random value as the API key, as it's not needed.
Alternatively, you can add an additional OpenAI API connection from Admin settings -> Connections:
Even in this case, remember to enter a random value as the API key.
To enable the automatic generation of OpenAI-compatible endpoints, you only need to implement the run_chat_completion method in your pipeline wrapper.
Let's update the previous example to add chat completion support:
from pathlib import Path
from typing import Generator, List, Union

from haystack import Pipeline
from hayhooks import get_last_user_message, BasePipelineWrapper, log

URLS = ["https://haystack.deepset.ai", "https://www.redis.io", "https://ssi.inc"]


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        ...  # Same as before

    def run_api(self, urls: List[str], question: str) -> str:
        ...  # Same as before

    def run_chat_completion(self, model: str, messages: List[dict], body: dict) -> Union[str, Generator]:
        log.trace(f"Running pipeline with model: {model}, messages: {messages}, body: {body}")
        question = get_last_user_message(messages)
        log.trace(f"Question: {question}")

        # Plain pipeline run, will return a string
        result = self.pipeline.run({"fetcher": {"urls": URLS}, "prompt": {"query": question}})
        return result["llm"]["replies"][0]
Differently from the run_api method, run_chat_completion has a fixed signature and will be called with the arguments specified in the OpenAI-compatible endpoint:
- model: The name of the Haystack pipeline which is called.
- messages: The list of messages from the chat, in the OpenAI format (an example follows this list).
- body: The full body of the request.
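For reference, messages arrives in the standard OpenAI chat format; an illustrative value would be:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Where are the offices of SSI?"},
]

In this case, get_last_user_message(messages) would return the content of the last user message.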
Some notes:
- Since we only have the user messages as input here, the question is extracted from the last user message and the urls argument is hardcoded.
- In this example, the run_chat_completion method returns a string, so open-webui will receive a string as the response and show the pipeline output in the chat all at once.
- The body argument contains the full request body, which may be used to extract more information like the temperature or the max_tokens (see the OpenAI API reference for more information, and the sketch after this list).
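As a sketch of the last note, hypothetical OpenAI-style parameters could be read from body and forwarded to the pipeline; the generation_kwargs wiring below assumes the llm component accepts them, which depends on your pipeline:

def run_chat_completion(self, model: str, messages: List[dict], body: dict) -> Union[str, Generator]:
    question = get_last_user_message(messages)

    # Optional parameters sent by the client, with illustrative fallbacks
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 512)

    result = self.pipeline.run({
        "fetcher": {"urls": URLS},
        "prompt": {"query": question},
        "llm": {"generation_kwargs": {"temperature": temperature, "max_tokens": max_tokens}},
    })
    return result["llm"]["replies"][0]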
Finally, to use non-streaming responses in open-webui you also need to turn off the Stream Chat Response chat setting.
Hayhooks now provides a streaming_generator utility function that can be used to stream the pipeline output to the client.
Let's update the run_chat_completion method of the previous example:
from pathlib import Path
from typing import Generator, List, Union

from haystack import Pipeline
from hayhooks import get_last_user_message, BasePipelineWrapper, log, streaming_generator

URLS = ["https://haystack.deepset.ai", "https://www.redis.io", "https://ssi.inc"]


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        ...  # Same as before

    def run_api(self, urls: List[str], question: str) -> str:
        ...  # Same as before

    def run_chat_completion(self, model: str, messages: List[dict], body: dict) -> Union[str, Generator]:
        log.trace(f"Running pipeline with model: {model}, messages: {messages}, body: {body}")
        question = get_last_user_message(messages)
        log.trace(f"Question: {question}")

        # Streaming pipeline run, will return a generator
        return streaming_generator(
            pipeline=self.pipeline,
            pipeline_run_args={"fetcher": {"urls": URLS}, "prompt": {"query": question}},
        )
Now, if you run the pipeline and call one of the following endpoints:
- {pipeline_name}/chat
- /chat/completions
- /v1/chat/completions
you will see the pipeline output being streamed to the client in an OpenAI-compatible format, and you'll be able to see the output in chunks.
Since the output will be streamed to open-webui, there's no need to change the Stream Chat Response chat setting (leave it as Default or On).
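Outside of open-webui, you can also exercise the OpenAI-compatible endpoint directly; a sketch of a streaming request, with illustrative host, port, and payload values:

curl -N -X POST http://localhost:1416/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "chat_with_website", "messages": [{"role": "user", "content": "What is Haystack?"}], "stream": true}'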
Since Hayhooks is OpenAI-compatible, it can be used as a backend for the Haystack OpenAIChatGenerator.
Assuming you have a Haystack pipeline named chat_with_website_streaming and you have deployed it using Hayhooks, here's an example script of how to use it with the OpenAIChatGenerator:
from haystack.components.generators.chat.openai import OpenAIChatGenerator
from haystack.utils import Secret
from haystack.dataclasses import ChatMessage
from haystack.components.generators.utils import print_streaming_chunk

client = OpenAIChatGenerator(
    model="chat_with_website_streaming",
    api_key=Secret.from_token("not-relevant"),  # This is not used, you can set it to anything
    api_base_url="http://localhost:1416/v1/",
    streaming_callback=print_streaming_chunk,
)

client.run([ChatMessage.from_user("Where are the offices of SSI?")])
# > The offices of Safe Superintelligence Inc. (SSI) are located in Palo Alto, California, and Tel Aviv, Israel.
# > {'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text='The offices of Safe Superintelligence Inc. (SSI) are located in Palo Alto, California, and Tel Aviv, Israel.')], _name=None, _meta={'model': 'chat_with_website_streaming', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-02-11T15:31:44.599726', 'usage': {}})]}
A Hayhooks app instance can be created programmatically using the create_app function. This is useful if you want to add custom routes or middleware to Hayhooks.
Here's an example script:
import uvicorn
from hayhooks.settings import settings
from fastapi import Request
from hayhooks import create_app

# Create the Hayhooks app
hayhooks = create_app()


# Add a custom route
@hayhooks.get("/custom")
async def custom_route():
    return {"message": "Hi, this is a custom route!"}


# Add a custom middleware
@hayhooks.middleware("http")
async def custom_middleware(request: Request, call_next):
    response = await call_next(request)
    response.headers["X-Custom-Header"] = "custom-header-value"
    return response


if __name__ == "__main__":
    uvicorn.run("app:hayhooks", host=settings.host, port=settings.port)
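Assuming the script above is saved as app.py (so that the "app:hayhooks" import string passed to uvicorn resolves), you can start the customized server with:

python app.py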
For detailed deployment guidelines, see deployment_guidelines.md.
We still support the former way of deploying a pipeline with Hayhooks.
The former hayhooks deploy command has been renamed to hayhooks pipeline deploy and can be used to deploy a pipeline only from a YAML definition file.
For example:
hayhooks pipeline deploy -n chat_with_website examples/chat_with_website/chat_with_website.yml
This will deploy the pipeline with the name chat_with_website from the YAML definition file examples/chat_with_website/chat_with_website.yml. You can then check the generated docs at http://HAYHOOKS_HOST:HAYHOOKS_PORT/docs or http://HAYHOOKS_HOST:HAYHOOKS_PORT/redoc, looking at the POST /chat_with_website endpoint.
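As a hypothetical sketch, assuming the request schema of the YAML-deployed endpoint mirrors the pipeline's component inputs shown earlier, a call could look like:

curl -X POST http://localhost:1416/chat_with_website \
  -H "Content-Type: application/json" \
  -d '{"fetcher": {"urls": ["https://haystack.deepset.ai"]}, "prompt": {"query": "What is Haystack?"}}'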
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.