
dive
Dive is an AI toolkit for Go that can be used to create specialized AI agents, automate workflows, and quickly integrate with the leading LLMs.
Stars: 87

Dive is an AI toolkit for Go that enables the creation of specialized teams of AI agents and seamless integration with leading LLMs. It offers a CLI and a set of APIs, with features including specialized and hierarchical (supervisor) agents, declarative YAML configuration, support for multiple LLM providers, extended reasoning, the Model Context Protocol (MCP), advanced model settings, a tool system with semantic annotations and custom tool creation, streaming, thread management, a confirmation system for destructive operations, deep research, and AI-powered semantic diff. The toolkit is designed for developers building AI-powered applications with rich agent capabilities and tool integrations.
README:
Dive is an AI toolkit for Go that can be used to create specialized teams of AI agents and quickly integrate with the leading LLMs.
- Embed it in your Go apps
- Create specialized agents
- Arm agents with tools
- Stream responses in real-time
Dive includes both a CLI and a polished set of APIs for easy integration into existing Go applications. It comes batteries-included, but also has the modularity you need for extensive customization.
Dive is shaping up nicely, but is still a young project.
- Feedback is highly valued on concepts, APIs, and usability
- Some breaking changes will happen as the API matures
- Not yet recommended for production use
You can use GitHub Discussions for questions, suggestions, or feedback.
We welcome your input!
Please leave a GitHub star if you're interested in the project!
- Agents: Chat or assign work to specialized agents with configurable reasoning
- Supervisor Patterns: Create hierarchical agent systems with work delegation
- Declarative Configuration: Define agents using YAML
- Multiple LLMs: Switch between Anthropic, OpenAI, Google, OpenRouter, Grok, Ollama, and others
- Extended Reasoning: Configure reasoning effort and budget for deep thinking
- Model Context Protocol (MCP): Connect to MCP servers for external tool access
- Advanced Model Settings: Fine-tune temperature, penalties, caching, and tool behavior
- Tools: Give agents rich capabilities to interact with the world
- Tool Annotations: Semantic hints about tool behavior (read-only, destructive, etc.)
- Streaming: Stream agent events for realtime UI updates
- CLI: Run agents, chat with agents, and more
- Thread Management: Persistent conversation threads with memory
- Confirmation System: Built-in confirmation system for destructive operations
- Deep Research: Use multiple agents to perform deep research
- Semantic Diff: AI-powered analysis of text differences for output drift detection
You will need some environment variables set to use the Dive CLI, both for the LLM provider and for any tools that you'd like your agents to use.
# LLM Provider API Keys
export ANTHROPIC_API_KEY="your-key-here"
export OPENAI_API_KEY="your-key-here"
export GEMINI_API_KEY="your-key-here"
export GROK_API_KEY="your-key-here"
export OPENROUTER_API_KEY="your-key-here"
# Tool API Keys
export GOOGLE_SEARCH_API_KEY="your-key-here"
export GOOGLE_SEARCH_CX="your-key-here"
export FIRECRAWL_API_KEY="your-key-here"
Firecrawl is used to retrieve webpage content. Create an account with Firecrawl to get a free key to experiment with.
Generating a Google Custom Search key is also quite easy, assuming you have a Google Cloud account. See the Google Custom Search documentation.
To get started with Dive as a library, use go get:
go get github.com/deepnoodle-ai/dive
Here's a quick example of creating a chat agent:
agent, err := agent.New(agent.Options{
    Name:         "Research Assistant",
    Instructions: "You are an enthusiastic and deeply curious researcher.",
    Model:        anthropic.New(),
})
// Start chatting with the agent
response, err := agent.CreateResponse(ctx, dive.WithInput("Hello there!"))
// Or stream the response
stream, err := agent.StreamResponse(ctx, dive.WithInput("Hello there!"))
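For orientation, the snippet above might sit in a complete program roughly as sketched below. This is a minimal sketch, not the library's documented layout: the import paths for the dive, agent, and anthropic packages are assumptions based on the module path shown above, and the variable is renamed to avoid shadowing the agent package.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/deepnoodle-ai/dive"
    "github.com/deepnoodle-ai/dive/agent"
    "github.com/deepnoodle-ai/dive/llm/providers/anthropic"
)

func main() {
    ctx := context.Background()

    // Create the agent as in the snippet above (variable renamed to avoid
    // shadowing the agent package; import paths are assumptions).
    assistant, err := agent.New(agent.Options{
        Name:         "Research Assistant",
        Instructions: "You are an enthusiastic and deeply curious researcher.",
        Model:        anthropic.New(),
    })
    if err != nil {
        log.Fatal(err)
    }

    // Ask for a single response and print it.
    response, err := assistant.CreateResponse(ctx, dive.WithInput("Hello there!"))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(response)
}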
Or use the Dive LLM interface directly:
model := anthropic.New()
response, err := model.Generate(
    context.Background(),
    llm.WithMessages(llm.NewUserTextMessage("Hello there!")),
    llm.WithMaxTokens(2048),
    llm.WithTemperature(0.7),
)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Message.Text())
Dive configurations offer a declarative approach to defining agents and MCP servers:
Name: Research
Description: Research a topic
Config:
  LogLevel: debug
  DefaultProvider: anthropic
  DefaultModel: claude-sonnet-4-20250514
  ConfirmationMode: if-destructive
Agents:
  - Name: Research Assistant
    Backstory: You are an enthusiastic and deeply curious researcher.
    Tools:
      - web_search
      - fetch
For the moment, you'll need to build the CLI yourself:
git clone [email protected]:deepnoodle-ai/dive.git
cd dive/cmd/dive
go install .
Available CLI commands include:
- dive chat --provider anthropic --model claude-sonnet-4-20250514: Chat with an agent
- dive classify --text "input text" --labels "label1,label2,label3": Classify text with confidence scores
- dive config check /path/to/config.yaml: Validate a Dive configuration
- dive diff old.txt new.txt --explain-changes: Semantic diff between texts using LLMs to explain changes
The dive diff command provides AI-powered semantic analysis of differences between text files, which is especially useful for detecting output drift and understanding meaningful changes:
# Basic diff - shows file size changes and suggests AI analysis
dive diff old_output.txt new_output.txt
# AI-powered semantic analysis with detailed explanations
dive diff old_output.txt new_output.txt --explain-changes
# Different output formats
dive diff old.txt new.txt --explain-changes --format markdown
dive diff old.txt new.txt --explain-changes --format json
# Use with different LLM providers
dive diff old.txt new.txt --explain-changes --provider openai --model gpt-4o
dive diff old.txt new.txt --explain-changes --provider anthropic --model claude-sonnet-4-20250514
This is particularly useful for:
- Output Drift Detection: Comparing AI-generated outputs over time
- Code Review: Understanding semantic changes in generated code
- Content Analysis: Analyzing changes in documentation or text content
- Quality Assurance: Detecting meaningful changes in test outputs
Dive provides a unified interface for working with different LLM providers:
- Anthropic (Claude Sonnet, Haiku, Opus)
- OpenAI (GPT-5, GPT-4, o1, o3)
- OpenRouter (Access to 200+ models from multiple providers with unified API)
- Google (Gemini models)
- Grok (Grok 4, Grok Code Fast)
- Ollama (Local model serving)
Each provider implementation handles API communication, token counting, tool calling, and other details.
provider := anthropic.New(anthropic.WithModel("claude-sonnet-4-20250514"))
provider := openai.New(openai.WithModel("gpt-5-2025-08-07"))
provider := openrouter.New(openrouter.WithModel("openai/gpt-4o"))
provider := google.New(google.WithModel("gemini-2.5-flash"))
provider := ollama.New(ollama.WithModel("llama3.2:3b"))
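Because the providers share a common interface, the Generate call shown earlier works unchanged no matter which constructor you pick. A quick sketch, reusing only calls shown elsewhere in this README:

provider := openai.New(openai.WithModel("gpt-4o"))
response, err := provider.Generate(
    context.Background(),
    llm.WithMessages(llm.NewUserTextMessage("Summarize the latest release notes.")),
    llm.WithMaxTokens(1024),
)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Message.Text())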
Dive supports the Model Context Protocol (MCP) for connecting to external tools and services:
response, err := anthropic.New().Generate(
    context.Background(),
    llm.WithMessages(llm.NewUserTextMessage("What are the open tickets?")),
    llm.WithMCPServers(
        llm.MCPServerConfig{
            Type:               "url",
            Name:               "linear",
            URL:                "https://mcp.linear.app/sse",
            AuthorizationToken: "your-token-here",
        },
    ),
)
MCP servers can also be configured in YAML configurations for declarative setup.
These are the models that have been verified to work in Dive:
Provider | Model | Tools Supported
---|---|---
Anthropic | claude-sonnet-4-20250514 | Yes
Anthropic | claude-opus-4-20250514 | Yes
Anthropic | claude-3-7-sonnet-20250219 | Yes
Anthropic | claude-3-5-sonnet-20241022 | Yes
Anthropic | claude-3-5-haiku-20241022 | Yes
Google | gemini-2.5-flash | Yes
Google | gemini-2.5-flash-lite | Yes
Google | gemini-2.5-pro | Yes
Grok | grok-4-0709 | Yes
Grok | grok-code-fast-1 | Yes
OpenAI | gpt-5-2025-08-07 | Yes
OpenAI | gpt-4o | Yes
OpenAI | gpt-4.5-preview | Yes
OpenAI | o1 | Yes
OpenAI | o1-mini | No
OpenAI | o3-mini | Yes
Ollama | llama3.2:* | Yes
Ollama | mistral:* | No
Tools extend agent capabilities. Dive includes these built-in tools:
- list_directory: List directory contents
- read_file: Read content from files
- write_file: Write content to files
- text_editor: Advanced file editing with view, create, replace, and insert operations
- web_search: Search the web using Google Custom Search or Kagi Search
- fetch: Fetch and extract content from webpages using Firecrawl
- command: Execute external commands
- generate_image: Generate images using OpenAI's gpt-image-1
Dive's tool system includes rich annotations that provide hints about tool behavior:
type ToolAnnotations struct {
    Title           string // Human-readable title
    ReadOnlyHint    bool   // Tool only reads, doesn't modify
    DestructiveHint bool   // Tool may make destructive changes
    IdempotentHint  bool   // Tool is safe to call multiple times
    OpenWorldHint   bool   // Tool accesses external resources
}
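As an illustration, a hypothetical file-deletion tool might describe itself like this (a sketch using only the fields listed above; the tool itself is not part of Dive):

annotations := dive.ToolAnnotations{
    Title:           "Delete File", // human-readable title shown to users
    ReadOnlyHint:    false,         // the tool modifies state
    DestructiveHint: true,          // deletions may be irreversible
    IdempotentHint:  true,          // deleting the same file twice has the same effect
    OpenWorldHint:   false,         // operates on the local filesystem only
}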
Creating custom tools is straightforward using the TypedTool interface:
type SearchTool struct{}
func (t *SearchTool) Name() string { return "search" }
func (t *SearchTool) Description() string { return "Search for information" }
func (t *SearchTool) Schema() schema.Schema { /* define parameters */ }
func (t *SearchTool) Annotations() dive.ToolAnnotations { /* tool hints */ }
func (t *SearchTool) Call(ctx context.Context, input *SearchInput) (*dive.ToolResult, error) {
    // Tool implementation
}
// Use with ToolAdapter for type safety
tool := dive.ToolAdapter(searchTool)
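The SearchInput type referenced in Call is user-defined; the TypedTool machinery presumably unmarshals the tool's JSON arguments into it. A minimal sketch of what it might look like (the field name is illustrative, not part of Dive):

// SearchInput carries the arguments passed to the search tool.
type SearchInput struct {
    Query string `json:"query"`
}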
Go interfaces are in place to support swapping in different tool implementations.
We're looking for contributors! Whether you're fixing bugs, adding features, improving documentation, or spreading the word, your help is appreciated.
- ✅ Ollama support
- ✅ MCP support
- ✅ Google Cloud Vertex AI support
- Docs site
- Server mode
- Documented approach for RAG
- AWS Bedrock support
- Voice interactions
- Agent memory interface
- Integrations (Slack, Google Drive, etc.)
- Expanded CLI
- Hugging Face support
Is there a hosted or managed version of Dive? Not at this time. Dive is provided as an open-source framework that you can self-host and integrate into your own applications.
Agents can be configured as supervisors to delegate work to other agents:
supervisor, err := agent.New(agent.Options{
    Name:         "Research Manager",
    Instructions: "You coordinate research tasks across multiple specialists.",
    IsSupervisor: true,
    Subordinates: []string{"Data Analyst", "Web Researcher"},
    Model:        anthropic.New(),
})
Supervisor agents automatically get an assign_work tool for delegating tasks.
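The names in Subordinates refer to other agents created with the same agent.New call; for example, a sketch of one of them (how the supervisor and its subordinates are wired together at runtime is not shown in this README and will depend on your setup):

analyst, err := agent.New(agent.Options{
    Name:         "Data Analyst",
    Instructions: "You analyze datasets and report concise findings.",
    Model:        anthropic.New(),
})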
Fine-tune LLM behavior with advanced model settings:
agent, err := agent.New(agent.Options{
    Name: "Assistant",
    ModelSettings: &agent.ModelSettings{
        Temperature:       ptr(0.7),
        ReasoningBudget:   ptr(50000),
        ReasoningEffort:   "high",
        MaxTokens:         4096,
        ParallelToolCalls: ptr(true),
        Caching:           ptr(true),
    },
    Model: anthropic.New(),
})
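The ptr calls in the example return pointers to literal values so that optional settings can be distinguished from unset ones. If your codebase does not already have such a helper, one way to write it is:

// ptr returns a pointer to v; useful for optional (pointer-typed) settings.
func ptr[T any](v T) *T {
    return &v
}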
Agents support persistent conversation threads:
response, err := agent.CreateResponse(ctx,
    dive.WithThreadID("conversation-123"),
    dive.WithInput("Continue our discussion"),
)
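Reusing the same thread ID across calls is what carries the conversation memory forward; a short sketch using only the options shown above:

// First turn on the thread.
_, err := agent.CreateResponse(ctx,
    dive.WithThreadID("conversation-123"),
    dive.WithInput("Let's plan a research outline on ocean currents."),
)
if err != nil {
    log.Fatal(err)
}

// A later turn on the same thread picks up the earlier context.
followup, err := agent.CreateResponse(ctx,
    dive.WithThreadID("conversation-123"),
    dive.WithInput("Expand the second section into bullet points."),
)
if err != nil {
    log.Fatal(err)
}
fmt.Println(followup)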
Alternative AI tools for dive
Similar Open Source Tools

pipecat
Pipecat is an open-source framework designed for building generative AI voice bots and multimodal assistants. It provides code building blocks for interacting with AI services, creating low-latency data pipelines, and transporting audio, video, and events over the Internet. Pipecat supports various AI services like speech-to-text, text-to-speech, image generation, and vision models. Users can implement new services and contribute to the framework. Pipecat aims to simplify the development of applications like personal coaches, meeting assistants, customer support bots, and more by providing a complete framework for integrating AI services.

chat
deco.chat is an open-source foundation for building AI-native software, providing developers, engineers, and AI enthusiasts with robust tools to rapidly prototype, develop, and deploy AI-powered applications. It empowers Vibecoders to prototype ideas and Agentic engineers to deploy scalable, secure, and sustainable production systems. The core capabilities include an open-source runtime for composing tools and workflows, MCP Mesh for secure integration of models and APIs, a unified TypeScript stack for backend logic and custom frontends, global modular infrastructure built on Cloudflare, and a visual workspace for building agents and orchestrating everything in code.

evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.

curator
Bespoke Curator is an open-source tool for data curation and structured data extraction. It provides a Python library for generating synthetic data at scale, with features like programmability, performance optimization, caching, and integration with HuggingFace Datasets. The tool includes a Curator Viewer for dataset visualization and offers a rich set of functionalities for creating and refining data generation strategies.

orra
Orra is a tool for building production-ready multi-agent applications that handle complex real-world interactions. It coordinates tasks across existing stack, agents, and tools run as services using intelligent reasoning. With features like smart pre-evaluated execution plans, domain grounding, durable execution, and automatic service health monitoring, Orra enables users to go fast with tools as services and revert state to handle failures. It provides real-time status tracking and webhook result delivery, making it ideal for developers looking to move beyond simple crews and agents.

sre
SmythOS is an operating system designed for building, deploying, and managing intelligent AI agents at scale. It provides a unified SDK and resource abstraction layer for various AI services, making it easy to scale and flexible. With an agent-first design, developer-friendly SDK, modular architecture, and enterprise security features, SmythOS offers a robust foundation for AI workloads. The system is built with a philosophy inspired by traditional operating system kernels, ensuring autonomy, control, and security for AI agents. SmythOS aims to make shipping production-ready AI agents accessible and open for everyone in the coming Internet of Agents era.

inferable
Inferable is an open source platform that helps users build reliable LLM-powered agentic automations at scale. It offers a managed agent runtime, durable tool calling, zero network configuration, multiple language support, and is fully open source under the MIT license. Users can define functions, register them with Inferable, and create runs that utilize these functions to automate tasks. The platform supports Node.js/TypeScript, Go, .NET, and React, and provides SDKs, core services, and bootstrap templates for various languages.

crawl4ai
Crawl4AI is a powerful and free web crawling service that extracts valuable data from websites and provides LLM-friendly output formats. It supports crawling multiple URLs simultaneously, replaces media tags with ALT, and is completely free to use and open-source. Users can integrate Crawl4AI into Python projects as a library or run it as a standalone local server. The tool allows users to crawl and extract data from specified URLs using different providers and models, with options to include raw HTML content, force fresh crawls, and extract meaningful text blocks. Configuration settings can be adjusted in the `crawler/config.py` file to customize providers, API keys, chunk processing, and word thresholds. Contributions to Crawl4AI are welcome from the open-source community to enhance its value for AI enthusiasts and developers.

WebAI-to-API
This project implements a web API that offers a unified interface to Google Gemini and Claude 3. It provides a self-hosted, lightweight, and scalable solution for accessing these AI models through a streaming API. The API supports both Claude and Gemini models, allowing users to interact with them in real-time. The project includes a user-friendly web UI for configuration and documentation, making it easy to get started and explore the capabilities of the API.

rag-chat
The `@upstash/rag-chat` package simplifies the development of retrieval-augmented generation (RAG) chat applications by providing Next.js compatibility with streaming support, built-in vector store, optional Redis compatibility for fast chat history management, rate limiting, and disableRag option. Users can easily set up the environment variables and initialize RAGChat to interact with AI models, manage knowledge base, chat history, and enable debugging features. Advanced configuration options allow customization of RAGChat instance with built-in rate limiting, observability via Helicone, and integration with Next.js route handlers and Vercel AI SDK. The package supports OpenAI models, Upstash-hosted models, and custom providers like TogetherAi and Replicate.

R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. R2R is to LangChain/LlamaIndex what NextJS is to React. A JavaScript client for R2R deployments can be found here. Key features: Deploy: instantly launch production-ready RAG pipelines with streaming capabilities. Customize: tailor your pipeline with intuitive configuration files. Extend: enhance your pipeline with custom code integrations. Autoscale: scale your pipeline effortlessly in the cloud using SciPhi. OSS: benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

zo2
ZO2 (Zeroth-Order Offloading) is an innovative framework designed to enhance the fine-tuning of large language models (LLMs) using zeroth-order (ZO) optimization techniques and advanced offloading technologies. It is tailored for setups with limited GPU memory, enabling the fine-tuning of models with over 175 billion parameters on single GPUs with as little as 18GB of memory. ZO2 optimizes CPU offloading, incorporates dynamic scheduling, and has the capability to handle very large models efficiently without extra time costs or accuracy losses.

search_with_ai
Build your own conversation-based search with AI, a simple implementation with Node.js & Vue3. Live Demo Features: * Built-in support for LLM: OpenAI, Google, Lepton, Ollama(Free) * Built-in support for search engine: Bing, Sogou, Google, SearXNG(Free) * Customizable pretty UI interface * Support dark mode * Support mobile display * Support local LLM with Ollama * Support i18n * Support Continue Q&A with contexts.

oxylabs-mcp
The Oxylabs MCP Server acts as a bridge between AI models and the web, providing clean, structured data from any site. It enables scraping of URLs, rendering JavaScript-heavy pages, content extraction for AI use, bypassing anti-scraping measures, and accessing geo-restricted web data from 195+ countries. The implementation utilizes the Model Context Protocol (MCP) to facilitate secure interactions between AI assistants and web content. Key features include scraping content from any site, automatic data cleaning and conversion, bypassing blocks and geo-restrictions, flexible setup with cross-platform support, and built-in error handling and request management.

Crane
Crane is a high-performance inference framework leveraging Rust's Candle for maximum speed on CPU/GPU. It focuses on accelerating LLM inference speed with optimized kernels, reducing development overhead, and ensuring portability for running models on both CPU and GPU. Supported models include TTS systems like Spark-TTS and Orpheus-TTS, foundation models like Qwen2.5 series and basic LLMs, and multimodal models like Namo-R1 and Qwen2.5-VL. Key advantages of Crane include blazing-fast inference outperforming native PyTorch, Rust-powered to eliminate C++ complexity, Apple Silicon optimized for GPU acceleration via Metal, and hardware agnostic with a unified codebase for CPU/CUDA/Metal execution. Crane simplifies deployment with the ability to add new models with less than 100 lines of code in most cases.
For similar tasks

honcho
Honcho is a platform for creating personalized AI agents and LLM powered applications for end users. The repository is a monorepo containing the server/API for managing database interactions and storing application state, along with a Python SDK. It utilizes FastAPI for user context management and Poetry for dependency management. The API can be run using Docker or manually by setting environment variables. The client SDK can be installed using pip or Poetry. The project is open source and welcomes contributions, following a fork and PR workflow. Honcho is licensed under the AGPL-3.0 License.

sagentic-af
Sagentic.ai Agent Framework is a tool for creating AI agents with hot reloading dev server. It allows users to spawn agents locally by calling specific endpoint. The framework comes with detailed documentation and supports contributions, issues, and feature requests. It is MIT licensed and maintained by Ahyve Inc.

tinyllm
tinyllm is a lightweight framework designed for developing, debugging, and monitoring LLM and Agent powered applications at scale. It aims to simplify code while enabling users to create complex agents or LLM workflows in production. The core classes, Function and FunctionStream, standardize and control LLM, ToolStore, and relevant calls for scalable production use. It offers structured handling of function execution, including input/output validation, error handling, evaluation, and more, all while maintaining code readability. Users can create chains with prompts, LLM models, and evaluators in a single file without the need for extensive class definitions or spaghetti code. Additionally, tinyllm integrates with various libraries like Langfuse and provides tools for prompt engineering, observability, logging, and finite state machine design.

council
Council is an open-source platform designed for the rapid development and deployment of customized generative AI applications using teams of agents. It extends the LLM tool ecosystem by providing advanced control flow and scalable oversight for AI agents. Users can create sophisticated agents with predictable behavior by leveraging Council's powerful approach to control flow using Controllers, Filters, Evaluators, and Budgets. The framework allows for automated routing between agents, comparing, evaluating, and selecting the best results for a task. Council aims to facilitate packaging and deploying agents at scale on multiple platforms while enabling enterprise-grade monitoring and quality control.

mentals-ai
Mentals AI is a tool designed for creating and operating agents that feature loops, memory, and various tools, all through straightforward markdown syntax. This tool enables you to concentrate solely on the agent's logic, eliminating the necessity to compose underlying code in Python or any other language. It redefines the foundational frameworks for future AI applications by allowing the creation of agents with recursive decision-making processes, integration of reasoning frameworks, and control flow expressed in natural language. Key concepts include instructions with prompts and references, working memory for context, short-term memory for storing intermediate results, and control flow from strings to algorithms. The tool provides a set of native tools for message output, user input, file handling, Python interpreter, Bash commands, and short-term memory. The roadmap includes features like a web UI, vector database tools, agent's experience, and tools for image generation and browsing. The idea behind Mentals AI originated from studies on psychoanalysis executive functions and aims to integrate 'System 1' (cognitive executor) with 'System 2' (central executive) to create more sophisticated agents.

AgentPilot
Agent Pilot is an open source desktop app for creating, managing, and chatting with AI agents. It features multi-agent, branching chats with various providers through LiteLLM. Users can combine models from different providers, configure interactions, and run code using the built-in Open Interpreter. The tool allows users to create agents, manage chats, work with multi-agent workflows, branching workflows, context blocks, tools, and plugins. It also supports a code interpreter, scheduler, voice integration, and integration with various AI providers. Contributions to the project are welcome, and users can report known issues for improvement.

shinkai-apps
Shinkai apps unlock the full capabilities/automation of first-class LLM (AI) support in the web browser. It enables creating multiple agents, each connected to either local or 3rd-party LLMs (ex. OpenAI GPT), which have permissioned (meaning secure) access to act in every webpage you visit. There is a companion repo called Shinkai Node, that allows you to set up the node anywhere as the central unit of the Shinkai Network, handling tasks such as agent management, job processing, and secure communications.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.