SwarmGo
SwarmGo is a Go package that allows you to create AI agents capable of interacting, coordinating, and executing tasks. Inspired by OpenAI's Swarm framework, SwarmGo focuses on making agent coordination and execution lightweight, highly controllable, and easily testable.
It achieves this through two primitive abstractions: Agents and handoffs. An Agent encompasses instructions and tools (functions it can execute), and can at any point choose to hand off a conversation to another Agent.
These primitives are powerful enough to express rich dynamics between tools and networks of agents, allowing you to build scalable, real-world solutions while avoiding a steep learning curve.
- Why SwarmGo
- Installation
- Quick Start
- Usage
- Agent Handoff
- Streaming Support
- Concurrent Agent Execution
- LLM Interface
- Workflows
- Examples
- Contributing
- License
- Acknowledgments
SwarmGo explores patterns that are lightweight, scalable, and highly customizable by design. It's best suited for situations dealing with a large number of independent capabilities and instructions that are difficult to encode into a single prompt.
SwarmGo runs (almost) entirely on the client and, much like the Chat Completions API, does not store state between calls.
To install SwarmGo, run:
go get github.com/prathyushnallamothu/swarmgo
Here's a simple example to get you started:
package main

import (
	"context"
	"fmt"
	"log"

	swarmgo "github.com/prathyushnallamothu/swarmgo"
	llm "github.com/prathyushnallamothu/swarmgo/llm"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := swarmgo.NewSwarm("YOUR_OPENAI_API_KEY", llm.OpenAI)

	agent := &swarmgo.Agent{
		Name:         "Agent",
		Instructions: "You are a helpful assistant.",
		Model:        "gpt-3.5-turbo",
	}

	messages := []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
	}

	ctx := context.Background()
	// Trailing arguments: context variables, model override, stream, debug,
	// max turns, execute tools (mirroring OpenAI Swarm's run signature).
	response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	fmt.Println(response.Messages[len(response.Messages)-1].Content)
}
Creating an Agent
An Agent represents an AI assistant with specific instructions and functions (tools) it can use.
agent := &swarmgo.Agent{
Name: "Agent",
Instructions: "You are a helpful assistant.",
Model: "gpt-4o",
}
- Name: Identifier for the agent (no spaces or special characters).
- Instructions: The system prompt or instructions for the agent.
- Model: The OpenAI model to use (e.g., "gpt-4o").
To interact with the agent, use the Run method:
messages := []openai.ChatCompletionMessage{
{Role: "user", Content: "Hello!"},
}
ctx := context.Background()
response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
if err != nil {
log.Fatalf("Error: %v", err)
}
fmt.Println(response.Messages[len(response.Messages)-1].Content)
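SwarmGo keeps no state between calls, so multi-turn conversations are driven by the caller: append the returned messages to your history before the next Run. A minimal sketch, assuming (as in OpenAI's Swarm) that response.Messages holds the messages generated during the turn:
// Carry the conversation forward by feeding the previous turn back in.
messages = append(messages, response.Messages...)
messages = append(messages, openai.ChatCompletionMessage{
	Role:    openai.ChatMessageRoleUser,
	Content: "Can you summarize what we just discussed?",
})
response, err = client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}
fmt.Println(response.Messages[len(response.Messages)-1].Content)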
Agents can use functions to perform specific tasks. Functions are defined and then added to an agent.
func getWeather(args map[string]interface{}, contextVariables map[string]interface{}) swarmgo.Result {
	// Guard the type assertion so a malformed tool call can't panic.
	location, ok := args["location"].(string)
	if !ok {
		return swarmgo.Result{Value: `{"error": "location must be a string"}`}
	}
	// Simulate fetching weather data
	return swarmgo.Result{
		Value: fmt.Sprintf(`{"temp": 67, "unit": "F", "location": "%s"}`, location),
	}
}
agent.Functions = []swarmgo.AgentFunction{
{
Name: "getWeather",
Description: "Get the current weather in a given location.",
Parameters: map[string]interface{}{
"type": "object",
"properties": map[string]interface{}{
"location": map[string]interface{}{
"type": "string",
"description": "The city to get the weather for.",
},
},
"required": []interface{}{"location"},
},
Function: getWeather,
},
}
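With tool execution enabled (the final true in Run, read here as execute tools), the model can call getWeather and the result is fed back into the conversation. A sketch reusing the same Run call as above:
messages := []openai.ChatCompletionMessage{
	{Role: openai.ChatMessageRoleUser, Content: "What's the weather in Boston?"},
}
response, err := client.Run(context.Background(), agent, messages, nil, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}
// The final assistant message should reflect the tool output, e.g. 67°F in Boston.
fmt.Println(response.Messages[len(response.Messages)-1].Content)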
Context variables allow you to pass information between function calls and agents.
func instructions(contextVariables map[string]interface{}) string {
name, ok := contextVariables["name"].(string)
if !ok {
name = "User"
}
return fmt.Sprintf("You are a helpful assistant. Greet the user by name (%s).", name)
}
agent.InstructionsFunc = instructions
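The context variables themselves are supplied at run time; the nil fourth argument in the earlier Run calls is where they go (an assumption based on OpenAI Swarm's run signature):
// Hypothetical values for illustration; InstructionsFunc above reads "name".
contextVariables := map[string]interface{}{"name": "Alice"}
response, err := client.Run(ctx, agent, messages, contextVariables, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}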
Agents can hand off conversations to other agents. This is useful for delegating tasks or escalating when an agent is unable to handle a request.
func transferToAnotherAgent(args map[string]interface{}, contextVariables map[string]interface{}) swarmgo.Result {
anotherAgent := &swarmgo.Agent{
Name: "AnotherAgent",
Instructions: "You are another agent.",
Model: "gpt-3.5-turbo",
}
return swarmgo.Result{
Agent: anotherAgent,
Value: "Transferring to AnotherAgent.",
}
}
agent.Functions = append(agent.Functions, swarmgo.AgentFunction{
Name: "transferToAnotherAgent",
Description: "Transfer the conversation to AnotherAgent.",
Parameters: map[string]interface{}{
"type": "object",
"properties": map[string]interface{}{},
},
Function: transferToAnotherAgent,
})
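Running the agent then lets the model decide when to trigger the transfer; printing the whole transcript makes the handoff visible. A sketch using only the Run API shown above:
messages := []openai.ChatCompletionMessage{
	{Role: openai.ChatMessageRoleUser, Content: "I need help from another agent."},
}
response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}
// The transcript should include the transferToAnotherAgent tool call and
// AnotherAgent's replies after the handoff.
for _, msg := range response.Messages {
	fmt.Printf("[%s] %s\n", msg.Role, msg.Content)
}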
SwarmGo now includes built-in support for streaming responses, allowing real-time processing of AI responses and tool calls. This is particularly useful for long-running operations or when you want to provide immediate feedback to users.
To use streaming, implement the StreamHandler interface:
type StreamHandler interface {
OnStart()
OnToken(token string)
OnToolCall(toolCall openai.ToolCall)
OnComplete(message openai.ChatCompletionMessage)
OnError(err error)
}
A default implementation (DefaultStreamHandler) is provided, but you can create your own handler for custom behavior:
type CustomStreamHandler struct {
totalTokens int
}
func (h *CustomStreamHandler) OnStart() {
fmt.Println("Starting stream...")
}
func (h *CustomStreamHandler) OnToken(token string) {
h.totalTokens++
fmt.Print(token)
}
func (h *CustomStreamHandler) OnComplete(msg openai.ChatCompletionMessage) {
fmt.Printf("\nComplete! Total tokens: %d\n", h.totalTokens)
}
func (h *CustomStreamHandler) OnError(err error) {
fmt.Printf("Error: %v\n", err)
}
func (h *CustomStreamHandler) OnToolCall(tool openai.ToolCall) {
fmt.Printf("\nUsing tool: %s\n", tool.Function.Name)
}
Here's an example of using streaming with a file analyzer:
client := swarmgo.NewSwarm("YOUR_OPENAI_API_KEY", llm.OpenAI)
agent := &swarmgo.Agent{
	Name:         "FileAnalyzer",
	Instructions: "You are an assistant that analyzes files.",
	Model:        "gpt-4",
}
handler := &CustomStreamHandler{}
err := client.StreamingResponse(
	context.Background(),
	agent,
	messages,
	nil, // context variables (as in Run)
	"",  // model override
	handler,
	true,
)
if err != nil {
	log.Fatalf("Streaming error: %v", err)
}
For a complete example of file analysis with streaming, see examples/file_analyzer_stream/main.go.
SwarmGo supports running multiple agents concurrently using the ConcurrentSwarm type. This is particularly useful when you need to parallelize agent tasks or run multiple analyses simultaneously.
// Create a concurrent swarm
cs := swarmgo.NewConcurrentSwarm(apiKey)
// Configure multiple agents
configs := map[string]swarmgo.AgentConfig{
"agent1": {
Agent: agent1,
Messages: []openai.ChatCompletionMessage{
{Role: openai.ChatMessageRoleUser, Content: "Task 1"},
},
MaxTurns: 1,
ExecuteTools: true,
},
"agent2": {
Agent: agent2,
Messages: []openai.ChatCompletionMessage{
{Role: openai.ChatMessageRoleUser, Content: "Task 2"},
},
MaxTurns: 1,
ExecuteTools: true,
},
}
// Run agents concurrently with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
results := cs.RunConcurrent(ctx, configs)
// Process results
for _, result := range results {
if result.Error != nil {
log.Printf("Error in %s: %v\n", result.AgentName, result.Error)
continue
}
// Handle successful response
fmt.Printf("Result from %s: %s\n", result.AgentName, result.Response)
}
Key features of concurrent execution:
- Run multiple agents in parallel with independent configurations
- Context-based timeout and cancellation support
- Thread-safe result collection
- Support for both ordered and unordered execution
- Error handling for individual agent failures
See examples/concurrent_analyzer/main.go for a complete example of concurrent code analysis using multiple specialized agents.
SwarmGo includes a built-in memory management system that allows agents to store and recall information across conversations. The memory system supports both short-term and long-term memory, with features for organizing and retrieving memories based on type and context.
// Create a new agent with memory capabilities
agent := swarmgo.NewAgent("MyAgent", "gpt-4")
// Memory is automatically managed for conversations and tool calls
// You can also explicitly store memories:
memory := swarmgo.Memory{
Content: "User prefers dark mode",
Type: "preference",
Context: map[string]interface{}{"setting": "ui"},
Timestamp: time.Now(),
Importance: 0.8,
}
agent.Memory.AddMemory(memory)
// Retrieve recent memories
recentMemories := agent.Memory.GetRecentMemories(5)
// Search specific types of memories
preferences := agent.Memory.SearchMemories("preference", nil)
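Assuming GetRecentMemories and SearchMemories return slices of the Memory struct shown above (an assumption; check the package for the exact return types), the results can be inspected like any slice:
for _, m := range recentMemories {
	// Content, Type, and Importance are the fields set when the memory was stored.
	fmt.Printf("%s (%s, importance %.1f)\n", m.Content, m.Type, m.Importance)
}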
Key features of the memory system:
- Automatic Memory Management: Conversations and tool interactions are automatically stored
- Memory Types: Organize memories by type (conversation, fact, tool_result, etc.)
- Context Association: Link memories with relevant context
- Importance Scoring: Assign priority levels to memories (0-1)
- Memory Search: Search by type and context
- Persistence: Save and load memories to/from disk
- Thread Safety: Concurrent memory access is protected
- Short-term Buffer: Recent memories are kept in a FIFO buffer
- Long-term Storage: Organized storage by memory type
See the memory_demo example for a complete demonstration of memory capabilities.
SwarmGo provides a flexible LLM (Large Language Model) interface that supports multiple providers, currently OpenAI and Gemini.
To initialize a new Swarm with a specific provider:
// Initialize with OpenAI
client := swarmgo.NewSwarm("YOUR_API_KEY", llm.OpenAI)
// Initialize with Gemini
client := swarmgo.NewSwarm("YOUR_API_KEY", llm.Gemini)
Workflows in SwarmGo provide structured patterns for organizing and coordinating multiple agents. They help manage complex interactions between agents, define communication paths, and establish clear hierarchies or collaboration patterns. Think of workflows as the orchestration layer that determines how your agents work together to accomplish tasks.
Each workflow type serves a different organizational need:
Supervisor Workflow: a hierarchical pattern where a supervisor agent oversees and coordinates tasks among worker agents. This is ideal for:
- Task delegation and monitoring
- Quality control and oversight
- Centralized decision making
- Resource allocation across workers
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.SupervisorWorkflow)
// Add agents to teams
workflow.AddAgentToTeam(supervisorAgent, swarmgo.SupervisorTeam)
workflow.AddAgentToTeam(workerAgent1, swarmgo.WorkerTeam)
workflow.AddAgentToTeam(workerAgent2, swarmgo.WorkerTeam)
// Set supervisor as team leader
workflow.SetTeamLeader(supervisorAgent.Name, swarmgo.SupervisorTeam)
// Connect agents
workflow.ConnectAgents(supervisorAgent.Name, workerAgent1.Name)
workflow.ConnectAgents(supervisorAgent.Name, workerAgent2.Name)
Hierarchical Workflow: a tree-like structure where tasks flow from top to bottom through multiple levels. This pattern is best for:
- Complex task decomposition
- Specialized agent roles at each level
- Clear reporting structures
- Sequential processing pipelines
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.HierarchicalWorkflow)
// Add agents to teams
workflow.AddAgentToTeam(managerAgent, swarmgo.SupervisorTeam)
workflow.AddAgentToTeam(researchAgent, swarmgo.ResearchTeam)
workflow.AddAgentToTeam(analysisAgent, swarmgo.AnalysisTeam)
// Connect agents in hierarchy
workflow.ConnectAgents(managerAgent.Name, researchAgent.Name)
workflow.ConnectAgents(researchAgent.Name, analysisAgent.Name)
Collaborative Workflow: a peer-based pattern where agents work together as equals, passing tasks between them as needed. This approach excels at:
- Team-based problem solving
- Parallel processing
- Iterative refinement
- Dynamic task sharing
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.CollaborativeWorkflow)
// Add agents to document team
workflow.AddAgentToTeam(editor, swarmgo.DocumentTeam)
workflow.AddAgentToTeam(reviewer, swarmgo.DocumentTeam)
workflow.AddAgentToTeam(writer, swarmgo.DocumentTeam)
// Connect agents in collaborative pattern
workflow.ConnectAgents(editor.Name, reviewer.Name)
workflow.ConnectAgents(reviewer.Name, writer.Name)
workflow.ConnectAgents(writer.Name, editor.Name)
Key workflow features:
- Team Management: Organize agents into functional teams
- Leadership Roles: Designate team leaders for coordination
- Flexible Routing: Dynamic task routing between agents
- Cycle Detection: Built-in cycle detection and handling
- State Management: Share state between agents in a workflow
- Error Handling: Robust error handling and recovery
For more examples, see the examples directory.
Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create a new branch (git checkout -b feature/YourFeature).
- Commit your changes (git commit -am 'Add a new feature').
- Push to the branch (git push origin feature/YourFeature).
- Open a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.
- Thanks to OpenAI for the inspiration and the Swarm framework.
- Thanks to Sashabaranov for the go-openai package.