oh-my-pi
AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries
Stars: 90
oh-my-pi is an AI coding agent for the terminal, providing tools for interactive coding, AI-powered git commits, Python code execution, LSP integration, time-traveling streamed rules (TTSR), interactive code review, task management, interactive questioning, custom TypeScript slash commands, universal config discovery, an MCP & plugin system, web search & fetch, an SSH tool, Cursor provider integration, multi-credential support, image generation, an overhauled TUI, fuzzy edit matching, and more. It offers a modern terminal interface with smart session management and supports multiple AI providers.
README:
AI coding agent for the terminal
Fork of badlogic/pi-mono by @mariozechner
Requires Bun runtime:

```bash
bun install -g @oh-my-pi/pi-coding-agent
```

Linux / macOS:

```bash
curl -fsSL https://raw.githubusercontent.com/can1357/oh-my-pi/main/scripts/install.sh | sh
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/can1357/oh-my-pi/main/scripts/install.ps1 | iex
```

By default, the installer uses bun if available, otherwise downloads the prebuilt binary.

Options:

- `--source` / `-Source`: Install via bun (installs bun first if needed)
- `--binary` / `-Binary`: Always use the prebuilt binary
- `--ref <ref>` / `-Ref <ref>`: Install a tag/commit/branch (defaults to source install)

```bash
# Force bun installation
curl -fsSL .../install.sh | sh -s -- --source

# Install a tag via binary
curl -fsSL .../install.sh | sh -s -- --binary --ref v3.20.1

# Install a branch or commit via source
curl -fsSL .../install.sh | sh -s -- --source --ref main
```

```powershell
# Install a tag via binary
& ([scriptblock]::Create((irm https://raw.githubusercontent.com/can1357/oh-my-pi/main/scripts/install.ps1))) -Binary -Ref v3.20.1

# Install a branch or commit via source
& ([scriptblock]::Create((irm https://raw.githubusercontent.com/can1357/oh-my-pi/main/scripts/install.ps1))) -Source -Ref main
```

Download binaries directly from GitHub Releases.
AI-powered conventional commit generation with intelligent change analysis:
- Agentic mode: Tool-based git inspection with `git-overview`, `git-file-diff`, `git-hunk` for fine-grained analysis
- Split commits: Automatically separates unrelated changes into atomic commits with dependency ordering
- Hunk-level staging: Stage individual hunks when changes span multiple concerns
- Changelog generation: Proposes and applies changelog entries to `CHANGELOG.md` files
- Commit validation: Detects filler words and meta phrases, and enforces conventional commit format
- Legacy mode: `--legacy` flag for a deterministic pipeline when preferred
- Run via `omp commit` with options: `--push`, `--dry-run`, `--no-changelog`, `--context` (see the examples below)
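A few hedged invocation sketches using the flags documented above; the flag combinations and the `--context` argument format are assumptions, not exhaustive usage:

```bash
# Analyze changes, generate conventional commits, and push
omp commit --push

# Preview the proposed commits and changelog entries without applying them
omp commit --dry-run

# Skip changelog updates and pass extra context to the analysis
# (assuming --context accepts a free-form string)
omp commit --no-changelog --context "focus on the API refactor"

# Use the deterministic legacy pipeline instead of agentic mode
omp commit --legacy
```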
Execute Python code with a persistent IPython kernel and 30+ shell-like helpers:
- Streaming output: Real-time stdout/stderr with image and JSON rendering
- Prelude helpers: `cat()`, `sed()`, `rsed()`, `find()`, `grep()`, `batch()`, `sh()`, `run()` and more
- Git utilities: `git_status()`, `git_diff()`, `git_log()`, `git_show()` for repository operations
- Line operations: `extract_lines()`, `delete_lines()`, `insert_lines()`, `lines_matching()` for text manipulation
- Shared gateway: Resource-efficient kernel reuse across sessions (`python.sharedGateway` setting)
- Custom modules: Load extensions from `.omp/modules/` and `.pi/modules/` directories
- Rich output: Supports `display()` for HTML, Markdown, images, and interactive JSON trees
- Mermaid diagrams: Renders mermaid code blocks as inline graphics in iTerm2/Kitty terminals
- Install dependencies via `omp setup python`
Full IDE-like code intelligence with automatic formatting and diagnostics:
- Format-on-write: Auto-format code using the language server's formatter (rustfmt, gofmt, prettier, etc.)
- Diagnostics on write/edit: Immediate feedback on syntax errors and type issues after every file change
- Workspace diagnostics: Check the entire project for errors (`lsp action=workspace_diagnostics`)
- 40+ language configs: Out-of-the-box support for Rust, Go, Python, TypeScript, Java, Kotlin, Scala, Haskell, OCaml, Elixir, Ruby, PHP, C#, Lua, Nix, and many more
- Local binary resolution: Auto-discovers project-local LSP servers in `node_modules/.bin/`, `.venv/bin/`, etc.
- Hover docs, symbol references, code actions, workspace-wide symbol search
Time-traveling streamed rules (TTSR): zero-context rules that inject themselves only when needed:
- Pattern-triggered injection: Rules define regex triggers that watch the model's output stream
- Just-in-time activation: When a pattern matches, the stream aborts, the rule injects as a system reminder, and the request retries
- Zero upfront cost: TTSR rules consume no context until they're actually relevant
- One-shot per session: Each rule only triggers once, preventing loops
- Define via the `ttsrTrigger` field in rule files (regex pattern)
Example: A "don't use deprecated API" rule only activates when the model starts writing deprecated code, saving context for sessions that never touch that API.
Structured code review with priority-based findings:
- `/review` command: Interactive mode selection (branch comparison, uncommitted changes, commit review)
- Structured findings: `report_finding` tool with priority levels (P0-P3: critical → nit)
- Verdict rendering: Aggregates findings into approve/request-changes/comment
- Combined result tree showing verdict and all findings
Parallel execution framework with specialized agents and real-time streaming:
- 5 bundled agents: explore, plan, browser, task, reviewer
- Parallel exploration: Reviewer agent can spawn explore agents for large codebase analysis
- Real-time artifact streaming: Task outputs stream as they're created, not just at completion
- Output tool: Read full agent outputs by ID when truncated previews aren't sufficient
- Isolated execution: `isolated: true` runs tasks in git worktrees, generates patches, and applies them cleanly
- User-level (`~/.omp/agent/agents/`) and project-level (`.omp/agents/`) custom agents
- Concurrency-limited batch execution with progress tracking
Configure different models for different purposes with automatic discovery:
- Three roles: `default` (main model), `smol` (fast/cheap), `slow` (comprehensive reasoning)
- Auto-discovery: Smol finds haiku → flash → mini; Slow finds codex → gpt → opus → pro
- Role-based selection: Task tool agents can use `model: pi/smol` for cost-effective exploration
- CLI args (`--smol`, `--slow`) and env vars (`PI_SMOL_MODEL`, `PI_SLOW_MODEL`); see the sketch below
- Configure via the `/model` selector with keybindings (Enter=default, S=smol, L=slow)
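A hypothetical way to pin the roles from the shell; the flag and variable names come from the list above, but the exact argument format and the model identifiers are assumptions:

```bash
# Assumed usage: each role is given a model identifier (placeholders shown)
PI_SMOL_MODEL="<fast-model-id>" PI_SLOW_MODEL="<reasoning-model-id>" omp

# Or per invocation, assuming the flags accept a model identifier
omp --smol "<fast-model-id>" --slow "<reasoning-model-id>"
```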
Structured task management with persistent visual tracking:
- `todo_write` tool: Create and manage task lists during coding sessions
- Persistent panel: Todo list displays above the editor with real-time progress
- Task states: `pending`, `in_progress`, `completed` with automatic status updates
- Completion reminders: Agent warned when stopping with incomplete todos (`todoCompletion` setting)
- Toggle visibility: `Ctrl+T` expands/collapses the todo panel
Structured user interaction with typed options:
- Multiple choice questions: Present options with descriptions for user selection
- Multi-select support: Allow multiple answers when choices aren't mutually exclusive
- Multi-part questions: Ask multiple related questions in sequence via the `questions` array parameter
Programmable commands with full API access:
- Create at `~/.omp/agent/commands/[name]/index.ts` or `.omp/commands/[name]/index.ts`
- Export a factory returning `{ name, description, execute(args, ctx) }` (see the sketch below)
- Full access to `HookCommandContext` for UI dialogs, session control, shell execution
- Return a string to send as an LLM prompt, or void for fire-and-forget actions
- Also loads from Claude Code directories (`~/.claude/commands/`, `.claude/commands/`)
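A minimal sketch of what such a command module might look like, based on the shape described above; the default-export convention, the argument types, and any methods on `HookCommandContext` are assumptions here:

```typescript
// ~/.omp/agent/commands/changelog/index.ts (hypothetical example)
// Factory returning { name, description, execute(args, ctx) } as documented above.
export default function createCommand() {
  return {
    name: "changelog",
    description: "Summarize recent commits into a changelog entry",
    // Returning a string sends it to the LLM as a prompt;
    // returning nothing makes this a fire-and-forget action.
    async execute(args: string, _ctx: unknown): Promise<string | void> {
      return `Draft a CHANGELOG.md entry for the most recent commits. Extra context: ${args}`;
    },
  };
}
```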
Unified capability-based discovery that loads configuration from 8 AI coding tools:
- Multi-tool support: Claude Code, Cursor, Windsurf, Gemini, Codex, Cline, GitHub Copilot, VS Code
- Discovers everything: MCP servers, rules, skills, hooks, tools, slash commands, prompts, context files
- Native format support: Cursor MDC frontmatter, Windsurf rules, Cline `.clinerules`, Copilot `applyTo` globs, Gemini `system.md`, Codex `AGENTS.md`
- Provider attribution: See which tool contributed each configuration item
- Discovery settings: Enable/disable individual providers via the `/config` interactive tab
- Priority ordering: Multi-path resolution across `.omp`, `.pi`, and `.claude` directories
Full Model Context Protocol support with external tool integration:
- Stdio and HTTP transports for connecting to MCP servers
- Plugin CLI (`omp plugin install/enable/configure/doctor`); see the examples below
- Hot-loadable plugins from `~/.omp/plugins/` with npm/bun integration
- Automatic Exa MCP server filtering with API key extraction
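Hedged examples of the documented plugin subcommands; `<plugin-name>` is a placeholder and the exact argument syntax is an assumption:

```bash
omp plugin install <plugin-name>     # fetch a plugin (assumed to take a name/package)
omp plugin enable <plugin-name>      # turn it on
omp plugin configure <plugin-name>   # adjust its settings
omp plugin doctor                    # diagnose plugin problems
```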
Multi-provider search and full-page scraping with 80+ specialized scrapers:
- Multi-provider search: Anthropic, Perplexity, and Exa with automatic fallback chain
- 80+ site-specific scrapers: GitHub, GitLab, npm, PyPI, crates.io, arXiv, PubMed, Stack Overflow, Hacker News, Reddit, Wikipedia, YouTube transcripts, and many more
- Package registries: npm, PyPI, crates.io, Hex, Hackage, NuGet, Maven, RubyGems, Packagist, pub.dev, Go packages
- Security databases: NVD, OSV, CISA KEV vulnerability data
- HTML-to-markdown conversion with link preservation
Remote command execution with persistent connections:
- Project discovery: Reads SSH hosts from `ssh.json`/`.ssh.json` in your project
- Persistent connections: Reuses SSH connections across commands for faster execution
- OS/shell detection: Automatically detects remote OS and shell type
- SSHFS mounts: Optional automatic mounting of remote directories
- Compat mode: Windows host support with automatic shell probing
Use your Cursor Pro subscription for AI completions:
- Browser-based OAuth: Authenticate through Cursor's OAuth flow
- Tool execution bridge: Maps Cursor's native tools to omp equivalents (read, write, shell, diagnostics)
- Conversation caching: Persists context across requests in the same session
- Shell streaming: Real-time stdout/stderr during command execution
Distribute load across multiple API keys:
- Round-robin distribution: Automatically cycles through credentials per session
- Usage-aware selection: For OpenAI Codex, checks account limits before credential selection
- Automatic fallback: Switches credentials mid-session when rate limits are hit
- Consistent hashing: FNV-1a hashing ensures stable credential assignment per session
Create images directly from the agent:
- Gemini integration: Uses `gemini-3-pro-image-preview` by default
- OpenRouter fallback: Automatically uses OpenRouter when `OPENROUTER_API_KEY` is set
- Inline display: Images render in terminals supporting Kitty/iTerm2 graphics
- Saves to temp files and reports paths for further manipulation
Modern terminal interface with smart session management:
- Auto session titles: Sessions automatically titled based on first message using smol model
- Welcome screen: Logo, tips, recent sessions with selection
- Powerline footer: Model, cwd, git branch/status, token usage, context %
- LSP status: Shows which language servers are active and ready
- Hotkeys: `?` displays shortcuts when the editor is empty
- Persistent prompt history: SQLite-backed with `Ctrl+R` search across sessions
- Grouped tool display: Consecutive Read calls shown in a compact tree view
- Emergency terminal restore: Crash handlers prevent terminal corruption
Handles whitespace and indentation variance automatically:
- High-confidence fuzzy matching for `oldText` in edit operations
- Fixes the #1 pain point: edits failing due to invisible whitespace differences
- Configurable via the `edit.fuzzyMatch` setting (enabled by default)
- `omp config` subcommand: Manage settings from the CLI (`list`, `get`, `set`, `reset`, `path`); see the example below
- `omp setup` subcommand: Install optional dependencies (e.g., `omp setup python` for the Jupyter kernel)
- `omp stats` subcommand: Local observability dashboard for AI usage (requests, cost, cache rate, tokens/s)
- `xhigh` thinking level: Extended reasoning for Anthropic models with increased token budgets
- Background mode: `/background` detaches the UI and continues agent execution
- Completion notifications: Configurable bell/OSC99/OSC9 when the agent finishes
- 65+ built-in themes: Catppuccin, Dracula, Nord, Gruvbox, Tokyo Night, and material variants
- Auto environment detection: OS, distro, kernel, CPU, GPU, shell, terminal, DE in the system prompt
- Git context: System prompt includes branch, status, recent commits
- Bun runtime: Native TypeScript execution, faster startup, all packages migrated
- Centralized file logging: Debug logs with daily rotation to `~/.omp/logs/`
- Bash interceptor: Optionally block shell commands that have dedicated tools
- `@file` auto-read: Type `@path/to/file` in prompts to inject file contents inline
- Additional tools: AST (structural code analysis), Replace (find & replace across files)
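A few hedged examples of the `omp config` and `omp stats` subcommands; the subcommand names and the `edit.fuzzyMatch` key are documented above, but the exact value syntax is an assumption:

```bash
omp config list                       # show all settings
omp config get edit.fuzzyMatch        # read a single setting
omp config set edit.fuzzyMatch false  # change it (value syntax assumed)
omp config reset edit.fuzzyMatch      # restore the default
omp config path                       # print the config file location

omp stats                             # open the local usage dashboard
```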
| Package | Description |
|---|---|
| @oh-my-pi/pi-ai | Multi-provider LLM client (Anthropic, OpenAI, Gemini, Bedrock, Cursor, Codex, Copilot) |
| @oh-my-pi/pi-agent-core | Agent runtime with tool calling and state management |
| @oh-my-pi/pi-coding-agent | Interactive coding agent CLI |
| @oh-my-pi/pi-tui | Terminal UI library with differential rendering |
| @oh-my-pi/pi-natives | WASM bindings for native text, image, and grep operations |
| @oh-my-pi/omp-stats | Local observability dashboard for AI usage statistics |
| Crate | Description |
|---|---|
| pi-natives | Rust N-API crate for performance-critical ops |
MIT - Original work copyright Mario Zechner