
ruler
Ruler — apply the same rules to all coding agents
Stars: 1496

Ruler is a tool designed to centralize AI coding assistant instructions, providing a single source of truth for managing instructions across multiple AI coding tools. It helps in avoiding inconsistent guidance, duplicated effort, context drift, onboarding friction, and complex project structures by automatically distributing instructions to the right configuration files. With support for nested rule loading, Ruler can handle complex project structures with context-specific instructions for different components. It offers features like centralised rule management, nested rule loading, automatic distribution, targeted agent configuration, MCP server propagation, .gitignore automation, and a simple CLI for easy configuration management.
README:
Animation by Isaac Flath of Elite AI-Assisted Coding
Beta Research Preview
- Please test this version carefully in your environment
- Report issues at https://github.com/intellectronica/ruler/issues
Managing instructions across multiple AI coding tools becomes complex as your team grows. Different agents (GitHub Copilot, Claude, Cursor, Aider, etc.) require their own configuration files, leading to:
- Inconsistent guidance across AI tools
- Duplicated effort maintaining multiple config files
- Context drift as project requirements evolve
- Onboarding friction for new AI tools
- Complex project structures requiring context-specific instructions for different components
Ruler solves this by providing a single source of truth for all your AI agent instructions, automatically distributing them to the right configuration files. With support for nested rule loading, Ruler can handle complex project structures with context-specific instructions for different components.
- Centralised Rule Management: Store all AI instructions in a dedicated `.ruler/` directory using Markdown files
- Nested Rule Loading: Support complex project structures with multiple `.ruler/` directories for context-specific instructions
- Automatic Distribution: Ruler applies these rules to the configuration files of supported AI agents
- Targeted Agent Configuration: Fine-tune which agents are affected and their specific output paths via `ruler.toml`
- MCP Server Propagation: Manage and distribute Model Context Protocol (MCP) server settings
- `.gitignore` Automation: Keeps generated agent config files out of version control automatically
- Simple CLI: Easy-to-use commands for initialising and applying configurations
| Agent | Rules File(s) | MCP Configuration / Notes |
|---|---|---|
| AGENTS.md | `AGENTS.md` | (pseudo-agent ensuring root `AGENTS.md` exists) |
| GitHub Copilot | `AGENTS.md`, `.github/copilot-instructions.md` | `.vscode/mcp.json` |
| Claude Code | `CLAUDE.md` | `.mcp.json` |
| OpenAI Codex CLI | `AGENTS.md` | `.codex/config.toml` |
| Jules | `AGENTS.md` | - |
| Cursor | `.cursor/rules/ruler_cursor_instructions.mdc` | `.cursor/mcp.json` |
| Windsurf | `.windsurf/rules/ruler_windsurf_instructions.md` | - |
| Cline | `.clinerules` | - |
| Crush | `CRUSH.md` | `.crush.json` |
| Amp | `AGENTS.md` | - |
| Amazon Q CLI | `.amazonq/rules/ruler_q_rules.md` | `.amazonq/mcp.json` |
| Aider | `AGENTS.md`, `.aider.conf.yml` | `.mcp.json` |
| Firebase Studio | `.idx/airules.md` | - |
| Open Hands | `.openhands/microagents/repo.md` | `config.toml` |
| Gemini CLI | `AGENTS.md` | `.gemini/settings.json` |
| Junie | `.junie/guidelines.md` | - |
| AugmentCode | `.augment/rules/ruler_augment_instructions.md` | `.vscode/settings.json` |
| Kilo Code | `.kilocode/rules/ruler_kilocode_instructions.md` | `.kilocode/mcp.json` |
| opencode | `AGENTS.md` | `opencode.json` |
| Goose | `.goosehints` | - |
| Qwen Code | `AGENTS.md` | `.qwen/settings.json` |
| RooCode | `AGENTS.md` | `.roo/mcp.json` |
| Zed | `AGENTS.md` | `.zed/settings.json` (project root, never `$HOME`) |
| Trae AI | `.trae/rules/project_rules.md` | - |
| Warp | `WARP.md` | - |
| Kiro | `.kiro/steering/ruler_kiro_instructions.md` | - |
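For example, targeting only a couple of agents writes just the files listed in the table above. A minimal shell sketch:

```bash
# Apply rules only to Claude Code and GitHub Copilot; per the table above,
# this writes CLAUDE.md, AGENTS.md, and .github/copilot-instructions.md
ruler apply --agents claude,copilot
```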
Global Installation (Recommended for CLI use):
npm install -g @intellectronica/ruler
Using npx (for one-off commands):
npx @intellectronica/ruler apply
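If you prefer not to install globally, the same flags work through npx; a minimal sketch:

```bash
# One-off run without a global install: target two agents with detailed output
# (the flags are documented under the apply options below)
npx @intellectronica/ruler apply --agents copilot,claude --verbose
```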
- Navigate to your project's root directory
- Run `ruler init`
- This creates (a minimal sketch follows this list):
  - `.ruler/` directory
  - `.ruler/AGENTS.md`: the primary starter Markdown file for your rules
  - `.ruler/ruler.toml`: the main configuration file for Ruler (now contains sample MCP server sections; the legacy `.ruler/mcp.json` is no longer scaffolded)
  - (Optional legacy fallback) If you previously used `.ruler/instructions.md`, it is still respected when `AGENTS.md` is absent. (The prior runtime warning was removed.)
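Assuming a fresh project, the result of `ruler init` might look like this:

```bash
# Initialise Ruler, then inspect the scaffolded files
ruler init
ls .ruler/
# Expected output (per the list above):
#   AGENTS.md   ruler.toml
```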
Additionally, you can create a global configuration to use when no local `.ruler/` directory is found:
ruler init --global
The global configuration is created in `$XDG_CONFIG_HOME/ruler` (default: `~/.config/ruler`).
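A minimal sketch of how the global fallback might be used, assuming `ruler init --global` scaffolds an `AGENTS.md` under the default `~/.config/ruler` location:

```bash
# Create a global rule set once (assumes an AGENTS.md is scaffolded there)...
ruler init --global
echo "- Prefer small, composable functions" >> ~/.config/ruler/AGENTS.md

# ...then, in a project without a local .ruler/ directory,
# `ruler apply` falls back to the global configuration.
cd /path/to/some-project   # hypothetical project path
ruler apply

# Pass --local-only to skip the global fallback entirely.
ruler apply --local-only
```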
This is your central hub for all AI agent instructions:
- Primary File Order & Precedence:
  1. A repository root `AGENTS.md` (outside `.ruler/`), if present (highest precedence, prepended)
  2. `.ruler/AGENTS.md` (new default starter file)
  3. Legacy `.ruler/instructions.md` (only if `.ruler/AGENTS.md` is absent; no longer emits a deprecation warning)
  4. Remaining discovered `.md` files under `.ruler/` (and subdirectories), in sorted order
- Rule Files (`*.md`): Discovered recursively from `.ruler/` or `$XDG_CONFIG_HOME/ruler` and concatenated in the order above
- Concatenation Marker: Each file's content is prepended with `--- Source: <relative_path_to_md_file> ---` for traceability
- `ruler.toml`: Master configuration for Ruler's behavior, agent selection, and output paths
- `mcp.json`: Shared MCP server settings

This ordering lets you keep a short, high-impact root `AGENTS.md` (e.g. an executive project summary) while housing detailed guidance inside `.ruler/`.
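For example, a generated agent file might begin like this; a hypothetical sketch assuming a root `AGENTS.md` plus two files under `.ruler/`:

```bash
# Inspect a generated agent file (e.g. CLAUDE.md) after `ruler apply`;
# each concatenated source is introduced by its traceability marker.
head -n 20 CLAUDE.md
# --- Source: AGENTS.md ---
# (short executive summary kept at the repository root)
# --- Source: .ruler/AGENTS.md ---
# (starter rules)
# --- Source: .ruler/coding_style.md ---
# (detailed guidance)
```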
Ruler now supports nested rule loading with the `--nested` flag, enabling context-specific instructions for different parts of your project:
project/
├── .ruler/ # Global project rules
│ ├── AGENTS.md
│ └── coding_style.md
├── src/
│ └── .ruler/ # Component-specific rules
│ └── api_guidelines.md
├── tests/
│ └── .ruler/ # Test-specific rules
│ └── testing_conventions.md
└── docs/
└── .ruler/ # Documentation rules
└── writing_style.md
How it works:
- Discover all `.ruler/` directories in the project hierarchy
- Load and concatenate rules from each directory in order
- Enable with: `ruler apply --nested`
Perfect for:
- Monorepos with multiple services
- Projects with distinct components (frontend/backend)
- Teams needing different instructions for different areas
- Complex codebases with varying standards
Granularity: Break down complex instructions into focused `.md` files:
coding_style.md
api_conventions.md
project_architecture.md
security_guidelines.md
Example rule file (`.ruler/python_guidelines.md`):
# Python Project Guidelines
## General Style
- Follow PEP 8 for all Python code
- Use type hints for all function signatures and complex variables
- Keep functions short and focused on a single task
## Error Handling
- Use specific exception types rather than generic `Exception`
- Log errors effectively with context
## Security
- Always validate and sanitize user input
- Be mindful of potential injection vulnerabilities
ruler apply [options]
The `apply` command looks for `.ruler/` in the current directory tree, reading the first match. If no such directory is found, it looks for a global configuration in `$XDG_CONFIG_HOME/ruler`.
| Option | Description |
|---|---|
| `--project-root <path>` | Path to your project's root (default: current directory) |
| `--agents <agent1,agent2,...>` | Comma-separated list of agent names to target (agentsmd, amazonqcli, amp, copilot, claude, codex, cursor, windsurf, cline, aider, firebase, openhands, gemini-cli, jules, junie, augmentcode, kilocode, opencode, goose, crush, zed, qwen, kiro, warp, roo, trae) |
| `--config <path>` | Path to a custom `ruler.toml` configuration file |
| `--mcp` / `--with-mcp` | Enable applying MCP server configurations (default: true) |
| `--no-mcp` | Disable applying MCP server configurations |
| `--mcp-overwrite` | Overwrite native MCP config entirely instead of merging |
| `--gitignore` | Enable automatic `.gitignore` updates (default: true) |
| `--no-gitignore` | Disable automatic `.gitignore` updates |
| `--nested` | Enable nested rule loading from nested `.ruler` directories (default: disabled) |
| `--backup` | Enable/disable creation of `.bak` backup files (default: enabled) |
| `--local-only` | Do not look for configuration in `$XDG_CONFIG_HOME` |
| `--verbose` / `-v` | Display detailed output during execution |
Apply rules to all configured agents:
ruler apply
Apply rules only to GitHub Copilot and Claude:
ruler apply --agents copilot,claude
Apply rules only to Firebase Studio:
ruler apply --agents firebase
Apply rules only to Warp:
ruler apply --agents warp
Apply rules only to Trae AI:
ruler apply --agents trae
Apply rules only to RooCode:
ruler apply --agents roo
Use a specific configuration file:
ruler apply --config ./team-configs/ruler.frontend.toml
Apply rules with verbose output:
ruler apply --verbose
Apply rules but skip MCP and .gitignore updates:
ruler apply --no-mcp --no-gitignore
The `revert` command safely undoes all changes made by `ruler apply`, restoring your project to its pre-ruler state. It intelligently restores files from backups (`.bak` files) when available, or removes generated files that didn't exist before.
When experimenting with different rule configurations or switching between projects, you may want to:
- Clean slate: Remove all ruler-generated files to start fresh
- Restore originals: Revert modified files back to their original state
- Selective cleanup: Remove configurations for specific agents only
- Safe experimentation: Try ruler without fear of permanent changes
ruler revert [options]
| Option | Description |
|---|---|
| `--project-root <path>` | Path to your project's root (default: current directory) |
| `--agents <agent1,agent2,...>` | Comma-separated list of agent names to revert (agentsmd, amazonqcli, amp, copilot, claude, codex, cursor, windsurf, cline, aider, firebase, openhands, gemini-cli, jules, junie, augmentcode, kilocode, opencode, goose, crush, zed, qwen, kiro, warp, roo, trae) |
| `--config <path>` | Path to a custom `ruler.toml` configuration file |
| `--keep-backups` | Keep backup files (`.bak`) after restoration (default: false) |
| `--dry-run` | Preview changes without actually reverting files |
| `--verbose` / `-v` | Display detailed output during execution |
| `--local-only` | Only search for local `.ruler` directories, ignore global config |
Revert all ruler changes:
ruler revert
Preview what would be reverted (dry-run):
ruler revert --dry-run
Revert only specific agents:
ruler revert --agents claude,copilot
Revert with detailed output:
ruler revert --verbose
Keep backup files after reverting:
ruler revert --keep-backups
Defaults to `.ruler/ruler.toml` in the project root. Override with the `--config` CLI option.
# Default agents to run when --agents is not specified
# Uses case-insensitive substring matching
default_agents = ["copilot", "claude", "aider"]
# --- Global MCP Server Configuration ---
[mcp]
# Enable/disable MCP propagation globally (default: true)
enabled = true
# Global merge strategy: 'merge' or 'overwrite' (default: 'merge')
merge_strategy = "merge"
# --- MCP Server Definitions ---
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
[mcp_servers.git]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-git", "--repository", "."]
[mcp_servers.remote_api]
url = "https://api.example.com"
[mcp_servers.remote_api.headers]
Authorization = "Bearer your-token"
# --- Global .gitignore Configuration ---
[gitignore]
# Enable/disable automatic .gitignore updates (default: true)
enabled = true
# --- Agent-Specific Configurations ---
[agents.copilot]
enabled = true
output_path = ".github/copilot-instructions.md"
[agents.claude]
enabled = true
output_path = "CLAUDE.md"
[agents.aider]
enabled = true
output_path_instructions = "AGENTS.md"
output_path_config = ".aider.conf.yml"
# OpenAI Codex CLI agent and MCP config
[agents.codex]
enabled = true
output_path = "AGENTS.md"
output_path_config = ".codex/config.toml"
# Agent-specific MCP configuration for Codex CLI
[agents.codex.mcp]
enabled = true
merge_strategy = "merge"
[agents.firebase]
enabled = true
output_path = ".idx/airules.md"
[agents.gemini-cli]
enabled = true
[agents.jules]
enabled = true
[agents.junie]
enabled = true
output_path = ".junie/guidelines.md"
# Agent-specific MCP configuration
[agents.cursor.mcp]
enabled = true
merge_strategy = "merge"
# Disable specific agents
[agents.windsurf]
enabled = false
[agents.kilocode]
enabled = true
output_path = ".kilocode/rules/ruler_kilocode_instructions.md"
[agents.warp]
enabled = true
output_path = "WARP.md"
Configuration is resolved in this order of precedence, highest first (see the sketch after this list):

1. CLI flags (e.g., `--agents`, `--no-mcp`, `--mcp-overwrite`, `--no-gitignore`)
2. Settings in `ruler.toml` (`default_agents`, specific agent settings, global sections)
3. Ruler's built-in defaults (all agents enabled, standard output paths, MCP enabled with 'merge')
MCP provides broader context to AI models through server configurations. Ruler can manage and distribute these settings across compatible agents.
You can now define MCP servers directly in `ruler.toml` using the `[mcp_servers.<name>]` syntax:
# Global MCP behavior
[mcp]
enabled = true
merge_strategy = "merge" # or "overwrite"
# Local (stdio) server
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
[mcp_servers.filesystem.env]
API_KEY = "your-api-key"
# Remote server
[mcp_servers.search]
url = "https://mcp.example.com"
[mcp_servers.search.headers]
Authorization = "Bearer your-token"
"X-API-Version" = "v1"
For backward compatibility, you can still use the JSON format; a warning is issued encouraging migration to TOML. The file is no longer created during `ruler init`.
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/project"
]
},
"git": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-git", "--repository", "."]
}
}
}
When both TOML and JSON configurations are present (see the sketch after this list):
- TOML servers take precedence over JSON servers with the same name
- Servers are merged from both sources (unless using the overwrite strategy)
- A deprecation warning encouraging migration to TOML is shown (once per run)
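A hypothetical sketch of the name-conflict rule, assuming both files define a server called `filesystem`:

```bash
# Hypothetical conflict: both sources define a server named "filesystem".
cat .ruler/mcp.json
# { "mcpServers": { "filesystem": { "command": "npx",
#     "args": ["-y", "@modelcontextprotocol/server-filesystem", "/old/path"] } } }

grep -A 2 'mcp_servers.filesystem' .ruler/ruler.toml
# [mcp_servers.filesystem]
# command = "npx"
# args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]

# With the default merge strategy the TOML "filesystem" entry wins,
# while servers that exist only in the JSON file are still merged in.
ruler apply
```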
Local/stdio servers require a `command` field:
[mcp_servers.local_server]
command = "node"
args = ["server.js"]
[mcp_servers.local_server.env]
DEBUG = "1"
Remote servers require a `url` field (headers optional; bearer Authorization token auto-extracted for OpenHands when possible):
[mcp_servers.remote_server]
url = "https://api.example.com"
[mcp_servers.remote_server.headers]
Authorization = "Bearer token"
Ruler uses this configuration with the `merge` (default) or `overwrite` strategy, controlled by `ruler.toml` or CLI flags.
Home Directory Safety: Ruler never writes MCP configuration files outside your project root. Any historical references to user home directories (e.g. `~/.codeium/windsurf/mcp_config.json` or `~/.zed/settings.json`) have been removed; only project-local paths are targeted.
Note for OpenAI Codex CLI: To apply the local Codex CLI MCP configuration, set the `CODEX_HOME` environment variable to your project's `.codex` directory:
export CODEX_HOME="$(pwd)/.codex"
Ruler automatically manages your `.gitignore` file to keep generated agent configuration files out of version control.
- Creates or updates `.gitignore` in your project root
- Adds paths to a managed block marked with `# START Ruler Generated Files` and `# END Ruler Generated Files`
- Preserves existing content outside this block
- Sorts paths alphabetically and uses relative POSIX-style paths
# Your existing rules
node_modules/
*.log
# START Ruler Generated Files
.aider.conf.yml
.clinerules
.cursor/rules/ruler_cursor_instructions.mdc
.github/copilot-instructions.md
.windsurf/rules/ruler_windsurf_instructions.md
AGENTS.md
CLAUDE.md
# END Ruler Generated Files
dist/
- CLI flags: `--gitignore` or `--no-gitignore`
- Configuration: `[gitignore].enabled` in `ruler.toml` (see the sketch after this list)
- Default: enabled
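A minimal sketch of disabling the automation from configuration rather than the CLI, assuming your `ruler.toml` does not already contain a `[gitignore]` section:

```bash
# Turn off .gitignore automation for every run via ruler.toml
# (assumes no existing [gitignore] table; otherwise edit it in place)
cat >> .ruler/ruler.toml <<'EOF'

[gitignore]
enabled = false
EOF
ruler apply
```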
# Initialize Ruler in your project
cd your-project
ruler init
# Edit the generated files
# - Add your coding guidelines to .ruler/AGENTS.md (or keep adding additional .md files)
# - Customize .ruler/ruler.toml if needed
# Apply rules to all AI agents
ruler apply
For large projects with multiple components or services, use nested rule loading:
# Set up nested .ruler directories
mkdir -p src/.ruler tests/.ruler docs/.ruler
# Add component-specific instructions
echo "# API Design Guidelines" > src/.ruler/api_rules.md
echo "# Testing Best Practices" > tests/.ruler/test_rules.md
echo "# Documentation Standards" > docs/.ruler/docs_rules.md
# Apply with nested loading
ruler apply --nested --verbose
This creates context-specific instructions for different parts of your project while maintaining global rules in the root .ruler/
directory.
- Create `.ruler/coding_standards.md`, `.ruler/api_usage.md`
- Commit the `.ruler` directory to your repository
- Team members pull changes and run `ruler apply` to update their local AI agent configurations (see the sketch below)
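A minimal sketch of that team flow, with a hypothetical rule added to `.ruler/api_usage.md`:

```bash
# Author or update shared rules, then commit them
echo "- REST endpoints must be versioned under /api/v1" > .ruler/api_usage.md
git add .ruler/
git commit -m "Update shared AI agent rules"
git push

# Teammates refresh their local agent configuration files
git pull
ruler apply
```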
- Detail your project's architecture in `.ruler/project_overview.md`
- Describe primary data structures in `.ruler/data_models.md`
- Run `ruler apply` to help AI tools provide more relevant suggestions
{
"scripts": {
"ruler:apply": "ruler apply",
"dev": "npm run ruler:apply && your_dev_command",
"precommit": "npm run ruler:apply"
}
}
# .github/workflows/ruler-check.yml
name: Check Ruler Configuration
on:
  pull_request:
    paths: ['.ruler/**']
jobs:
  check-ruler:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install Ruler
        run: npm install -g @intellectronica/ruler
      - name: Apply Ruler configuration
        run: ruler apply --no-gitignore
      - name: Check for uncommitted changes
        run: |
          if [[ -n $(git status --porcelain) ]]; then
            echo "::error::Ruler configuration is out of sync!"
            echo "Please run 'ruler apply' locally and commit the changes."
            exit 1
          fi
"Cannot find module" errors:
- Ensure Ruler is installed globally:
npm install -g @intellectronica/ruler
- Or use
npx @intellectronica/ruler
Permission denied errors:
- On Unix systems, you may need
sudo
for global installation
Agent files not updating:
- Check if the agent is enabled in
ruler.toml
- Verify agent isn't excluded by
--agents
flag - Use
--verbose
to see detailed execution logs
Configuration validation errors:
- Ruler now validates
ruler.toml
format and will show specific error details - Check that all configuration values match the expected types and formats
Use the `--verbose` flag to see detailed execution logs:
ruler apply --verbose
This shows:
- Configuration loading details
- Agent selection logic
- File processing information
- MCP configuration steps
Q: Can I use different rules for different agents?
A: Currently, all agents receive the same concatenated rules. For agent-specific instructions, include sections in your rule files like "## GitHub Copilot Specific" or "## Aider Configuration" (see the sketch below).
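A hypothetical sketch of that convention, creating a rule file with per-agent sections (the file name is illustrative):

```bash
# All agents still receive the whole file; the headings merely scope the guidance.
cat > .ruler/agent_specific.md <<'EOF'
# Agent-Specific Notes

## GitHub Copilot Specific
- Prefer inline suggestions that keep the existing code style.

## Aider Configuration
- Propose changes as small, reviewable diffs.
EOF
ruler apply
```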
Q: How do I set up different instructions for different parts of my project?
A: Use the `--nested` flag with `ruler apply --nested`. This enables Ruler to discover and load rules from multiple `.ruler/` directories throughout your project hierarchy. Place component-specific instructions in `src/.ruler/`, test-specific rules in `tests/.ruler/`, etc., while keeping global rules in the root `.ruler/` directory.
Q: How do I temporarily disable Ruler for an agent?
A: Set `enabled = false` in `ruler.toml` under `[agents.agentname]`, or use the `--agents` flag to specify only the agents you want.
Q: What happens to my existing agent configuration files?
A: Ruler creates backups with a `.bak` extension before overwriting any existing files.
Q: Can I run Ruler in CI/CD pipelines?
A: Yes! Use `ruler apply --no-gitignore` in CI to avoid modifying `.gitignore`. See the GitHub Actions example above.
Q: How do I migrate from older versions using `instructions.md`?
A: Simply rename `.ruler/instructions.md` to `.ruler/AGENTS.md` (recommended); a minimal sketch follows below. If you keep the legacy file and omit `AGENTS.md`, Ruler will still use it (without emitting the old deprecation warning). Having both causes `AGENTS.md` to take precedence; the legacy file is still concatenated afterward.
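A minimal sketch of that migration, assuming the file is tracked in git:

```bash
# Rename the legacy rules file and re-apply so generated agent files pick up the change
git mv .ruler/instructions.md .ruler/AGENTS.md
ruler apply
```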
Q: How does OpenHands MCP propagation classify servers?
A: Local stdio servers become `stdio_servers`. Remote URLs containing `/sse` are classified as `sse_servers`; others become `shttp_servers`. Bearer tokens in an `Authorization` header are extracted into `api_key` where possible.
Q: Where is Zed configuration written now?
A: Ruler writes a `settings.json` in the project root (not the user home directory) and transforms MCP server definitions into Zed's `context_servers` format, including `source: "custom"`.
Q: What changed about MCP initialization?
A: `ruler init` now only adds example MCP server sections to `ruler.toml` instead of creating `.ruler/mcp.json`. The JSON file is still consumed if present, but TOML servers win on name conflicts.
Q: Is Kiro supported?
A: Yes. Kiro receives concatenated rules at `.kiro/steering/ruler_kiro_instructions.md`.
git clone https://github.com/intellectronica/ruler.git
cd ruler
npm install
npm run build
# Run all tests
npm test
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
# Run linting
npm run lint
# Run formatting
npm run format
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
For bugs and feature requests, please open an issue.
MIT
The project includes comprehensive test coverage with unit, integration, and end-to-end tests:
# Run all tests
npm test
# Run with coverage
npm run test:coverage
# Run integration tests specifically
npm run test:integration
# Run tests in watch mode
npm run test:watch
The project includes comprehensive integration tests that validate the complete CLI workflow:
- `npm run test:integration`: Runs a full end-to-end integration test that:
  - Creates a temporary test directory
  - Runs `ruler init` to set up the initial structure
  - Creates a custom `ruler.toml` with MCP servers (both stdio and remote)
  - Creates a sample `AGENTS.md` and additional Markdown files for concatenation
  - Runs `ruler apply` to generate all agent configuration files
  - Inspects and validates that all generated files contain the expected content
  - Outputs the content of all generated files for manual verification
  - Validates that MCP server configurations were properly applied
This integration test ensures the entire CLI workflow functions correctly and can be used for manual testing or CI validation.
© Eleanor Berger
ai.intellectronica.net