RA.Aid
Aider in a ReAct loop
Stars: 109
RA.Aid is an AI software development agent powered by `aider` and advanced reasoning models like `o1`. It combines `aider`'s code editing capabilities with LangChain's agent-based task execution framework to provide an intelligent assistant for research, planning, and implementation of multi-step development tasks. It handles complex programming tasks by breaking them down into manageable steps, running shell commands automatically, and leveraging expert reasoning models like OpenAI's o1. RA.Aid is designed for everyday software development, offering features such as multi-step task planning, automated command execution, and the ability to handle complex programming tasks beyond single-shot code edits.
README:
██▀███ ▄▄▄ ▄▄▄ ██▓▓█████▄
▓██ ▒ ██▒▒████▄ ▒████▄ ▓██▒▒██▀ ██▌
▓██ ░▄█ ▒▒██ ▀█▄ ▒██ ▀█▄ ▒██▒░██ █▌
▒██▀▀█▄ ░██▄▄▄▄██ ░██▄▄▄▄██ ░██░░▓█▄ ▌
░██▓ ▒██▒ ▓█ ▓██▒ ██▓ ▓█ ▓██▒░██░░▒████▓
░ ▒▓ ░▒▓░ ▒▒ ▓▒█░ ▒▓▒ ▒▒ ▓▒█░░▓ ▒▒▓ ▒
░▒ ░ ▒░ ▒ ▒▒ ░ ░▒ ▒ ▒▒ ░ ▒ ░ ░ ▒ ▒
░░ ░ ░ ▒ ░ ░ ▒ ▒ ░ ░ ░ ░
░ ░ ░ ░ ░ ░ ░ ░
░ ░
AI software development agent powered by aider and advanced reasoning models like o1.
RA.Aid (ReAct Aid) was made by putting aider (https://aider.chat/) in a LangChain ReAct agent loop. This combination lets developers leverage aider's code editing capabilities while benefiting from LangChain's agent-based task execution framework. The result is an intelligent assistant that can help with research, planning, and implementation of multi-step development tasks.
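To make that concrete, here is a minimal sketch of the idea, assuming aider is on your PATH and langgraph/langchain-anthropic are installed; the tool name run_programming_task and the flags passed to aider are illustrative assumptions, not RA.Aid's actual source:
# Sketch only: expose aider as a tool and hand it to a ReAct agent.
import subprocess

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def run_programming_task(instructions: str) -> str:
    """Delegate a concrete code edit to aider and return its output."""
    result = subprocess.run(
        ["aider", "--yes-always", "--message", instructions],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr


model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
agent = create_react_agent(model, tools=[run_programming_task])
agent.invoke({"messages": [("user", "Add a --verbose flag to the CLI")]})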
RA.Aid is a practical tool for everyday software development and is used for developing real-world applications.
Here's a demo of RA.Aid adding a feature to itself: [demo recording omitted]
- Features
- Installation
- Usage
- Architecture
- Dependencies
- Development Setup
- Contributing
- License
- Contact
👋 Pull requests are very welcome! Have ideas for how to improve RA.Aid? Don't be shy - your help makes a real difference!
💬 Join our Discord community.
- This tool can and will automatically execute shell commands and make code changes
- The --cowboy-mode flag can be enabled to skip shell command approval prompts
- No warranty is provided, either express or implied
- Always use in version-controlled repositories
- Review proposed changes in your git diff before committing
- Multi-Step Task Planning: The agent breaks down complex tasks into discrete, manageable steps and executes them sequentially. This systematic approach ensures thorough implementation and reduces errors.
- Automated Command Execution: The agent can run shell commands automatically to accomplish tasks. While this makes it powerful, it also means you should carefully review its actions.
- Expert Reasoning Models: The agent can call on advanced reasoning models such as OpenAI's o1 only when needed, e.g., to solve complex debugging problems or to plan complex feature implementations.
- Web Research Capabilities: Leverages the Tavily API for intelligent web searches to enhance research and gather real-world context for development tasks.
- Three-Stage Architecture:
  - Research: Analyzes codebases and gathers context
  - Planning: Breaks down tasks into specific, actionable steps
  - Implementation: Executes each planned step sequentially
What sets RA.Aid apart is its ability to handle complex programming tasks that extend beyond single-shot code edits. By combining research, strategic planning, and implementation into a cohesive workflow, RA.Aid can:
- Break down and execute multi-step programming tasks
- Research and analyze complex codebases to answer architectural questions
- Plan and implement significant code changes across multiple files
- Provide detailed explanations of existing code structure and functionality
- Execute sophisticated refactoring operations with proper planning
- Three-Stage Architecture: The workflow consists of three powerful stages:
  - Research 🔍 - Gather and analyze information
  - Planning 📋 - Develop execution strategy
  - Implementation ⚡ - Execute the plan with AI assistance
  Each stage is powered by dedicated AI agents and specialized toolsets.
- Advanced AI Integration: Built on LangChain and leverages the latest LLMs for natural language understanding and generation.
- Human-in-the-Loop Interaction: Optional mode that enables the agent to ask you questions during task execution, ensuring higher accuracy and better handling of complex tasks that may require your input or clarification.
- Comprehensive Toolset:
  - Shell command execution
  - Expert querying system
  - File operations and management
  - Memory management
  - Research and planning tools
  - Code analysis capabilities
- Interactive CLI Interface: Simple yet powerful command-line interface for seamless interaction.
- Modular Design: Structured as a Python package with specialized modules for console output, processing, text utilities, and tools.
- Git Integration: Built-in support for Git operations and repository management.
RA.Aid can be installed directly using pip:
pip install ra-aid
Before using RA.Aid, you'll need:
- The aider Python package installed and available in your PATH:
pip install aider-chat
- API keys for the required AI services:
# Set up API keys based on your preferred provider:
# For Anthropic Claude models (recommended)
export ANTHROPIC_API_KEY=your_api_key_here
# For OpenAI models
export OPENAI_API_KEY=your_api_key_here
# For OpenRouter provider (optional)
export OPENROUTER_API_KEY=your_api_key_here
# For OpenAI-compatible providers (optional)
export OPENAI_API_BASE=your_api_base_url
# For web research capabilities
export TAVILY_API_KEY=your_api_key_here
Note: The programmer tool (aider) will automatically select its model based on your available API keys:
- If ANTHROPIC_API_KEY is set, it will use Claude models
- If only OPENAI_API_KEY is set, it will use OpenAI models
- You can set multiple API keys to enable different features (a sketch of the selection logic appears just below)
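As a rough illustration, that key-based selection amounts to something like the following; this is a sketch, not RA.Aid's actual code, and the gpt-4o fallback model name is an assumption:
import os

def pick_default_model() -> tuple[str, str]:
    # Prefer Claude when an Anthropic key is present, as noted above.
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-sonnet-20241022"
    # Otherwise fall back to OpenAI; gpt-4o here is an assumed default.
    if os.environ.get("OPENAI_API_KEY"):
        return "openai", "gpt-4o"
    raise SystemExit("Set ANTHROPIC_API_KEY or OPENAI_API_KEY first.")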
You can get your API keys from:
- Anthropic API key: https://console.anthropic.com/
- OpenAI API key: https://platform.openai.com/api-keys
- OpenRouter API key: https://openrouter.ai/keys
RA.Aid is designed to be simple yet powerful. Here's how to use it:
# Basic usage
ra-aid -m "Your task or query here"
# Research-only mode (no implementation)
ra-aid -m "Explain the authentication flow" --research-only
# Enable verbose logging for detailed execution information
ra-aid -m "Add new feature" --verbose
Command-line options (a hedged argparse sketch follows the list):
- -m, --message: The task or query to be executed (required)
- --research-only: Only perform research without implementation
- --cowboy-mode: Skip interactive approval for shell commands
- --hil, -H: Enable human-in-the-loop mode, allowing the agent to interactively ask you questions during task execution
- --provider: Specify the model provider (see the Model Configuration section)
- --model: Specify the model name (see the Model Configuration section)
- --expert-provider: Specify the provider for the expert tool (defaults to OpenAI)
- --expert-model: Specify the model name for the expert tool (defaults to o1-preview for OpenAI)
- --chat: Enable chat mode for interactive assistance
- --verbose: Enable detailed logging output for debugging and monitoring
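For illustration only, the documented options could be wired up like this; a sketch of the interface above, not RA.Aid's actual parser:
import argparse

parser = argparse.ArgumentParser(prog="ra-aid")
parser.add_argument("-m", "--message", required=True,
                    help="task or query to execute")
parser.add_argument("--research-only", action="store_true",
                    help="research without implementation")
parser.add_argument("--cowboy-mode", action="store_true",
                    help="skip shell command approval prompts")
parser.add_argument("-H", "--hil", action="store_true",
                    help="allow the agent to ask the user questions")
parser.add_argument("--provider", default="anthropic")
parser.add_argument("--model")
parser.add_argument("--expert-provider", default="openai")
parser.add_argument("--expert-model")
parser.add_argument("--chat", action="store_true")
parser.add_argument("--verbose", action="store_true")

args = parser.parse_args(["-m", "Explain the auth flow", "--research-only"])
print(args.message, args.research_only)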
- Code Analysis:
ra-aid -m "Explain how the authentication middleware works" --research-only
- Complex Changes:
ra-aid -m "Refactor the database connection code to use connection pooling" --cowboy-mode
- Automated Updates:
ra-aid -m "Update deprecated API calls across the entire codebase" --cowboy-mode
- Code Research:
ra-aid -m "Analyze the current error handling patterns" --research-only
Enable interactive mode to allow the agent to ask you questions during task execution:
ra-aid -m "Implement a new feature" --hil
# or
ra-aid -m "Implement a new feature" -H
This mode is particularly useful for (a minimal tool sketch follows this list):
- Complex tasks requiring human judgment
- Clarifying ambiguous requirements
- Making architectural decisions
- Validating critical changes
- Providing domain-specific knowledge
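Conceptually, human-in-the-loop mode needs little more than a tool that blocks on user input. A minimal sketch assuming LangChain tooling; the tool name ask_human is hypothetical:
from langchain_core.tools import tool

@tool
def ask_human(question: str) -> str:
    """Pause execution and ask the user a clarifying question."""
    return input(f"\n[agent asks] {question}\n> ")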
The agent features autonomous web research capabilities powered by the Tavily API, seamlessly integrating real-world information into its problem-solving workflow. Web research is conducted automatically when the agent determines additional context would be valuable - no explicit configuration required.
For example, when researching modern authentication practices or investigating new API requirements, the agent will autonomously:
- Search for current best practices and security recommendations
- Find relevant documentation and technical specifications
- Gather real-world implementation examples
- Stay updated on latest industry standards
While web research happens automatically as needed, you can also explicitly request research-focused tasks:
# Focused research task with web search capabilities
ra-aid -m "Research current best practices for API rate limiting" --research-only
Make sure to set your TAVILY_API_KEY environment variable to enable this feature.
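For reference, the tavily-python client this feature builds on can also be exercised directly; the query and result handling below are illustrative:
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
response = client.search("current best practices for API rate limiting",
                         max_results=5)
for hit in response["results"]:
    print(hit["title"], "-", hit["url"])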
Enable with --chat to transform ra-aid into an interactive assistant that guides you through research and implementation tasks. Have a natural conversation about what you want to build, explore options together, and dispatch work, all while maintaining context of your discussion. Perfect for when you want to think through problems collaboratively rather than just executing commands.
You can interrupt the agent at any time by pressing Ctrl-C. This pauses the agent, allowing you to provide feedback, adjust your instructions, or steer the execution in a new direction. Press Ctrl-C again if you want to completely exit the program.
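The pattern is roughly the following; both helpers are hypothetical stand-ins, not RA.Aid's source:
import time

def run_agent_loop():                     # stand-in for the real agent loop
    while True:
        time.sleep(0.1)                   # ... agent work happens here ...

def resume_with_feedback(note: str):      # stand-in: steer the agent
    print("resuming with:", note)

try:
    run_agent_loop()
except KeyboardInterrupt:                 # first Ctrl-C: pause for feedback
    try:
        note = input("\nPaused. Feedback (Ctrl-C again to exit): ")
        resume_with_feedback(note)
    except KeyboardInterrupt:             # second Ctrl-C: exit cleanly
        raise SystemExit(0)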
The --cowboy-mode flag enables automated shell command execution without confirmation prompts. This is useful for:
- CI/CD pipelines
- Automated testing environments
- Batch processing operations
- Scripted workflows
ra-aid -m "Update all deprecated API calls" --cowboy-mode
- Cowboy mode skips confirmation prompts for shell commands
- Always use in version-controlled repositories
- Ensure you have a clean working tree before running
- Review changes in git diff before committing
RA.Aid supports multiple AI providers and models. The default model is Anthropic's Claude 3.5 Sonnet (claude-3-5-sonnet-20241022).
The programmer tool (aider) automatically selects its model based on your available API keys. It will use Claude models if ANTHROPIC_API_KEY is set, or fall back to OpenAI models if only OPENAI_API_KEY is available.
Note: The expert tool can be configured to use different providers (OpenAI, Anthropic, OpenRouter) using the --expert-provider flag along with the corresponding EXPERT_<PROVIDER>_API_KEY environment variables. Each provider requires its own API key set through the appropriate environment variable.
RA.Aid supports multiple providers through environment variables:
- ANTHROPIC_API_KEY: Required for the default Anthropic provider
- OPENAI_API_KEY: Required for the OpenAI provider
- OPENROUTER_API_KEY: Required for the OpenRouter provider
- OPENAI_API_BASE: Required for OpenAI-compatible providers, along with OPENAI_API_KEY
Expert Tool Environment Variables (resolution sketched below):
- EXPERT_OPENAI_API_KEY: API key for the expert tool using the OpenAI provider
- EXPERT_ANTHROPIC_API_KEY: API key for the expert tool using the Anthropic provider
- EXPERT_OPENROUTER_API_KEY: API key for the expert tool using the OpenRouter provider
- EXPERT_OPENAI_API_BASE: Base URL for the expert tool using an OpenAI-compatible provider
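One plausible resolution order is sketched here; the fallback to the non-prefixed variable is an assumption, suggested by the OpenRouter example later in this section:
import os

def expert_api_key(provider: str) -> str | None:
    base = f"{provider.upper()}_API_KEY"   # e.g. OPENAI_API_KEY
    # Prefer the EXPERT_-prefixed variable, then fall back to the base
    # one (assumed behavior, not confirmed by the README).
    return os.environ.get(f"EXPERT_{base}") or os.environ.get(base)

print(expert_api_key("openai"))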
You can set these permanently in your shell's configuration file (e.g., ~/.bashrc or ~/.zshrc):
# Default provider (Anthropic)
export ANTHROPIC_API_KEY=your_api_key_here
# For OpenAI features and expert tool
export OPENAI_API_KEY=your_api_key_here
# For OpenRouter provider
export OPENROUTER_API_KEY=your_api_key_here
# For OpenAI-compatible providers
export OPENAI_API_BASE=your_api_base_url
- Using Anthropic (Default)
# Uses the default model (claude-3-5-sonnet-20241022)
ra-aid -m "Your task"
# Or explicitly specify:
ra-aid -m "Your task" --provider anthropic --model claude-3-5-sonnet-20241022
- Using OpenAI
ra-aid -m "Your task" --provider openai --model gpt-4o
- Using OpenRouter
ra-aid -m "Your task" --provider openrouter --model mistralai/mistral-large-2411
- Configuring Expert Provider
The expert tool is used by the agent for complex logic and debugging tasks. It can be configured to use different providers (OpenAI, Anthropic, OpenRouter) using the --expert-provider flag along with the corresponding EXPERT_<PROVIDER>_API_KEY environment variables.
# Use Anthropic for the expert tool
export EXPERT_ANTHROPIC_API_KEY=your_anthropic_api_key
ra-aid -m "Your task" --expert-provider anthropic --expert-model claude-3-5-sonnet-20241022
# Use OpenRouter for the expert tool
export OPENROUTER_API_KEY=your_openrouter_api_key
ra-aid -m "Your task" --expert-provider openrouter --expert-model mistralai/mistral-large-2411
# Use the default OpenAI for the expert tool
export EXPERT_OPENAI_API_KEY=your_openai_api_key
ra-aid -m "Your task" --expert-provider openai --expert-model o1-preview
Aider-specific environment variables you can add:
- AIDER_FLAGS: Optional comma-separated list of flags to pass to the underlying aider tool (e.g., "yes-always,dark-mode")
# Optional: Configure aider behavior
export AIDER_FLAGS="yes-always,dark-mode,no-auto-commits"
Note: For AIDER_FLAGS, you can specify flags with or without the leading --. Multiple flags should be comma-separated, and spaces around flags are automatically handled. For example, both "yes-always,dark-mode" and "--yes-always, --dark-mode" are valid.
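That normalization amounts to something like the following sketch, consistent with the note above though not necessarily RA.Aid's exact code:
def parse_aider_flags(raw: str) -> list[str]:
    flags = []
    for part in raw.split(","):
        part = part.strip().lstrip("-")   # drop spaces and any leading --
        if part:
            flags.append(f"--{part}")
    return flags

# Both spellings from the note above normalize identically:
assert parse_aider_flags("yes-always,dark-mode") == ["--yes-always", "--dark-mode"]
assert parse_aider_flags("--yes-always, --dark-mode") == ["--yes-always", "--dark-mode"]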
Important Notes:
- Performance varies between models. The default Claude 3.5 Sonnet model currently provides the best and most reliable results.
- Model configuration is done via the command-line arguments --provider and --model.
- The --model argument is required for all providers except Anthropic (which defaults to claude-3-5-sonnet-20241022).
RA.Aid implements a three-stage architecture for handling development and research tasks (a minimal pipeline sketch follows the list):
- Research Stage:
  - Gathers information and context
  - Analyzes requirements
  - Identifies key components and dependencies
- Planning Stage:
  - Develops detailed implementation plans
  - Breaks down tasks into manageable steps
  - Identifies potential challenges and solutions
- Implementation Stage:
  - Executes planned tasks
  - Generates code or documentation
  - Performs necessary system operations
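In code, the flow reduces to a simple pipeline; the stubs below are hypothetical stand-ins, since each stage in RA.Aid is a dedicated agent with its own toolset:
def research(task: str) -> str:
    return f"context for: {task}"         # stand-in: codebase + web research

def plan_steps(task: str, context: str) -> list[str]:
    return [f"{task}: step 1", f"{task}: step 2"]   # stand-in: concrete plan

def implement(step: str) -> None:
    print("executing:", step)             # stand-in: edits, shell commands

def run_task(task: str) -> None:
    context = research(task)
    for step in plan_steps(task, context):
        implement(step)

run_task("add connection pooling")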
The package is organized into specialized modules:
- Console Module (console/): Handles console output formatting and user interaction
- Processing Module (proc/): Manages interactive processing and workflow control
- Text Module (text/): Provides text processing and manipulation utilities
- Tools Module (tools/): Contains various utility tools for file operations, search, and more
Core dependencies:
- langchain-anthropic: LangChain integration with Anthropic's Claude
- tavily-python: Tavily API client for web research
- langgraph: Graph-based workflow management
- rich>=13.0.0: Terminal formatting and output
- GitPython==3.1.41: Git repository management
- fuzzywuzzy==0.18.0: Fuzzy string matching
- python-Levenshtein==0.23.0: Fast string matching
- pathspec>=0.11.0: Path specification utilities
Development dependencies:
- pytest>=7.0.0: Testing framework
- pytest-timeout>=2.2.0: Test timeout management
- Clone the repository:
git clone https://github.com/ai-christianson/ra-aid.git
cd ra-aid
- Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
- Install development dependencies:
pip install -r requirements-dev.txt
- Run tests:
python -m pytest
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch:
git checkout -b feature/your-feature-name
- Make your changes and commit:
git commit -m 'Add some feature'
- Push to your fork:
git push origin feature/your-feature-name
- Open a Pull Request
- Follow PEP 8 style guidelines
- Add tests for new features
- Update documentation as needed
- Keep commits focused and commit messages clear
- Ensure all tests pass before submitting PR
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Copyright (c) 2024 AI Christianson
- Issues: Please report bugs and feature requests on our Issue Tracker
- Repository: https://github.com/ai-christianson/ra-aid
- Documentation: https://github.com/ai-christianson/ra-aid#readme
Alternative AI tools for RA.Aid
Similar Open Source Tools
shell-ai
Shell-AI (`shai`) is a CLI utility that enables users to input commands in natural language and receive single-line command suggestions. It leverages natural language understanding and interactive CLI tools to enhance command line interactions. Users can describe tasks in plain English and receive corresponding command suggestions, making it easier to execute commands efficiently. Shell-AI supports cross-platform usage and is compatible with Azure OpenAI deployments, offering a user-friendly and efficient way to interact with the command line.
code2prompt
Code2Prompt is a powerful command-line tool that generates comprehensive prompts from codebases, designed to streamline interactions between developers and Large Language Models (LLMs) for code analysis, documentation, and improvement tasks. It bridges the gap between codebases and LLMs by converting projects into AI-friendly prompts, enabling users to leverage AI for various software development tasks. The tool offers features like holistic codebase representation, intelligent source tree generation, customizable prompt templates, smart token management, Gitignore integration, flexible file handling, clipboard-ready output, multiple output options, and enhanced code readability.
Deep-Live-Cam
Deep-Live-Cam is a software tool designed to assist artists in tasks such as animating custom characters or using characters as models for clothing. The tool includes built-in checks to prevent unethical applications, such as working on inappropriate media. Users are expected to use the tool responsibly and adhere to local laws, especially when using real faces for deepfake content. The tool supports both CPU and GPU acceleration for faster processing and provides a user-friendly GUI for swapping faces in images or videos.
xlang
XLang™ is a cutting-edge language designed for AI and IoT applications, offering exceptional dynamic and high-performance capabilities. It excels in distributed computing and seamless integration with popular languages like C++, Python, and JavaScript. Notably efficient, running 3 to 5 times faster than Python in AI and deep learning contexts. Features optimized tensor computing architecture for constructing neural networks through tensor expressions. Automates tensor data flow graph generation and compilation for specific targets, enhancing GPU performance by 6 to 10 times in CUDA environments.
distilabel
Distilabel is a framework for synthetic data and AI feedback for AI engineers that require high-quality outputs, full data ownership, and overall efficiency. It helps you synthesize data and provide AI feedback to improve the quality of your AI models. With Distilabel, you can: * **Synthesize data:** Generate synthetic data to train your AI models. This can help you to overcome the challenges of data scarcity and bias. * **Provide AI feedback:** Get feedback from AI models on your data. This can help you to identify errors and improve the quality of your data. * **Improve your AI output quality:** By using Distilabel to synthesize data and provide AI feedback, you can improve the quality of your AI models and get better results.
crewAI-tools
This repository provides a guide for setting up tools for crewAI agents to enhance functionality. It offers steps to equip agents with ready-to-use tools and create custom ones. Tools are expected to return strings for generating responses. Users can create tools by subclassing BaseTool or using the tool decorator. Contributions are welcome to enrich the toolset, and guidelines are provided for contributing. The development setup includes installing dependencies, activating virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. The goal is to empower AI solutions through advanced tooling.
backend.ai
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs. It allocates and isolates the underlying computing resources for multi-tenant computation sessions on-demand or in batches with customizable job schedulers with its own orchestrator. All its functions are exposed as REST/GraphQL/WebSocket APIs.
openai-kotlin
OpenAI Kotlin API client is a Kotlin client for OpenAI's API with multiplatform and coroutines capabilities. It allows users to interact with OpenAI's API using Kotlin programming language. The client supports various features such as models, chat, images, embeddings, files, fine-tuning, moderations, audio, assistants, threads, messages, and runs. It also provides guides on getting started, chat & function call, file source guide, and assistants. Sample apps are available for reference, and troubleshooting guides are provided for common issues. The project is open-source and licensed under the MIT license, allowing contributions from the community.
upgini
Upgini is an intelligent data search engine with a Python library that helps users find and add relevant features to their ML pipeline from various public, community, and premium external data sources. It automates the optimization of connected data sources by generating an optimal set of machine learning features using large language models, GraphNNs, and recurrent neural networks. The tool aims to simplify feature search and enrichment for external data to make it a standard approach in machine learning pipelines. It democratizes access to data sources for the data science community.
llm-vscode
llm-vscode is an extension designed for all things LLM, utilizing llm-ls as its backend. It offers features such as code completion with 'ghost-text' suggestions, the ability to choose models for code generation via HTTP requests, ensuring prompt size fits within the context window, and code attribution checks. Users can configure the backend, suggestion behavior, keybindings, llm-ls settings, and tokenization options. Additionally, the extension supports testing models like Code Llama 13B, Phind/Phind-CodeLlama-34B-v2, and WizardLM/WizardCoder-Python-34B-V1.0. Development involves cloning llm-ls, building it, and setting up the llm-vscode extension for use.
ChatGPT
The ChatGPT API Free Reverse Proxy provides free self-hosted API access to ChatGPT (`gpt-3.5-turbo`) with OpenAI's familiar structure, eliminating the need for code changes. It offers streaming response, API endpoint compatibility, and complimentary access without an API key. Installation options include Docker, PC/Server, and Termux on Android devices. The API can be accessed through a self-hosted local server or a pre-hosted API with an API key obtained from the Discord server. Usage examples are provided for Python and Node.js, and the project is licensed under AGPL-3.0.
rag-gpt
RAG-GPT is a tool that allows users to quickly launch an intelligent customer service system with Flask, LLM, and RAG. It includes frontend, backend, and admin console components. The tool supports cloud-based and local LLMs, enables deployment of conversational service robots in minutes, integrates diverse knowledge bases, offers flexible configuration options, and features an attractive user interface.
generative-fusion-decoding
Generative Fusion Decoding (GFD) is a novel shallow fusion framework that integrates Large Language Models (LLMs) into multi-modal text recognition systems such as automatic speech recognition (ASR) and optical character recognition (OCR). GFD operates across mismatched token spaces of different models by mapping text token space to byte token space, enabling seamless fusion during the decoding process. It simplifies the complexity of aligning different model sample spaces, allows LLMs to correct errors in tandem with the recognition model, increases robustness in long-form speech recognition, and enables fusing recognition models deficient in Chinese text recognition with LLMs extensively trained on Chinese. GFD significantly improves performance in ASR and OCR tasks, offering a unified solution for leveraging existing pre-trained models through step-by-step fusion.
LLMBox
LLMBox is a comprehensive library designed for implementing Large Language Models (LLMs) with a focus on a unified training pipeline and comprehensive model evaluation. It serves as a one-stop solution for training and utilizing LLMs, offering flexibility and efficiency in both training and utilization stages. The library supports diverse training strategies, comprehensive datasets, tokenizer vocabulary merging, data construction strategies, parameter efficient fine-tuning, and efficient training methods. For utilization, LLMBox provides comprehensive evaluation on various datasets, in-context learning strategies, chain-of-thought evaluation, evaluation methods, prefix caching for faster inference, support for specific LLM models like vLLM and Flash Attention, and quantization options. The tool is suitable for researchers and developers working with LLMs for natural language processing tasks.
llm-compressor
llm-compressor is an easy-to-use library for optimizing models for deployment with vllm. It provides a comprehensive set of quantization algorithms, seamless integration with Hugging Face models and repositories, and supports mixed precision, activation quantization, and sparsity. Supported algorithms include PTQ, GPTQ, SmoothQuant, and SparseGPT. Installation can be done via git clone and local pip install. Compression can be easily applied by selecting an algorithm and calling the oneshot API. The library also offers end-to-end examples for model compression. Contributions to the code, examples, integrations, and documentation are appreciated.
For similar tasks
gitingest
GitIngest is a tool that allows users to turn any Git repository into a prompt-friendly text ingest for LLMs. It provides easy code context by generating a text digest from a git repository URL or directory. The tool offers smart formatting for optimized output format for LLM prompts and provides statistics about file and directory structure, size of the extract, and token count. GitIngest can be used as a CLI tool on Linux and as a Python package for code integration. The tool is built using Tailwind CSS for frontend, FastAPI for backend framework, tiktoken for token estimation, and apianalytics.dev for simple analytics. Users can self-host GitIngest by building the Docker image and running the container. Contributions to the project are welcome, and the tool aims to be beginner-friendly for first-time contributors with a simple Python and HTML codebase.
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
code2prompt
code2prompt is a command-line tool that converts your codebase into a single LLM prompt with a source tree, prompt templating, and token counting. It automates generating LLM prompts from codebases of any size, customizing prompt generation with Handlebars templates, respecting .gitignore, filtering and excluding files using glob patterns, displaying token count, including Git diff output, copying prompt to clipboard, saving prompt to an output file, excluding files and folders, adding line numbers to source code blocks, and more. It helps streamline the process of creating LLM prompts for code analysis, generation, and other tasks.
ClashRoyaleBuildABot
Clash Royale Build-A-Bot is a project that allows users to build their own bot to play Clash Royale. It provides an advanced state generator that accurately returns detailed information using cutting-edge technologies. The project includes tutorials for setting up the environment, building a basic bot, and understanding state generation. It also offers updates such as replacing YOLOv5 with YOLOv8 unit model and enhancing performance features like placement and elixir management. The future roadmap includes plans to label more images of diverse cards, add a tracking layer for unit predictions, publish tutorials on Q-learning and imitation learning, release the YOLOv5 training notebook, implement chest opening and card upgrading features, and create a leaderboard for the best bots developed with this repository.
moatless-tools
Moatless Tools is a hobby project focused on experimenting with using Large Language Models (LLMs) to edit code in large existing codebases. The project aims to build tools that insert the right context into prompts and handle responses effectively. It utilizes an agentic loop functioning as a finite state machine to transition between states like Search, Identify, PlanToCode, ClarifyChange, and EditCode for code editing tasks.
sourcegraph
Sourcegraph is a code search and navigation tool that helps developers read, write, and fix code in large, complex codebases. It provides features such as code search across all repositories and branches, code intelligence for navigation and refactoring, and the ability to fix and refactor code across multiple repositories at once.
continue
Continue is an open-source autopilot for VS Code and JetBrains that allows you to code with any LLM. With Continue, you can ask coding questions, edit code in natural language, generate files from scratch, and more. Continue is easy to use and can help you save time and improve your coding skills.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.