
hound
Language-agnostic AI auditor that autonomously builds and refines adaptive knowledge graphs for deep, iterative code reasoning
Stars: 325

Hound is a security audit automation pipeline for AI-assisted code review that mirrors how expert auditors think, learn, and collaborate. It features graph-driven analysis, sessionized audits, provider-agnostic models, belief system and hypotheses, precise code grounding, and adaptive planning. The system employs a senior/junior auditor pattern where the Scout actively navigates the codebase and annotates knowledge graphs while the Strategist handles high-level planning and vulnerability analysis. Hound is optimized for small-to-medium sized projects like smart contract applications and is language-agnostic.
README:
Autonomous agents for code security auditing
Overview • Configuration • Workflow • Chatbot • Contributing
Hound is a language-agnostic AI auditor that autonomously builds and refines adaptive knowledge graphs for deep, iterative code reasoning.
- Graph-driven analysis – Flexible, agent-designed graphs that can model any aspect of a system (e.g. architecture, access control, value flows, math, etc.)
- Relational graph views – High-level graphs support cross-aspect reasoning and precise retrieval of the code snippets that back each subsystem investigated.
- Belief & hypothesis system – Observations, assumptions, and hypotheses evolve with confidence scores, enabling long-horizon reasoning and cumulative audits.
- Dynamic model switching – Lightweight "scout" models handle exploration; heavyweight "strategist" models provide deep reasoning, mirroring expert workflows while keeping costs efficient.
- Strategic audit planning – Balances broad code coverage with focused investigation of the most promising aspects, ensuring both depth and efficiency.
Codebase size considerations: While Hound can analyze any codebase, it's optimized for small-to-medium sized projects like typical smart contract applications. Large enterprise codebases may exceed context limits and require selective analysis of specific subsystems.
Install dependencies:
pip install -r requirements.txt
Set up your OpenAI API key and optional base URL:
export OPENAI_API_KEY=your_key_here
# Optional: override the base URL (defaults to https://api.openai.com)
export OPENAI_BASE_URL=https://api.openai.com
Copy the example configuration and edit as needed:
cp hound/config.yaml.example hound/config.yaml
# then edit hound/config.yaml to select providers/models and options
Notes:
- Defaults work out-of-the-box; you can override many options via CLI flags.
- Keep API keys out of the repo; `API_KEYS.txt` is gitignored and can be sourced.
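Since this README only references `OPENAI_API_KEY` and `OPENAI_BASE_URL`, a minimal `API_KEYS.txt` you could `source` might look like this (the exact contents are an assumption for illustration, not a file shipped with Hound):

```shell
# API_KEYS.txt (gitignored; load with `source API_KEYS.txt`)
export OPENAI_API_KEY="sk-..."                   # required
export OPENAI_BASE_URL="https://api.openai.com"  # optional override
```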
Note: Audit quality scales with time and model capability. Use longer runs and advanced models for more complete results.
Projects organize your audits and store all analysis data:
# Create a project from local code
./hound.py project create myaudit /path/to/code
# List all projects
./hound.py project ls
# View project details and coverage
./hound.py project info myaudit
Hound analyzes your codebase and builds aspect‑oriented knowledge graphs that serve as the foundation for all subsequent analysis.
Recommended (one‑liner):
# Auto-generate a default set of graphs (up to 5) and refine
# Strongly recommended: pass a whitelist of files (comma-separated)
./hound.py graph build myaudit --auto \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# View generated graphs
./hound.py graph ls myaudit
Alternative (manual guidance):
# 1) Initialize the baseline SystemArchitecture graph
./hound.py graph build myaudit --init \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# 2) Add a specific graph with your own description (exactly one graph)
./hound.py graph custom myaudit \
"Call graph focusing on function call relationships across modules" \
--iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# (Repeat 'graph custom' for additional targeted graphs as needed)
Operational notes:
- `--auto` always includes the SystemArchitecture graph as the first graph. You do not need to run `--init` in addition to `--auto`.
- If `--init` is used and a `SystemArchitecture` graph already exists, initialization is skipped. Use `--auto` to add more graphs, or remove existing graphs first if you want a clean re‑init.
- When running `--auto` and graphs already exist, Hound asks for confirmation before updating/overwriting graphs (including SystemArchitecture). To clear graphs:
./hound.py graph rm myaudit --all                      # remove all graphs
./hound.py graph rm myaudit --name SystemArchitecture  # remove one graph
- For large repos, you can constrain scope with `--files` (comma‑separated whitelist) alongside either approach.
Whitelists (strongly recommended):
- Always pass a whitelist of input files via `--files`. For the best results, the selected files should fit in the model's available context window; whitelisting keeps the graph builder focused and avoids token overflows.
- If you do not pass `--files`, Hound will consider all files in the repository. On large codebases this triggers sampling and may degrade coverage/quality.
- `--files` expects a comma‑separated list of paths relative to the repo root.
Examples:
# Manual (small projects)
./hound.py graph build myaudit --auto \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# Generate a whitelist automatically (recommended for larger projects)
python whitelist_builder.py \
--input /path/to/repo \
--limit-loc 20000 \
--output whitelists/myaudit
# Use the generated list (newline-separated) as a comma list for --files
./hound.py graph build myaudit --auto \
--files "$(tr '\n' ',' < whitelists/myaudit | sed 's/,$//')"
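If you want to sanity-check the newline-to-comma conversion above, this small demo (with made-up file names) shows what the `tr`/`sed` pipeline produces:

```shell
# Hypothetical whitelist with two entries (file names are illustrative)
printf 'src/A.sol\nsrc/B.sol\n' > /tmp/demo_whitelist
# tr joins lines with commas; sed strips the trailing comma
tr '\n' ',' < /tmp/demo_whitelist | sed 's/,$//'
# prints: src/A.sol,src/B.sol
```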
Refine existing graphs (resume building): you can resume/refine an existing graph without creating new ones using `graph refine`. This skips discovery and saves updates incrementally.
# Refine a single graph by name (internal or display)
./hound.py graph refine myaudit SystemArchitecture \
--iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
# Refine all existing graphs
./hound.py graph refine myaudit --all --iterations 2 \
--files "src/A.sol,src/B.sol,src/utils/Lib.sol"
The audit phase uses the senior/junior pattern with planning and investigation:
# 1. Sweep all components for shallow bugs, build code understanding
./hound.py agent audit myaudit --mode sweep
# 2. Intuition-guided search to find complex bugs
./hound.py agent audit myaudit --mode intuition --time-limit 300
# Start with telemetry (connect the Chatbot UI to steer)
./hound.py agent audit myaudit --mode intuition --time-limit 30 --telemetry
# Attach to an existing session and continue where you left off
./hound.py agent audit myaudit --mode intuition --session <session_id>
Tip: When started with `--telemetry`, you can connect the Chatbot UI and steer the audit interactively (see the Chatbot section below).
Audit Modes:
Hound supports two distinct audit modes:
- Sweep Mode (`--mode sweep`): Phase 1 – systematic component analysis
  - Performs a broad, systematic analysis of every major component
  - Examines each contract, module, and class for vulnerabilities
  - Builds comprehensive graph annotations for later analysis
  - Terminates when all accessible components have been analyzed
  - Best for: initial vulnerability discovery and building code understanding
- Intuition Mode (`--mode intuition`): Phase 2 – deep, targeted exploration
  - Uses intuition-guided search to find high-impact vulnerabilities
  - Prioritizes monetary flows, value transfers, and theft opportunities
  - Investigates contradictions between assumptions and observations
  - Focuses on authentication bypasses and state corruption
  - Best for: finding complex, cross-component vulnerabilities
Key parameters:
- `--time-limit`: Stop after N minutes (useful for incremental audits)
- `--plan-n`: Number of investigations per planning batch
- `--session`: Resume a specific session (continues coverage/planning)
- `--debug`: Save all LLM interactions to `.hound_debug/`
Normally, you want to run sweep mode first followed by intuition mode. The quality and duration depend heavily on the models used. Faster models provide quick results but may miss subtle issues, while advanced reasoning models find deeper vulnerabilities but require more time.
Check audit progress and findings at any time during the audit. If you started the agent with `--telemetry`, you can also monitor and steer via the Chatbot UI:
- Open http://127.0.0.1:5280 and attach to the running instance
- Watch live Activity, Plan, and Findings
- Use the Steer form to guide the next investigations
# View current hypotheses (findings)
./hound.py project ls-hypotheses myaudit
# See detailed hypothesis information
./hound.py project hypotheses myaudit --details
# List hypotheses with confidence ratings
./hound.py project hypotheses myaudit
# Check coverage statistics
./hound.py project coverage myaudit
# View session details
./hound.py project sessions myaudit --list
Understanding hypotheses: Each hypothesis represents a potential vulnerability with:
- Confidence score: 0.0-1.0 indicating likelihood of being a real issue
- Status: `proposed` (initial), `investigating`, `confirmed`, `rejected`
- Severity: critical, high, medium, low
- Type: reentrancy, access control, logic error, etc.
- Annotations: Exact code locations and evidence
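To make the fields concrete, here is a purely illustrative record and filter; the JSON shape, file path, and code location are assumptions for the example, not Hound's actual hypothesis store format:

```shell
# Illustrative only: a hypothetical hypothesis record (shape assumed,
# not Hound's real schema)
cat > /tmp/hyp_demo.json <<'EOF'
{"id": "hyp_12345", "confidence": 0.85, "status": "proposed",
 "severity": "high", "type": "reentrancy",
 "annotations": ["src/Vault.sol:withdraw"]}
EOF
# e.g. surface only records above a confidence threshold
python3 -c '
import json
h = json.load(open("/tmp/hyp_demo.json"))
if h["confidence"] >= 0.7:
    print(h["id"], h["severity"], h["type"])
'
# prints: hyp_12345 high reentrancy
```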
For specific concerns, run focused investigations without full planning:
# Investigate a specific concern
./hound.py agent investigate "Check for reentrancy in withdraw function" myaudit
# Quick investigation with fewer iterations
./hound.py agent investigate "Analyze access control in admin functions" myaudit \
--iterations 5
# Use specific models for investigation
./hound.py agent investigate "Review emergency functions" myaudit \
--model gpt-4o \
--strategist-model gpt-5
When to use targeted investigations:
- Following up on specific concerns after initial audit
- Testing a hypothesis about a particular vulnerability
- Quick checks before full audit
- Investigating areas not covered by automatic planning
Note: These investigations still update the hypothesis store and coverage tracking.
A reasoning model reviews all hypotheses and updates their status based on evidence:
# Run finalization with quality review
./hound.py finalize myaudit
# Re-run all pending (including below threshold)
./hound.py finalize myaudit --include-below-threshold
# Customize confidence threshold
./hound.py finalize myaudit -t 0.7 --model gpt-4o
# Include all findings (not just confirmed)
# (Use on the report command, not finalize)
./hound.py report myaudit --include-all
What happens during finalization:
- A reasoning model (default: GPT-5) reviews each hypothesis
- Evaluates the evidence and code context
- Updates status to `confirmed` or `rejected` based on analysis
- Adjusts confidence scores based on evidence strength
- Prepares findings for report generation
Important: By default, only `confirmed` findings appear in the final report. Use `--include-all` to include all hypotheses regardless of status.
Create and manage proof-of-concept exploits for confirmed vulnerabilities:
# Generate PoC prompts for confirmed vulnerabilities
./hound.py poc make-prompt myaudit
# Generate for a specific hypothesis
./hound.py poc make-prompt myaudit --hypothesis hyp_12345
# Import existing PoC files
./hound.py poc import myaudit hyp_12345 exploit.sol test.js \
--description "Demonstrates reentrancy exploit"
# List all imported PoCs
./hound.py poc list myaudit
The PoC workflow:
- make-prompt: Generates detailed prompts for coding agents (like Claude Code)
  - Includes vulnerable file paths (project-relative)
  - Specifies exact functions to target
  - Provides clear exploit requirements
  - Saves prompts to the `poc_prompts/` directory
- import: Links PoC files to specific vulnerabilities
  - Files stored in `poc/[hypothesis-id]/`
  - Metadata tracks descriptions and timestamps
  - Multiple files per vulnerability supported
- Automatic inclusion: Imported PoCs appear in reports with syntax highlighting
Produce comprehensive audit reports with all findings and PoCs:
# Generate HTML report (includes imported PoCs)
./hound.py report myaudit
# Include all hypotheses, not just confirmed
./hound.py report myaudit --include-all
# Export report to specific location
./hound.py report myaudit --output /path/to/report.html
Report contents:
- Executive summary: High-level overview and risk assessment
- System architecture: Understanding of the codebase structure
- Findings: Detailed vulnerability descriptions (only `confirmed` by default)
- Code snippets: Relevant vulnerable code with line numbers
- Proof-of-concepts: Any imported PoCs with syntax highlighting
- Severity distribution: Visual breakdown of finding severities
- Recommendations: Suggested fixes and improvements
Note: The report uses a professional dark theme and includes all imported PoCs automatically.
Each audit run operates under a session with comprehensive tracking and per-session planning:
- Planning is stored in a per-session PlanStore with statuses: `planned`, `in_progress`, `done`, `dropped`, `superseded`.
- Existing `planned` items are executed first; the Strategist only tops up new items to reach your `--plan-n`.
- On resume, any stale `in_progress` items are reset to `planned`; completed items remain `done` and are not duplicated.
- Completed investigations, coverage, and hypotheses are fed back into planning to avoid repeats and guide prioritization.
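The resume rule described above (stale `in_progress` items become `planned`, `done` items are untouched) can be sketched as a simple status rewrite; this is an illustration only, not how the PlanStore is actually stored:

```shell
# Sketch of the resume rule: in_progress -> planned; everything else unchanged
printf '%s\n' in_progress done planned dropped \
  | sed 's/^in_progress$/planned/'
# prints (one per line): planned done planned dropped
```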
# List and inspect sessions
./hound.py project sessions myaudit --list
./hound.py project sessions myaudit <session_id>
# Show planned investigations for a session (Strategist PlanStore)
./hound.py project plan myaudit <session_id>
# Session data includes:
# - Coverage statistics (nodes/cards visited)
# - Investigation history
# - Token usage by model
# - Planning decisions
# - Hypothesis formation
Sessions are stored in `~/.hound/projects/myaudit/sessions/` and contain:
- `session_id`: Unique identifier
- `coverage`: Visited nodes and analyzed code
- `investigations`: All executed investigations
- `planning_history`: Strategic decisions made
- `token_usage`: Detailed API usage metrics
Resume/attach to an existing session during an audit run by passing the session ID:
# Attach to a specific session and continue auditing under it
./hound.py agent audit myaudit --session <session_id>
When you attach to a session, its status is set to `active` while the audit runs and finalized on completion (`completed`, or `interrupted` if a time limit was hit). Any `in_progress` plan items are reset to `planned` so you can continue cleanly.
# Start an audit (creates a session automatically)
./hound.py agent audit myaudit
# List sessions to get the session id
./hound.py project sessions myaudit --list
# Show planned investigations for that session
./hound.py project plan myaudit <session_id>
# Attach later and continue planning/execution under the same session
./hound.py agent audit myaudit --session <session_id>
Hound ships with a lightweight web UI for steering and monitoring a running audit session. It discovers local runs via a simple telemetry registry and streams status/decisions live.
Prerequisites:
- Set API keys (at least `OPENAI_API_KEY`, optional `OPENAI_BASE_URL` for custom endpoints): `source ../API_KEYS.txt` or export manually
- Install Python deps in this submodule: `pip install -r requirements.txt`
- Start the agent with telemetry enabled
# From the hound/ directory
./hound.py agent audit myaudit --telemetry --debug
# Notes
# - The --telemetry flag exposes a local SSE/control endpoint and registers the run
# - Optional: ensure the registry dir matches the chatbot by setting:
# export HOUND_REGISTRY_DIR="$HOME/.local/state/hound/instances"
- Launch the chatbot server
# From the hound/ directory
python chatbot/run.py
# Optional: customize host/port
HOST=0.0.0.0 PORT=5280 python chatbot/run.py
Open the UI: http://127.0.0.1:5280
- Select the running instance and stream activity
- The input next to “Start” lists detected instances as `project_path | instance_id`.
- Click “Start” to attach; the UI auto‑connects the realtime channel and begins streaming decisions/results.
- The lower panel has tabs:
- Activity: live status/decisions
- Plan: current strategist plan (✓ done, ▶ active, • pending)
- Findings: hypotheses with confidence; you can Confirm/Reject manually
- Steer the audit
- Use the “Steer” form (e.g., “Investigate reentrancy across the whole app next”).
- Steering is queued at `<project>/.hound/steering.jsonl` and consumed exactly once when applied.
- Broad, global instructions may preempt the current investigation and trigger immediate replanning.
Troubleshooting
- No instances in dropdown: ensure you started the agent with `--telemetry`.
- Wrong or stale project shown: clear the input; the UI defaults to the most recent alive instance.
- Registry mismatch: confirm both processes print the same `Using registry dir:` line, or set `HOUND_REGISTRY_DIR` for both.
- Raw API: open `/api/instances` in the browser to inspect entries (includes an `alive` flag and registry path).
Hypotheses are the core findings that accumulate across sessions:
# List hypotheses with confidence scores
./hound.py project hypotheses myaudit
# View with full details
./hound.py project hypotheses myaudit --details
# Update hypothesis status
./hound.py project set-hypothesis-status myaudit hyp_12345 confirmed
# Reset hypotheses (creates backup)
./hound.py project reset-hypotheses myaudit
# Force reset without confirmation
./hound.py project reset-hypotheses myaudit --force
Hypothesis statuses:
- proposed: Initial finding, needs review
- investigating: Under active investigation
- confirmed: Verified vulnerability
- rejected: False positive
- resolved: Fixed in code
Override default models per component:
# Use different models for each role:
#   Scout      -> --platform / --model
#   Strategist -> --strategist-platform / --strategist-model
./hound.py agent audit myaudit \
  --platform openai --model gpt-4o-mini \
  --strategist-platform anthropic --strategist-model claude-3-opus
Capture all LLM interactions for analysis:
# Enable debug logging
./hound.py agent audit myaudit --debug
# Debug logs saved to .hound_debug/
# Includes HTML reports with all prompts and responses
Monitor audit progress and completeness:
# View coverage statistics
./hound.py project coverage myaudit
# Coverage shows:
# - Graph nodes visited vs total
# - Code cards analyzed vs total
# - Percentage completion
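The percentage completion is simply visited-over-total; a quick sketch of the arithmetic with made-up counts:

```shell
# Hypothetical counts: 42 of 120 graph nodes visited
visited=42; total=120
echo "nodes: $visited/$total ($(( 100 * visited / total ))%)"
# prints: nodes: 42/120 (35%)
```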
See CONTRIBUTING.md for development setup and guidelines.
Apache 2.0 with additional terms:
You may use Hound however you want, except selling it as an online service or as an appliance - that requires written permission from the author.
- See LICENSE for details.