
forge: AI-enabled shell (Stars: 88)

README:
Forge: AI-Enhanced Terminal Development Environment
Forge is a comprehensive coding agent that integrates AI capabilities with your development environment, offering sophisticated assistance while maintaining the efficiency of your existing workflow.
- Advanced AI coding assistant with comprehensive understanding, planning, and execution of complex development tasks
- Lightning-fast performance with sub-50ms startup times
- Seamless integration with existing Unix tools and workflows
- Context-aware assistance that understands your development environment and workflows
- Natural language interface to powerful system operations
- Enhanced security features with optional restricted shell mode
- Multi-agent architecture that orchestrates specialized AI agents to solve complex problems collaboratively
- Powered by Claude 3.7 Sonnet for state-of-the-art AI capabilities
Table of Contents
- Installation
- Get Started
- Features
- Provider Configuration
- Custom Workflows and Multi-Agent Systems
- Why Shell?
- Community
- Support Us
Install Forge globally using npm:
# Install Forge globally using npm
npm install -g @antinomyhq/forge
# Or run directly without installation using npx
npx @antinomyhq/forge
This method works on Windows, macOS, and Linux, providing a consistent installation experience across all platforms.
- Create a `.env` file in your home directory with your API credentials:

  # Your API key for accessing AI models (see Environment Configuration section)
  OPENROUTER_API_KEY=<Enter your Open Router Key>
  # Optional: Set a custom URL for OpenAI-compatible providers
  OPENAI_URL=https://custom-openai-provider.com/v1
  # Optional: Set a custom URL for Anthropic
  ANTHROPIC_URL=https://custom-anthropic-provider.com/v1

  You can get a key at Open Router.
- Launch Code Forge: Type `@` and press `[tab]` to tag files. You can also use and define custom slash commands.
Code Forge functions as a comprehensive development assistant with capabilities to:
- Write, refactor, and optimize code based on specifications
- Debug complex issues through systematic error analysis
- Generate test suites for existing codebases
- Document code and generate technical specifications
- Propose architectural improvements and optimizations
Transform your command-line experience with natural language interaction while maintaining the power and flexibility of traditional shell commands.
Code-Forge prioritizes security by providing a restricted shell mode (rbash) that limits potentially dangerous operations:
- Flexible Security Options: Choose between standard and restricted modes based on your needs
- Restricted Mode: Enable with the `-r` flag to prevent potentially harmful operations
- Standard Mode: Uses the regular shell by default (bash on Unix/macOS, cmd on Windows)
- Security Controls: Restricted mode prevents:
  - Changing directories
  - Setting/modifying environment variables
  - Executing commands with absolute paths
  - Modifying shell options
Example:
# Standard mode (default)
forge
# Restricted secure mode
forge -r
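To see what these restrictions look like in practice, you can exercise them in a plain restricted bash session (`bash -r`, which is what rbash is). This is a sketch of the underlying shell mechanism, not a Forge-specific command:

```shell
# A restricted shell refuses each of these operations with a non-zero exit:
bash -r -c 'cd /tmp' 2>/dev/null          || echo "cd blocked"
bash -r -c '/bin/echo hi' 2>/dev/null     || echo "absolute paths blocked"
bash -r -c 'PATH=/usr/local' 2>/dev/null  || echo "PATH changes blocked"
```

Forge's `-r` flag runs the agent's commands behind these same restrictions.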
Additional security features include:
- Direct API connection to Open Router without intermediate servers
- Local terminal operation for maximum control and data privacy
Forge offers several built-in commands to enhance your interaction:
- `/new` - Start a new task when you've completed your current one
- `/info` - View environment summary, logs folder location, and command history
- `/models` - List all available AI models with capabilities and context limits
- `/dump` - Save the current conversation in JSON format to a file for reference
- `/act` - Switch to ACT mode (default), allowing Forge to execute commands and implement changes
- `/plan` - Switch to PLAN mode, where Forge analyzes and plans but doesn't modify files
Boost your productivity with intelligent command completion:
- Type `@` and press Tab for contextual file/path completion
- Use Right Arrow to complete previously executed commands
- Access command history with Up Arrow
- Quick history search with Ctrl+R
Easily incorporate images into your conversations:
- Use the `@` special character to tag and upload images directly in your messages
- Works with both relative and absolute paths:
  - Relative path: `@screenshots/bug.png` to include an image from a subfolder
  - Absolute path: `@/Users/username/Documents/diagrams/architecture.png`
- Perfect for sharing screenshots, diagrams, or any visual context relevant to your development tasks
Enhance your interactive shell experience with WYSIWYG (What You See Is What You Get) integration. Forge visualizes each command it executes, complete with colorful formatting, so you see command output just as if you had typed it directly into your terminal.
Stay in control of your shell environment with intuitive command handling:
- Cancel with `CTRL+C`: Gracefully interrupt ongoing operations when a process no longer needs to run
- Exit with `CTRL+D`: Quickly end the shell session when you're done
Forge operates in two distinct modes to provide flexible assistance based on your needs:
In ACT mode, which is the default when you start Forge, the assistant is empowered to directly implement changes to your codebase and execute commands:
- Full Execution: Forge can modify files, create new ones, and execute shell commands
- Implementation: Directly implements the solutions it proposes
- Verification: Performs verification steps to ensure changes work as intended
- Best For: When you want Forge to handle implementation details and fix issues directly
Example:
# Switch to ACT mode within a Forge session
/act
In PLAN mode, Forge analyzes and plans but doesn't modify your codebase:
- Read-Only Operations: Can only read files and run non-destructive commands
- Detailed Analysis: Thoroughly examines code, identifies issues, and proposes solutions
- Structured Planning: Provides step-by-step action plans for implementing changes
- Best For: When you want to understand what changes are needed before implementing them yourself
Example:
# Switch to PLAN mode within a Forge session
/plan
You can easily switch between modes during a session using the `/act` and `/plan` commands. PLAN mode is especially useful for reviewing potential changes before they're implemented, while ACT mode streamlines the development process by handling implementation details for you.
Forge generates detailed JSON-formatted logs that help with troubleshooting and understanding the application's behavior. These logs provide valuable insights into system operations and API interactions.
Log Location and Access
Logs are stored in your application support directory with date-based filenames. The typical path looks like:
/Users/username/Library/Application Support/forge/logs/forge.log.YYYY-MM-DD
You can easily locate log files using the built-in `/info` command, which displays system information including the exact path to your log files.
Viewing and Filtering Logs
To view logs in real-time with automatic updates, use the `tail` command:
tail -f "/Users/tushar/Library/Application Support/forge/logs/forge.log.2025-03-07"
Formatted Log Viewing with jq
Since Forge logs are in JSON format, you can pipe them through `jq` for better readability:
tail -f "/Users/tushar/Library/Application Support/forge/logs/forge.log.2025-03-07" | jq
This displays the logs in a nicely color-coded structure that's much easier to analyze, helping you quickly identify patterns, errors, or specific behavior during development and debugging.
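Because each log line is a self-contained JSON object, `jq` can filter as well as pretty-print. A small sketch on sample input (the `level` and `message` field names are assumptions about Forge's log schema, not confirmed by the docs; inspect a line from your own log first):

```shell
# Filter a stream of JSON log lines down to errors only.
# NOTE: the "level" and "message" field names are assumed, not confirmed.
printf '%s\n' \
  '{"level":"INFO","message":"request ok"}' \
  '{"level":"ERROR","message":"request failed"}' \
  | jq -r 'select(.level == "ERROR") | .message'
# prints: request failed
```

The same filter applies to a live log: `tail -f "<path from /info>" | jq 'select(.level == "ERROR")'`.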
Forge supports multiple AI providers and allows custom configuration to meet your specific needs.
Forge automatically detects and uses your API keys from environment variables in the following priority order:
- `FORGE_KEY` - Antinomy's provider (OpenAI-compatible)
- `OPENROUTER_API_KEY` - Open Router provider (aggregates multiple models)
- `OPENAI_API_KEY` - Official OpenAI provider
- `ANTHROPIC_API_KEY` - Official Anthropic provider
To use a specific provider, set the corresponding environment variable in your `.env` file.
# Examples of different provider configurations (use only one)
# For Open Router (recommended, provides access to multiple models)
OPENROUTER_API_KEY=your_openrouter_key_here
# For official OpenAI
OPENAI_API_KEY=your_openai_key_here
# For official Anthropic
ANTHROPIC_API_KEY=your_anthropic_key_here
# For Antinomy's provider
FORGE_KEY=your_forge_key_here
For OpenAI-compatible providers (including Open Router), you can customize the API endpoint URL by setting the `OPENAI_URL` environment variable:
# Custom OpenAI-compatible provider
OPENAI_API_KEY=your_api_key_here
OPENAI_URL=https://your-custom-provider.com/v1
# Or with Open Router but custom endpoint
OPENROUTER_API_KEY=your_openrouter_key_here
OPENAI_URL=https://alternative-openrouter-endpoint.com/v1
For Anthropic, you can customize the API endpoint URL by setting the `ANTHROPIC_URL` environment variable:
# Custom Anthropic endpoint
ANTHROPIC_API_KEY=your_anthropic_key_here
ANTHROPIC_URL=https://your-custom-anthropic-endpoint.com/v1
This is particularly useful when:
- Using self-hosted models with OpenAI-compatible APIs
- Connecting to enterprise OpenAI deployments
- Using proxy services or API gateways
- Working with regional API endpoints
For complex tasks, a single agent may not be sufficient. Forge allows you to create custom workflows with multiple specialized agents working together to accomplish sophisticated tasks.
You can configure your own workflows by creating a YAML file and pointing to it with the `-w` flag:
forge -w /path/to/your/workflow.yaml
Forge loads workflow configurations using the following precedence rules:
- Explicit Path: When a path is provided with the `-w` flag, Forge loads that configuration directly without any merging
- Project Configuration: If no explicit path is provided, Forge looks for `forge.yaml` in the current directory
- Default Configuration: An embedded default configuration is always available as a fallback

When a project configuration exists in the current directory, Forge creates a merged configuration where:
- Project settings in `forge.yaml` take precedence over default settings
- Any settings not specified in the project configuration inherit from defaults
This approach allows you to customize only the parts of the configuration you need while inheriting sensible defaults for everything else.
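For example, a project `forge.yaml` that only wants a different model for one agent can state just that, inheriting every other field from the embedded defaults. The agent id and model below are illustrative, not values shipped with Forge:

```yaml
# forge.yaml - override only what differs from the defaults
agents:
  - id: developer                        # hypothetical agent id
    model: anthropic/claude-3.7-sonnet   # every unspecified field is inherited
```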
A workflow consists of agents connected via events. Each agent has specific capabilities and can perform designated tasks.
Agents communicate through events which they can publish and subscribe to:
Built-in Events
- `user_task_init` - Published when a new task is initiated
- `user_task_update` - Published when follow-up instructions are provided by the user
Each agent needs tools to perform tasks, configured in the `tools` field:
Built-in Tools
- `tool_forge_fs_read` - Read from the filesystem
- `tool_forge_fs_create` - Create or overwrite files
- `tool_forge_fs_remove` - Remove files
- `tool_forge_fs_search` - Search for patterns in files
- `tool_forge_fs_list` - List files in a directory
- `tool_forge_fs_info` - Get file metadata
- `tool_forge_process_shell` - Execute shell commands
- `tool_forge_process_think` - Perform internal reasoning
- `tool_forge_net_fetch` - Fetch data from the internet
- `tool_forge_event_dispatch` - Dispatch events to other agents
- `tool_forge_fs_patch` - Patch existing files
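As an illustration, a hypothetical read-only "reviewer" agent would list only the non-destructive tools from the set above (a sketch, not a configuration shipped with Forge):

```yaml
agents:
  - id: reviewer                  # hypothetical agent
    tools:
      - tool_forge_fs_read        # inspect file contents
      - tool_forge_fs_search      # grep-style pattern search
      - tool_forge_fs_list        # enumerate directory contents
      - tool_forge_process_think  # reason without touching the filesystem
```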
- `id` - Unique identifier for the agent
- `model` - AI model to use (from the `/models` list)
- `tools` - List of tools the agent can use
- `subscribe` - Events the agent listens to
- `ephemeral` - If true, the agent is destroyed after task completion
- `custom_rules` - (Optional) Clear instructions or guidelines that define the rules for the agent, ensuring it follows the desired processes and complies with any specific conditions or constraints
- `tool_supported` - (Optional) Boolean flag that determines whether tools defined in the agent configuration are actually made available to the LLM. When set to `false`, tools are listed in the configuration but not included in AI model requests, causing the agent to format tool calls in XML rather than in the model's native format. Default: `true`.
- `system_prompt` - (Optional) Instructions for how the agent should behave. While optional, clear instructions are recommended for best results.
- `user_prompt` - (Optional) Format for user inputs. If not provided, the raw event value is used.
- `temperature` - (Optional) Controls randomness in model output. Lower values (0.0-0.3) produce more deterministic, focused responses, while higher values (0.7-2.0) generate more creative, diverse outputs. Valid range is 0.0 to 2.0. If not specified, the default depends on the provider being used.
Example Agent Configuration:
agents:
  - id: software-engineer
    model: gpt-4
    custom_rules: always ensure you compile and run tests before presenting to user
    temperature: 0.3
    system_prompt: |
      You are a software engineer...
Forge provides templates to simplify system prompt creation:
- `system-prompt-engineer.hbs` - Template for engineering tasks
- `system-prompt-title-generator.hbs` - Template for generating descriptive titles
- `system-prompt-advocate.hbs` - Template for user advocacy and explanation
- `partial-tool-information.hbs` - Tool documentation for agents
- `partial-tool-examples.hbs` - Usage examples for tools
Use these templates with the syntax: `{{> name-of-the-template.hbs }}`
Forge allows you to define custom commands in your workflow configuration. These commands can be executed within the Forge CLI using the `/command_name` syntax.
Configuration Options:
- `name` - The name of the command (used as `/name` in the CLI)
- `description` - A description of what the command does
- `value` - (Optional) A default prompt value that will be used if no arguments are provided when executing the command
Example Custom Command Configuration:
commands:
  - name: commit
    description: Commit changes with a standard prefix
    value: |
      Understand the diff produced and commit using the 'conventional commit' standard
  - name: branch
    description: Create and checkout a new branch
  - name: pull-request
    description: Create a pull request with standard template
    value: |
      Understand the diff with respect to `main` and create a pull-request.
      Ensure it follows 'conventional commit' standard.
With this configuration, users can type `/commit` in the Forge CLI to execute the commit command with the default instructions for handling commits using the conventional commit standard. If specific instructions are needed, they can be provided as an argument: `/commit Create a detailed commit message for the login feature`. Commands without a default value like `/branch` require an argument to be provided: `/branch feature/new-auth`.
How Custom Commands Work With the Event System:
When a custom command is executed in the Forge CLI, it follows a specific event flow:
1. Command Execution - The user types a command like `/commit feat: add user authentication`
2. Event Dispatch - Forge dispatches an event with:
   - Name: The command name (e.g., `commit`)
   - Value: The provided argument or default value (e.g., `feat: add user authentication`)
3. Agent Subscription - Any agent that has subscribed to this event name receives the event
4. Event Processing - The agent processes the event according to its configuration
For an agent to respond to a custom command, it must explicitly subscribe to the event with the same name as the command in its configuration. The agent can then use conditional logic in its user prompt to handle different types of events appropriately.
Forge provides two main approaches for handling custom command events in agents. Below are examples of both approaches.
In this approach, the event value itself contains complete instructions that are passed directly to the agent:
variables:
  models:
    advanced_model: &advanced_model anthropic/claude-3.7-sonnet
    efficiency_model: &efficiency_model anthropic/claude-3.5-haiku

commands:
  - name: commit
    description: Commit changes with a standard prefix
    value: |
      Understand the diff produced and commit using the 'conventional commit' standard
  - name: pull-request
    description: Create a pull request with standard template
    value: |
      Understand the diff with respect to `main` and create a pull-request.
      Ensure it follows 'conventional commit' standard.

agents:
  - id: developer
    model: *advanced_model
    tools:
      - tool_forge_fs_read
      - tool_forge_fs_create
      - tool_forge_fs_remove
      - tool_forge_fs_patch
      - tool_forge_process_shell
      - tool_forge_net_fetch
      - tool_forge_fs_search
    subscribe:
      - user_task_init
      - user_task_update
      - commit # Subscribe to the commit command event
      - pull-request # Subscribe to the pull-request command event
    ephemeral: false
    tool_supported: true
    system_prompt: "{{> system-prompt-engineer.hbs }}"
    user_prompt: |
      <task>{{event.value}}</task>
In this example, the entire value from the event is passed directly as the task. The agent receives the complete instructions as they were defined in the command value or provided by the user.
In this approach, the event value is used as data within a template that formats different tasks based on the event name:
variables:
  models:
    advanced_model: &advanced_model anthropic/claude-3.7-sonnet
    efficiency_model: &efficiency_model anthropic/claude-3.5-haiku

commands:
  - name: commit
    description: Create a git commit with the provided message
  - name: pull-request
    description: Create a pull request with the provided title

agents:
  - id: developer
    model: *advanced_model
    tools:
      - tool_forge_fs_read
      - tool_forge_fs_create
      - tool_forge_fs_remove
      - tool_forge_fs_patch
      - tool_forge_process_shell
      - tool_forge_net_fetch
      - tool_forge_fs_search
    subscribe:
      - user_task_init
      - user_task_update
      - commit # Subscribe to the commit command event
      - pull-request # Subscribe to the pull-request command event
    ephemeral: false
    tool_supported: true
    system_prompt: "{{> system-prompt-engineer.hbs }}"
    user_prompt: |
      {{#if (eq event.name "commit")}}
      <task>Create a git commit with the following message: {{event.value}}</task>
      {{else if (eq event.name "pull-request")}}
      <task>Create a pull request with the title: {{event.value}}</task>
      {{else}}
      <task>{{event.value}}</task>
      {{/if}}
In this example, the event value is a simpler string that gets embedded within a template. The template uses Handlebars conditional logic (`{{#if (eq event.name "commit")}}`) to format different tasks based on the event name, and the event value is used as data within these task templates.
Approach 1: Event Value as Instructions
- Best for: When the command itself represents a complete task or instruction set
- Flexibility: Users can provide detailed, multi-line instructions via the command
- Implementation: Simpler user_prompt template that just passes the event value through
- Example use case: Complex operations where instructions vary significantly
Approach 2: Event Value as Data
- Best for: When commands follow predictable patterns with varying data points
- Structure: More consistent task formatting across different command types
- Implementation: More complex user_prompt template with conditional logic
- Example use case: Standardized workflows like git operations with varying messages/titles
You can choose the approach that best fits your specific workflow needs. For simple command structures, Approach 2 provides more consistency, while Approach 1 offers greater flexibility for complex operations.
There's a reason the shell has stood the test of time as a development tool and remains a cornerstone of development environments across the globe: it's fast, versatile, and seamlessly integrated with the system. The shell is where developers navigate code, run tests, manage processes, and orchestrate development environments, providing an unmatched level of control and productivity.
Why a shell-based AI assistant like Code-Forge makes sense:
- Rich Tool Ecosystem: The shell gives you immediate access to powerful Unix tools (grep, awk, sed, find) that LLMs already understand deeply. This means the AI can leverage `ripgrep` for code search, `jq` for JSON processing, `git` for version control, and hundreds of other battle-tested tools without reinventing them.
- Context is Everything: Your shell session already has your complete development context - current directory, project structure, environment variables, installed tools, and system state. This rich context makes the AI interactions more accurate and relevant.
- Speed Matters: Unlike IDEs and web UIs, Code Forge's shell is extremely lightweight. This exceptional speed unlocks powerful capabilities that directly enhance your productivity: seamlessly get in and out of workflows, manage multiple feature developments in parallel, effortlessly coordinate across git worktrees, and instantly access AI assistance in any directory.
- Tool Composition: The Unix philosophy teaches us to make tools that compose well. The AI can pipe commands together, combining tools like `find | xargs forge -p | grep "foo"` in ways that solve complex problems elegantly.
Join our vibrant Discord community to connect with other Code-Forge users and contributors, get help with your projects, share ideas, and provide feedback!
Your support drives Code-Forge's continued evolution! By starring our GitHub repository, you:
- Help others discover this powerful tool
- Motivate our development team
- Enable us to prioritize new features
- Strengthen our open-source community
Recent community feedback has helped us implement features like improved autocomplete, cross-platform optimization, and enhanced security features. Join our growing community of developers who are reshaping the future of AI-powered development!