fraim

A flexible framework for security teams to build and deploy AI-powered workflows that complement their existing security operations.


Fraim is an AI-powered toolkit that helps security engineers find, detect, fix, and flag vulnerabilities throughout the development lifecycle. It includes workflows such as Risk Flagger for identifying risks in code changes, Code Security Analysis for context-aware vulnerability detection, and Infrastructure as Code Analysis for spotting misconfigurations in cloud environments. Fraim can be run as a CLI tool or integrated into GitHub Actions, making it a practical option for security teams and organizations looking to strengthen their security practices with AI.

README:

Fraim - A Security Engineer's AI Toolkit

🔭 Overview

Fraim gives security engineers AI-powered workflows that solve real business needs. The workflows in this project act as companions to a security engineer, helping them find, detect, fix, and flag vulnerabilities across the development lifecycle. You can run Fraim as a CLI or inside GitHub Actions.

🚩 Risk Flagger

Most security teams do not have visibility into the code changes happening day to day, and it is unrealistic to review every change. Risk Flagger solves this by requesting review on a pull request only when a "risk" is identified. These "risks" can be defined to match your specific use cases (e.g., "Flag any changes that modify authentication").

Perfect for:

  • Security teams with no visibility into code changes
  • Teams needing to focus limited security resources on the highest-priority risks
  • Organizations wanting to implement "shift-left" security practices
# Basic risk flagger with built-in risks
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --approver security

# Custom risk considerations inline
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --custom-risk-list-json '{"Database Changes": "All changes to a database should be flagged, similarly any networking changes that might affect the database should be flagged."}' --custom-risk-list-action replace --approver security

# Custom risk considerations
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --diff --base <base_sha> --head <head_sha> --custom-risk-list-filepath ./custom-risks.yaml --approver security
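
The file passed to --custom-risk-list-filepath holds your risk definitions in YAML. The exact schema is covered in the docs; as a rough, hypothetical sketch mirroring the name-to-description mapping used in the JSON example above, it could be created like this:

# Illustrative only: the real custom-risks.yaml schema may differ, see docs.fraim.dev
cat > custom-risks.yaml <<'EOF'
Database Changes: All changes to a database schema, its credentials, or related networking should be flagged.
Authentication Changes: Any change to login, session handling, or token validation should be flagged.
EOF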

NOTE: we recommend using the latest Anthropic or OpenAI models for this workflow.

Risk Flagger Preview

🛡️ Code Security Analysis

Most security teams rely on signature-based scanners and scattered linters that miss context and overwhelm engineers with noise. Code Security Analysis applies LLM-powered, context-aware review to surface real vulnerabilities across languages (e.g. injection, authentication/authorization flaws, insecure cryptography, secret exposure, and unsafe configurations), explaining impact and suggesting fixes. It integrates cleanly into CI via SARIF output and can run on full repos or just diffs to keep PRs secure without slowing delivery.

Perfect for:

  • Security teams needing comprehensive vulnerability coverage
  • Organizations requiring compliance with secure coding standards
  • Teams wanting to catch vulnerabilities before they reach production
# Comprehensive code analysis
fraim run code --location https://github.com/username/repo-name

# Focus on recent changes
fraim run code --location . --diff --base main --head HEAD

🏗️ Infrastructure as Code (IAC) Analysis

Cloud misconfigurations often slip through because policy-as-code checks and scattered linters miss context across modules, environments, and providers. Infrastructure as Code Analysis uses LLM-powered, context-aware review of Terraform, CloudFormation, and Kubernetes manifests to spot risky defaults, excessive permissions, insecure networking and storage, and compliance gaps, explaining impact and proposing safer configurations. It integrates cleanly into CI via SARIF and can run on full repos or just diffs to prevent drift without slowing delivery.

Perfect for:

  • DevOps teams managing cloud infrastructure
  • Organizations with strict compliance requirements
  • Teams implementing Infrastructure as Code practices
  • Security teams overseeing cloud security posture
# Analyze infrastructure configurations
fraim run iac --location https://github.com/username/repo-name

🚀 Getting Started

GitHub Action Quick Start

NOTE: This example assumes you are using an Anthropic-based model.

  1. Set your API key as a Secret in your repo: Settings -> Secrets and Variables -> New Repository Secret -> ANTHROPIC_API_KEY
  2. Define your workflow inside your repo at .github/workflows/<action_name>.yml:

name: AI Security Scan
on:
  pull_request:
    branches: [main]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      actions: read
      security-events: write # Required for uploading SARIF
      pull-requests: write # Required for PR comments and annotations

    steps:
      - name: Run Fraim Security Scan
        uses: fraim-dev/fraim-action@v0
        with:
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          workflows: "code"

CLI Quick Start

Prerequisites

  • Python 3.12+
  • pipx installation tool
  • API Key for your chosen AI provider (Google Gemini, OpenAI, etc.)

Installation

NOTE: These instructions are for Linux-based systems; see the docs for Windows installation instructions.

  1. Install Fraim:
pipx install fraim
  2. Configure your AI provider:

    Google Gemini

    1. Get an API key from Google AI Studio
    2. Export it in your environment:
      export GEMINI_API_KEY=your_api_key_here
      

    OpenAI

    1. Get an API key from OpenAI Platform
    2. Export it in your environment:
      export OPENAI_API_KEY=your_api_key_here
      

Common CLI Arguments

Global Options (apply to all commands)

  • --debug: Enable debug logging for troubleshooting
  • --show-logs SHOW_LOGS: Print logs to standard error output
  • --log-output LOG_OUTPUT: Specify directory for log files
  • --observability langfuse: Enable LLM observability and analytics
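
For example, global options go in front of the run subcommand, as in the observability example later in this README (an illustrative invocation, not taken verbatim from the docs):

# Illustrative only: enable debug logging, keep logs in a local directory, and trace with Langfuse
fraim --debug --log-output ./fraim-logs --observability langfuse run code --location .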

Workflow Options (apply to most workflows)

  • --location LOCATION: Repository URL or local path to analyze
  • --model MODEL: AI model to use (default varies by workflow, e.g., gemini/gemini-2.5-flash)
  • --temperature TEMPERATURE: Model temperature setting (0.0-1.0, default: 0)
  • --chunk-size CHUNK_SIZE: Number of lines per processing chunk
  • --limit LIMIT: Maximum number of files to scan
  • --globs GLOBS: File patterns to include in analysis
  • --max-concurrent-chunks MAX_CONCURRENT_CHUNKS: Control parallelism
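
As an illustration, several of these options can be combined on a single scan (the values below are placeholders, not recommendations):

# Illustrative only: scan Python files in a local repo with an explicit model and smaller chunks
fraim run code --location . --model gemini/gemini-2.5-flash --temperature 0 --chunk-size 200 --globs "*.py" --limit 100 --max-concurrent-chunks 4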

Git Diff Options

  • --diff: Analyze only git diff instead of full repository
  • --head HEAD: Git head commit for diff (default: HEAD)
  • --base BASE: Git base commit for diff (default: empty tree)

Pull Request Integration

  • --pr-url PR_URL: URL of pull request to analyze
  • --approver APPROVER: GitHub username/group to notify
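
For instance, the Risk Flagger commands shown earlier use --approver; a hypothetical invocation pairing it with --pr-url (placeholder URL, and this exact combination is not taken from the docs) might look like:

# Illustrative only: analyze an existing pull request and notify the security group
fraim run risk_flagger --model anthropic/claude-sonnet-4-20250514 --pr-url https://github.com/<org>/<repo>/pull/<number> --approver security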

Observability

Fraim supports optional observability and tracing through Langfuse, which helps track workflow performance, debug issues, and analyze AI model usage.

To enable observability:

  1. Install with observability support:
pipx install 'fraim[langfuse]'
  2. Enable observability during execution:
fraim --observability langfuse run code --location /code

This will trace your workflow execution, LLM calls, and performance metrics in Langfuse for analysis and debugging.

💬 Community & Support

Join our growing community of security professionals using Fraim:

  • Documentation: Visit docs.fraim.dev for comprehensive guides and tutorials
  • Schedule a Demo: Book time with our team - We'd love to help! Schedule a call for anything related to Fraim (debugging, new integrations, customizing workflows, or even just to chat)
  • Slack Community: Join our Slack - Get help, share ideas, and connect with other security-minded people looking to use AI to help their team succeed
  • Issues: Report bugs and request features via GitHub Issues
  • Contributing: See the contributing guide for more information.

🛠️ "Fraim"-work Development

Building Custom Workflows

Fraim makes it easy to create custom security workflows tailored to your organization's specific needs:

Key Framework Components

  • Workflow Engine: Orchestrates AI agents and tools in flexible, composable patterns
  • LLM Integrations: Support for multiple AI providers with seamless switching
  • Tool System: Extensible security analysis tools that can be combined and customized
  • Input Connectors: Git repositories, file systems, APIs, and custom data sources
  • Output Formatters: JSON, SARIF, HTML reports, and custom output formats

Configuration System

Fraim uses a flexible configuration system that allows you to:

  • Customize AI model parameters for optimal performance
  • Configure workflow-specific settings and thresholds
  • Set up custom data sources and input methods
  • Define custom output formats and destinations
  • Manage API keys and authentication

See the fraim/config/ directory for configuration options.

1. Define Input and Output Types

# workflows/<name>/workflow.py
from dataclasses import dataclass
from typing import List

# Contextual, Config, and sarif are provided by the fraim framework
# (their exact import paths are omitted here).

@dataclass
class MyWorkflowInput:
    """Input for the custom workflow."""
    code: Contextual[str]
    config: Config

type MyWorkflowOutput = List[sarif.Result]

2. Create Workflow Class

# workflows/<name>/workflow.py
import os
from typing import List

# PromptTemplate, workflow, Workflow, Config, LiteLLM, PydanticOutputParser,
# LLMStep, and sarif come from the fraim framework (exact import paths omitted here).

# Define file patterns for your workflow
FILE_PATTERNS = [
    '*.config', '*.ini', '*.yaml', '*.yml', '*.json'
]

# Load prompts from YAML files
PROMPTS = PromptTemplate.from_yaml(os.path.join(os.path.dirname(__file__), "my_prompts.yaml"))

@workflow('my_custom_workflow')
class MyCustomWorkflow(Workflow[MyWorkflowInput, MyWorkflowOutput]):
    """Analyzes custom configuration files for security issues"""

    def __init__(self, config: Config, *args, **kwargs):
        super().__init__(config, *args, **kwargs)

        # Construct an LLM instance
        llm = LiteLLM.from_config(config)

        # Construct the analysis step
        parser = PydanticOutputParser(sarif.RunResults)
        self.analysis_step = LLMStep(llm, PROMPTS["system"], PROMPTS["user"], parser)

    async def workflow(self, input: MyWorkflowInput) -> MyWorkflowOutput:
        """Main workflow execution"""

        # 1. Analyze the configuration file
        analysis_results = await self.analysis_step.run({"code": input.code})

        # 2. Filter results by confidence threshold
        filtered_results = self.filter_results_by_confidence(
            analysis_results.results, input.config.confidence
        )

        return filtered_results

    def filter_results_by_confidence(self, results: List[sarif.Result], confidence_threshold: int) -> List[sarif.Result]:
        """Filter results by confidence."""
        return [result for result in results if result.properties.confidence > confidence_threshold]

3. Create Prompt Files

Create my_prompts.yaml in the same directory:

system: |
  You are a configuration security analyzer.

  Your job is to analyze configuration files for security misconfigurations and vulnerabilities.

  <vulnerability_types>
    Valid vulnerability types (use EXACTLY as shown):

    - Hardcoded Credentials
    - Insecure Defaults
    - Excessive Permissions
    - Unencrypted Storage
    - Weak Cryptography
    - Missing Security Headers
    - Debug Mode Enabled
    - Exposed Secrets
    - Insecure Protocols
    - Missing Access Controls
  </vulnerability_types>

  {{ output_format }}

user: |
  Analyze the following configuration file for security issues:

  {{ code }}
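
With the prompts, workflow class, and @workflow registration in place, the custom workflow should be invocable through the same CLI as the built-in ones; a sketch, assuming the name passed to @workflow becomes the run target:

# Run the custom workflow against a local repository
fraim run my_custom_workflow --location .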

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Fraim is built by security teams, for security teams. Help us make AI-powered security accessible to everyone.
