
awesome-ai-cybersecurity
Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to provide an organized collection of high-quality resources to help professionals, researchers, and enthusiasts stay updated and advance their knowledge in the field.
AI applications in cybersecurity can be categorized using Gartner's PPDR model:
- Prediction
- Prevention
- Detection
- Response
- Monitoring
Additionally, AI applications can be divided by technical layers:
- Network (network traffic analysis and intrusion detection)
- Endpoint (anti-malware)
- Application (WAF or database firewalls)
- User (UBA)
- Process behavior (anti-fraud)
- DeepExploit - Fully automated penetration testing framework using machine learning. It uses reinforcement learning to improve its attack strategies over time.
- open-appsec - An open-source machine-learning security engine that preemptively and automatically prevents threats against web applications and APIs.
- OpenVAS - An open-source vulnerability scanner and vulnerability management solution. AI can be used to improve the identification and prioritization of vulnerabilities based on their potential impact and likelihood of exploitation.
- SEMA - ToolChain using Symbolic Execution for Malware Analysis. SEMA provides a framework for symbolic execution to extract execution traces and build system call dependency graphs (SCDGs). These graphs are used for malware classification and analysis, enabling the detection of malware based on symbolic execution and machine learning techniques.
- Malware environment for OpenAI Gym - Create an AI that learns, through reinforcement learning, which functionality-preserving transformations to make on a malware sample to bypass machine-learning static-analysis malware detection.
- Snort IDS - An open-source network IDS and IPS capable of real-time traffic analysis and packet logging. Snort can leverage AI for anomaly detection and to enhance its pattern matching algorithms for better intrusion detection.
- PANTHER - PANTHER combines advanced techniques in network protocol verification, integrating the Shadow network simulator with the Ivy formal verification tool. This framework allows for detailed examination of time properties in network protocols and identifies real-world implementation errors. It supports multiple protocols and can simulate advanced persistent threats (APTs) in network protocols.
- OSSEC - An open-source host-based intrusion detection system (HIDS). AI can enhance OSSEC by providing advanced anomaly detection and predictive analysis to identify potential threats before they materialize.
- Zeek - A powerful network analysis framework focused on security monitoring. AI can be integrated to analyze network traffic patterns and detect anomalies indicative of security threats.
- AIEngine - Next-generation interactive/programmable packet inspection engine with IDS functionality. AIEngine uses machine learning to improve packet inspection and anomaly detection, adapting to new threats over time.
- Sophos Intercept X - Advanced endpoint protection combining traditional signature-based detection with AI-powered behavioral analysis to detect and prevent malware and ransomware attacks.
- MARK - The multi-agent ranking framework (MARK) aims to provide all the building blocks required to build large-scale detection and ranking systems. It includes distributed storage suited for BigData applications, a web-based visualization and management interface, a distributed execution framework for detection algorithms, and an easy-to-configure triggering mechanism. This allows data scientists to focus on developing effective detection algorithms.
- Metasploit - A tool for developing and executing exploit code against a remote target machine. AI can be used to automate the selection of exploits and optimize the attack vectors based on target vulnerabilities.
- PentestGPT - Combines large language models with integrated tools to help security teams conduct penetration tests: scanning, exploiting, and analyzing web applications, networks, and cloud environments without requiring deep expertise.
- Cortex - A powerful and flexible observable analysis and active response engine. AI can be used in Cortex to automate the analysis of observables and enhance threat detection capabilities.
- Nmap - A free and open-source network scanner used to discover hosts and services on a computer network. AI can enhance Nmap's capabilities by automating the analysis of scan results and suggesting potential security weaknesses.
- Burp Suite - A suite of web application security testing tools from PortSwigger. Burp Suite can integrate AI to automate vulnerability detection and improve the efficiency of web application security testing.
- Nikto - An open-source web server scanner which performs comprehensive tests against web servers for multiple items. AI can help Nikto by automating the identification of complex vulnerabilities and enhancing detection accuracy.
- MISP - Open source threat intelligence platform for gathering, sharing, storing, and correlating Indicators of Compromise (IoCs). AI can enhance the efficiency of threat detection and response by automating data analysis and correlation.
- Scammer-List - A free, open-source AI-based scam and spam finder with a free API.
- Review - machine learning techniques applied to cybersecurity
- Cybersecurity data science - an overview from machine learning perspective
- Machine learning approaches to IoT security - A systematic literature review
- AI infosec - first strikes, zero-day markets, hardware supply chains, adoption barriers
- AI Safety in a World of Vulnerable Machine Learning Systems
- IBM Cybersecurity Analyst - A professional certificate program that builds job-ready cybersecurity skills; no degree or prior experience required.
- NIST AI RMF - NIST's AI Risk Management Framework for managing risks associated with AI systems. It provides guidance on implementing AI securely, focusing on risk assessment, mitigation, and governance.
- Microsoft AI Security - Case studies on securing AI applications in SaaS environments. These case studies demonstrate how AI can be used to enhance security and protect against evolving threats.
- Google AI Security - Insights and case studies from Google on how to secure AI applications in the cloud.
- IBM Watson - Tools and solutions for securing AI applications. Watson uses AI to analyze vast amounts of security data and identify potential threats, providing actionable insights for cybersecurity professionals.
- Azure Security Center - Comprehensive security management system for cloud environments. AI and machine learning are used to identify threats and vulnerabilities in real-time.
Machine learning in network security focuses on Network Traffic Analytics (NTA) to analyze traffic and detect anomalies and attacks.
Examples of ML techniques:
- Regression to predict network packet parameters and compare them with normal values.
- Classification to identify different classes of network attacks.
- Clustering for forensic analysis.
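The anomaly-detection idea above can be sketched with an off-the-shelf model. The example below fits an isolation forest on invented flow features; the feature names, traffic distributions, and contamination rate are illustrative assumptions, not tied to any specific NTA product.

```python
# Sketch: unsupervised network-flow anomaly detection with an isolation
# forest. The flow features and traffic distributions are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" flows: [packets/s, bytes/packet, duration_s]
normal = rng.normal([50, 800, 2.0], [10, 100, 0.5], size=(500, 3))
# Flood-like flows: many tiny, short-lived packets
attack = rng.normal([5000, 60, 0.1], [500, 10, 0.05], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(attack))  # -1 = anomaly, 1 = normal
```

In practice the features would come from a flow exporter (e.g. NetFlow/Zeek logs) rather than synthetic draws.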
Research Papers:
- Machine Learning Techniques for Intrusion Detection - A comprehensive survey on various ML techniques used for intrusion detection.
- A Survey of Network Anomaly Detection Techniques - Discusses various techniques and methods for detecting anomalies in network traffic.
- Shallow and Deep Networks Intrusion Detection System - A Taxonomy and Survey - A taxonomy and survey of shallow and deep learning techniques for intrusion detection.
- A Taxonomy and Survey of Intrusion Detection System Design Techniques, Network Threats and Datasets - An in-depth review of IDS design techniques and relevant datasets.
Machine learning applications for endpoint protection can vary depending on the type of endpoint.
Common tasks:
- Regression to predict the next system call for executable processes.
- Classification to categorize programs into malware, spyware, or ransomware.
- Clustering for malware detection on secure email gateways.
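As a toy illustration of the classification task, the sketch below trains a random forest on byte-histogram features of synthetic "programs". The data, labels, and byte-range split are placeholders, not a real malware corpus.

```python
# Sketch: classifying binaries by byte-histogram features.
# The samples and labels are synthetic placeholders, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def byte_histogram(blob: bytes) -> np.ndarray:
    """256-bin normalized histogram of byte values."""
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(len(blob), 1)

# Fake samples: "benign" blobs skew toward printable ASCII,
# "malicious" blobs toward high byte values.
benign = [bytes(rng.integers(32, 127, 1024, dtype=np.uint8)) for _ in range(100)]
malicious = [bytes(rng.integers(128, 256, 1024, dtype=np.uint8)) for _ in range(100)]

X = np.array([byte_histogram(b) for b in benign + malicious])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```

Real pipelines (e.g. the EMBER-style features used in the Gym malware environment above) extract far richer static features than a raw byte histogram.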
Research Papers:
- Deep Learning at the Shallow End - Malware Classification for Non-Domain Experts - Discusses deep learning techniques for malware classification.
- Malware Detection by Eating a Whole EXE - Presents a method for detecting malware by analyzing entire executable files.
Machine learning can be applied to secure web applications, databases, ERP systems, and SaaS applications.
Examples:
- Regression to detect anomalies in HTTP requests.
- Classification to identify known attack types.
- Clustering user activity to detect DDoS attacks.
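A minimal sketch of the HTTP anomaly-detection idea: score query strings against a character-frequency model learned from benign traffic, so requests full of rarely seen characters (quotes, angle brackets) stand out. The sample requests and the smoothing scheme are invented for illustration.

```python
# Sketch: scoring HTTP query strings against a character-frequency model
# of benign traffic. Higher score = more unusual. Samples are invented.
from collections import Counter
import math

benign = ["id=42&page=1", "q=shoes&sort=price", "user=alice&lang=en"]

# Character frequencies observed in benign traffic (Laplace-smoothed)
counts = Counter("".join(benign))
total = sum(counts.values())
vocab = set(counts) | set("'<>\"();")  # chars we may need to score later

def score(query: str) -> float:
    """Average negative log-likelihood per character."""
    return sum(
        -math.log((counts.get(c, 0) + 1) / (total + len(vocab)))
        for c in query
    ) / max(len(query), 1)

normal_score = score("id=7&page=3")
attack_score = score("id=1' OR '1'='1")
print(f"normal={normal_score:.2f} attack={attack_score:.2f}")
```

The injection attempt scores higher because quotes and spaces never appear in the benign corpus; a production WAF would use far richer request features.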
Research Papers:
- Adaptively Detecting Malicious Queries in Web Attacks - Proposes methods for detecting malicious web queries.
LLMs:
- garak - NVIDIA LLM vulnerability scanner.
User behavior analysis involves detecting anomalies in user actions, which is often an unsupervised learning problem.
Tasks:
- Regression to detect anomalies in user actions.
- Classification for peer-group analysis.
- Clustering to identify outlier user groups.
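One way to sketch the clustering task: group users into peer groups on activity features and flag accounts that sit far from every group centroid. All features, group profiles, and the 3-sigma threshold below are invented assumptions.

```python
# Sketch: peer-group analysis for user behavior analytics. Users are
# clustered on (invented) activity features; an account far from every
# peer-group centroid is flagged as anomalous.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Rows: users. Columns: [logins/day, MB downloaded/day, off-hours logins/week]
engineers = rng.normal([8, 200, 1.0], [1, 30, 0.5], size=(40, 3))
sales = rng.normal([15, 20, 0.2], [2, 5, 0.3], size=(40, 3))
X_normal = np.vstack([engineers, sales])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_normal)

# Distance of each known-good user to the nearest peer-group centroid
dists = np.min(np.linalg.norm(X_normal[:, None] - km.cluster_centers_, axis=2), axis=1)
threshold = dists.mean() + 3 * dists.std()

suspect = np.array([9.0, 5000.0, 20.0])  # heavy off-hours data exfiltration
d_new = np.min(np.linalg.norm(suspect - km.cluster_centers_, axis=1))
print("flag suspect:", d_new > threshold)
```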
Research Papers:
- Detecting Anomalous User Behavior Using an Extended Isolation Forest Algorithm - Discusses an extended isolation forest algorithm for detecting anomalous user behavior.
Process behavior monitoring involves detecting anomalies in business processes to identify fraud.
Tasks:
- Regression to predict user actions and detect outliers.
- Classification to identify known fraud types.
- Clustering to compare business processes and detect outliers.
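The classification task can be sketched as a supervised model over transaction features. Everything below (the three features, the synthetic distributions, the class balance) is a placeholder for a real labeled fraud dataset.

```python
# Sketch: supervised classification of known fraud patterns in
# transaction logs, on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Columns: [amount, seconds_since_last_txn, is_foreign_merchant]
legit = np.column_stack([
    rng.gamma(2.0, 30.0, 1000),        # mostly small amounts
    rng.exponential(3600.0, 1000),     # transactions spaced out
    (rng.random(1000) < 0.05).astype(float),
])
fraud = np.column_stack([
    rng.gamma(5.0, 150.0, 50),         # larger amounts
    rng.exponential(30.0, 50),         # rapid bursts
    (rng.random(50) < 0.7).astype(float),
])

X = np.vstack([legit, fraud])
y = np.array([0] * 1000 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=5000, class_weight="balanced").fit(X_tr, y_tr)
fraud_recall = recall_score(y_te, clf.predict(X_te))
print(f"fraud recall on held-out data: {fraud_recall:.2f}")
```

Note the `class_weight="balanced"` setting: fraud is rare, so plain accuracy is misleading and recall on the fraud class is the more honest metric.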
Research Papers:
- A Survey of Credit Card Fraud Detection Techniques - A survey on various techniques for credit card fraud detection.
- Anomaly Detection in Industrial Control Systems Using CNNs - Discusses the use of convolutional neural networks for anomaly detection in industrial control systems.
IDS/IPS systems detect and prevent malicious network activities using machine learning to reduce false positives and improve accuracy.
Research Papers:
- Next-Generation Intrusion Detection Systems - Discusses advancements in intrusion detection systems.
- AI for Cybersecurity by Cylance (2017) - An introduction to AI for cybersecurity by Cylance.
- Machine Learning and Security - Discusses the application of machine learning in security.
- Mastering Machine Learning for Penetration Testing - A guide on using machine learning for penetration testing.
- Malware Data Science - Covers data science techniques for malware analysis.
- AI for Cybersecurity - A Handbook of Use Cases - A handbook on various use cases of AI in cybersecurity.
- Deep Learning Algorithms for Cybersecurity Applications - A Technological and Status Review - Reviews the state of deep learning algorithms in cybersecurity applications.
- Machine Learning and Cybersecurity - Hype and Reality - Discusses the real-world applications and limitations of machine learning in cybersecurity.
- Deep-pwning - A lightweight framework for evaluating machine learning model robustness against adversarial attacks.
- Counterfit - An automation layer for assessing the security of machine learning systems.
- DeepFool - A method to fool deep neural networks.
- garak - A security probing tool for large language models (LLMs).
- Snaike-MLflow - A suite of red team tools for MLflow.
- HackGPT - A tool leveraging ChatGPT for hacking purposes.
- HackingBuddyGPT - An automated penetration tester.
- Charcuterie - Code execution techniques for machine learning libraries.
- Exploring the Space of Adversarial Images - A tool to experiment with adversarial images.
- Adversarial Machine Learning Library (Ad-lib) - A game-theoretic library for adversarial machine learning.
- EasyEdit - A tool to modify the ground truths of large language models (LLMs).
- BadDiffusion - Official repository to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023.
- PrivacyRaven - A privacy testing library for deep learning systems.
- Guardrail.ai - A Python package to add structure, type, and quality guarantees to the outputs of large language models (LLMs).
- CircleGuardBench - A full-fledged benchmark for evaluating protection capabilities of AI models.
- ProtectAI's model scanner - A security scanner for detecting suspicious actions in serialized ML models.
- rebuff - A prompt injection detector.
- langkit - A toolkit for monitoring language models and detecting attacks.
- StringSifter - A tool that ranks strings based on their relevance for malware analysis.
- Python Differential Privacy Library - A library for implementing differential privacy.
- Diffprivlib - IBM's differential privacy library.
- PLOT4ai - A threat modeling library for building responsible AI.
- TenSEAL - A library for performing homomorphic encryption operations on tensors.
- SyMPC - A secure multiparty computation library.
- PyVertical - Privacy-preserving vertical federated learning.
- Cloaked AI - Open source property-preserving encryption for vector embeddings.
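As background for the differential-privacy libraries above, the Laplace mechanism they build on can be written in a few lines of NumPy. The query, dataset, and epsilon below are illustrative; the mechanism shown is the textbook version for a count query with sensitivity 1.

```python
# Sketch: the Laplace mechanism for a differentially private count,
# written directly in NumPy (DP libraries wrap this kind of primitive).
import numpy as np

rng = np.random.default_rng(4)

def dp_count(values, predicate, epsilon: float) -> float:
    """Counting query (sensitivity 1) privatized with Laplace noise."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 57, 33, 48]
# How many people are over 40? True answer: 4.
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(f"noisy count: {noisy:.2f}")
```

Smaller epsilon means a larger noise scale and stronger privacy at the cost of accuracy.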
- MLSecOps podcast - A podcast dedicated to the intersection of machine learning and security operations.
- OWASP ML Top 10 - The top 10 machine learning security risks identified by OWASP.
- OWASP LLM Top 10 - The top 10 security risks for large language models as identified by OWASP.
- OWASP AI Security and Privacy Guide - A guide to securing AI systems and ensuring privacy.
- OWASP WrongSecrets LLM exercise - An exercise for testing AI model security.
- NIST AIRC - NIST Trustworthy & Responsible AI Resource Center.
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI - A framework for good cybersecurity practices in AI.
- The MLSecOps Top 10 - Top 10 security practices for machine learning operations.
- High Dimensional Spaces, Deep Learning and Adversarial Examples - Discusses the challenges of adversarial examples in high-dimensional spaces.
- Adversarial Task Allocation - Explores adversarial task allocation in machine learning systems.
- Robust Physical-World Attacks on Deep Learning Models - Examines physical-world attacks on deep learning models.
- The Space of Transferable Adversarial Examples - Discusses transferable adversarial examples in deep learning.
- RHMD - Evasion-Resilient Hardware Malware Detectors - Explores hardware-based malware detectors resilient to evasion.
- Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks - Examines policy induction attacks on deep reinforcement learning models.
- Can you fool AI with adversarial examples on a visual Turing test? - Tests the robustness of AI models using a visual Turing test.
- Explaining and Harnessing Adversarial Examples - A foundational paper on adversarial examples in machine learning.
- Delving into Adversarial Attacks on Deep Policies - Analyzes adversarial attacks on deep policies.
- Crafting Adversarial Input Sequences for Recurrent Neural Networks - Discusses adversarial attacks on RNNs.
- Practical Black-Box Attacks against Machine Learning - Explores practical black-box attacks on machine learning models.
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN - Uses GANs to generate adversarial malware examples.
- Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains - Explores data-driven attacks on black-box classifiers.
- Fast Feature Fool - A Data-Independent Approach to Universal Adversarial Perturbations - Proposes a method for creating universal adversarial perturbations.
- Simple Black-Box Adversarial Perturbations for Deep Networks - Discusses simple methods for black-box adversarial perturbations.
- Wild Patterns - Ten Years After the Rise of Adversarial Machine Learning - A retrospective on the evolution of adversarial machine learning.
- One Pixel Attack for Fooling Deep Neural Networks - Demonstrates how a single-pixel modification can fool deep neural networks.
- FedMLSecurity - A Benchmark for Attacks and Defenses in Federated Learning and LLMs - A benchmark for evaluating the security of federated learning and LLMs.
- Jailbroken - How Does LLM Safety Training Fail? - Analyzes the failure modes of LLM safety training.
- Bad Characters - Imperceptible NLP Attacks - Discusses imperceptible adversarial attacks on NLP models.
- Universal and Transferable Adversarial Attacks on Aligned Language Models - Explores universal adversarial attacks on language models.
- Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts - Investigates the vulnerability of NLP models to adversarial texts.
- Adversarial Examples Are Not Bugs, They Are Features - Argues that adversarial examples are inherent features of models.
- Adversarial Attacks on Tables with Entity Swap - Discusses adversarial attacks on tabular data.
- Here Comes the AI Worm - Unleashing Zero-click Worms that Target GenAI-Powered Applications - Explores zero-click worms targeting AI-powered applications.
- Stealing Machine Learning Models via Prediction APIs - Discusses methods for extracting machine learning models via prediction APIs.
- On the Risks of Stealing the Decoding Algorithms of Language Models - Investigates the risks of extracting decoding algorithms from language models.
- Adversarial Demonstration Attacks on Large Language Models - Explores evasion attacks on large language models.
- Looking at the Bag is not Enough to Find the Bomb - An Evasion of Structural Methods for Malicious PDF Files Detection - Discusses evasion of PDF malware detection methods.
- Adversarial Generative Nets - Neural Network Attacks on State-of-the-Art Face Recognition - Investigates adversarial attacks on face recognition models.
- Query Strategies for Evading Convex-Inducing Classifiers - Discusses query strategies for evading convex-inducing classifiers.
- Adversarial Prompting for Black Box Foundation Models - Explores adversarial prompting for foundation models.
- Automatically Evading Classifiers - A Case Study on PDF Malware Classifiers - Case study on evading PDF malware classifiers.
- Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers - Investigates black-box attacks on RNNs and malware classifiers.
- GPTs Don't Keep Secrets - Searching for Backdoor Watermark Triggers in Autoregressive Language Models - Investigates backdoor triggers in autoregressive language models.
- Instructions as Backdoors - Backdoor Vulnerabilities of Instruction Tuning for Large Language Models - Discusses backdoor vulnerabilities in instruction-tuned language models.
- BadGPT - Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT - Explores backdoor attacks on ChatGPT.
- Towards Poisoning of Deep Learning Algorithms with Back-Gradient Optimization - Proposes back-gradient optimization for poisoning deep learning algorithms.
- Efficient Label Contamination Attacks Against Black-Box Learning Models - Discusses efficient label contamination attacks on black-box models.
- Text-to-Image Diffusion Models Can be Easily Backdoored through Multimodal Data Poisoning - Explores backdooring diffusion models through data poisoning.
- UOR - Universal Backdoor Attacks on Pre-Trained Language Models - Discusses universal backdoor attacks on language models.
- Analyzing And Editing Inner Mechanisms of Backdoored Language Models - Investigates the inner mechanisms of backdoored language models.
- How to Backdoor Diffusion Models? - Explores methods for backdooring diffusion models.
- On the Exploitability of Instruction Tuning - Discusses the exploitability of instruction tuning.
- Defending against Insertion-based Textual Backdoor Attacks via Attribution - Proposes defenses against textual backdoor attacks.
- A Gradient Control Method for Backdoor Attacks on Parameter-Efficient Tuning - Discusses gradient control methods for backdoor attacks.
- BadNL - Backdoor Attacks Against NLP Models with Semantic-Preserving Improvements - Explores semantic-preserving backdoor attacks on NLP models.
- Be Careful About Poisoned Word Embeddings - Exploring the Vulnerability of the Embedding Layers in NLP Models - Discusses the vulnerability of word embeddings to poisoning.
- BadPrompt - Backdoor Attacks on Continuous Prompts - Investigates backdoor attacks on continuous prompts.
- Extracting Training Data from Diffusion Models - Discusses the extraction of training data from diffusion models.
- Prompt Stealing Attacks Against Text-to-Image Generation Models - Explores prompt stealing attacks on text-to-image generation models.
- Are Diffusion Models Vulnerable to Membership Inference Attacks? - Investigates the vulnerability of diffusion models to membership inference attacks.
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures - Discusses model inversion attacks and countermeasures.
- Multi-Step Jailbreaking Privacy Attacks on ChatGPT - Explores multi-step jailbreaking privacy attacks on ChatGPT.
- Flocks of Stochastic Parrots - Differentially Private Prompt Learning for Large Language Models - Discusses differentially private prompt learning for language models.
- ProPILE - Probing Privacy Leakage in Large Language Models - Investigates privacy leakage in large language models.
- Sentence Embedding Leaks More Information than You Expect - Generative Embedding Inversion Attack to Recover the Whole Sentence - Discusses embedding inversion attacks on sentence embeddings.
- Text Embeddings Reveal (Almost) As Much As Text - Explores the information leakage of text embeddings.
- Vec2Face - Unveil Human Faces from Their Blackbox Features in Face Recognition - Discusses the reconstruction of human faces from face recognition features.
- Realistic Face Reconstruction from Deep Embeddings - Explores face reconstruction from deep embeddings.
- DeepPayload - Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection - Discusses backdoor attacks on deep learning models through neural payload injection.
- Not What You've Signed Up For - Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection - Discusses indirect prompt injection attacks on LLM-integrated applications.
- Latent Jailbreak - A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models - Proposes a benchmark for evaluating the safety and robustness of large language models.
- Jailbreaker - Automated Jailbreak Across Multiple Large Language Model Chatbots - Discusses automated jailbreak attacks on multiple large language model chatbots.
- (Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs - Explores indirect instruction injection using images and sounds in multi-modal LLMs.
- Summoning Demons - The Pursuit of Exploitable Bugs in Machine Learning - Discusses the pursuit of exploitable bugs in machine learning.
- capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act - Proposes a procedure for AI system conformity assessment.
- A Study on Robustness and Reliability of Large Language Model Code Generation - Investigates the robustness and reliability of LLM code generation.
- Getting pwn'd by AI - Penetration Testing with Large Language Models - Explores penetration testing with large language models.
- Evaluating LLMs for Privilege-Escalation Scenarios - Evaluates LLMs for privilege-escalation scenarios.
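Several of the adversarial-example papers listed above (notably "Explaining and Harnessing Adversarial Examples") center on the fast gradient sign method (FGSM). A minimal sketch on a hand-rolled logistic model, with all weights and inputs invented:

```python
# Sketch: FGSM on a tiny logistic classifier. The "trained" weights and
# the input are random placeholders; the point is the gradient-sign step.
import numpy as np

rng = np.random.default_rng(5)

w = rng.normal(size=16)  # a fixed "trained" linear classifier
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=16)  # an input of true class y = 1
y = 1.0

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.3
x_adv = x + eps * np.sign(grad_x)  # FGSM: step along the gradient's sign

print(f"confidence in true class: clean={sigmoid(w @ x + b):.3f} "
      f"adversarial={sigmoid(w @ x_adv + b):.3f}")
```

Because every coordinate moves against the model simultaneously, a small per-coordinate perturbation produces a large drop in the logit, which is the paper's core observation about linear behavior in high dimensions.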
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for awesome-ai-cybersecurity
Similar Open Source Tools

awesome-ai-cybersecurity
This repository is a comprehensive collection of resources for utilizing AI in cybersecurity. It covers various aspects such as prediction, prevention, detection, response, monitoring, and more. The resources include tools, frameworks, case studies, best practices, tutorials, and research papers. The repository aims to assist professionals, researchers, and enthusiasts in staying updated and advancing their knowledge in the field of AI cybersecurity.

awesome-openvino
Awesome OpenVINO is a curated list of AI projects based on the OpenVINO toolkit, offering a rich assortment of projects, libraries, and tutorials covering various topics like model optimization, deployment, and real-world applications across industries. It serves as a valuable resource continuously updated to maximize the potential of OpenVINO in projects, featuring projects like Stable Diffusion web UI, Visioncom, FastSD CPU, OpenVINO AI Plugins for GIMP, and more.

agentUniverse
agentUniverse is a framework for developing applications powered by multi-agent based on large language model. It provides essential components for building single agent and multi-agent collaboration mechanism for customizing collaboration patterns. Developers can easily construct multi-agent applications and share pattern practices from different fields. The framework includes pre-installed collaboration patterns like PEER and DOE for complex task breakdown and data-intensive tasks.

ianvs
Ianvs is a distributed synergy AI benchmarking project incubated in KubeEdge SIG AI. It aims to test the performance of distributed synergy AI solutions following recognized standards, providing end-to-end benchmark toolkits, test environment management tools, test case control tools, and benchmark presentation tools. It also collaborates with other organizations to establish comprehensive benchmarks and related applications. The architecture includes critical components like Test Environment Manager, Test Case Controller, Generation Assistant, Simulation Controller, and Story Manager. Ianvs documentation covers quick start, guides, dataset descriptions, algorithms, user interfaces, stories, and roadmap.

agentUniverse
agentUniverse is a multi-agent framework based on large language models, providing flexible capabilities for building individual agents. It focuses on collaborative pattern components to solve problems in various fields and integrates domain experience. The framework supports LLM model integration and offers various pattern components like PEER and DOE. Users can easily configure models and set up agents for tasks. agentUniverse aims to assist developers and enterprises in constructing domain-expert-level intelligent agents for seamless collaboration.

awesome-algorand
Awesome Algorand is a curated list of resources related to the Algorand Blockchain, including official resources, wallets, blockchain explorers, portfolio trackers, learning resources, development tools, DeFi platforms, nodes & consensus participation, subscription management, security auditing services, blockchain bridges, oracles, name services, community resources, Algorand Request for Comments, metrics and analytics services, decentralized voting tools, and NFT marketplaces. The repository provides a comprehensive collection of tools, tutorials, protocols, and platforms for developers, users, and enthusiasts interested in the Algorand ecosystem.

agentUniverse
agentUniverse is a multi-agent framework based on large language models, providing flexible capabilities for building individual agents. It focuses on multi-agent collaborative patterns, integrating domain experience to help agents solve problems in various fields. The framework includes pattern components like PEER and DOE for event interpretation, industry analysis, and financial report generation. It offers features for agent construction, multi-agent collaboration, and domain expertise integration, aiming to create intelligent applications with professional know-how.

SuperKnowa
SuperKnowa is a fast framework to build Enterprise RAG (Retriever Augmented Generation) Pipelines at Scale, powered by watsonx. It accelerates Enterprise Generative AI applications to get prod-ready solutions quickly on private data. The framework provides pluggable components for tackling various Generative AI use cases using Large Language Models (LLMs), allowing users to assemble building blocks to address challenges in AI-driven text generation. SuperKnowa is battle-tested from 1M to 200M private knowledge base & scaled to billions of retriever tokens.

TI-Mindmap-GPT
TI MINDMAP GPT is an AI-powered tool designed to assist cyber threat intelligence teams in quickly synthesizing and visualizing key information from various Threat Intelligence sources. The tool utilizes Large Language Models (LLMs) to transform lengthy content into concise, actionable summaries, going beyond mere text reduction to provide insightful encapsulations of crucial points and themes. Users can leverage their own LLM keys for personalized and efficient information processing, streamlining data analysis and enabling teams to focus on strategic decision-making.

ai-tutor-rag-system
The AI Tutor RAG System repository contains Jupyter notebooks supporting the RAG course, focusing on enhancing AI models with retrieval-based methods. It covers foundational and advanced concepts in retrieval-augmented generation, including data retrieval techniques, model integration with retrieval systems, and practical applications of RAG in real-world scenarios.

AgentConnect
AgentConnect is an open-source implementation of the Agent Network Protocol (ANP) aiming to define how agents connect with each other and build an open, secure, and efficient collaboration network for billions of agents. It addresses challenges like interconnectivity, native interfaces, and efficient collaboration. The architecture includes authentication, end-to-end encryption modules, meta-protocol module, and application layer protocol integration framework. AgentConnect focuses on performance and multi-platform support, with plans to rewrite core components in Rust and support mobile platforms and browsers. The project aims to establish ANP as an industry standard and form an ANP Standardization Committee. Installation is done via 'pip install agent-connect' and demos can be run after cloning the repository. Features include decentralized authentication based on did:wba and HTTP, and meta-protocol negotiation examples.

GenAI_Agents
GenAI Agents is a comprehensive repository for developing and implementing Generative AI (GenAI) agents, ranging from simple conversational bots to complex multi-agent systems. It serves as a valuable resource for learning, building, and sharing GenAI agents, offering tutorials, implementations, and a platform for showcasing innovative agent creations. The repository covers a wide range of agent architectures and applications, providing step-by-step tutorials, ready-to-use implementations, and regular updates on advancements in GenAI technology.

awesome-generative-ai
Awesome Generative AI is a curated list of modern Generative Artificial Intelligence projects and services. Generative AI technology creates original content like images, sounds, and texts using machine learning algorithms trained on large data sets. It can produce unique and realistic outputs such as photorealistic images, digital art, music, and writing. The repo covers a wide range of applications in art, entertainment, marketing, academia, and computer science.

llmariner
LLMariner is an extensible open source platform built on Kubernetes to simplify the management of generative AI workloads. It enables efficient handling of training and inference data within clusters, with OpenAI-compatible APIs for seamless integration with a wide range of AI-driven applications.

awesome-mlops
Awesome MLOps is a curated list of tools related to Machine Learning Operations, covering areas such as AutoML, CI/CD for Machine Learning, Data Cataloging, Data Enrichment, Data Exploration, Data Management, Data Processing, Data Validation, Data Visualization, Drift Detection, Feature Engineering, Feature Store, Hyperparameter Tuning, Knowledge Sharing, Machine Learning Platforms, Model Fairness and Privacy, Model Interpretability, Model Lifecycle, Model Serving, Model Testing & Validation, Optimization Tools, Simplification Tools, Visual Analysis and Debugging, and Workflow Tools. The repository provides a comprehensive collection of tools and resources for individuals and teams working in the field of MLOps.
For similar tasks

galah
Galah is an LLM-powered web honeypot designed to mimic various applications and dynamically respond to arbitrary HTTP requests. It supports multiple LLM providers, including OpenAI. Unlike traditional web honeypots, Galah dynamically crafts responses for any HTTP request, caching them to reduce repetitive generation and API costs. The honeypot's configuration is crucial, directing the LLM to produce responses in a specified JSON format. Note that Galah is a weekend project exploring LLM capabilities and not intended for production use, as it may be identifiable through network fingerprinting and non-standard responses.
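The "specified JSON format" mentioned above can be pictured as a small envelope the honeypot validates before serving a response; the field names below are assumptions for illustration, not Galah's actual schema:

```python
import json

# Hypothetical JSON envelope an LLM might be instructed to return;
# the field names are illustrative, not Galah's actual schema.
raw = '''{
  "status": 200,
  "headers": {"Content-Type": "text/html", "Server": "Apache/2.4.41"},
  "body": "<html><body>Login</body></html>"
}'''

def parse_llm_response(raw: str) -> dict:
    """Validate the LLM output before serving it to the client."""
    resp = json.loads(raw)
    assert isinstance(resp.get("status"), int)
    assert isinstance(resp.get("headers"), dict)
    assert isinstance(resp.get("body"), str)
    return resp

resp = parse_llm_response(raw)
print(resp["status"])  # 200
```

Validating the envelope before replaying it keeps a malformed or truncated LLM completion from being served verbatim to a scanner.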

StratosphereLinuxIPS
Slips is a powerful endpoint behavioral intrusion prevention and detection system that uses machine learning to detect malicious behaviors in network traffic. It can work with network traffic in real time, PCAP files, and network flows from tools like Suricata, Zeek/Bro, and Argus. Slips' threat detection is based on machine learning models, threat intelligence feeds, and expert heuristics. It gathers evidence of malicious behavior and triggers alerts when enough evidence is accumulated. The tool is Python-based and supported on Linux and macOS, with blocking features available only on Linux. Slips relies on the Zeek network analysis framework and on Redis for interprocess communication. It offers a graphical user interface for easy monitoring and analysis.
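The evidence-accumulation approach described above can be sketched as a weighted threshold per source IP; this is an illustrative toy, not Slips' actual scoring logic:

```python
from collections import defaultdict

# Toy sketch of threshold-based evidence accumulation; the weighting
# scheme and threshold are assumptions, not Slips' implementation.
ALERT_THRESHOLD = 1.0

class EvidenceAccumulator:
    def __init__(self, threshold: float = ALERT_THRESHOLD):
        self.threshold = threshold
        self.scores = defaultdict(float)

    def add_evidence(self, src_ip: str, description: str,
                     confidence: float, severity: float) -> bool:
        # Weight each piece of evidence by confidence and severity;
        # return True once the accumulated score warrants an alert.
        self.scores[src_ip] += confidence * severity
        return self.scores[src_ip] >= self.threshold

acc = EvidenceAccumulator()
acc.add_evidence("10.0.0.5", "port scan", confidence=0.6, severity=0.5)
alert = acc.add_evidence("10.0.0.5", "C&C contact", confidence=0.9, severity=0.9)
print(alert)  # True: 0.30 + 0.81 crosses the 1.0 threshold
```

Accumulating weighted evidence rather than alerting on single events is what lets this style of detector trade a little latency for far fewer false positives.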

awsome_kali_MCPServers
awsome-kali-MCPServers is a repository containing Model Context Protocol (MCP) servers tailored for Kali Linux environments. It aims to optimize reverse engineering, security testing, and automation tasks by incorporating powerful tools and flexible features. The collection includes network analysis tools, support for binary understanding, and automation scripts to streamline repetitive tasks. The repository is continuously evolving with new features and integrations based on the FastMCP framework, such as network scanning, symbol analysis, binary analysis, string extraction, network traffic analysis, and sandbox support using Docker containers.

dev3000
dev3000 captures your web app's complete development timeline, including server logs, browser events, console messages and errors, network requests and responses, and automatic screenshots, in a unified, timestamped feed for AI debugging. It monitors your app in a real browser, recording screenshots on navigation, errors, and key events, and builds a comprehensive session log that AI assistants can easily understand. Logs are saved with timestamps and rotated to keep the 10 most recent per project, with the current session symlinked for easy access. The tool integrates with AI assistants for instant debugging and offers advanced querying through its MCP server.
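The rotate-and-symlink scheme described above (keep the 10 most recent logs, point a stable link at the current session) can be sketched as follows; the paths and file naming are assumptions, not dev3000's own layout:

```python
import time
from pathlib import Path

# Assumed layout: timestamped session logs plus a "current.log" symlink.
MAX_LOGS = 10

def start_session(log_dir: Path) -> Path:
    log_dir.mkdir(parents=True, exist_ok=True)
    new_log = log_dir / f"session-{time.time_ns()}.log"
    new_log.touch()
    # Keep only the MAX_LOGS most recent timestamped logs
    # (nanosecond names of equal width sort chronologically).
    logs = sorted(log_dir.glob("session-*.log"))
    for old in logs[:-MAX_LOGS]:
        old.unlink()
    # Point a stable symlink at the current session for easy access.
    current = log_dir / "current.log"
    if current.is_symlink() or current.exists():
        current.unlink()
    current.symlink_to(new_log.name)
    return new_log
```

A fixed symlink like `current.log` means tooling (or an AI assistant) can always tail one path without knowing the session timestamp.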

awesome-ai-cybersecurity
This repository is a comprehensive collection of resources for utilizing AI in cybersecurity. It covers various aspects such as prediction, prevention, detection, response, monitoring, and more. The resources include tools, frameworks, case studies, best practices, tutorials, and research papers. The repository aims to assist professionals, researchers, and enthusiasts in staying updated and advancing their knowledge in the field of AI cybersecurity.

airgeddon
Airgeddon is a versatile bash script designed for Linux systems to conduct wireless network audits. It provides a comprehensive set of features and tools for auditing and securing wireless networks. The script is user-friendly and offers functionalities such as scanning, capturing handshakes, deauth attacks, and more. Airgeddon is regularly updated and supported, making it a valuable tool for both security professionals and enthusiasts.

sploitcraft
SploitCraft is a curated collection of security exploits, penetration testing techniques, and vulnerability demonstrations intended to help professionals and enthusiasts understand and demonstrate the latest in cybersecurity threats and offensive techniques. The repository is organized into folders based on specific topics, each containing directories and detailed READMEs with step-by-step instructions. Contributions from the community are welcome, with a focus on adding new proof of concepts or expanding existing ones while adhering to the current structure and format of the repository.

PentestGPT
PentestGPT provides advanced AI and integrated tools to help security teams conduct comprehensive penetration tests effortlessly. Scan, exploit, and analyze web applications, networks, and cloud environments with ease and precision, without needing expert skills. The tool utilizes Supabase for data storage and management, and Vercel for hosting the frontend. It offers a local quickstart guide for running the tool locally and a hosted quickstart guide for deploying it in the cloud. PentestGPT aims to simplify the penetration testing process for security professionals and enthusiasts alike.
For similar jobs

watchtower
AIShield Watchtower is a tool designed to fortify the security of AI/ML models and Jupyter notebooks by automating model and notebook discoveries, conducting vulnerability scans, and categorizing risks into 'low,' 'medium,' 'high,' and 'critical' levels. It supports scanning of public GitHub repositories, Hugging Face repositories, AWS S3 buckets, and local systems. The tool generates comprehensive reports, offers a user-friendly interface, and aligns with industry standards like OWASP, MITRE, and CWE. It aims to address the security blind spots surrounding Jupyter notebooks and AI models, providing organizations with a tailored approach to enhancing their security efforts.
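A score-to-bucket mapping like the 'low'/'medium'/'high'/'critical' levels above might look like this; the thresholds below follow the CVSS v3 qualitative severity ranges, and Watchtower's own cutoffs may differ:

```python
# Illustrative mapping from a numeric vulnerability score (0.0-10.0)
# to the four risk levels described above. Thresholds are the CVSS v3
# qualitative ranges, assumed here for illustration.
def risk_category(score: float) -> str:
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

print(risk_category(9.8))  # critical
```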

ciso-assistant-community
CISO Assistant is a tool that helps organizations manage their cybersecurity posture and compliance. It provides a centralized platform for managing security controls, threats, and risks. CISO Assistant also includes a library of pre-built frameworks and tools to help organizations quickly and easily implement best practices.

PurpleLlama
Purple Llama is an umbrella project that aims to provide tools and evaluations to support responsible development and usage of generative AI models. It encompasses components for cybersecurity and input/output safeguards, with plans to expand in the future. The project emphasizes a collaborative approach, borrowing the concept of purple teaming from cybersecurity, to address potential risks and challenges posed by generative AI. Components within Purple Llama are licensed permissively to foster community collaboration and standardize the development of trust and safety tools for generative AI.

vpnfast.github.io
VPNFast is a lightweight and fast VPN service provider that offers secure and private internet access. With VPNFast, users can protect their online privacy, bypass geo-restrictions, and secure their internet connection from hackers and snoopers. The service provides high-speed servers in multiple locations worldwide, ensuring a reliable and seamless VPN experience for users. VPNFast is easy to use, with a user-friendly interface and simple setup process. Whether you're browsing the web, streaming content, or accessing sensitive information, VPNFast helps you stay safe and anonymous online.

taranis-ai
Taranis AI is an advanced Open-Source Intelligence (OSINT) tool that leverages Artificial Intelligence to revolutionize information gathering and situational analysis. It navigates through diverse data sources like websites to collect unstructured news articles, utilizing Natural Language Processing and Artificial Intelligence to enhance content quality. Analysts then refine these AI-augmented articles into structured reports that serve as the foundation for deliverables such as PDF files, which are ultimately published.
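The collect → augment → report workflow described above can be sketched as a toy pipeline; the function names and the capitalized-token "entity" step are stand-ins for illustration, not Taranis AI's actual NLP stack:

```python
import re

# Toy sketch of the collect -> AI-augment -> report pipeline; the
# entity tagging here is a trivial regex stand-in for real NLP.
def collect(source: dict) -> dict:
    """Pretend scraper: wrap a raw article from a data source."""
    return {"source": source["url"], "text": source["text"]}

def augment(article: dict) -> dict:
    """Stand-in NLP step: tag capitalized tokens as candidate entities."""
    entities = sorted(set(re.findall(r"\b[A-Z][a-z]+\b", article["text"])))
    return {**article, "entities": entities}

def build_report(articles: list) -> str:
    """Assemble augmented articles into a structured report."""
    lines = [f"- {a['source']}: entities={', '.join(a['entities'])}"
             for a in articles]
    return "OSINT report\n" + "\n".join(lines)

raw = {"url": "https://example.com/news", "text": "Attackers hit Acme in Berlin."}
print(build_report([augment(collect(raw))]))
```

The point of the staged design is that analysts review and refine the augmented articles between the second and third steps before anything is published.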

NightshadeAntidote
Nightshade Antidote is an image forensics tool used to analyze digital images for signs of manipulation or forgery. It implements several common techniques used in image forensics including metadata analysis, copy-move forgery detection, frequency domain analysis, and JPEG compression artifacts analysis. The tool takes an input image, performs analysis using the above techniques, and outputs a report summarizing the findings.
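Copy-move forgery detection, one of the techniques listed above, can be sketched with exact block matching; real forensics tools use robust features (e.g., DCT coefficients or keypoint descriptors) rather than byte-for-byte equality, so treat this as a minimal illustration:

```python
from collections import defaultdict

# Minimal copy-move detection sketch: hash every block x block window
# of a grayscale image and report windows that repeat exactly.
def find_duplicate_blocks(pixels, block=4):
    """pixels: 2-D list of grayscale values; returns groups of
    top-left (y, x) coordinates whose windows match exactly."""
    h, w = len(pixels), len(pixels[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(pixels[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            seen[key].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]
```

Exact matching only catches pristine copy-paste; compression or resampling after the paste defeats it, which is why production techniques quantize block features before matching.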

h4cker
This repository is a comprehensive collection of cybersecurity-related references, scripts, tools, code, and other resources, carefully curated and maintained by Omar Santos. It serves as supplemental material for several books, video courses, and live training created by him, and encompasses over 10,000 references instrumental for both offensive and defensive security professionals honing their skills.