Offensive AI Compilation

A curated list of useful resources that cover Offensive AI.

πŸ“ Contents πŸ“

🚫 Abuse 🚫

Exploiting the vulnerabilities of AI models.

🧠 Adversarial Machine Learning 🧠

Adversarial Machine Learning is responsible for assessing the weaknesses of AI models and providing countermeasures.

⚑ Attacks ⚑

Attacks are organized into four types: extraction, inversion, poisoning, and evasion.

Adversarial Machine Learning attacks

πŸ”’ Extraction πŸ”’

Extraction attacks try to steal the parameters and hyperparameters of a model by issuing queries that maximize the amount of information extracted.

Extraction attack

Depending on the adversary's knowledge of the model, white-box or black-box attacks can be performed.

In the simplest white-box case (when the adversary has full knowledge of the model, e.g., that it is a sigmoid/logistic function), one can build a system of linear equations that is easy to solve.
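For instance, a minimal sketch of this case for a logistic-regression target exposed only through a probability oracle (the `oracle` below is a stand-in for the victim's prediction API): since σ(w·x + b) = p implies w·x + b = logit(p), querying d + 1 well-chosen points yields a linear system in (w, b).

```python
import numpy as np

def extract_logistic_regression(predict_proba, d):
    """Recover w and b of a target sigma(w.x + b) from d + 1 queries."""
    X = np.vstack([np.zeros(d), np.eye(d)])            # origin plus basis vectors
    p = np.array([predict_proba(x) for x in X])        # queried probabilities
    z = np.log(p / (1.0 - p))                          # logit(p) = w.x + b
    A = np.hstack([X, np.ones((d + 1, 1))])            # unknowns: [w, b]
    wb = np.linalg.solve(A, z)
    return wb[:-1], wb[-1]

# Pretend "oracle" is only reachable through queries.
rng = np.random.default_rng(0)
true_w, true_b = rng.normal(size=3), 0.5
oracle = lambda x: 1.0 / (1.0 + np.exp(-(true_w @ x + true_b)))
w_hat, b_hat = extract_logistic_regression(oracle, d=3)
print(np.allclose(w_hat, true_w), np.isclose(b_hat, true_b))
```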

In the generic case, where the adversary has insufficient knowledge of the model, a substitute model is used. This model is trained on the queries made to the original model so that it imitates the original's functionality.

White-box and black-box extraction attacks
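In the same spirit, a minimal sketch of the black-box substitute-model approach, assuming label-only query access to a scikit-learn-style `target` (the query distribution and budget are purely illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_substitute(target, input_dim, n_queries=5000, seed=0):
    """Train a substitute model on labels returned by the victim model."""
    rng = np.random.default_rng(seed)
    X_query = rng.uniform(-1.0, 1.0, size=(n_queries, input_dim))  # synthetic queries
    y_stolen = target.predict(X_query)                             # victim's answers
    substitute = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    substitute.fit(X_query, y_stolen)                              # imitate the victim
    return substitute
```

In practice the choice of query distribution matters far more than this uniform sampling suggests, and the query budget runs straight into the limitations listed below.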

⚠️ Limitations ⚠️
  • Training a substitute model is equivalent (in many cases) to training a model from scratch.

  • Very computationally intensive.

  • The adversary can only make a limited number of requests before being detected.

πŸ›‘οΈ Defensive actions πŸ›‘οΈ
πŸ”— Useful links πŸ”—
⬅️ Inversion (or inference) ⬅️

Inversion (or inference) attacks are intended to reverse the information flow of a machine learning model.

Inference attack

They enable an adversary to have knowledge of the model that was not explicitly intended to be shared.

They allow an adversary to learn the training data or statistical information about the model.

Three types are possible:

  • Membership Inference Attack (MIA): An adversary attempts to determine whether a sample was used as part of the training set (see the sketch after this list).

  • Property Inference Attack (PIA): An adversary aims to extract statistical properties that were not explicitly encoded as features during the training phase.

  • Reconstruction: An adversary tries to reconstruct one or more samples from the training set and/or their corresponding labels. Also called inversion.
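As an illustration of the first type, a minimal confidence-thresholding membership inference sketch (real attacks usually train shadow models; the probability oracle and the threshold here are assumptions):

```python
def is_member(predict_proba, x, y, threshold=0.9):
    """Guess that (x, y) was in the training set if the model is
    unusually confident in the correct label (an overfitting signal)."""
    return predict_proba(x)[y] >= threshold
```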

πŸ›‘οΈ Defensive actions πŸ›‘οΈ
πŸ”— Useful links πŸ”—
πŸ’‰ Poisoning πŸ’‰

They aim to corrupt the training set so that the resulting machine learning model's accuracy is reduced.

Poisoning attack

This attack is difficult to detect when performed on the training data, since it can propagate among different models that use the same training data.

The adversary seeks either to destroy the availability of the model, by modifying the decision boundary so that it produces incorrect predictions, or to create a backdoor in the model. In the latter case, the model behaves correctly (returning the desired predictions) for most inputs, except for certain inputs specially crafted by the adversary that produce undesired results. The adversary can then manipulate the predictions and launch further attacks.
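A minimal sketch of the availability case via label flipping (the poisoning rate is an arbitrary illustration):

```python
import numpy as np

def flip_labels(y, n_classes, rate=0.2, seed=0):
    """Flip a fraction of training labels to a different random class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    # Shift each selected label by a random non-zero offset modulo n_classes.
    y_poisoned[idx] = (y[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return y_poisoned
```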

πŸ”“ Backdoors πŸ”“

BadNets are the simplest type of backdoor in a machine learning model. Moreover, BadNets can survive in a model even if it is retrained for a different task than the original one (transfer learning).

It is important to note that public pre-trained models may contain backdoors.
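A minimal BadNets-style sketch: stamp a small trigger patch onto a fraction of the training images and relabel them with the attacker's target class (patch size, location, rate, and target label are all illustrative choices):

```python
import numpy as np

def add_backdoor(X, y, target_label, rate=0.05, patch=3, seed=0):
    """Stamp a white patch in the bottom-right corner of poisoned images and
    relabel them so the model associates the patch with target_label."""
    rng = np.random.default_rng(seed)
    X_bd, y_bd = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X_bd[idx, -patch:, -patch:] = 1.0   # trigger: white square (pixels in [0, 1])
    y_bd[idx] = target_label
    return X_bd, y_bd
```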

πŸ›‘οΈ Defensive actions πŸ›‘οΈ
πŸ”— Useful links πŸ”—
πŸƒβ€β™‚οΈ Evasion πŸƒβ€β™‚οΈ

An adversary adds a small perturbation (in the form of noise) to the input of a machine learning model to make it classify incorrectly (an adversarial example).

Evasion attack

They are similar to poisoning attacks, but the main difference is that evasion attacks exploit weaknesses of the model during the inference phase.

The goal of the adversary is for adversarial examples to be imperceptible to a human.

Two types of attack can be performed, depending on the output desired by the adversary:

  • Targeted: the adversary aims to obtain a prediction of their choice.

    Targeted attack

  • Untargeted: the adversary intends to achieve a misclassification.

    Untargeted attack

The most common evasion attacks are white-box attacks, such as FGSM (see the sketch below).
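As a concrete example, a minimal FGSM sketch in PyTorch (the model, loss function, and ε are placeholders):

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """Fast Gradient Sign Method: one gradient step in input space."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each pixel by eps in the direction that increases the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```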

πŸ›‘οΈ Defensive actions πŸ›‘οΈ
  • Adversarial training, which consists of crafting adversarial examples during training so that the model learns their features, making it more robust to this type of attack (see the sketch after this list).

  • Transformations on inputs.

  • Gradient masking/regularization. Not very effective.

  • Weak defenses.

  • Prompt Injection Defenses: Every practical and proposed defense against prompt injection. stars

  • Lakera PINT Benchmark: The Prompt Injection Test (PINT) Benchmark provides a neutral way to evaluate the performance of a prompt injection detection system, like Lakera Guard, without relying on known public datasets that these tools can use to optimize for evaluation performance. stars

  • Devil's Inference: A method to adversarially assess the Phi-3 Instruct model by observing the attention distribution across its heads when exposed to specific inputs. This approach prompts the model to adopt the 'devil's mindset', enabling it to generate outputs of a violent nature. stars
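A minimal adversarial-training sketch, reusing the `fgsm` helper from the evasion section above (optimizer, data loader, and ε are placeholders):

```python
def adversarial_training_epoch(model, loss_fn, optimizer, loader, eps=0.03):
    """One epoch of adversarial training: fit the model on FGSM examples."""
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, loss_fn, x, y, eps)   # craft an adversarial batch
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)           # learn from perturbed inputs
        loss.backward()
        optimizer.step()
```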

πŸ”— Useful links πŸ”—

πŸ› οΈ Tools πŸ› οΈ

| Name | Type | Supported algorithms | Supported attack types | Attack/Defence | Supported frameworks | Popularity |
|------|------|----------------------|------------------------|----------------|----------------------|------------|
| Cleverhans | Image | Deep Learning | Evasion | Attack | Tensorflow, Keras, JAX | stars |
| Foolbox | Image | Deep Learning | Evasion | Attack | Tensorflow, PyTorch, JAX | stars |
| ART | Any type (image, tabular data, audio, ...) | Deep Learning, SVM, LR, etc. | Any (extraction, inference, poisoning, evasion) | Both | Tensorflow, Keras, Pytorch, Scikit Learn | stars |
| TextAttack | Text | Deep Learning | Evasion | Attack | Keras, HuggingFace | stars |
| Advertorch | Image | Deep Learning | Evasion | Both | --- | stars |
| AdvBox | Image | Deep Learning | Evasion | Both | PyTorch, Tensorflow, MxNet | stars |
| DeepRobust | Image, graph | Deep Learning | Evasion | Both | PyTorch | stars |
| Counterfit | Any | Any | Evasion | Attack | --- | stars |
| Adversarial Audio Examples | Audio | DeepSpeech | Evasion | Attack | --- | stars |
ART

Adversarial Robustness Toolbox, abbreviated as ART, is an open-source Adversarial Machine Learning library for testing the robustness of machine learning models.

ART logo

It is developed in Python and implements extraction, inversion, poisoning and evasion attacks and defenses.

ART supports the most popular frameworks: Tensorflow, Keras, PyTorch, MxNet, ScikitLearn, among many others.

It is not limited to models that take images as input; it also supports other data types, such as audio, video, and tabular data.
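A minimal ART sketch that wraps a placeholder PyTorch model and runs an FGSM evasion attack; the class names follow ART's documented API, but exact arguments may vary across versions:

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder classifier; any torch.nn.Module works here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in data
x_adv = attack.generate(x=x_test)                          # perturbed copies
```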

Workshop to learn Adversarial Machine Learning with ART πŸ‡ͺπŸ‡Έ

Cleverhans

Cleverhans is a library for performing evasion attacks and testing the robustness of deep learning image models.

Cleverhans logo

It is developed in Python and integrates with the TensorFlow, PyTorch and JAX frameworks.

It implements numerous attacks such as L-BFGS, FGSM, JSMA, C&W, among others.
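A minimal Cleverhans sketch using its functional PyTorch API (module path and arguments as in Cleverhans 4.x; the model and data are placeholders):

```python
import numpy as np
import torch
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))  # placeholder
x = torch.rand(8, 1, 28, 28)  # stand-in batch with pixels in [0, 1]
x_adv = fast_gradient_method(model, x, eps=0.03, norm=np.inf, clip_min=0.0, clip_max=1.0)
```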

πŸ”§ Use πŸ”§

The use of AI to accomplish malicious tasks and boost classic attacks.

πŸ•΅οΈβ€β™‚οΈ Pentesting πŸ•΅οΈβ€β™‚οΈ

  • GyoiThon: Next generation penetration test tool, intelligence gathering tool for web server. stars
  • Deep Exploit: Fully automatic penetration test tool using Deep Reinforcement Learning. stars
  • AutoPentest-DRL: Automated penetration testing using deep reinforcement learning. stars
  • DeepGenerator: Fully automatically generate injection codes for web application assessment using Genetic Algorithm and Generative Adversarial Networks.
  • Eyeballer: Eyeballer is meant for large-scope network penetration tests where you need to find "interesting" targets from a huge set of web-based hosts. stars
  • Nebula: AI-Powered Ethical Hacking Assistant. stars
  • Teams of LLM Agents can Exploit Zero-Day Vulnerabilities

🦠 Malware 🦠

πŸ—ΊοΈΒ OSINT πŸ—ΊοΈ

  • SNAP_R: Automatically generate spear-phishing posts on social media. stars
  • SpyScrap: SpyScrap combines facial recognition methods to filter the results and uses natural language processing to obtain important entities from the websites where the user appears. stars

πŸ“§Β Phishing πŸ“§

  • DeepDGA: Implementation of DeepDGA: Adversarially-Tuned Domain Generation and Detection. stars

πŸ•΅ Threat Intelligence πŸ•΅

πŸ‘¨β€πŸŽ€ Generative AI πŸ‘¨β€πŸŽ€

πŸ”Š Audio πŸ”Š

πŸ› οΈ Tools πŸ› οΈ
  • deep-voice-conversion: Deep neural networks for voice conversion (voice style transfer) in Tensorflow. stars
  • tacotron: A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial). stars
  • Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time. stars
  • mimic2: Text to Speech engine based on the Tacotron architecture, initially implemented by Keith Ito. stars
  • Neural-Voice-Cloning-with-Few-Samples: Implementation of Neural Voice Cloning with Few Samples Research Paper by Baidu. stars
  • Vall-E: An unofficial PyTorch implementation of the audio LM VALL-E. stars
  • voice-changer: Realtime Voice Changer. stars
  • Retrieval-based-Voice-Conversion-WebUI: An easy-to-use Voice Conversion framework based on VITS. stars
  • Audiocraft: Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning. stars
  • VALL-E-X: An open source implementation of Microsoft's VALL-E X zero-shot TTS model. stars
  • OpenVoice: Instant voice cloning by MyShell. stars
  • MeloTTS: High-quality multi-lingual text-to-speech library by MyShell.ai. Supports English, Spanish, French, Chinese, Japanese and Korean. stars
  • VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild. stars
  • Parler-TTS: Inference and training library for high-quality TTS models. stars
  • ChatTTS: A generative speech model for daily dialogue. stars
πŸ’‘ Applications πŸ’‘
πŸ”Ž Detection πŸ”Ž

πŸ“· Image πŸ“·

πŸ› οΈ Tools πŸ› οΈ
  • StyleGAN: StyleGAN - Official TensorFlow Implementation. stars
  • StyleGAN2: StyleGAN2 - Official TensorFlow Implementation. stars
  • stylegan2-ada-pytorch: StyleGAN2-ADA - Official PyTorch implementation. stars
  • StyleGAN-nada: CLIP-Guided Domain Adaptation of Image Generators. stars
  • StyleGAN3: Official PyTorch implementation of StyleGAN3. stars
  • Imaginaire: Imaginaire is a pytorch library that contains optimized implementation of several image and video synthesis methods developed at NVIDIA. stars
  • ffhq-dataset: Flickr-Faces-HQ Dataset (FFHQ). stars
  • DALLE2-pytorch: Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch. stars
  • ImaginAIry: AI imagined images. Pythonic generation of stable diffusion images. stars
  • Lama Cleaner: Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures. stars
  • Invertible-Image-Rescaling: This is the PyTorch implementation of paper: Invertible Image Rescaling. stars
  • DifFace: Blind Face Restoration with Diffused Error Contraction (PyTorch). stars
  • CodeFormer: Towards Robust Blind Face Restoration with Codebook Lookup Transformer. stars
  • Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion. stars
  • Diffusers: πŸ€— Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch. stars
  • Stable Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models. stars
  • InvokeAI: InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products. stars
  • Stable Diffusion web UI: Stable Diffusion web UI. stars
  • Stable Diffusion Infinity: Outpainting with Stable Diffusion on an infinite canvas. stars
  • Fast Stable Diffusion: fast-stable-diffusion + DreamBooth. stars
  • GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images. stars
  • Awesome AI Art Image Synthesis: A list of awesome tools, ideas, prompt engineering tools, colabs, models, and helpers for the prompt designer playing with aiArt and image synthesis. Covers Dalle2, MidJourney, StableDiffusion, and open source tools. stars
  • Stable Diffusion: A latent text-to-image diffusion model. stars
  • Weather Diffusion: Code for "Restoring Vision in Adverse Weather Conditions with Patch-Based Denoising Diffusion Models". stars
  • DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis. stars
  • Dall-E Playground: A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini). stars
  • MM-CelebA-HQ-Dataset: A large-scale face image dataset that allows text-to-image generation, text-guided image manipulation, sketch-to-image generation, GANs for face generation and editing, image caption, and VQA. stars
  • Deep Daze: Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). stars
  • StyleMapGAN: Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing. stars
  • Kandinsky-2: Multilingual text2image latent diffusion model. stars
  • DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold. stars
  • Segment Anything: The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model. stars
  • Segment Anything 2: The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model. stars
  • MobileSAM: This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond! stars
  • FastSAM: Fast Segment Anything stars
  • Infinigen: Infinite Photorealistic Worlds using Procedural Generation. stars
  • DALLΒ·E 3
  • StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation. stars
  • AnyDoor: Zero-shot Object-level Image Customization. stars
  • DiT: Scalable Diffusion Models with Transformers. stars
  • BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion. stars
  • OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on. stars
  • VAR: Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". stars
  • Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation
πŸ’‘ Applications πŸ’‘
πŸ”Ž Detection πŸ”Ž

πŸŽ₯ Video πŸŽ₯

πŸ› οΈ Tools πŸ› οΈ
  • DeepFaceLab: DeepFaceLab is the leading software for creating deepfakes. stars
  • faceswap: Deepfakes Software For All. stars
  • dot: The Deepfake Offensive Toolkit. stars
  • SimSwap: An arbitrary face-swapping framework on images and videos with one single trained model! stars
  • faceswap-GAN: A denoising autoencoder + adversarial losses and attention mechanisms for face swapping. stars
  • Celeb DeepFakeForensics: A Large-scale Challenging Dataset for DeepFake Forensics. stars
  • VGen: A holistic video generation ecosystem for video generation building on diffusion models. stars
  • MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising. stars
  • GLEE: General Object Foundation Model for Images and Videos at Scale. stars
  • T-Rex: Towards Generic Object Detection via Text-Visual Prompt Synergy. stars
  • DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors. stars
  • Mora: More like Sora for Generalist Video Generation. stars
πŸ’‘ Applications πŸ’‘
  • face2face-demo: pix2pix demo that learns from facial landmarks and translates this into a face. stars
  • Faceswap-Deepfake-Pytorch: Faceswap with Pytorch or DeepFake with Pytorch. stars
  • Point-E: Point cloud diffusion for 3D model synthesis. stars
  • EGVSR: Efficient & Generic Video Super-Resolution. stars
  • STIT: Stitch it in Time: GAN-Based Facial Editing of Real Videos. stars
  • BackgroundMattingV2: Real-Time High-Resolution Background Matting. stars
  • MODNet: A Trimap-Free Portrait Matting Solution in Real Time. stars
  • Background-Matting: Background Matting: The World is Your Green Screen. stars
  • First Order Model: This repository contains the source code for the paper First Order Motion Model for Image Animation. stars
  • Articulated Animation: This repository contains the source code for the CVPR'2021 paper Motion Representations for Articulated Animation. stars
  • Real Time Person Removal: Removing people from complex backgrounds in real time using TensorFlow.js in the web browser. stars
  • AdaIN-style: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. stars
  • Frame Interpolation: Frame Interpolation for Large Motion. stars
  • Awesome-Image-Colorization: πŸ“š A collection of Deep Learning based Image Colorization and Video Colorization papers. stars
  • SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation. stars
  • roop: One-click deepfake (face swap). stars
  • StableVideo: Text-driven Consistency-aware Diffusion Video Editing. stars
  • MagicEdit: High-Fidelity Temporally Coherent Video Editing. stars
  • Rerender_A_Video: Zero-Shot Text-Guided Video-to-Video Translation. stars
  • DreamEditor: Text-Driven 3D Scene Editing with Neural Fields. stars
  • DreamEditor: Real-Time 4D View Synthesis at 4K Resolution. stars
  • AnimateAnyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation. stars
  • Moore-AnimateAnyone: This repository reproduces AnimateAnyone. stars
  • audio2photoreal: From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations. stars
  • MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation
  • LWM: A general-purpose large-context multimodal autoregressive model. It is trained on a large dataset of diverse long videos and books using RingAttention, and can perform language, image, and video understanding and generation. stars
  • AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation. stars
  • Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance. stars
  • Streamv2v: Streaming Video-to-Video Translation with Feature Banks. stars
  • Deep-Live-Cam: Real time face swap and one-click video deepfake with only a single image. stars
  • Sapiens: Foundation for Human Vision Models. stars
πŸ”Ž Detection πŸ”Ž

πŸ“„ Text πŸ“„

πŸ› οΈ Tools πŸ› οΈ
  • GLM-130B: An Open Bilingual Pre-Trained Model. stars
  • LongtermChatExternalSources: GPT-3 chatbot with long-term memory and external sources. stars
  • sketch: AI code-writing assistant that understands data content. stars
  • LangChain: ⚑ Building applications with LLMs through composability ⚑. stars
  • ChatGPT Wrapper: API for interacting with ChatGPT using Python and from Shell. stars
  • openai-python: The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. stars
  • Beto: Spanish version of the BERT model. stars
  • GPT-Code-Clippy: GPT-Code-Clippy (GPT-CC) is an open source version of GitHub Copilot, a language model (based on GPT-3) called GPT-Codex. stars
  • GPT Neo: An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. stars
  • ctrl: Conditional Transformer Language Model for Controllable Generation. stars
  • Llama: Inference code for LLaMA models. stars
  • Llama2
  • Llama Guard 3
  • UL2 20B: An Open Source Unified Language Learner
  • burpgpt: A Burp Suite extension that integrates OpenAI's GPT to perform an additional passive scan for discovering highly bespoke vulnerabilities, and enables running traffic-based analysis of any type. stars
  • Ollama: Get up and running with Llama 2 and other large language models locally. stars
  • SneakyPrompt: Jailbreaking Text-to-image Generative Models. stars
    • Copilot-For-Security: A generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale, while remaining compliant to responsible AI principles. stars
  • LM Studio: Discover, download, and run local LLMs
  • Bypass GPT: Convert AI Text to Human-like Content
  • MGM: The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with image understanding, reasoning, and generation simultaneously. stars
  • Secret Llama: Fully private LLM chatbot that runs entirely with a browser with no server needed. Supports Mistral and LLama 3. stars
  • Llama3: The official Meta Llama 3 GitHub site. stars
πŸ”Ž Detection πŸ”Ž
πŸ’‘ Applications πŸ’‘

πŸ“š Misc πŸ“š

πŸ“Š Surveys πŸ“Š

πŸ—£ Maintainers πŸ—£


Miguel HernΓ‘ndez

JosΓ© Ignacio Escribano

©️ License ©️

License: CC BY-SA 4.0
