Awesome-LLM4Cybersecurity
An overview of LLMs for cybersecurity.
Stars: 273
The repository 'Awesome-LLM4Cybersecurity' provides a comprehensive overview of the applications of Large Language Models (LLMs) in cybersecurity. It includes a systematic literature review covering topics such as constructing cybersecurity-oriented domain LLMs, potential applications of LLMs in cybersecurity, and research directions in the field. The repository analyzes various benchmarks, datasets, and applications of LLMs in cybersecurity tasks like threat intelligence, fuzzing, vulnerabilities detection, insecure code generation, program repair, anomaly detection, and LLM-assisted attacks.
README:
When LLMs Meet Cybersecurity: A Systematic Literature Review
π₯ Updates
π[2024-09-21] We have updated the related papers up to Aug 31st, with 75 new papers added (2024.06.01-2024.08.31).
π[2024-06-14] We have updated the related papers up to May 31st, with 37 new papers added (2024.03.20-2024.05.31).
- When LLMs Meet Cybersecurity: A Systematic Literature Review
- π₯ Updates
- π Introduction
- π© Features
-
π Literatures
- πBibTeX
π Introduction
We are excited to present "When LLMs Meet Cybersecurity: A Systematic Literature Review," a comprehensive overview of LLM applications in cybersecurity.
We seek to address three key questions:
- RQ1: How to construct cyber security-oriented domain LLMs?
- RQ2: What are the potential applications of LLMs in cybersecurity?
- RQ3: What are the existing challenges and further research directions about the application of LLMs in cybersecurity?
π© Features
(2023.03.20) Our study encompasses an analysis of over 180 works, spanning across 25 LLMs and more than 10 downstream scenarios.
π Literatures
RQ1: How to construct cybersecurity-oriented domain LLMs?
Cybersecurity Evaluation Benchmarks
-
CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity [paper] 2024.02.12
-
SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models [paper] 2023
-
SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security [paper] 2023.12.26
-
Securityeval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques. [paper] 2022.11.09
-
Can llms patch security issues? [paper] 2024.02.19
-
DebugBench: Evaluating Debugging Capability of Large Language Models [paper] 2024.01.11
-
An empirical study of netops capability of pre-trained large language models. [paper] 2023.09.19
-
OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models [paper] 2024.02.16
-
Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models [paper] 2023.12.07
-
LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluations [paper] 2023.03.16
-
Can LLMs Understand Computer Networks? Towards a Virtual System Administrator [paper] 2024.04.22
-
Assessing Cybersecurity Vulnerabilities in Code Large Language Models [paper] 2024.04.29
-
SECURE: Benchmarking Generative Large Language Models for Cybersecurity Advisory [paper] 2024.05.30
-
NYU CTF Dataset: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security [paper] 2024.06.09
-
eyeballvul: a future-proof benchmark for vulnerability detection in the wild [paper] 2024.07.11
-
CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models [paper] 2024.08.03
-
AttackER: Towards Enhancing Cyber-Attack Attribution with a Named Entity Recognition Dataset [paper] 2024.08.09
Fine-tuned Domain LLMs for Cybersecurity
-
SecureFalcon: The Next Cyber Reasoning System for Cyber Security [paper] 2023.07.13
-
Owl: A Large Language Model for IT Operations [paper] 2023.09.17
-
HackMentor: Fine-tuning Large Language Models for Cybersecurity [paper] 2023.09
-
Large Language Models for Test-Free Fault Localization [paper] 2023.10.03
-
Finetuning Large Language Models for Vulnerability Detection [paper] 2024.02.29
-
RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair [paper] 2024.03.11
-
Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract Code Using Vulnerability-constrained Decoding [paper] 2023.10.06
-
Instruction Tuning for Secure Code Generation [paper] 2024.02.14
-
Nova+: Generative Language Models for Binaries [paper] 2023.11.27
-
Assessing LLMs in Malicious Code Deobfuscation of Real-world Malware Campaigns [paper] 2024.04.30
-
Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models [paper] 2024.06.02
-
Security Vulnerability Detection with Multitask Self-Instructed Fine-Tuning of Large Language Models [paper] 2024.06.09
-
A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Automated Program Repair [paper] 2024.06.09
-
IoT-LM: Large Multisensory Language Models for the Internet of Things [paper] 2024.07.13
-
CyberPal.AI: Empowering LLMs with Expert-Driven Cybersecurity Instructions [paper] 2024.08.18
RQ2: What are the potential applications of LLMs in cybersecurity?
Threat Intelligence
-
LOCALINTEL: Generating Organizational Threat Intelligence from Global and Local Cyber Knowledge [paper] 2024.01.18
-
AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation [paper] 2023.10.04
-
On the Uses of Large Language Models to Interpret Ambiguous Cyberattack Descriptions [paper] 2023.08.22
-
Advancing TTP Analysis: Harnessing the Power of Encoder-Only and Decoder-Only Language Models with Retrieval Augmented Generation [paper] 2024.01.12
-
An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures [paper] 2023.08.09
-
ChatGPT, Llama, can you write my report? An experiment on assisted digital forensics reports written using (Local) Large Language Models [paper] 2023.12.22
-
Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild [paper] 2023.07.14
-
Cupid: Leveraging ChatGPT for More Accurate Duplicate Bug Report Detection [paper] 2023.08.27
-
HW-V2W-Map: Hardware Vulnerability to Weakness Mapping Framework for Root Cause Analysis with GPT-assisted Mitigation Suggestion [paper] 2023.12.21
-
Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4 [paper] 2023.09.28
-
Evaluation of LLM Chatbots for OSINT-based Cyber Threat Awareness [paper] 2024.03.13
-
Crimson: Empowering Strategic Reasoning in Cybersecurity through Large Language Models [paper] 2024.03.01
-
SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence [paper] 2024.05.06
-
AttacKG+:Boosting Attack Knowledge Graph Construction with Large Language Models [paper] 2024.05.08
-
Actionable Cyber Threat Intelligence using Knowledge Graphs and Large Language Models [paper] 2024.06.30
-
LLMCloudHunter: Harnessing LLMs for Automated Extraction of Detection Rules from Cloud-Based CTI [paper] 2024.07.06
-
Using LLMs to Automate Threat Intelligence Analysis Workflows in Security Operation Centers [paper] 2024.07.18
-
Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features [paper] 2024.08.09
-
The Use of Large Language Models (LLM) for Cyber Threat Intelligence (CTI) in Cybercrime Forums [paper] 2024.08.08
-
A RAG-Based Question-Answering Solution for Cyber-Attack Investigation and Attribution [paper] 2024.08.12
-
Usefulness of data flow diagrams and large language models for security threat validation: a registered report [paper] 2024.08.14
-
KGV: Integrating Large Language Models with Knowledge Graphs for Cyber Threat Intelligence Credibility Assessment [paper] 2024.08.15
FUZZ
-
Augmenting Greybox Fuzzing with Generative AI [paper] 2023.06.11
-
How well does LLM generate security tests? [paper] 2023.10.03
-
Fuzz4All: Universal Fuzzing with Large Language Models [paper] 2024.01.15
-
CODAMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models [paper] 2023.07.26
-
Understanding Large Language Model Based Fuzz Driver Generation [paper] 2023.07.24
-
Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models [paper] 2023.06.07
-
Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT [paper] 2023.04.04
-
Large language model guided protocol fuzzing [paper] 2024.02.26
-
Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing [paper] 2024.03.06
-
When Fuzzing Meets LLMs: Challenges and Opportunities [paper] 2024.04.25
-
An Exploratory Study on Using Large Language Models for Mutation Testing [paper] 2024.06.14
Vulnerabilities Detection
-
Evaluation of ChatGPT Model for Vulnerability Detection [paper] 2023.04.12
-
Detecting software vulnerabilities using Language Models [paper] 2023.02.23
-
Software Vulnerability Detection using Large Language Models [paper] 2023.09.02
-
Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities [paper] 2023.11.16
-
Software Vulnerability and Functionality Assessment using LLMs [paper] 2024.03.13
-
Finetuning Large Language Models for Vulnerability Detection [paper] 2024.03.01
-
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models [paper] 2023.11.15
-
DefectHunter: A Novel LLM-Driven Boosted-Conformer-based Code Vulnerability Detection Mechanism [paper] 2023.09.27
-
Prompt-Enhanced Software Vulnerability Detection Using ChatGPT [paper] 2023.08.24
-
Using ChatGPT as a Static Application Security Testing Tool [paper] 2023.08.28
-
LLbezpeky: Leveraging Large Language Models for Vulnerability Detection [paper] 2024.01.13
-
Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives [paper] 2023.10.16
-
Software Vulnerability Detection with GPT and In-Context Learning [paper] 2024.01.08
-
GPTScan: Detecting Logic Vulnerabilities in Smart Contracts by Combining GPT with Program Analysis [paper] 2023.12.25
-
VulLibGen: Identifying Vulnerable Third-Party Libraries via Generative Pre-Trained Model [paper] 2023.08.09
-
LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning [paper] 2024.01.29
-
Large Language Models for Test-Free Fault Localization [paper] 2023.10.03
-
Multi-role Consensus through LLMs Discussions for Vulnerability Detection [paper] 2024.03.21
-
How ChatGPT is Solving Vulnerability Management Problem [paper] 2023.11.11
-
DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection [paper] 2023.08.09
-
The FormAI Dataset: Generative AI in Software Security through the Lens of Formal Verification [paper] 2023.09.02
-
How Far Have We Gone in Vulnerability Detection Using Large Language Models [paper] 2023.12.22
-
Large Language Model for Vulnerability Detection and Repair: Literature Review and Roadmap [paper] 2024.04.04
-
DLAP: A Deep Learning Augmented Large Language Model Prompting Framework for Software Vulnerability Detection [paper] 2024.05.02
-
Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study [paper] 2024.05.24
-
LLM-Assisted Static Analysis for Detecting Security Vulnerabilities [paper] 2024.05.27
-
Generalization-Enhanced Code Vulnerability Detection via Multi-Task Instruction Fine-Tuning [paper] 2024.06.06
-
Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG [paper] 2024.06.19
-
MALSIGHT: Exploring Malicious Source Code and Benign Pseudocode for Iterative Binary Malware Summarization [paper] 2024.06.26
-
Assessing the Effectiveness of LLMs in Android Application Vulnerability Analysis [paper] 2024.06.27
-
Detect Llama -- Finding Vulnerabilities in Smart Contracts using Large Language Models [paper] 2024.07.12
-
Static Detection of Filesystem Vulnerabilities in Android Systems [paper] 2024.07.16
-
SCoPE: Evaluating LLMs for Software Vulnerability Detection [paper] 2024.07.19
-
Comparison of Static Application Security Testing Tools and Large Language Models for Repo-level Vulnerability Detection [paper] 2024.07.23
-
Towards Effectively Detecting and Explaining Vulnerabilities Using Large Language Models [paper] 2024.08.08
-
Harnessing the Power of LLMs in Source Code Vulnerability Detection [paper] 2024.08.07
-
Exploring RAG-based Vulnerability Augmentation with LLMs [paper] 2024.08.08
-
LLM-Enhanced Static Analysis for Precise Identification of Vulnerable OSS Versions [paper] 2024.08.14
-
ANVIL: Anomaly-based Vulnerability Identification without Labelled Training Data [paper] 2024.08.28
-
Outside the Comfort Zone: Analysing LLM Capabilities in Software Vulnerability Detection [paper] 2024.08.29
Insecure code Generation
-
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants [paper] 2023.02.27
-
Bugs in Large Language Models Generated Code [paper] 2024.03.18
-
Asleep at the Keyboard? Assessing the Security of GitHub Copilotβs Code Contributions [paper] 2021.12.16
-
The Effectiveness of Large Language Models (ChatGPT and CodeBERT) for Security-Oriented Code Analysis [paper] 2023.08.29
-
No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT [paper] 2023.08.09
-
Generate and Pray: Using SALLMS to Evaluate the Security of LLM Generated Code [paper] 2023.11.01
-
Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation [paper] 2023.10.30
-
Can Large Language Models Identify And Reason About Security Vulnerabilities? Not Yet [paper] 2023.12.19
-
A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages [paper] 2023.08.08
-
How Secure is Code Generated by ChatGPT? [[paper]](How Secure is Code Generated by ChatGPT?) 2023.04.19
-
Large Language Models for Code: Security Hardening and Adversarial Testing [paper] 2023.09.29
-
Pop Quiz! Can a Large Language Model Help With Reverse Engineering? [paper] 2022.02.02
-
LLM4Decompile: Decompiling Binary Code with Large Language Models [paper] 2024.03.08
-
Large Language Models for Code Analysis: Do LLMs Really Do Their Job? [paper] 2024.03.05
-
Understanding Programs by Exploiting (Fuzzing) Test Cases [paper] 2023.01.12
-
Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [paper] 2023.08.07
-
Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4 [paper] 2023.12.13
-
Using ChatGPT to Analyze Ransomware Messages and to Predict Ransomware Threats [paper] 2023.11.21
-
Shifting the Lens: Detecting Malware in npm Ecosystem with Large Language Models [paper] 2024.03.18
-
DebugBench: Evaluating Debugging Capability of Large Language Models [paper] 2024.01.11
-
Make LLM a Testing Expert: Bringing Human-like Interaction to Mobile GUI Testing via Functionality-aware Decisions [paper] 2023.10.24
-
FLAG: Finding Line Anomalies (in code) with Generative AI [paper] 2023.07.22
-
Evolutionary Large Language Models for Hardware Security: A Comparative Survey [paper] 2024.04.25
-
Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models [paper] 2024.04.29
-
LLM Security Guard for Code [paper] 2024.05.03
-
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff [paper] 2024.05.30
-
DistiLRR: Transferring Code Repair for Low-Resource Programming Languages [paper] 2024.06.20
-
Is Your AI-Generated Code Really Safe? Evaluating Large Language Models on Secure Code Generation with CodeSecEval [paper] 2024.07.04
-
An Exploratory Study on Fine-Tuning Large Language Models for Secure Code Generation [paper] 2024.08.17
Program Repair
-
Automatic Program Repair with OpenAI's Codex: Evaluating QuixBugs [paper] 2023.11.06
-
An Analysis of the Automatic Bug Fixing Performance of ChatGPT [paper] 2023.01.20
-
AI-powered patching: the future of automated vulnerability fixes [paper] 2024.01.31
-
Practical Program Repair in the Era of Large Pre-trained Language Models [paper] 2022.10.25
-
Security Code Review by LLMs: A Deep Dive into Responses [paper] 2024.01.29
-
Examining Zero-Shot Vulnerability Repair with Large Language Models [paper] 2022.08.15
-
How Effective Are Neural Networks for Fixing Security Vulnerabilities [paper] 2023.05.29
-
Can LLMs Patch Security Issues? [paper] 2024.02.19
-
InferFix: End-to-End Program Repair with LLMs [paper] 2023.03.13
-
ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel Patching [paper] 2023.08.24
-
DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection [paper] 2023.08.14
-
Fixing Hardware Security Bugs with Large Language Models [paper] 2023.02.02
-
A Study of Vulnerability Repair in JavaScript Programs with Large Language Models [paper] 2023.03.19
-
Enhanced Automated Code Vulnerability Repair using Large Language Models [paper] 2024.01.08
-
Teaching Large Language Models to Self-Debug [paper] 2023.10.05
-
Better Patching Using LLM Prompting, via Self-Consistency [paper] 2023.08.16
-
Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair [paper] 2023.11.08
-
LLM-Powered Code Vulnerability Repair with Reinforcement Learning and Semantic Reward [paper] 2024.02.22
-
ContrastRepair: Enhancing Conversation-Based Automated Program Repair via Contrastive Test Case Pairs [paper] 2024.03.07
-
When Large Language Models Confront Repository-Level Automatic Program Repair: How Well They Done? [paper] 2023.03.01
-
Aligning LLMs for FL-free Program Repair [paper] 2024.04.13
-
Multi-Objective Fine-Tuning for Enhanced Program Repair with LLMs [paper] 2024.04.22
-
How Far Can We Go with Practical Function-Level Program Repair? [paper] 2024.04.19
-
Revisiting Unnaturalness for Automated Program Repair in the Era of Large Language Models [paper] 2024.03.23
-
A Systematic Literature Review on Large Language Models for Automated Program Repair [paper] 2024.05.12
-
Automated Repair of AI Code with Large Language Models and Formal Verification [paper] 2024.05.14
-
A Case Study of LLM for Automated Vulnerability Repair: Assessing Impact of Reasoning and Patch Validation Feedback [paper] 2024.05.24
-
Hybrid Automated Program Repair by Combining Large Language Models and Program Analysis [paper] 2024.06.04
-
Automated C/C++ Program Repair for High-Level Synthesis via Large Language Models [paper] 2024.07.04
-
ThinkRepair: Self-Directed Automated Program Repair [paper] 2024.07.30
-
Revisiting Evolutionary Program Repair via Code Language Model [paper] 2024.08.20
-
RePair: Automated Program Repair with Process-based Feedback [paper] 2024.08.21
-
Enhancing LLM-Based Automated Program Repair with Design Rationales [paper] 2024.08.22
-
Automated Software Vulnerability Patching using Large Language Models [paper] 2024.08.24
-
MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair [paper] 2024.08.26
Anomaly Detection
-
Benchmarking Large Language Models for Log Analysis, Security, and Interpretation [paper] 2023.11.24
-
Log-based Anomaly Detection based on EVT Theory with feedback [paper] 2023.09.30
-
LogGPT: Exploring ChatGPT for Log-Based Anomaly Detection [paper] 2023.09.14
-
LogGPT: Log Anomaly Detection via GPT [paper] 2023.12.11
-
Interpretable Online Log Analysis Using Large Language Models with Prompt Strategies [paper] 2024.01.26
-
Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging [paper] 2024.03.02
-
Web Content Filtering through knowledge distillation of Large Language Models [paper] 2023.05.10
-
Application of Large Language Models to DDoS Attack Detection [paper] 2024.02.05
-
An Improved Transformer-based Model for Detecting Phishing, Spam, and Ham: A Large Language Model Approach [paper] 2023.11.12
-
Evaluating the Performance of ChatGPT for Spam Email Detection [paper] 2024.02.23
-
Prompted Contextual Vectors for Spear-Phishing Detection [paper] 2024.02.14
-
Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models [paper] 2023.11.30
-
Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection [paper] 2023.10.30
-
Revolutionizing Cyber Threat Detection with Large Language Models: A privacy-preserving BERT-based Lightweight Model for IoT/IIoT Devices [paper] 2024.02.08
-
HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs) [paper] 2023.09.27
-
ChatGPT for digital forensic investigation: The good, the bad, and the unknown [paper] 2023.07.10
-
Large Language Models Spot Phishing Emails with Surprising Accuracy: A Comparative Analysis of Performance [paper] 2024.04.23
-
LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing [paper] 2024.04.27
-
DoLLM: How Large Language Models Understanding Network Flow Data to Detect Carpet Bombing DDoS [paper] 2024.05.12
-
Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection [paper] 2024.05.17
-
Log Parsing with Self-Generated In-Context Learning and Self-Correction [paper] 2024.06.05
-
Generative AI-in-the-loop: Integrating LLMs and GPTs into the Next Generation Networks [paper] 2024.06.06
-
ULog: Unsupervised Log Parsing with Large Language Models through Log Contrastive Units [paper] 2024.06.11
-
Anomaly Detection on Unstable Logs with GPT Models [paper] 2024.06.11
-
Defending Against Social Engineering Attacks in the Age of LLMs [paper] 2024.06.18
-
LogEval: A Comprehensive Benchmark Suite for Large Language Models In Log Analysis [paper] 2024.07.02
-
Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection [paper] 2024.07.12
-
Towards Explainable Network Intrusion Detection using Large Language Models [paper] 2024.08.08
-
Utilizing Large Language Models to Optimize the Detection and Explainability of Phishing Websites [paper] 2024.08.11
-
Multimodal Large Language Models for Phishing Webpage Detection and Identification [paper] 2024.08.12
-
Transformers and Large Language Models for Efficient Intrusion Detection Systems: A Comprehensive Survey [paper] 2024.08.14
-
Automated Phishing Detection Using URLs and Webpages [paper] 2024.08.16
-
LogParser-LLM: Advancing Efficient Log Parsing with Large Language Models [paper] 2024.08.25
-
XG-NID: Dual-Modality Network Intrusion Detection using a Heterogeneous Graph Neural Network and Large Language Model [paper] 2024.08.27
LLM Assisted Attack
-
Identifying and mitigating the security risks of generative ai [paper] 2023.12.29
-
Impact of Big Data Analytics and ChatGPT on Cybersecurity [paper] 2023.05.22
-
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy [paper] 2023.07.03
-
LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing [paper] 2023.10.10
-
Malla: Demystifying Real-world Large Language Model Integrated Malicious Services [paper] 2024.01.06
-
Evaluating LLMs for Privilege-Escalation Scenarios [paper] 2023.10.23
-
Using Large Language Models for Cybersecurity Capture-The-Flag Challenges and Certification Questions [paper] 2023.08.21
-
Exploring the Dark Side of AI: Advanced Phishing Attack Design and Deployment Using ChatGPT [paper] 2023.09.19
-
From Chatbots to PhishBots? - Preventing Phishing scams created using ChatGPT, Google Bard and Claude [paper] 2024.03.10
-
From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads [paper] 2023.05.24
-
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool [paper] 2023.08.13
-
AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks [paper] 2024.03.02
-
RatGPT: Turning online LLMs into Proxies for Malware Attacks [paper] 2023.09.07
-
Getting pwnβd by AI: Penetration Testing with Large Language Models [paper] 2023.08.17
-
Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study Using the TRAPD Method [paper] 2024.06.18
-
Tactics, Techniques, and Procedures (TTPs) in Interpreted Malware: A Zero-Shot Generation with Large Language Models [paper] 2024.07.11
-
The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure [paper] 2024.07.22
-
From Sands to Mansions: Enabling Automatic Full-Life-Cycle Cyberattack Construction with LLM [paper] 2024.07.24
-
PenHeal: A Two-Stage LLM Framework for Automated Pentesting and Optimal Remediation [paper] 2024.07.25
-
Practical Attacks against Black-box Code Completion Engines [paper] 2024.08.05
-
Using Retriever Augmented Large Language Models for Attack Graph Generation [paper] 2024.08.11
-
CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher [paper] 2024.08.21
-
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [paper] 2024.08.23
Others
-
An LLM-based Framework for Fingerprinting Internet-connected Devices [paper] 2023.10.24
-
Anatomy of an AI-powered malicious social botnet [paper] 2023.07.30
-
Just-in-Time Security Patch Detection -- LLM At the Rescue for Data Augmentation [paper] 2023.12.12
-
LLM for SoC Security: A Paradigm Shift [paper] 2023.10.09
-
Harnessing the Power of LLM to Support Binary Taint Analysis [paper] 2023.10.12
-
Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations [paper] 2023.12.07
-
LLM in the Shell: Generative Honeypots [paper] 2024.02.09
-
Employing LLMs for Incident Response Planning and Review [paper] 2024.03.02
-
Enhancing Network Management Using Code Generated by Large Language Models [[paper]] (https://arxiv.org/abs/2308.06261) 2023.08.11
-
Prompting Is All You Need: Automated Android Bug Replay with Large Language Models [paper] 2023.07.18
-
Is Stack Overflow Obsolete? An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions [paper] 2024.02.07
-
How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models [paper] 2024.04.16
-
Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models [paper] 2024.04.24
-
AppPoet: Large Language Model based Android malware detection via multi-view prompt engineering [paper] 2024.04.29
-
Large Language Models for Cyber Security: A Systematic Literature Review [paper] 2024.05.08
-
Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities [paper] 2024.05.08
-
LLMPot: Automated LLM-based Industrial Protocol and Physical Process Emulation for ICS Honeypots [paper] 2024.05.10
-
A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions [paper] 2024.05.23
-
Exploring the Efficacy of Large Language Models (GPT-4) in Binary Reverse Engineering [paper] 2024.06.09
-
Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications [paper] 2024.06.16
-
On Large Language Models in National Security Applications [paper] 2024.07.03
-
Disassembling Obfuscated Executables with LLM [paper] 2024.07.12
-
MoRSE: Bridging the Gap in Cybersecurity Expertise with Retrieval Augmented Generation [paper] 2024.07.22
-
MistralBSM: Leveraging Mistral-7B for Vehicular Networks Misbehavior Detection [paper] 2024.07.26
-
Beyond Detection: Leveraging Large Language Models for Cyber Attack Prediction in IoT Networks [paper] 2024.08.26
RQ3: What are further research directions about the application of LLMs in cybersecurity?
Further Research: Agent4Cybersecurity
-
Cybersecurity Issues and Challenges [paper] 2022.08
-
A unified cybersecurity framework for complex environments [paper] 2018.09.26
-
LLMind: Orchestrating AI and IoT with LLM for Complex Task Execution [paper] 2024.02.20
-
Out of the Cage: How Stochastic Parrots Win in Cyber Security Environments [paper] 2023.08.28
-
Llm agents can autonomously hack websites. [paper] 2024.02.16
-
Nissist: An Incident Mitigation Copilot based on Troubleshooting Guides [paper] 2024.02.27
-
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage [paper] 2023.11.07
-
The Rise and Potential of Large Language Model Based Agents: A Survey [paper] 2023.09.19
-
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs [paper] 2023.10.03
-
From Summary to Action: Enhancing Large Language Models for Complex Tasks with Open World APIs [paper] 2024.02.28
-
If llm is the wizard, then code is the wand: A survey on how code empowers large language models to serve as intelligent agents. [paper] 2024.01.08
-
TaskWeaver: A Code-First Agent Framework [paper] 2023.12.01
-
Large Language Models for Networking: Applications, Enabling Techniques, and Challenges [paper] 2023.11.29
-
R-Judge: Benchmarking Safety Risk Awareness for LLM Agents [paper] 2024.02.18
-
WIPI: A New Web Threat for LLM-Driven Web Agents [paper] 2024.02.26
-
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents [paper] 2024.03.25
-
LLM Agents can Autonomously Exploit One-day Vulnerabilities [paper] 2024.04.17
-
Large Language Models for Networking: Workflow, Advances and Challenges [paper] 2024.04.29
-
Generative AI in Cybersecurity [paper] 2024.05.02
-
Generative AI and Large Language Models for Cyber Security: All Insights You Need [paper] 2024.05.21
-
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities [paper] 2024.06.02
-
Using LLMs to Automate Threat Intelligence Analysis Workflows in Security Operation Centers [paper] 2024.07.18
-
PhishAgent: A Robust Multimodal Agent for Phishing Webpage Detection [paper] 2024.08.20
πBibTeX
@misc{zhang2024llms,
title={When LLMs Meet Cybersecurity: A Systematic Literature Review},
author={Jie Zhang and Haoyu Bu and Hui Wen and Yu Chen and Lun Li and Hongsong Zhu},
year={2024},
eprint={2405.03644},
archivePrefix={arXiv},
primaryClass={cs.CR}
}
π₯ Updates
π[2024-09-21] We have updated the related papers up to Aug 31st, with 75 new papers added (2024.06.01-2024.08.31).
π[2024-06-14] We have updated the related papers up to May 31st, with 37 new papers added (2024.03.20-2024.05.31).
- When LLMs Meet Cybersecurity: A Systematic Literature Review
- π₯ Updates
- π Introduction
- π© Features
- π Literatures
- πBibTeX
π Introduction
We are excited to present "When LLMs Meet Cybersecurity: A Systematic Literature Review," a comprehensive overview of LLM applications in cybersecurity.
We seek to address three key questions:
- RQ1: How to construct cyber security-oriented domain LLMs?
- RQ2: What are the potential applications of LLMs in cybersecurity?
- RQ3: What are the existing challenges and further research directions about the application of LLMs in cybersecurity?
π© Features
(2023.03.20) Our study encompasses an analysis of over 180 works, spanning across 25 LLMs and more than 10 downstream scenarios.
π Literatures
RQ1: How to construct cybersecurity-oriented domain LLMs?
Cybersecurity Evaluation Benchmarks
-
CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity [paper] 2024.02.12
-
SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models [paper] 2023
-
SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security [paper] 2023.12.26
-
Securityeval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques. [paper] 2022.11.09
-
Can llms patch security issues? [paper] 2024.02.19
-
DebugBench: Evaluating Debugging Capability of Large Language Models [paper] 2024.01.11
-
An empirical study of netops capability of pre-trained large language models. [paper] 2023.09.19
-
OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models [paper] 2024.02.16
-
Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models [paper] 2023.12.07
-
LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluations [paper] 2023.03.16
-
Can LLMs Understand Computer Networks? Towards a Virtual System Administrator [paper] 2024.04.22
-
Assessing Cybersecurity Vulnerabilities in Code Large Language Models [paper] 2024.04.29
-
SECURE: Benchmarking Generative Large Language Models for Cybersecurity Advisory [paper] 2024.05.30
-
NYU CTF Dataset: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security [paper] 2024.06.09
-
eyeballvul: a future-proof benchmark for vulnerability detection in the wild [paper] 2024.07.11
-
CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models [paper] 2024.08.03
-
AttackER: Towards Enhancing Cyber-Attack Attribution with a Named Entity Recognition Dataset [paper] 2024.08.09
Fine-tuned Domain LLMs for Cybersecurity
-
SecureFalcon: The Next Cyber Reasoning System for Cyber Security [paper] 2023.07.13
-
Owl: A Large Language Model for IT Operations [paper] 2023.09.17
-
HackMentor: Fine-tuning Large Language Models for Cybersecurity [paper] 2023.09
-
Large Language Models for Test-Free Fault Localization [paper] 2023.10.03
-
Finetuning Large Language Models for Vulnerability Detection [paper] 2024.02.29
-
RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair [paper] 2024.03.11
-
Efficient Avoidance of Vulnerabilities in Auto-completed Smart Contract Code Using Vulnerability-constrained Decoding [paper] 2023.10.06
-
Instruction Tuning for Secure Code Generation [paper] 2024.02.14
-
Nova+: Generative Language Models for Binaries [paper] 2023.11.27
-
Assessing LLMs in Malicious Code Deobfuscation of Real-world Malware Campaigns [paper] 2024.04.30
-
Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models [paper] 2024.06.02
-
Security Vulnerability Detection with Multitask Self-Instructed Fine-Tuning of Large Language Models [paper] 2024.06.09
-
A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Automated Program Repair [paper] 2024.06.09
-
IoT-LM: Large Multisensory Language Models for the Internet of Things [paper] 2024.07.13
-
CyberPal.AI: Empowering LLMs with Expert-Driven Cybersecurity Instructions [paper] 2024.08.18
RQ2: What are the potential applications of LLMs in cybersecurity?
Threat Intelligence
-
LOCALINTEL: Generating Organizational Threat Intelligence from Global and Local Cyber Knowledge [paper] 2024.01.18
-
AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation [paper] 2023.10.04
-
On the Uses of Large Language Models to Interpret Ambiguous Cyberattack Descriptions [paper] 2023.08.22
-
Advancing TTP Analysis: Harnessing the Power of Encoder-Only and Decoder-Only Language Models with Retrieval Augmented Generation [paper] 2024.01.12
-
An Empirical Study on Using Large Language Models to Analyze Software Supply Chain Security Failures [paper] 2023.08.09
-
ChatGPT, Llama, can you write my report? An experiment on assisted digital forensics reports written using (Local) Large Language Models [paper] 2023.12.22
-
Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild [paper] 2023.07.14
-
Cupid: Leveraging ChatGPT for More Accurate Duplicate Bug Report Detection [paper] 2023.08.27
-
HW-V2W-Map: Hardware Vulnerability to Weakness Mapping Framework for Root Cause Analysis with GPT-assisted Mitigation Suggestion [paper] 2023.12.21
-
Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4 [paper] 2023.09.28
-
Evaluation of LLM Chatbots for OSINT-based Cyber Threat Awareness [paper] 2024.03.13
-
Crimson: Empowering Strategic Reasoning in Cybersecurity through Large Language Models [paper] 2024.03.01
-
SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence [paper] 2024.05.06
-
AttacKG+:Boosting Attack Knowledge Graph Construction with Large Language Models [paper] 2024.05.08
-
Actionable Cyber Threat Intelligence using Knowledge Graphs and Large Language Models [paper] 2024.06.30
-
LLMCloudHunter: Harnessing LLMs for Automated Extraction of Detection Rules from Cloud-Based CTI [paper] 2024.07.06
-
Using LLMs to Automate Threat Intelligence Analysis Workflows in Security Operation Centers [paper] 2024.07.18
-
Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features [paper] 2024.08.09
-
The Use of Large Language Models (LLM) for Cyber Threat Intelligence (CTI) in Cybercrime Forums [paper] 2024.08.08
-
A RAG-Based Question-Answering Solution for Cyber-Attack Investigation and Attribution [paper] 2024.08.12
-
Usefulness of data flow diagrams and large language models for security threat validation: a registered report [paper] 2024.08.14
-
KGV: Integrating Large Language Models with Knowledge Graphs for Cyber Threat Intelligence Credibility Assessment [paper] 2024.08.15
FUZZ
-
Augmenting Greybox Fuzzing with Generative AI [paper] 2023.06.11
-
How well does LLM generate security tests? [paper] 2023.10.03
-
Fuzz4All: Universal Fuzzing with Large Language Models [paper] 2024.01.15
-
CODAMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models [paper] 2023.07.26
-
Understanding Large Language Model Based Fuzz Driver Generation [paper] 2023.07.24
-
Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models [paper] 2023.06.07
-
Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT [paper] 2023.04.04
-
Large language model guided protocol fuzzing [paper] 2024.02.26
-
Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing [paper] 2024.03.06
-
When Fuzzing Meets LLMs: Challenges and Opportunities [paper] 2024.04.25
-
An Exploratory Study on Using Large Language Models for Mutation Testing [paper] 2024.06.14
Vulnerabilities Detection
-
Evaluation of ChatGPT Model for Vulnerability Detection [paper] 2023.04.12
-
Detecting software vulnerabilities using Language Models [paper] 2023.02.23
-
Software Vulnerability Detection using Large Language Models [paper] 2023.09.02
-
Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities [paper] 2023.11.16
-
Software Vulnerability and Functionality Assessment using LLMs [paper] 2024.03.13
-
Finetuning Large Language Models for Vulnerability Detection [paper] 2024.03.01
-
The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models [paper] 2023.11.15
-
DefectHunter: A Novel LLM-Driven Boosted-Conformer-based Code Vulnerability Detection Mechanism [paper] 2023.09.27
-
Prompt-Enhanced Software Vulnerability Detection Using ChatGPT [paper] 2023.08.24
-
Using ChatGPT as a Static Application Security Testing Tool [paper] 2023.08.28
-
LLbezpeky: Leveraging Large Language Models for Vulnerability Detection [paper] 2024.01.13
-
Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives [paper] 2023.10.16
-
Software Vulnerability Detection with GPT and In-Context Learning [paper] 2024.01.08
-
GPTScan: Detecting Logic Vulnerabilities in Smart Contracts by Combining GPT with Program Analysis [paper] 2023.12.25
-
VulLibGen: Identifying Vulnerable Third-Party Libraries via Generative Pre-Trained Model [paper] 2023.08.09
-
LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning [paper] 2024.01.29
-
Large Language Models for Test-Free Fault Localization [paper] 2023.10.03
-
Multi-role Consensus through LLMs Discussions for Vulnerability Detection [paper] 2024.03.21
-
How ChatGPT is Solving Vulnerability Management Problem [paper] 2023.11.11
-
DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection [paper] 2023.08.09
-
The FormAI Dataset: Generative AI in Software Security through the Lens of Formal Verification [paper] 2023.09.02
-
How Far Have We Gone in Vulnerability Detection Using Large Language Models [paper] 2023.12.22
-
Large Language Model for Vulnerability Detection and Repair: Literature Review and Roadmap [paper] 2024.04.04
-
DLAP: A Deep Learning Augmented Large Language Model Prompting Framework for Software Vulnerability Detection [paper] 2024.05.02
-
Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study [paper] 2024.05.24
-
LLM-Assisted Static Analysis for Detecting Security Vulnerabilities [paper] 2024.05.27
-
Generalization-Enhanced Code Vulnerability Detection via Multi-Task Instruction Fine-Tuning [paper] 2024.06.06
-
Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG [paper] 2024.06.19
-
MALSIGHT: Exploring Malicious Source Code and Benign Pseudocode for Iterative Binary Malware Summarization [paper] 2024.06.26
-
Assessing the Effectiveness of LLMs in Android Application Vulnerability Analysis [paper] 2024.06.27
-
Detect Llama -- Finding Vulnerabilities in Smart Contracts using Large Language Models [paper] 2024.07.12
-
Static Detection of Filesystem Vulnerabilities in Android Systems [paper] 2024.07.16
-
SCoPE: Evaluating LLMs for Software Vulnerability Detection [paper] 2024.07.19
-
Comparison of Static Application Security Testing Tools and Large Language Models for Repo-level Vulnerability Detection [paper] 2024.07.23
-
Towards Effectively Detecting and Explaining Vulnerabilities Using Large Language Models [paper] 2024.08.08
-
Harnessing the Power of LLMs in Source Code Vulnerability Detection [paper] 2024.08.07
-
Exploring RAG-based Vulnerability Augmentation with LLMs [paper] 2024.08.08
-
LLM-Enhanced Static Analysis for Precise Identification of Vulnerable OSS Versions [paper] 2024.08.14
-
ANVIL: Anomaly-based Vulnerability Identification without Labelled Training Data [paper] 2024.08.28
-
Outside the Comfort Zone: Analysing LLM Capabilities in Software Vulnerability Detection [paper] 2024.08.29
Insecure code Generation
-
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants [paper] 2023.02.27
-
Bugs in Large Language Models Generated Code [paper] 2024.03.18
-
Asleep at the Keyboard? Assessing the Security of GitHub Copilotβs Code Contributions [paper] 2021.12.16
-
The Effectiveness of Large Language Models (ChatGPT and CodeBERT) for Security-Oriented Code Analysis [paper] 2023.08.29
-
No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT [paper] 2023.08.09
-
Generate and Pray: Using SALLMS to Evaluate the Security of LLM Generated Code [paper] 2023.11.01
-
Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation [paper] 2023.10.30
-
Can Large Language Models Identify And Reason About Security Vulnerabilities? Not Yet [paper] 2023.12.19
-
A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages [paper] 2023.08.08
-
How Secure is Code Generated by ChatGPT? [[paper]](How Secure is Code Generated by ChatGPT?) 2023.04.19
-
Large Language Models for Code: Security Hardening and Adversarial Testing [paper] 2023.09.29
-
Pop Quiz! Can a Large Language Model Help With Reverse Engineering? [paper] 2022.02.02
-
LLM4Decompile: Decompiling Binary Code with Large Language Models [paper] 2024.03.08
-
Large Language Models for Code Analysis: Do LLMs Really Do Their Job? [paper] 2024.03.05
-
Understanding Programs by Exploiting (Fuzzing) Test Cases [paper] 2023.01.12
-
Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [paper] 2023.08.07
-
Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4 [paper] 2023.12.13
-
Using ChatGPT to Analyze Ransomware Messages and to Predict Ransomware Threats [paper] 2023.11.21
-
Shifting the Lens: Detecting Malware in npm Ecosystem with Large Language Models [paper] 2024.03.18
-
DebugBench: Evaluating Debugging Capability of Large Language Models [paper] 2024.01.11
-
Make LLM a Testing Expert: Bringing Human-like Interaction to Mobile GUI Testing via Functionality-aware Decisions [paper] 2023.10.24
-
FLAG: Finding Line Anomalies (in code) with Generative AI [paper] 2023.07.22
-
Evolutionary Large Language Models for Hardware Security: A Comparative Survey [paper] 2024.04.25
-
Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models [paper] 2024.04.29
-
LLM Security Guard for Code [paper] 2024.05.03
-
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff [paper] 2024.05.30
-
DistiLRR: Transferring Code Repair for Low-Resource Programming Languages [paper] 2024.06.20
-
Is Your AI-Generated Code Really Safe? Evaluating Large Language Models on Secure Code Generation with CodeSecEval [paper] 2024.07.04
-
An Exploratory Study on Fine-Tuning Large Language Models for Secure Code Generation [paper] 2024.08.17
Program Repair
-
Automatic Program Repair with OpenAI's Codex: Evaluating QuixBugs [paper] 2023.11.06
-
An Analysis of the Automatic Bug Fixing Performance of ChatGPT [paper] 2023.01.20
-
AI-powered patching: the future of automated vulnerability fixes [paper] 2024.01.31
-
Practical Program Repair in the Era of Large Pre-trained Language Models [paper] 2022.10.25
-
Security Code Review by LLMs: A Deep Dive into Responses [paper] 2024.01.29
-
Examining Zero-Shot Vulnerability Repair with Large Language Models [paper] 2022.08.15
-
How Effective Are Neural Networks for Fixing Security Vulnerabilities [paper] 2023.05.29
-
Can LLMs Patch Security Issues? [paper] 2024.02.19
-
InferFix: End-to-End Program Repair with LLMs [paper] 2023.03.13
-
ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel Patching [paper] 2023.08.24
-
DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection [paper] 2023.08.14
-
Fixing Hardware Security Bugs with Large Language Models [paper] 2023.02.02
-
A Study of Vulnerability Repair in JavaScript Programs with Large Language Models [paper] 2023.03.19
-
Enhanced Automated Code Vulnerability Repair using Large Language Models [paper] 2024.01.08
-
Teaching Large Language Models to Self-Debug [paper] 2023.10.05
-
Better Patching Using LLM Prompting, via Self-Consistency [paper] 2023.08.16
-
Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair [paper] 2023.11.08
-
LLM-Powered Code Vulnerability Repair with Reinforcement Learning and Semantic Reward [paper] 2024.02.22
-
ContrastRepair: Enhancing Conversation-Based Automated Program Repair via Contrastive Test Case Pairs [paper] 2024.03.07
-
When Large Language Models Confront Repository-Level Automatic Program Repair: How Well They Done? [paper] 2023.03.01
-
Aligning LLMs for FL-free Program Repair [paper] 2024.04.13
-
Multi-Objective Fine-Tuning for Enhanced Program Repair with LLMs [paper] 2024.04.22
-
How Far Can We Go with Practical Function-Level Program Repair? [paper] 2024.04.19
-
Revisiting Unnaturalness for Automated Program Repair in the Era of Large Language Models [paper] 2024.03.23
-
A Systematic Literature Review on Large Language Models for Automated Program Repair [paper] 2024.05.12
-
Automated Repair of AI Code with Large Language Models and Formal Verification [paper] 2024.05.14
-
A Case Study of LLM for Automated Vulnerability Repair: Assessing Impact of Reasoning and Patch Validation Feedback [paper] 2024.05.24
-
Hybrid Automated Program Repair by Combining Large Language Models and Program Analysis [paper] 2024.06.04
-
Automated C/C++ Program Repair for High-Level Synthesis via Large Language Models [paper] 2024.07.04
-
ThinkRepair: Self-Directed Automated Program Repair [paper] 2024.07.30
-
Revisiting Evolutionary Program Repair via Code Language Model [paper] 2024.08.20
-
RePair: Automated Program Repair with Process-based Feedback [paper] 2024.08.21
-
Enhancing LLM-Based Automated Program Repair with Design Rationales [paper] 2024.08.22
-
Automated Software Vulnerability Patching using Large Language Models [paper] 2024.08.24
-
MergeRepair: An Exploratory Study on Merging Task-Specific Adapters in Code LLMs for Automated Program Repair [paper] 2024.08.26
Anomaly Detection
-
Benchmarking Large Language Models for Log Analysis, Security, and Interpretation [paper] 2023.11.24
-
Log-based Anomaly Detection based on EVT Theory with feedback [paper] 2023.09.30
-
LogGPT: Exploring ChatGPT for Log-Based Anomaly Detection [paper] 2023.09.14
-
LogGPT: Log Anomaly Detection via GPT [paper] 2023.12.11
-
Interpretable Online Log Analysis Using Large Language Models with Prompt Strategies [paper] 2024.01.26
-
Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging [paper] 2024.03.02
-
Web Content Filtering through knowledge distillation of Large Language Models [paper] 2023.05.10
-
Application of Large Language Models to DDoS Attack Detection [paper] 2024.02.05
-
An Improved Transformer-based Model for Detecting Phishing, Spam, and Ham: A Large Language Model Approach [paper] 2023.11.12
-
Evaluating the Performance of ChatGPT for Spam Email Detection [paper] 2024.02.23
-
Prompted Contextual Vectors for Spear-Phishing Detection [paper] 2024.02.14
-
Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models [paper] 2023.11.30
-
Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection [paper] 2023.10.30
-
Revolutionizing Cyber Threat Detection with Large Language Models: A privacy-preserving BERT-based Lightweight Model for IoT/IIoT Devices [paper] 2024.02.08
-
HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs) [paper] 2023.09.27
-
ChatGPT for digital forensic investigation: The good, the bad, and the unknown [paper] 2023.07.10
-
Large Language Models Spot Phishing Emails with Surprising Accuracy: A Comparative Analysis of Performance [paper] 2024.04.23
-
LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing [paper] 2024.04.27
-
DoLLM: How Large Language Models Understanding Network Flow Data to Detect Carpet Bombing DDoS [paper] 2024.05.12
-
Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection [paper] 2024.05.17
-
Log Parsing with Self-Generated In-Context Learning and Self-Correction [paper] 2024.06.05
-
Generative AI-in-the-loop: Integrating LLMs and GPTs into the Next Generation Networks [paper] 2024.06.06
-
ULog: Unsupervised Log Parsing with Large Language Models through Log Contrastive Units [paper] 2024.06.11
-
Anomaly Detection on Unstable Logs with GPT Models [paper] 2024.06.11
-
Defending Against Social Engineering Attacks in the Age of LLMs [paper] 2024.06.18
-
LogEval: A Comprehensive Benchmark Suite for Large Language Models In Log Analysis [paper] 2024.07.02
-
Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection [paper] 2024.07.12
-
Towards Explainable Network Intrusion Detection using Large Language Models [paper] 2024.08.08
-
Utilizing Large Language Models to Optimize the Detection and Explainability of Phishing Websites [paper] 2024.08.11
-
Multimodal Large Language Models for Phishing Webpage Detection and Identification [paper] 2024.08.12
-
Transformers and Large Language Models for Efficient Intrusion Detection Systems: A Comprehensive Survey [paper] 2024.08.14
-
Automated Phishing Detection Using URLs and Webpages [paper] 2024.08.16
-
LogParser-LLM: Advancing Efficient Log Parsing with Large Language Models [paper] 2024.08.25
-
XG-NID: Dual-Modality Network Intrusion Detection using a Heterogeneous Graph Neural Network and Large Language Model [paper] 2024.08.27
LLM Assisted Attack
-
Identifying and mitigating the security risks of generative ai [paper] 2023.12.29
-
Impact of Big Data Analytics and ChatGPT on Cybersecurity [paper] 2023.05.22
-
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy [paper] 2023.07.03
-
LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing [paper] 2023.10.10
-
Malla: Demystifying Real-world Large Language Model Integrated Malicious Services [paper] 2024.01.06
-
Evaluating LLMs for Privilege-Escalation Scenarios [paper] 2023.10.23
-
Using Large Language Models for Cybersecurity Capture-The-Flag Challenges and Certification Questions [paper] 2023.08.21
-
Exploring the Dark Side of AI: Advanced Phishing Attack Design and Deployment Using ChatGPT [paper] 2023.09.19
-
From Chatbots to PhishBots? - Preventing Phishing scams created using ChatGPT, Google Bard and Claude [paper] 2024.03.10
-
From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads [paper] 2023.05.24
-
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool [paper] 2023.08.13
-
AutoAttacker: A Large Language Model Guided System to Implement Automatic Cyber-attacks [paper] 2024.03.02
-
RatGPT: Turning online LLMs into Proxies for Malware Attacks [paper] 2023.09.07
-
Getting pwnβd by AI: Penetration Testing with Large Language Models [paper] 2023.08.17
-
Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study Using the TRAPD Method [paper] 2024.06.18
-
Tactics, Techniques, and Procedures (TTPs) in Interpreted Malware: A Zero-Shot Generation with Large Language Models [paper] 2024.07.11
-
The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure [paper] 2024.07.22
-
From Sands to Mansions: Enabling Automatic Full-Life-Cycle Cyberattack Construction with LLM [paper] 2024.07.24
-
PenHeal: A Two-Stage LLM Framework for Automated Pentesting and Optimal Remediation [paper] 2024.07.25
-
Practical Attacks against Black-box Code Completion Engines [paper] 2024.08.05
-
Using Retriever Augmented Large Language Models for Attack Graph Generation [paper] 2024.08.11
-
CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher [paper] 2024.08.21
-
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [paper] 2024.08.23
Others
-
An LLM-based Framework for Fingerprinting Internet-connected Devices [paper] 2023.10.24
-
Anatomy of an AI-powered malicious social botnet [paper] 2023.07.30
-
Just-in-Time Security Patch Detection -- LLM At the Rescue for Data Augmentation [paper] 2023.12.12
-
LLM for SoC Security: A Paradigm Shift [paper] 2023.10.09
-
Harnessing the Power of LLM to Support Binary Taint Analysis [paper] 2023.10.12
-
Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations [paper] 2023.12.07
-
LLM in the Shell: Generative Honeypots [paper] 2024.02.09
-
Employing LLMs for Incident Response Planning and Review [paper] 2024.03.02
-
Enhancing Network Management Using Code Generated by Large Language Models [[paper]] (https://arxiv.org/abs/2308.06261) 2023.08.11
-
Prompting Is All You Need: Automated Android Bug Replay with Large Language Models [paper] 2023.07.18
-
Is Stack Overflow Obsolete? An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions [paper] 2024.02.07
-
How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models [paper] 2024.04.16
-
Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models [paper] 2024.04.24
-
AppPoet: Large Language Model based Android malware detection via multi-view prompt engineering [paper] 2024.04.29
-
Large Language Models for Cyber Security: A Systematic Literature Review [paper] 2024.05.08
-
Critical Infrastructure Protection: Generative AI, Challenges, and Opportunities [paper] 2024.05.08
-
LLMPot: Automated LLM-based Industrial Protocol and Physical Process Emulation for ICS Honeypots [paper] 2024.05.10
-
A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions [paper] 2024.05.23
-
Exploring the Efficacy of Large Language Models (GPT-4) in Binary Reverse Engineering [paper] 2024.06.09
-
Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications [paper] 2024.06.16
-
On Large Language Models in National Security Applications [paper] 2024.07.03
-
Disassembling Obfuscated Executables with LLM [paper] 2024.07.12
-
MoRSE: Bridging the Gap in Cybersecurity Expertise with Retrieval Augmented Generation [paper] 2024.07.22
-
MistralBSM: Leveraging Mistral-7B for Vehicular Networks Misbehavior Detection [paper] 2024.07.26
-
Beyond Detection: Leveraging Large Language Models for Cyber Attack Prediction in IoT Networks [paper] 2024.08.26
RQ3: What are further research directions about the application of LLMs in cybersecurity?
Further Research: Agent4Cybersecurity
-
Cybersecurity Issues and Challenges [paper] 2022.08
-
A unified cybersecurity framework for complex environments [paper] 2018.09.26
-
LLMind: Orchestrating AI and IoT with LLM for Complex Task Execution [paper] 2024.02.20
-
Out of the Cage: How Stochastic Parrots Win in Cyber Security Environments [paper] 2023.08.28
-
Llm agents can autonomously hack websites. [paper] 2024.02.16
-
Nissist: An Incident Mitigation Copilot based on Troubleshooting Guides [paper] 2024.02.27
-
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage [paper] 2023.11.07
-
The Rise and Potential of Large Language Model Based Agents: A Survey [paper] 2023.09.19
-
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs [paper] 2023.10.03
-
From Summary to Action: Enhancing Large Language Models for Complex Tasks with Open World APIs [paper] 2024.02.28
-
If llm is the wizard, then code is the wand: A survey on how code empowers large language models to serve as intelligent agents. [paper] 2024.01.08
-
TaskWeaver: A Code-First Agent Framework [paper] 2023.12.01
-
Large Language Models for Networking: Applications, Enabling Techniques, and Challenges [paper] 2023.11.29
-
R-Judge: Benchmarking Safety Risk Awareness for LLM Agents [paper] 2024.02.18
-
WIPI: A New Web Threat for LLM-Driven Web Agents [paper] 2024.02.26
-
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents [paper] 2024.03.25
-
LLM Agents can Autonomously Exploit One-day Vulnerabilities [paper] 2024.04.17
-
Large Language Models for Networking: Workflow, Advances and Challenges [paper] 2024.04.29
-
Generative AI in Cybersecurity [paper] 2024.05.02
-
Generative AI and Large Language Models for Cyber Security: All Insights You Need [paper] 2024.05.21
-
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities [paper] 2024.06.02
-
Using LLMs to Automate Threat Intelligence Analysis Workflows in Security Operation Centers [paper] 2024.07.18
-
PhishAgent: A Robust Multimodal Agent for Phishing Webpage Detection [paper] 2024.08.20
πBibTeX
@misc{zhang2024llms,
title={When LLMs Meet Cybersecurity: A Systematic Literature Review},
author={Jie Zhang and Haoyu Bu and Hui Wen and Yu Chen and Lun Li and Hongsong Zhu},
year={2024},
eprint={2405.03644},
archivePrefix={arXiv},
primaryClass={cs.CR}
}
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for Awesome-LLM4Cybersecurity
Similar Open Source Tools
Awesome-LLM4Cybersecurity
The repository 'Awesome-LLM4Cybersecurity' provides a comprehensive overview of the applications of Large Language Models (LLMs) in cybersecurity. It includes a systematic literature review covering topics such as constructing cybersecurity-oriented domain LLMs, potential applications of LLMs in cybersecurity, and research directions in the field. The repository analyzes various benchmarks, datasets, and applications of LLMs in cybersecurity tasks like threat intelligence, fuzzing, vulnerabilities detection, insecure code generation, program repair, anomaly detection, and LLM-assisted attacks.
LLM-and-Law
This repository is dedicated to summarizing papers related to large language models with the field of law. It includes applications of large language models in legal tasks, legal agents, legal problems of large language models, data resources for large language models in law, law LLMs, and evaluation of large language models in the legal domain.
Awesome-LLM4RS-Papers
This paper list is about Large Language Model-enhanced Recommender System. It also contains some related works. Keywords: recommendation system, large language models
llm-continual-learning-survey
This repository is an updating survey for Continual Learning of Large Language Models (CL-LLMs), providing a comprehensive overview of various aspects related to the continual learning of large language models. It covers topics such as continual pre-training, domain-adaptive pre-training, continual fine-tuning, model refinement, model alignment, multimodal LLMs, and miscellaneous aspects. The survey includes a collection of relevant papers, each focusing on different areas within the field of continual learning of large language models.
Recommendation-Systems-without-Explicit-ID-Features-A-Literature-Review
This repository is a collection of papers and resources related to recommendation systems, focusing on foundation models, transferable recommender systems, large language models, and multimodal recommender systems. It explores questions such as the necessity of ID embeddings, the shift from matching to generating paradigms, and the future of multimodal recommender systems. The papers cover various aspects of recommendation systems, including pretraining, user representation, dataset benchmarks, and evaluation methods. The repository aims to provide insights and advancements in the field of recommendation systems through literature reviews, surveys, and empirical studies.
Awesome-LLM-Compression
Awesome LLM compression research papers and tools to accelerate LLM training and inference.
Efficient-LLMs-Survey
This repository provides a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient LLMs topics from **model-centric** , **data-centric** , and **framework-centric** perspective, respectively. We hope our survey and this GitHub repository can serve as valuable resources to help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field.
AI-System-School
AI System School is a curated list of research in machine learning systems, focusing on ML/DL infra, LLM infra, domain-specific infra, ML/LLM conferences, and general resources. It provides resources such as data processing, training systems, video systems, autoML systems, and more. The repository aims to help users navigate the landscape of AI systems and machine learning infrastructure, offering insights into conferences, surveys, books, videos, courses, and blogs related to the field.
Awesome-LLM-Survey
This repository, Awesome-LLM-Survey, serves as a comprehensive collection of surveys related to Large Language Models (LLM). It covers various aspects of LLM, including instruction tuning, human alignment, LLM agents, hallucination, multi-modal capabilities, and more. Researchers are encouraged to contribute by updating information on their papers to benefit the LLM survey community.
Awesome_Mamba
Awesome Mamba is a curated collection of groundbreaking research papers and articles on Mamba Architecture, a pioneering framework in deep learning known for its selective state spaces and efficiency in processing complex data structures. The repository offers a comprehensive exploration of Mamba architecture through categorized research papers covering various domains like visual recognition, speech processing, remote sensing, video processing, activity recognition, image enhancement, medical imaging, reinforcement learning, natural language processing, 3D recognition, multi-modal understanding, time series analysis, graph neural networks, point cloud analysis, and tabular data handling.
awesome-AIOps
awesome-AIOps is a curated list of academic researches and industrial materials related to Artificial Intelligence for IT Operations (AIOps). It includes resources such as competitions, white papers, blogs, tutorials, benchmarks, tools, companies, academic materials, talks, workshops, papers, and courses covering various aspects of AIOps like anomaly detection, root cause analysis, incident management, microservices, dependency tracing, and more.
AwesomeLLM4APR
Awesome LLM for APR is a repository dedicated to exploring the capabilities of Large Language Models (LLMs) in Automated Program Repair (APR). It provides a comprehensive collection of research papers, tools, and resources related to using LLMs for various scenarios such as repairing semantic bugs, security vulnerabilities, syntax errors, programming problems, static warnings, self-debugging, type errors, web UI tests, smart contracts, hardware bugs, performance bugs, API misuses, crash bugs, test case repairs, formal proofs, GitHub issues, code reviews, motion planners, human studies, and patch correctness assessments. The repository serves as a valuable reference for researchers and practitioners interested in leveraging LLMs for automated program repair.
Awesome-LLMs-on-device
Welcome to the ultimate hub for on-device Large Language Models (LLMs)! This repository is your go-to resource for all things related to LLMs designed for on-device deployment. Whether you're a seasoned researcher, an innovative developer, or an enthusiastic learner, this comprehensive collection of cutting-edge knowledge is your gateway to understanding, leveraging, and contributing to the exciting world of on-device LLMs.
awesome-llm-security
Awesome LLM Security is a curated collection of tools, documents, and projects related to Large Language Model (LLM) security. It covers various aspects of LLM security including white-box, black-box, and backdoor attacks, defense mechanisms, platform security, and surveys. The repository provides resources for researchers and practitioners interested in understanding and safeguarding LLMs against adversarial attacks. It also includes a list of tools specifically designed for testing and enhancing LLM security.
awesome-deeplogic
Awesome deep logic is a curated list of papers and resources focusing on integrating symbolic logic into deep neural networks. It includes surveys, tutorials, and research papers that explore the intersection of logic and deep learning. The repository aims to provide valuable insights and knowledge on how logic can be used to enhance reasoning, knowledge regularization, weak supervision, and explainability in neural networks.
For similar tasks
Awesome-LLM4Cybersecurity
The repository 'Awesome-LLM4Cybersecurity' provides a comprehensive overview of the applications of Large Language Models (LLMs) in cybersecurity. It includes a systematic literature review covering topics such as constructing cybersecurity-oriented domain LLMs, potential applications of LLMs in cybersecurity, and research directions in the field. The repository analyzes various benchmarks, datasets, and applications of LLMs in cybersecurity tasks like threat intelligence, fuzzing, vulnerabilities detection, insecure code generation, program repair, anomaly detection, and LLM-assisted attacks.
watchtower
AIShield Watchtower is a tool designed to fortify the security of AI/ML models and Jupyter notebooks by automating model and notebook discoveries, conducting vulnerability scans, and categorizing risks into 'low,' 'medium,' 'high,' and 'critical' levels. It supports scanning of public GitHub repositories, Hugging Face repositories, AWS S3 buckets, and local systems. The tool generates comprehensive reports, offers a user-friendly interface, and aligns with industry standards like OWASP, MITRE, and CWE. It aims to address the security blind spots surrounding Jupyter notebooks and AI models, providing organizations with a tailored approach to enhancing their security efforts.
LLM-PLSE-paper
LLM-PLSE-paper is a repository focused on the applications of Large Language Models (LLMs) in Programming Language and Software Engineering (PL/SE) domains. It covers a wide range of topics including bug detection, specification inference and verification, code generation, fuzzing and testing, code model and reasoning, code understanding, IDE technologies, prompting for reasoning tasks, and agent/tool usage and planning. The repository provides a comprehensive collection of research papers, benchmarks, empirical studies, and frameworks related to the capabilities of LLMs in various PL/SE tasks.
invariant
Invariant Analyzer is an open-source scanner designed for LLM-based AI agents to find bugs, vulnerabilities, and security threats. It scans agent execution traces to identify issues like looping behavior, data leaks, prompt injections, and unsafe code execution. The tool offers a library of built-in checkers, an expressive policy language, data flow analysis, real-time monitoring, and extensible architecture for custom checkers. It helps developers debug AI agents, scan for security violations, and prevent security issues and data breaches during runtime. The analyzer leverages deep contextual understanding and a purpose-built rule matching engine for security policy enforcement.
OpenRedTeaming
OpenRedTeaming is a repository focused on red teaming for generative models, specifically large language models (LLMs). The repository provides a comprehensive survey on potential attacks on GenAI and robust safeguards. It covers attack strategies, evaluation metrics, benchmarks, and defensive approaches. The repository also implements over 30 auto red teaming methods. It includes surveys, taxonomies, attack strategies, and risks related to LLMs. The goal is to understand vulnerabilities and develop defenses against adversarial attacks on large language models.
For similar jobs
ciso-assistant-community
CISO Assistant is a tool that helps organizations manage their cybersecurity posture and compliance. It provides a centralized platform for managing security controls, threats, and risks. CISO Assistant also includes a library of pre-built frameworks and tools to help organizations quickly and easily implement best practices.
PurpleLlama
Purple Llama is an umbrella project that aims to provide tools and evaluations to support responsible development and usage of generative AI models. It encompasses components for cybersecurity and input/output safeguards, with plans to expand in the future. The project emphasizes a collaborative approach, borrowing the concept of purple teaming from cybersecurity, to address potential risks and challenges posed by generative AI. Components within Purple Llama are licensed permissively to foster community collaboration and standardize the development of trust and safety tools for generative AI.
vpnfast.github.io
VPNFast is a lightweight and fast VPN service provider that offers secure and private internet access. With VPNFast, users can protect their online privacy, bypass geo-restrictions, and secure their internet connection from hackers and snoopers. The service provides high-speed servers in multiple locations worldwide, ensuring a reliable and seamless VPN experience for users. VPNFast is easy to use, with a user-friendly interface and simple setup process. Whether you're browsing the web, streaming content, or accessing sensitive information, VPNFast helps you stay safe and anonymous online.
taranis-ai
Taranis AI is an advanced Open-Source Intelligence (OSINT) tool that leverages Artificial Intelligence to revolutionize information gathering and situational analysis. It navigates through diverse data sources like websites to collect unstructured news articles, utilizing Natural Language Processing and Artificial Intelligence to enhance content quality. Analysts then refine these AI-augmented articles into structured reports that serve as the foundation for deliverables such as PDF files, which are ultimately published.
NightshadeAntidote
Nightshade Antidote is an image forensics tool used to analyze digital images for signs of manipulation or forgery. It implements several common techniques used in image forensics including metadata analysis, copy-move forgery detection, frequency domain analysis, and JPEG compression artifacts analysis. The tool takes an input image, performs analysis using the above techniques, and outputs a report summarizing the findings.
h4cker
This repository is a comprehensive collection of cybersecurity-related references, scripts, tools, code, and other resources. It is carefully curated and maintained by Omar Santos. The repository serves as a supplemental material provider to several books, video courses, and live training created by Omar Santos. It encompasses over 10,000 references that are instrumental for both offensive and defensive security professionals in honing their skills.
AIMr
AIMr is an AI aimbot tool written in Python that leverages modern technologies to achieve an undetected system with a pleasing appearance. It works on any game that uses human-shaped models. To optimize its performance, users should build OpenCV with CUDA. For Valorant, additional perks in the Discord and an Arduino Leonardo R3 are required.
admyral
Admyral is an open-source Cybersecurity Automation & Investigation Assistant that provides a unified console for investigations and incident handling, workflow automation creation, automatic alert investigation, and next step suggestions for analysts. It aims to tackle alert fatigue and automate security workflows effectively by offering features like workflow actions, AI actions, case management, alert handling, and more. Admyral combines security automation and case management to streamline incident response processes and improve overall security posture. The tool is open-source, transparent, and community-driven, allowing users to self-host, contribute, and collaborate on integrations and features.