Academic_LLM_Sec_Papers

Academic Papers about LLM Applications in Security

Academic_LLM_Sec_Papers is a curated collection of academic papers on LLM security applications. The repository sorts papers by conference name and publication year, covering topics such as large language models for blockchain security, software engineering, machine learning, and more. Developers and researchers are welcome to contribute additional published papers to the list. The repository also lists the conferences and journals it draws from, spanning security, networking, software engineering, and cryptography. The papers cover a wide range of topics, including privacy risks, ethical concerns, vulnerabilities, threat modeling, code analysis, and fuzzing.

README:

Academic Papers About LLM Applications in Cyber Security.

A curated list of academic papers on LLM security applications. All papers are sorted by conference name and publication year.

Developers and researchers are welcome to add more published papers to this list.

The cryptocurrency donation address: 0xCC28B05fE858CDbc8692E3272A4451111bDCf700.

Feel free to visit my homepage and Google Scholar.

Table of Listed Conferences

Security & Crypto: IEEE S&P, ACM CCS, USENIX Security, NDSS, IEEE DSN, SRCS, RAID
Networking & Database: SIGMETRICS, ICDE, VLDB, ACM SIGMOD, IEEE INFOCOM, IMC, WWW
Software Engineering & Programming Language: ICSE, ESEC/FSE, ASE, ACM PLDI, ACM OOPSLA, ISSTA, ACM POPL, CAV
Machine Learning: AAAI, ACL, ICML, NeurIPS

Table of Listed Journals

Also including the following journals (see the Journal section below): TIFS, TDSC, TSE, and TOSEM.


Literature Review

2024

Large Language Models for Blockchain Security: A Systematic Literature Review.

A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly.

Large language models for software engineering: A systematic literature review.

Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices.

Unveiling security, privacy, and ethical concerns of ChatGPT.


Conference

S&P

2024

On Large Language Models’ Resilience to Coercive Interrogation.

Combing for Credentials: Active Pattern Extraction from Smart Reply.

DrSec: Flexible Distributed Representations for Efficient Endpoint Security.

Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models.

Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models.

TROJANPUZZLE: Covertly Poisoning Code-Suggestion Models.

Transferable Multimodal Attack on Vision-Language Pre-Training Models.

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content.

SMARTINV: Multimodal Learning for Smart Contract Invariant Inference.

LLMIF: Augmented Large Language Model for Fuzzing IoT Devices.

2023

Examining zero-shot vulnerability repair with large language models.

Analyzing Leakage of Personally Identifiable Information in Language Models.

2022

Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions.

Spinning language models: Risks of propaganda-as-a-service and countermeasures.

2020

Privacy risks of general-purpose language models.


CCS

2024

PromptFuzz: Prompt Fuzzing for Fuzz Driver Generation.

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models.

2023

Stealing the Decoding Algorithms of Language Models.

Large Language Models for Code: Security Hardening and Adversarial Testing.

Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection.

Protecting intellectual property of large language model-based code generation APIs via watermarks.

DP-Forward: Fine-tuning and inference on language models with differential privacy in forward pass.


USENIX Security

2024

Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization.

PENTESTGPT: An LLM-empowered Automatic Penetration Testing Tool.

Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models.

Large Language Models for Code Analysis: Do LLMs Really Do Their Job?

EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection.

Fuzzing BusyBox: Leveraging LLM and Crash Reuse for Embedded Bug Unearthing.

Prompt Stealing Attacks Against Text-to-Image Generation Models.

2023

Lost at C: A user study on the security implications of large language model code assistants.

CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot.

Two-in-One: A Model Hijacking Attack Against Text Generation Models.

2021

Extracting Training Data from Large Language Models.

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion.


NDSS

2024

LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors.

Analysis of the Effect of the Difference between Japanese and English Input on ChatGPT-Generated Secure Codes.

MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots.

DeGPT: Optimizing Decompiler Output with LLM.

DEMASQ: Unmasking the ChatGPT Wordsmith.

Large Language Model guided Protocol Fuzzing.

Facilitating Threat Modeling by Leveraging Large Language Models.


OOPSLA

2024

Enhancing Static Analysis for Practical Bug Detection: An LLM-Integrated Approach.

PyDex: Repairing Bugs in Introductory Python Assignments using LLMs.


ICSE

2024

Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT.

Fuzz4All: Universal Fuzzing with Large Language Models.

LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing.

Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study.

UniLog: Automatic Logging via LLM and In-Context Learning.

Prompting Is All You Need: Automated Android Bug Replay with Large Language Models.

Large Language Models for Test-Free Fault Localization.

Large language models are few-shot testers: Exploring LLM-based general bug reproduction.

Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning.

Large Language Models are Edge-Case Generators: Crafting Unusual Programs for Fuzzing Deep Learning Libraries.

GPTScan: Detecting Logic Vulnerabilities in Smart Contracts by Combining GPT with Program Analysis.

Automated Program Repair in the Era of Large Pre-trained Language Models.

2023

Does data sampling improve deep learning-based vulnerability detection? Yeas! and Nays!

An Empirical Study of Deep Learning Models for Vulnerability Detection.

RepresentThemAll: A Universal Learning Representation of Bug Reports.

ContraBERT: Enhancing code pre-trained models via contrastive learning.

On the robustness of code generation techniques: An empirical study on GitHub Copilot.

Two sides of the same coin: Exploiting the impact of identifiers in neural code comprehension.

Automated repair of programs from large language models.

CCTest: Testing and repairing code completion systems.

CodaMosa: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models.

Impact of Code Language Models on Automated Program Repair.

2022

ReCode: Robustness Evaluation of Code Generation Models.


CAV

2024

Enchanting Program Specification Synthesis by Large Language Models using Static Analysis and Program Verification.


ASE

2024

Better Patching Using LLM Prompting, via Self-Consistency.

Towards Autonomous Testing Agents via Conversational Large Language Models.

Let's Chat to Find the APIs: Connecting Human, LLM and Knowledge Graph through AI Chain.

Log Parsing: How Far Can ChatGPT Go?

2022

Robust Learning of Deep Predictive Models from Noisy and Imbalanced Software Engineering Datasets.


ISSTA

2024

Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models.

2023

How Effective Are Neural Networks for Fixing Security Vulnerabilities.


ESEC/FSE

2023

InferFix: End-to-End Program Repair with LLMs.

Getting pwn'd by AI: Penetration testing with large language models.

LLM-based code generation method for Golang compiler testing.

Assisting static analysis with large language models: A ChatGPT experiment.

Assess and Summarize: Improve Outage Understanding with Large Language Models.

2022

Generating realistic vulnerabilities via neural code editing: an empirical study.

You see what I want you to see: poisoning vulnerabilities in neural code search.

2021

Vulnerability detection with fine-grained interpretations.


ACL

2024

Not the end of story: An evaluation of ChatGPT-driven vulnerability description mappings.

Understanding Programs by Exploiting (Fuzzing) Test Cases.

2023

Backdooring Neural Code Search.

Membership inference attacks against language models via neighbourhood comparison.

Are you copying my model? Protecting the copyright of large language models for EaaS via backdoor watermark.

2022

ReCode: Robustness Evaluation of Code Generation Models.

Knowledge unlearning for mitigating privacy risks in language models.

2018

Contamination attacks and mitigation in multi-party machine learning.


AAAI

2022

Adversarial Robustness of Deep Code Comment Generation.


ICML

2023

Bag of tricks for training data extraction from language models.

2022

Deduplicating training data mitigates privacy risks in language models.


NeurIPS

2022

Recovering private text in federated learning of language models.


WWW

2024

ZipZap: Efficient Training of Language Models for Large-Scale Fraud Detection on Blockchain.

2022

CoProtector: Protect open-source code against unauthorized training usage with data poisoning.


Journal

TIFS

(Security) Assertions by Large Language Models.

A Performance-Sensitive Malware Detection System Using Deep Learning on Mobile Devices.

TDSC

PrivacyAsst: Safeguarding User Privacy in Tool-Using Large Language Model Agents.

CD-VulD: Cross-Domain Vulnerability Discovery Based on Deep Domain Adaptation.

TSE

Software Testing with Large Language Models: Survey, Landscape, and Vision.

An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation.

Deep Learning Based Vulnerability Detection: Are We There Yet?.

On the Value of Oversampling for Deep Learning in Software Defect Prediction.

TOSEM

Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains.

Adversarial Robustness of Deep Code Comment Generation.

Miscellaneous

LLM4Fuzz: Guided Fuzzing of Smart Contracts with Large Language Models.

CHEMFUZZ: Large Language Models-assisted Fuzzing for Quantum Chemistry Software Bug Detection.

Attack Prompt Generation for Red Teaming and Defending Large Language Models.
