Knowledge-Conflicts-Survey

The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey"


Knowledge Conflicts for LLMs: A Survey is the repository for a survey paper that investigates three types of knowledge conflicts in Large Language Models (LLMs): context-memory conflict, inter-context conflict, and intra-memory conflict. The survey reviews the causes of these conflicts, the behaviors LLMs exhibit under them, and possible mitigation strategies, providing a comprehensive analysis of the literature on how conflicting knowledge affects LLMs and how to address it.

README:

💥Knowledge Conflicts for LLMs: A Survey

📢News 9/21/2024: Our paper was accepted to EMNLP 2024 as a Main Conference Paper!

📢Did we miss your work? Contact Rongwu directly!

This is the repository for the survey paper: Knowledge Conflicts for LLMs: A Survey. ➡️ [Chinese introduction @ 机器之心]

🌟Star us for future lookups!🌟

Types-of-conflicts

Rongwu Xu1*, Zehan Qi1*, Zhijiang Guo2, Cunxiang Wang3, Hongru Wang4, Yue Zhang3 and Wei Xu1

1. Tsinghua University; 2. University of Cambridge; 3. Westlake University; 4. The Chinese University of Hong Kong
(* Equal Contribution)

📝 Citation

If you find our survey useful, please consider citing:

@article{xu2024knowledge,
  title={Knowledge Conflicts for LLMs: A Survey},
  author={Xu, Rongwu and Qi, Zehan and Wang, Cunxiang and Wang, Hongru and Zhang, Yue and Xu, Wei},
  journal={arXiv preprint arXiv:2403.08319},
  year={2024}
}

❤️ Recap

We investigate three types of knowledge conflicts: context-memory conflict, inter-context conflict, and intra-memory conflict.

  • Context-memory conflict: Contextual knowledge (context) can conflict with the parametric knowledge (memory) encapsulated within the LLM's parameters.
  • Inter-context conflict: Conflict among various pieces of contextual knowledge (e.g., noise, outdated information, misinformation, etc.).
  • Intra-memory conflict: LLM's parametric knowledge may yield divergent responses to differently phrased queries, which can be attributed to the conflicting knowledge embedded within the LLM's parameters.

This survey reviews the literature on the causes, behaviors, and possible solutions to knowledge conflicts.
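
To make the taxonomy concrete, here is a rough illustration with toy instances of each conflict type (these examples are ours, for illustration only; they are not drawn from the paper):

```python
# Illustrative toy instances of the three conflict types (hypothetical examples).
CONFLICT_EXAMPLES = {
    # Context-memory conflict: retrieved evidence contradicts what the model
    # memorized during pretraining (e.g., the memory is stale).
    "context-memory": {
        "retrieved_context": "As of 2023, Lionel Messi plays for Inter Miami.",
        "parametric_memory": "Lionel Messi plays for Paris Saint-Germain.",
    },
    # Inter-context conflict: two retrieved passages contradict each other.
    "inter-context": [
        "Passage A: roughly 5,000 people attended the event.",
        "Passage B: fewer than 1,000 people attended the event.",
    ],
    # Intra-memory conflict: paraphrased queries elicit inconsistent answers
    # from the same model's parametric knowledge.
    "intra-memory": {
        "Who wrote 'The Trial'?": "Franz Kafka",
        "'The Trial' was authored by whom?": "Max Brod",
    },
}
```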

Taxonomy
Taxonomy of knowledge conflicts: we consider three distinct types of conflicts and analyze their causes, behaviors, and solutions.

🚀 Table of Contents

Type I: Context-memory conflict

I-i: Causes

Temporal Misalignment

  1. Mind the gap: Assessing temporal generalization in neural language models, Lazaridou et al., NeurIPS 2021, [Paper]
  2. Time Waits for No One! Analysis and Challenges of Temporal Misalignment, Luu et al., NAACL 2022, [Paper]
  3. Time-aware language models as temporal knowledge bases, Dhingra et al., TACL 2022, [Paper]
  4. Towards continual knowledge learning of language models, Jang et al., ICLR 2022, [Paper]
  5. TemporalWiki: A lifelong benchmark for training and evaluating ever-evolving language models, Jang et al., EMNLP 2023, [Paper]
  6. StreamingQA: A benchmark for adaptation to new knowledge over time in question answering models, Liska et al., ICML 2022, [Paper]
  7. Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization, Cheang et al., EMNLP 2023, [Paper]
  8. RealTime QA: What's the Answer Right Now?, Kasai et al., NeurIPS 2024, [Paper]

Misinformation Pollution

  1. Attacking open-domain question answering by injecting misinformation, Pan et al., AACL 2023, [Paper]
  2. On the risk of misinformation pollution with large language models, Pan et al., EMNLP 2023, [Paper]
  3. Defending against misinformation attacks in open-domain question answering, Weller et al., EACL 2024, [Paper]
  4. The earth is flat because...: Investigating LLMs' belief towards misinformation via persuasive conversation, Xu et al., ACL 2024, [Paper]
  5. Prompt injection attack against LLM-integrated applications, Liu et al., arXiv 2024, [Paper]
  6. Benchmarking and defending against indirect prompt injection attacks on large language models, Yi et al., arXiv 2024, [Paper]
  7. Adaptive chameleon or stubborn sloth: Unraveling the behavior of large language models in knowledge conflicts, Xie et al., ICLR 2024, [Paper]
  8. Poisoning web-scale training datasets is practical, Carlini et al., S&P 2024, [Paper]
  9. Can LLM-generated misinformation be detected?, Chen and Shu, ICLR 2024, [Paper]

I-ii: (Behavior) Analysis

ODQA

  1. Entity-Based Knowledge Conflicts in Question Answering, Longpre et al., EMNLP 2021, [Paper]
  2. Rich Knowledge Sources Bring Complex Knowledge Conflicts: Recalibrating Models to Reflect Conflicting Evidence, Chen et al., EMNLP 2022, [Paper]
  3. Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts, Tan et al., arXiv 2024, [Paper]

General

  1. Adaptive chameleon or stubborn sloth: Unraveling the behavior of large language models in knowledge conflicts, Xie et al., ICLR 2024, [Paper]
  2. Resolving Knowledge Conflicts in Large Language Models, Wang et al., arXiv 2023, [Paper]
  3. Intuitive or Dependent? Investigating LLMs’ Behavior Style to Conflicting Prompts, Ying et al., arXiv 2024, [Paper]
  4. “Merge Conflicts!” Exploring the Impacts of External Distractors to Parametric Knowledge Graphs, Qian et al., arXiv 2023, [Paper]
  5. Studying Large Language Model Behaviors Under Realistic Knowledge Conflicts, arXiv 2024, [Paper]
  6. Characterizing mechanisms for factual recall in language models, EMNLP 2023, [Paper]
  7. Context versus Prior Knowledge in Language Models, ACL 2024, [Paper]
  8. Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models, arXiv 2024, [Paper]
  9. ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM, Su et al., arXiv 2024, [Paper]

I-iii: (Mitigating) Solutions

Faithful to context

Fine-tuning
  1. Large Language Models with Controllable Working Memory, Li et al., ACL 2023, [Paper]
  2. TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models, Gekhman et al., EMNLP 2023, [Paper]
  3. Improving Factual Consistency for Knowledge-Grounded Dialogue Systems via Knowledge Enhancement and Alignment, Xue et al., EMNLP 2023, [Paper]
  4. Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change, Su et al., EMNLP 2022, [Paper]
Prompting
  1. Context-faithful Prompting for Large Language Models, Zhou et al., EMNLP 2023, [Paper]
Decoding
  1. Trusting Your Evidence: Hallucinate Less with Context-aware Decoding, Shi et al., NAACL 2024, [Paper]
  2. Contrastive Decoding: Open-ended Text Generation as Optimization, Li et al., ACL 2023, [Paper]
  3. Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models, Jin et al., LREC-COLING 2024, [Paper]
Inference-time intervention (e.g., tuning heads)
  1. Characterizing mechanisms for factual recall in language models, EMNLP 2023, [Paper]
  2. Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models, arXiv 2024, [Paper]
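
The decoding-based mitigations above contrast a context-conditioned distribution with a context-free one so that contextual evidence wins over parametric memory. A minimal sketch in that spirit (modeled loosely on context-aware decoding, Shi et al., NAACL 2024; the model choice, prompts, and `alpha` value are illustrative assumptions, not the authors' code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")

@torch.no_grad()
def next_token_logits(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids).logits[:, -1, :]

def context_aware_next_token(context: str, question: str, alpha: float = 0.5) -> str:
    # Amplify contextual evidence by contrasting the distribution conditioned on
    # (context + question) with the one conditioned on the question alone.
    with_ctx = next_token_logits(context + "\n" + question)
    without_ctx = next_token_logits(question)
    adjusted = (1 + alpha) * with_ctx - alpha * without_ctx
    return tok.decode(adjusted.argmax(dim=-1))

print(context_aware_next_token(
    "Lionel Messi joined Inter Miami in 2023.",
    "Which club does Messi play for? Answer:",
))
```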

Type II: Inter-context conflict

II-i: Causes

Misinformation

  1. Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions, Zhou et al., CHI 2023, [Paper]
  2. Comparing GPT-4 and Open-Source Language Models in Misinformation Mitigation, Vergho et al., arXiv 2024, [Paper](https://arxiv.org/abs/2401.06920)

Outdated information

  1. A dataset for answering time-sensitive questions, Chen et al., NeurIPS 2021, [Paper]
  2. SituatedQA: Incorporating extra-linguistic contexts into QA, Zhang et al., EMNLP 2021, [Paper]
  3. StreamingQA: A benchmark for adaptation to new knowledge over time in question answering models, Liska et al., ICML 2022, [Paper]
  4. RealTime QA: What's the Answer Right Now?, Kasai et al., NeurIPS 2024, [Paper]

II-ii: (Behavior) Analysis

Performance impact

  1. SituatedQA: Incorporating extra-linguistic contexts into QA, Zhang et al., EMNLP 2021, [Paper]
  2. Synthetic Disinformation Attacks on Automated Fact Verification Systems, AAAI 2022, [Paper]
  3. Attacking open-domain question answering by injecting misinformation, Pan et al., AACL 2023, [Paper]
  4. Rich Knowledge Sources Bring Complex Knowledge Conflicts: Recalibrating Models to Reflect Conflicting Evidence, Chen et al., EMNLP 2022, [Paper]
  5. Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models, Jin et al., LREC-COLING 2024, [Paper]
  6. ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM, Su et al., arXiv 2024, [Paper]

Detection ability

  1. CDConv: A Benchmark for Contradiction Detection in Chinese Conversations, Zheng et al., EMNLP 2022, [Paper]
  2. ContraDoc: understanding self-contradictions in documents with large language models, Li et al., arXiv 2023, [Paper](https://arxiv.org/abs/2311.09182)
  3. What Evidence Do Language Models Find Convincing?, Wan et al., ACL 2024, [Paper]
  4. Tug-of-war between knowledge: Exploring and resolving knowledge conflicts in retrieval-augmented language models, Jin et al., LREC-COLING 2024, [Paper]

II-iii: (Mitigating) Solutions

Eliminating Conflict

  1. WikiContradiction: Detecting Self-Contradiction Articles on Wikipedia, Hsu et al., IEEE Big Data 2021, [Paper]
  2. Topological analysis of contradictions in text, Wu et al., SIGIR 2022, [Paper]
  3. FACTOOL: Factuality Detection in Generative AI-A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios, Chern et al., arXiv 2023, [Paper]
  4. Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision, Leite et al., arXiv 2023, [Paper]

Improving Robustness

  1. Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise, Hong et al., arXiv 2024, [Paper]
  2. Defending Against Disinformation Attacks in Open-Domain Question Answering, Weller et al., EACL 2024, [Paper]

Type III: Intra-memory conflict

III-i: Causes

Bias in Training Corpora

  1. On the dangers of stochastic parrots: Can language models be too big?, Bender et al., FAccT 2021, [Paper]
  2. Ethical and social risks of harm from language models, Weidinger et al., arXiv 2021, [Paper]
  3. Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions, Elazar et al., arXiv 2023, [Paper]
  4. Studying large language model generalization with influence functions, Grosse et al., arXiv 2023, [Paper]
  5. How pre-trained language models capture factual knowledge? a causal-inspired analysis, Li et al., ACL 2022, [Paper]
  6. Impact of co-occurrence on factual knowledge of large language models, Kang and Choi, EMNLP 2023, [Paper]

Decoding Strategy

  1. Factuality enhanced language models for open-ended text generation, Lee et al., NeurIPS 2022, [Paper]
  2. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, Huang et al., arXiv 2023, [Paper]

Knowledge Editing

  1. Unveiling the pitfalls of knowledge editing for large language models, Li et al., ICLR 2024, [Paper]
  2. Editing large language models: Problems, methods, and opportunities, Yao et al., EMNLP 2023, [Paper]

III-ii: (Behavior) Analysis

Self-Inconsistency

  1. Measuring and improving consistency in pretrained language models, Elazar et al., TACL 2021, [Paper]
  2. Methods for measuring, updating, and visualizing factual beliefs in language models, Hase et al., EACL 2023, [Paper]
  3. Knowing what LLMs do not know: A simple yet effective self-detection method, Zhao et al., NAACL 2024, [Paper]
  4. Statistical knowledge assessment for large language models, Dong et al., NeurIPS 2023, [Paper]
  5. Benchmarking and improving generator-validator consistency of language models, Li et al., ICLR 2024, [Paper]
  6. How pre-trained language models capture factual knowledge? a causal-inspired analysis, Li et al., ACL 2022, [Paper]
  7. Impact of co-occurrence on factual knowledge of large language models, Kang and Choi, EMNLP 2023, [Paper]
  8. ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM, Su et al., arXiv 2024, [Paper]
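
Many of the self-inconsistency analyses above boil down to asking paraphrases of the same factual question and checking whether the answers agree. A toy probe along those lines (`ask_llm` is a hypothetical placeholder for whatever generation API is available; the questions are illustrative):

```python
from collections import Counter
from typing import Callable, List

def consistency_rate(ask_llm: Callable[[str], str], paraphrases: List[str]) -> float:
    """Fraction of answers matching the majority answer (1.0 = fully self-consistent)."""
    answers = [ask_llm(q).strip().lower() for q in paraphrases]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)

PARAPHRASES = [
    "What is the capital of Australia?",
    "Australia's capital city is called what?",
    "Name the city that serves as the capital of Australia.",
]
# Usage: consistency_rate(my_generate_fn, PARAPHRASES)
```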

Latent Representation of Knowledge

  1. DoLa: Decoding by contrasting layers improves factuality in large language models, Chuang et al., ICLR 2024, [Paper]
  2. Inference-time intervention: Eliciting truthful answers from a language model, Li et al., NeurIPS 2023, [Paper]

Cross-lingual Inconsistency

  1. Cross-lingual knowledge editing in large language models, Wan et al., arXiv 2023, [Paper]
  2. Cross-lingual consistency of factual knowledge in multilingual language models, Qi et al., EMNLP 2023, [Paper]

III-iii: (Mitigating) Solutions

Improving Consistency

  1. Measuring and improving consistency in pretrained language models, Elazar et al., TACL 2021, [Paper]
  2. Benchmarking and improving generator-validator consistency of language models, Li et al., ICLR 2024, [Paper]
  3. Improving language models meaning understanding and consistency by learning conceptual roles from dictionary, Jang et al., EMNLP 2023, [Paper]
  4. Enhancing self-consistency and performance of pre-trained language models through natural language inference, Mitchell et al., EMNLP 2022, [Paper]
  5. Knowing what LLMs do not know: A simple yet effective self-detection method, Zhao et al., NAACL 2024, [Paper]

Improving Factuality

  1. DoLa: Decoding by contrasting layers improves factuality in large language models, Chuang et al., ICLR 2024, [Paper]
  2. Inference-time intervention: Eliciting truthful answers from a language model, Li et al., NeurIPS 2023, [Paper]
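
Both factuality-oriented methods above intervene at inference time rather than retraining the model; DoLa, for instance, contrasts the final ("mature") layer's token distribution with that of an earlier ("premature") layer. A rough sketch in that spirit (the model, early-layer index, and plausibility threshold are illustrative assumptions, not the authors' settings):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")

@torch.no_grad()
def layer_contrastive_next_token(prompt: str, early_layer: int = 6, alpha: float = 0.1) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    hidden = model(ids, output_hidden_states=True).hidden_states  # embeddings + per-layer states
    ln_f = model.transformer.ln_f  # final layer norm, applied before the LM head
    mature = model.lm_head(hidden[-1][:, -1, :]).log_softmax(-1)
    premature = model.lm_head(ln_f(hidden[early_layer][:, -1, :])).log_softmax(-1)
    # Adaptive plausibility constraint: keep only tokens the final layer already finds
    # reasonably likely, then prefer tokens whose log-probability grows the most
    # between the early and the final layer.
    keep = mature >= mature.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
    contrast = torch.where(keep, mature - premature, torch.tensor(float("-inf")))
    return tok.decode(contrast.argmax(dim=-1))

print(layer_contrastive_next_token("The capital of France is"))
```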

Star History

Star History Chart
