LLM-for-misinformation-research
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.
LLM-for-misinformation-research is a curated paper list of misinformation research using large language models (LLMs). The repository covers methods for detection and verification, tools for fact-checking complex claims, decision-making and explanation, claim matching, post-hoc explanation generation, and other tasks related to combating misinformation. It includes papers on fake news detection, rumor detection, fact verification, and more, showcasing the application of LLMs in various aspects of misinformation research.
An LLM can serve as a (sometimes unreliable) knowledge provider, an experienced expert in specific areas, or a relatively cheap data generator (compared with collecting data from the real world). For example, an LLM can be a useful analyzer of social commonsense and conventions.
- Cheap-fake Detection with LLM using Prompt Engineering[paper]
- Faking Fake News for Real Fake News Detection: Propaganda-Loaded Training Data Generation[paper]
- Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection[paper]
- Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model[paper]
- Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision[paper]
- FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model[paper]
- Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- Clean-label Poisoning Attack against Fake News Detection Models[paper]
- Rumor Detection on Social Media with Crowd Intelligence and ChatGPT-Assisted Networks[paper]
- LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback[paper]
- Can Large Language Models Detect Rumors on Social Media?[paper]
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection[paper]
- DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection[paper]
- Enhancing large language model capabilities for rumor detection with Knowledge-Powered Prompting[paper]
- An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model[paper]
- RumorLLM: A Rumor Large Language Model-Based Fake-News-Detection Data-Augmentation Approach[paper]
- Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom[paper]
- Message Injection Attack on Rumor Detection under the Black-Box Evasion Setting Using Large Language Model[paper]
- Towards Robust Evidence-Aware Fake News Detection via Improving Semantic Perception[paper]
- Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection[paper]
- Zero-Shot Fact Verification via Natural Logic and Large Language Models[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- FramedTruth: A Frame-Based Model Utilising Large Language Models for Misinformation Detection[paper]
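The helper/data-generator role described above can be sketched in a few lines. This is a minimal illustration, not the method of any listed paper; the `llm` callable is a hypothetical stand-in for any chat-completion API, stubbed here so the snippet runs offline.

```python
def make_augmentation_prompt(real_article: str) -> str:
    """Ask an LLM to rewrite a real article in a misleading style,
    producing a labeled fake sample for detector training."""
    return (
        "Rewrite the following news article so that it keeps the topic "
        "but introduces a misleading claim. Output only the rewritten text.\n\n"
        f"Article: {real_article}"
    )

def generate_fake_samples(articles, llm):
    """Pair each real article (label 0) with an LLM-written fake (label 1)."""
    data = []
    for article in articles:
        data.append((article, 0))                                 # genuine sample
        data.append((llm(make_augmentation_prompt(article)), 1))  # synthetic fake
    return data

# Stub LLM so the sketch is runnable without an API key.
stub_llm = lambda prompt: "BREAKING: experts now deny everything previously reported."
dataset = generate_fake_samples(["City council approves new park budget."], stub_llm)
```

In practice the synthetic fakes would be mixed into the training set of a downstream detector, as in the data-augmentation papers above.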
Let an LLM act as an agent with access to external tools such as search engines, deepfake detectors, etc.
- Fact-Checking Complex Claims with Program-Guided Reasoning[paper]
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models[paper]
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios[paper]
- FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking[paper]
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method[paper]
- Evidence-based Interpretable Open-domain Fact-checking with Large Language Models[paper]
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection[paper]
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation[paper]
- Can Large Language Models Detect Misinformation in Scientific News Reporting?[paper]
- The Perils and Promises of Fact-Checking with Large Language Models[paper]
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection[paper]
- Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors[paper]
- MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation[paper]
- TrumorGPT: Query Optimization and Semantic Reasoning over Networks for Automated Fact-Checking[paper]
- Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM[paper]
- Large Language Model Agent for Fake News Detection[paper]
- Argumentative Large Language Models for Explainable and Contestable Decision-Making[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- Multimodal Misinformation Detection using Large Vision-Language Models[paper]
- LLM-Driven External Knowledge Integration Network for Rumor Detection[paper]
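The agent paradigm above can be sketched as a simple loop in which the model either requests a tool call or commits to a verdict. This is a hedged, minimal sketch rather than any specific paper's pipeline; the `llm` and `tools` arguments are hypothetical stand-ins, scripted here so the loop runs offline.

```python
def fact_check_agent(claim: str, llm, tools, max_steps: int = 3) -> str:
    """Minimal agent loop: the LLM either requests a tool call
    ('SEARCH: <query>') or commits to a verdict ('VERDICT: <label>')."""
    context = f"Claim: {claim}"
    for _ in range(max_steps):
        reply = llm(context)
        if reply.startswith("SEARCH:"):
            query = reply[len("SEARCH:"):].strip()
            evidence = tools["search"](query)      # call the external tool
            context += f"\nEvidence: {evidence}"   # feed the result back
        elif reply.startswith("VERDICT:"):
            return reply[len("VERDICT:"):].strip()
    return "unverified"                            # step budget exhausted

# Scripted stand-ins so the sketch is runnable without an API key.
script = iter(["SEARCH: moon landing year", "VERDICT: supported"])
stub_llm = lambda context: next(script)
stub_tools = {"search": lambda q: "Apollo 11 landed on the Moon in 1969."}
verdict = fact_check_agent("The first Moon landing was in 1969.", stub_llm, stub_tools)
```

Real systems in the papers above add richer tool sets (retrievers, deepfake detectors, program interpreters) and more structured intermediate reasoning, but the observe-act-feed-back loop is the common core.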
An LLM can directly output the final prediction and, optionally, an explanation.
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity[paper]
- Large Language Models Can Rate News Outlet Credibility[paper]
- Fact-Checking Complex Claims with Program-Guided Reasoning[paper]
- Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4[paper]
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models[paper]
- News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking[paper]
- Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model[paper]
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models[paper]
- Language Models Hallucinate, but May Excel at Fact Verification[paper]
- FakeGPT: Fake News Generation, Explanation and Detection of Large Language Model[paper]
- Can Large Language Models Understand Content and Propagation for Misinformation Detection: An Empirical Study[paper]
- Are Large Language Models Good Fact Checkers: A Preliminary Study[paper]
- A Revisit of Fake News Dataset with Augmented Fact-checking by ChatGPT[paper]
- Can Large Language Models Detect Rumors on Social Media?[paper]
- DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection[paper]
- Assessing the Reasoning Abilities of ChatGPT in the Context of Claim Verification[paper]
- LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation[paper]
- SoMeLVLM: A Large Vision Language Model for Social Media Processing[paper][project]
- Can Large Language Models Detect Misinformation in Scientific News Reporting?[paper]
- The Perils and Promises of Fact-Checking with Large Language Models[paper]
- Potential of Large Language Models as Tools Against Medical Disinformation[paper]
- FakeNewsGPT4: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs[paper]
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection[paper]
- Multimodal Large Language Models to Support Real-World Fact-Checking[paper]
- MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation[paper]
- An Implicit Semantic Enhanced Fine-Grained Fake News Detection Method Based on Large Language Model[paper]
- Explaining Misinformation Detection Using Large Language Models[paper]
- Rumour Evaluation with Very Large Language Models[paper]
- Argumentative Large Language Models for Explainable and Contestable Decision-Making[paper]
- Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines[paper]
- Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models[paper]
- Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction[paper]
- Reinforcement Tuning for Detecting Stances and Debunking Rumors Jointly with Large Language Models[paper]
- RAEmoLLM: Retrieval Augmented LLMs for Cross-Domain Misinformation Detection Using In-Context Learning based on Emotional Information[paper]
- RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models[paper]
- Multilingual Fact-Checking using LLM[paper]
- Multimodal Misinformation Detection using Large Vision-Language Models[paper]
- Silver Lining in the Fake News Cloud: Can Large Language Models Help Detect Misinformation?[paper]
- Are Large Language Models Good Fact Checkers: A Preliminary Study[paper]
- Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines?[paper]
- Automated Claim Matching with Large Language Models: Empowering Fact-Checkers in the Fight Against Misinformation[paper]
- SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks[paper]
- JustiLM: Few-shot Justification Generation for Explainable Fact-Checking of Real-world Claims[paper]
- Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate[paper]
- [Fake News Propagation Simulation] From Skepticism to Acceptance: Simulating the Attitude Dynamics Toward Fake News[paper]
- [Misinformation Correction] Correcting Misinformation on Social Media with A Large Language Model[paper]
- [Fake News Data Annotation] Enhancing Text Classification through LLM-Driven Active Learning and Human Annotation[paper]
- [Assisting Human Fact-Checking] On the Role of Large Language Models in Crowdsourcing Misinformation Assessment[paper]
- Preventing and Detecting Misinformation Generated by Large Language Models: A tutorial on techniques for preventing and detecting LLM-generated misinformation, including an introduction to recent advances in LLM-based misinformation detection. [Webpage] [Slides]
- Large-Language-Model-Powered Agent-Based Framework for Misinformation and Disinformation Research: Opportunities and Open Challenges: A research framework for generating customized agent-based social networks for disinformation simulation, enabling understanding and evaluation of the phenomenon, along with a discussion of open challenges.[paper]
- Combating Misinformation in the Age of LLMs: Opportunities and Challenges: A survey of the opportunities (can we utilize LLMs to combat misinformation) and challenges (how to combat LLM-generated misinformation) of combating misinformation in the age of LLMs. [Project Webpage][paper]
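The simplest usage covered above, letting the LLM directly output a verdict plus an explanation, amounts to one prompt and one parse. The sketch below is a hypothetical illustration (prompt wording and output format are assumptions, not drawn from any listed paper), with the model stubbed so it runs offline.

```python
def build_prompt(claim: str) -> str:
    """Ask for a verdict and a brief rationale in a parseable format."""
    return (
        "Decide whether the claim is REAL or FAKE, then justify briefly.\n"
        "Answer as 'LABEL | rationale'.\n"
        f"Claim: {claim}"
    )

def detect(claim: str, llm) -> tuple[str, str]:
    """Single LLM call: split 'LABEL | rationale' into (label, rationale)."""
    label, _, rationale = llm(build_prompt(claim)).partition("|")
    return label.strip().upper(), rationale.strip()

# Stub LLM so the sketch is runnable without an API key.
stub_llm = lambda p: "fake | The figure contradicts official statistics."
label, rationale = detect("Unemployment tripled last month.", stub_llm)
```

The evaluation papers in this section largely probe how reliable such direct verdicts and rationales are across models, languages, and domains.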