llm-misinformation-survey
Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misinformation", accepted by AI Magazine 2024
The 'llm-misinformation-survey' repository is dedicated to the survey on combating misinformation in the age of Large Language Models (LLMs). It explores the opportunities and challenges of utilizing LLMs to combat misinformation, providing insights into the history of combating misinformation, current efforts, and future outlook. The repository serves as a resource hub for the initiative 'LLMs Meet Misinformation' and welcomes contributions of relevant research papers and resources. The goal is to facilitate interdisciplinary efforts in combating LLM-generated misinformation and promoting the responsible use of LLMs in fighting misinformation.
README:
The repository for the survey Combating Misinformation in the Age of LLMs: Opportunities and Challenges
Authors : Canyu Chen, Kai Shu
Paper : [arXiv]
Project Website : llm-misinformation.github.io
TLDR : A survey of the opportunities (can we utilize LLMs to combat misinformation?) and challenges (how to combat LLM-generated misinformation?) of combating misinformation in the age of LLMs. We will maintain this list of papers and related resources for the initiative "LLMs Meet Misinformation", which aims to combat misinformation in the age of LLMs. We greatly appreciate any contributions via issues, PRs, emails, or other methods if you have a paper or are aware of relevant research that should be incorporated.
More resources on "LLMs Meet Misinformation" are on the website: https://llm-misinformation.github.io/
Any suggestion, comment, or related discussion is welcome. Please let us know by email ([email protected]) or WeChat (ID: alexccychen).
If you find our survey or paper list useful, we would greatly appreciate it if you could consider citing our papers:
@article{chen2024combatingmisinformation,
  author = {Chen, Canyu and Shu, Kai},
  title = {Combating misinformation in the age of LLMs: Opportunities and challenges},
  journal = {AI Magazine},
  doi = {https://doi.org/10.1002/aaai.12188},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/aaai.12188},
  eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/aaai.12188}
}
@inproceedings{chen2024llmgenerated,
  title = {Can {LLM}-Generated Misinformation Be Detected?},
  author = {Canyu Chen and Kai Shu},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year = {2024},
  url = {https://openreview.net/forum?id=ccxD4mtkTU}
}
@article{chen2024canediting,
  title = {Can Editing LLMs Inject Harm?},
  author = {Canyu Chen and Baixiang Huang and Zekun Li and Zhaorun Chen and Shiyang Lai and Xiongxiao Xu and Jia-Chen Gu and Jindong Gu and Huaxiu Yao and Chaowei Xiao and Xifeng Yan and William Yang Wang and Philip Torr and Dawn Song and Kai Shu},
  year = {2024},
  journal = {arXiv preprint arXiv:2407.20224}
}
- 🔥 [2024/07/31] Our new paper Can Editing LLMs Inject Harm? is on arXiv.
- 🔥 [2024/04/09] Our survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges is accepted to AI Magazine.
- 🔥 [2023/11/12] Our survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges is on arXiv.
- 🔥 [2023/10/18] We release the dataset and code for our paper Can LLM-Generated Misinformation Be Detected? [arXiv] [dataset and code]
Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of Large Language Models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in the fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emergent question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then, another important question is: how to combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we illustrate the current efforts and present an outlook for these two fundamental questions respectively. The goal of this survey paper is to facilitate the progress of utilizing LLMs for fighting misinformation and call for interdisciplinary efforts from different stakeholders for combating LLM-generated misinformation.
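To make the first question concrete, below is a minimal, hypothetical sketch (not taken from any paper in this list) of using an LLM as a zero-shot misinformation detector via prompting. The `query_llm` function, the prompt wording, and the REAL/FAKE label scheme are all illustrative assumptions; the surveyed papers explore far more sophisticated approaches, such as retrieval augmentation, program-guided reasoning, and fine-tuning.

```python
# Minimal sketch: zero-shot misinformation detection by prompting an LLM as a
# veracity classifier. `query_llm` is a hypothetical stand-in for whatever
# chat/completion client you use; swap in your own API call.

from typing import Callable

PROMPT_TEMPLATE = (
    "You are a fact-checking assistant. Given the news claim below, answer "
    "with exactly one word, 'REAL' or 'FAKE', based on its factuality.\n\n"
    "Claim: {claim}\nAnswer:"
)


def detect_misinformation(claim: str, query_llm: Callable[[str], str]) -> str:
    """Return 'REAL' or 'FAKE' for a claim using a zero-shot prompt."""
    response = query_llm(PROMPT_TEMPLATE.format(claim=claim))
    # Normalize free-form LLM output to one of the two labels.
    return "FAKE" if "FAKE" in response.upper() else "REAL"


if __name__ == "__main__":
    # Stub LLM so the sketch runs end to end without network access.
    def dummy_llm(prompt: str) -> str:
        return "FAKE"

    print(detect_misinformation("The moon is made of cheese.", dummy_llm))
```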
- [2023/11] Combating Misinformation in the Age of LLMs: Opportunities and Challenges Canyu Chen, Kai Shu. preprint. [paper]
- [2023/10] Factuality Challenges in the Era of Large Language Models Isabelle Augenstein et al. arXiv. [paper]
- [2023/10] Combating Misinformation in the Era of Generative AI Models Danni Xu et al. arXiv. [paper]
- [2023/10] Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity Cunxiang Wang et al. arXiv. [paper]
- [2023/10] Language Models Hallucinate, but May Excel at Fact Verification Jian Guan et al. arXiv. [paper]
- [2023/10] The Perils & Promises of Fact-checking with Large Language Models Dorian Quelle, Alexandre Bovet. arXiv. [paper]
- [2023/10] Automated Claim Matching with Large Language Models: Empowering Fact-Checkers in the Fight Against Misinformation Eun Cheol Choi, Emilio Ferrara. arXiv. [paper]
- [2023/10] FakeGPT: Fake News Generation, Explanation and Detection of Large Language Models. Yue Huang, Lichao Sun. arXiv. [paper]
- [2023/10] Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models Haoran Wang, Kai Shu. arXiv. [paper]
- [2023/09] Can LLM-Generated Misinformation Be Detected? Canyu Chen, Kai Shu. arXiv. [paper]
- [2023/09] Disinformation Detection: An Evolving Challenge in the Age of LLMs Bohan Jiang et al. arXiv. [paper]
- [2023/09] Can Large Language Models Discern Evidence for Scientific Hypotheses? Case Studies in the Social Sciences. Sai Koneru et al. arXiv. [paper]
- [2023/09] Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method. Xuan Zhang and Wei Gao. AACL 2023. [paper]
- [2023/09] Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model. Bohdan M. Pavlyshenko. arXiv. [paper]
- [2023/08] Cheap-fake Detection with LLM using Prompt Engineering. Guangyang Wu et al. IEEE ICMEW 2023. [paper]
- [2023/07] Harnessing the Power of ChatGPT to Decimate Mis/Disinformation: Using ChatGPT for Fake News Detection. Kevin Matthe Caramancion. IEEE AIIoT. [paper]
- [2023/07] Fact-Checking Complex Claims with Program-Guided Reasoning. Liangming Pan et al. ACL 2023. [paper]
- [2023/06] Assessing the Effectiveness of GPT-3 in Detecting False Political Statements: A Case Study on the LIAR Dataset. Mars Gokturk Buchholz. arXiv. [paper]
- [2023/06] A Preliminary Study of ChatGPT on News Recommendation: Personalization, Provider Fairness, Fake News. Xinyi Li et al. arXiv. [paper]
- [2023/05] Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models. Miaoran Li et al. arXiv. [paper]
- [2023/05] Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4. Kellin Pelrine et al. arXiv. [paper]
- [2023/04] Leveraging ChatGPT for Efficient Fact-Checking. Emma Hoes et al. PsyArXiv. [paper]
- [2023/04] Interpretable Unified Language Checking. Tianhua Zhang et al. arXiv. [paper]
- [2023/02] A Multitask, Multi-lingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv. [paper]
- [2023/09] FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking Tsun-Hin Cheung and Kin-Man Lam. APSIPA ASC 2023. [paper]
- [2023/07] FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios I-Chun Chern et al. arXiv. [paper]
- [2023/09] Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection Beizhe Hu et al. arXiv. [paper]
- [2023/09] Detecting Misinformation with LLM-Predicted Credibility Signals and Weak Supervision João A. Leite et al. arXiv. [paper]
- [2023/09] Improving Multiclass Classification of Fake News Using BERT-Based Models and ChatGPT-Augmented Data Elena Shushkevich et al. MDPI Inventions. [paper]
- [2023/09] Can Large Language Models Enhance Fake News Detection?: Improving Fake News Detection With Data Augmentation Emil Ahlbäck, Max Dougly. [paper]
- [2022/03] Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation Kung-Hsiang Huang et al. ACL 2023. [paper]
- [2023/09] Artificial intelligence is ineffective and potentially harmful for fact checking Matthew R. DeVerna et al. arXiv. [paper]
- [2023/04] Reinforcement Learning-Based Counter-Misinformation Response Generation: A Case Study of COVID-19 Vaccine Misinformation Bing He et al. WWW 2023. [paper]
- [2023/04] Working With AI to Persuade: Examining a Large Language Model’s Ability to Generate Pro-Vaccination Messages Elise Karinshak et al. CSCW 2023. [paper]
- [2023/05] Learning Interpretable Style Embeddings via Prompting LLMs Ajay Patel et al. arXiv. [paper]
- [2023/11] Adapting Fake News Detection to the Era of Large Language Models. Jinyan Su et al. arXiv. [paper]
- [2023/10] Fake News in Sheep’s Clothing: Robust Fake News Detection Against LLM-Empowered Style Attacks. Jiaying Wu, Bryan Hooi. arXiv. [paper]
- [2023/10] LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples Jia-Yu Yao et al. arXiv. [paper]
- [2023/10] FakeGPT: Fake News Generation, Explanation and Detection of Large Language Models. Yue Huang, Lichao Sun. arXiv. [paper]
- [2023/09] Can LLM-Generated Misinformation Be Detected? Canyu Chen, Kai Shu. arXiv. [paper]
- [2023/09] Disinformation Detection: An Evolving Challenge in the Age of LLMs Bohan Jiang et al. arXiv. [paper]
- [2023/09] Fake News Detectors are Biased against Texts Generated by Large Language Models. Jinyan Su et al. arXiv. [paper]
- [2023/08] Improving Detection of ChatGPT-Generated Fake Science Using Real Publication Text: Introducing xFakeBibs a Supervised Learning Network Algorithm Ahmed Abdeen Hamed, Xindong Wu. arXiv. [paper]
- [2023/07] The Looming Threat of Fake and LLM-generated LinkedIn Profiles: Challenges and Opportunities for Detection and Prevention. Navid Ayoobi et al. ACM Conference on Hypertext and Social Media (HT 2023). [paper]
- [2023/07] What label should be applied to content produced by generative AI? Ziv Epstein et al. PsyArXiv. [paper]
- [2023/07] Artificial intelligence - friend or foe in fake news campaigns. Krzysztof Węcel et al. Economics and Business Review. [paper]
- [2023/06] How AI can distort human beliefs. Celeste Kidd, Abeba Birhane. Science. [paper]
- [2023/06] AI model GPT-3 (dis)informs us better than humans. Giovanni Spitale et al. Science Advances. [paper]
- [2023/06] Med-MMHL: A Multi-Modal Dataset for Detecting Human- and LLM-Generated Misinformation in the Medical Domain. Yanshen Sun et al. arXiv. [paper]
- [2023/06] Implementing BERT and fine-tuned RobertA to detect AI generated news by ChatGPT Zecong Wang et al. arXiv. [paper]
- [2023/05] Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites. Hans W. A. Hanley, Zakir Durumeric. arXiv. [paper]
- [2023/05] On the Risk of Misinformation Pollution with Large Language Models. Yikang Pan et al. arXiv. [paper]
- [2023/04] Can AI Write Persuasive Propaganda? Josh A. Goldstein et al. SocArXiv. [paper]
- [2023/04] Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. Jiawei Zhou et al. CHI 2023. [paper]
- [2023/01] Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. Josh A. Goldstein et al. arXiv. [paper]
- [2023/09] Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. Yue Zhang et al. arXiv. [paper]
- [2023/09] A Survey of Hallucination in Large Foundation Models. Vipula Rawte et al. arXiv. [paper]
- [2023/09] A Survey of Knowledge Enhanced Pre-Trained Language Models. Linmei Hu et al. TKDE 2023. [paper]
- [2023/06] Unifying Large Language Models and Knowledge Graphs: A Roadmap. Shirui Pan et al. arXiv. [paper]
- [2022/01] A Survey of Knowledge-Enhanced Text Generation. Wenhao Yu et al. ACM Computing Surveys (CSUR) 2022. [paper]
- [2023/07] Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models. Yuheng Huang et al. arXiv. [paper]
- [2023/07] A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation. Neeraj Varshney et al. arXiv. [paper]
- [2023/06] Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs. Miao Xiong et al. arXiv. [paper]
- [2023/10] Retrieval-Generation Synergy Augmented Large Language Models. Zhangyin Feng et al. arXiv. [paper]
- [2023/05] Active Retrieval Augmented Generation. Zhengbao Jiang et al. arXiv. [paper]
- [2022/02] A Survey on Retrieval-Augmented Text Generation. Huayang Li et al. arXiv. [paper]
- [2023/07] Citation: A Key to Building Responsible and Accountable Large Language Models. Jie Huang, Kevin Chen-Chuan Chang. arXiv. [paper]
- [2023/05] Enabling Large Language Models to Generate Text with Citations. Tianyu Gao et al. arXiv. [paper]
- [2023/08] PMET: Precise Model Editing in a Transformer. Xiaopeng Li et al. arXiv. [paper]
- [2023/05] Editing Large Language Models: Problems, Methods, and Opportunities. Yunzhi Yao et al. EMNLP 2023. [paper]
- [2022/02] Locating and Editing Factual Associations in GPT. Kevin Meng et al. NeurIPS 2022. [paper]
- [2023/05] LM vs LM: Detecting Factual Errors via Cross Examination. Roi Cohen et al. arXiv. [paper]
- [2023/05] Improving Factuality and Reasoning in Language Models through Multiagent Debate. Yilun Du et al. arXiv. [paper]
- [2023/09] Chain-of-Verification Reduces Hallucination in Large Language Models. Shehzaad Dhuliawala et al. arXiv. [paper]
- [2023/05] "According to ..." Prompting Language Models Improves Quoting from Pre-Training Data. Orion Weller et al. arXiv. [paper]
- [2023/09] DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models. Yung-Sung Chuang et al. arXiv. [paper]
- [2022/06] Factuality Enhanced Language Models for Open-Ended Text Generation. Nayeon Lee et al. NeurIPS 2022. [paper]
- [2023/08] Identifying and Mitigating the Security Risks of Generative AI. Clark Barrett et al. arXiv. [paper]
- [2023/06] Evaluating the Social Impact of Generative AI Systems in Systems and Society. Irene Solaiman et al. arXiv. [paper]
- [2021/08] On the Opportunities and Risks of Foundation Models. Rishi Bommasani et al. arXiv. [paper]
- [2023/06] TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models. Yue Huang et al. arXiv. [paper]
- [2023/06] DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. Boxin Wang et al. arXiv. [paper]
- [2022/06] Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models. Maribeth Rauh et al. NeurIPS 2022. [paper]
- [2023/05] Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation. Patrick Fernandes et al. arXiv. [paper]
- [2022/12] Constitutional AI: Harmlessness from AI Feedback. Yuntao Bai et al. arXiv. [paper]
- [2022/03] Training language models to follow instructions with human feedback. Long Ouyang et al. arXiv. [paper]
- [2023/06] Explore, Establish, Exploit: Red Teaming Language Models from Scratch. Stephen Casper et al. arXiv. [paper]
- [2023/07] Query-Efficient Black-Box Red Teaming via Bayesian Optimization. Deokjae Lee et al. ACL 2023. [paper]
- [2022/08] Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. Deep Ganguli et al. arXiv. [paper]
- [2023/10] Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation. Yangsibo Huang et al. arXiv. [paper]
- [2023/07] MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots. Gelei Deng et al. arXiv. [paper]
- [2023/07] Universal and Transferable Adversarial Attacks on Aligned Language Models. Andy Zou et al. arXiv. [paper]
- [2023/09] Certifying LLM Safety against Adversarial Prompting. Aounon Kumar et al. arXiv. [paper]
- [2023/08] LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked. Alec Helbling et al. arXiv. [paper]
- [2022/08] Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models. Peter Henderson et al. AIES 2023. [paper]
- [2023/11] Adapting Fake News Detection to the Era of Large Language Models. Jinyan Su et al. arXiv. [paper]
- [2023/10] Fake News in Sheep’s Clothing: Robust Fake News Detection Against LLM-Empowered Style Attacks. Jiaying Wu, Bryan Hooi. arXiv. [paper]
- [2023/10] FakeGPT: Fake News Generation, Explanation and Detection of Large Language Models. Yue Huang, Lichao Sun. arXiv. [paper]
- [2023/09] Can LLM-Generated Misinformation Be Detected? Canyu Chen, Kai Shu. arXiv. [paper]
- [2023/09] Disinformation Detection: An Evolving Challenge in the Age of LLMs Bohan Jiang et al. arXiv. [paper]
- [2023/08] Improving Detection of ChatGPT-Generated Fake Science Using Real Publication Text: Introducing xFakeBibs a Supervised Learning Network Algorithm Ahmed Abdeen Hamed, Xindong Wu. arXiv. [paper]
- [2023/07] FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios I-Chun Chern et al. arXiv. [paper]
- [2023/06] Implementing BERT and fine-tuned RobertA to detect AI generated news by ChatGPT Zecong Wang et al. arXiv. [paper]
- [2023/04] Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. Jiawei Zhou et al. CHI 2023. [paper]