Awesome Responsible AI

A curated list of awesome academic research, books, code of ethics, courses, data sets, frameworks, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.

Main Concepts

What is Responsible AI?

Responsible AI (RAI) refers to the development, deployment, and use of artificial intelligence (AI) systems in ways that are ethical, transparent, accountable, and aligned with human values.

What is Trustworthy AI?

Trustworthy AI (TAI) refers to artificial intelligence systems designed and deployed to be transparent, robust and respectful of data privacy.

What is Human-Centered AI?

Human-Centered Artificial Intelligence (HCAI) is an approach to AI development that prioritizes human users' needs, experiences, and well-being.

What is a Responsible AI framework?

Responsible AI frameworks often encompass guidelines, principles, and practices that prioritize fairness, safety, and respect for individual rights.

What is AI Governance?

AI governance is a system of rules, processes, frameworks, and tools within an organization to ensure the ethical and responsible development of AI.

Content

Academic Research

Evaluation (of model explanations)

  • Agarwal, C., Krishna, S., Saxena, E., Pawelczyk, M., Johnson, N., Puri, I., ... & Lakkaraju, H. (2022). Openxai: Towards a transparent evaluation of model explanations. Advances in Neural Information Processing Systems, 35, 15784-15799. Article
  • Liesenfeld, A., and Dingemanse, M. (2024). Rethinking Open Source Generative AI: Open-Washing and the EU AI Act. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Rio de Janeiro, Brazil: ACM. Article Benchmark
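
The deletion-style faithfulness checks that benchmarks such as OpenXAI (first entry above) systematize can be illustrated in a few lines: mask the most-attributed features and measure how far the model's prediction moves. The sketch below is a generic illustration of that idea, not the OpenXAI API.

# A minimal sketch of a deletion-style faithfulness check for feature
# attributions; a generic illustration, not the OpenXAI API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

attributions = np.abs(model.coef_[0])          # stand-in explanation: |weights|
top_k = np.argsort(attributions)[::-1][:3]     # three most-attributed features

X_masked = X.copy()
X_masked[:, top_k] = X[:, top_k].mean(axis=0)  # "delete" features via mean imputation

drop = model.predict_proba(X)[:, 1] - model.predict_proba(X_masked)[:, 1]
print(f"Mean prediction shift after masking top-3 features: {np.abs(drop).mean():.3f}")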

Bias

  • Schwartz, R., Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (Vol. 3, p. 00). US Department of Commerce, National Institute of Standards and Technology. Article NIST

Challenges

  • D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., ... & Sculley, D. (2022). Underspecification presents challenges for credibility in modern machine learning. Journal of Machine Learning Research, 23(226), 1-61. Article Google
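
The underspecification phenomenon is easy to reproduce in miniature: two pipelines that differ only in random seed reach near-identical held-out accuracy yet disagree on individual predictions. A toy sketch of this, using standard scikit-learn (an illustration only, not the paper's experimental setup):

# Underspecification in miniature: equally accurate models that disagree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

m1 = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
m2 = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

print("Accuracy m1:", m1.score(X_te, y_te), "| m2:", m2.score(X_te, y_te))
disagree = (m1.predict(X_te) != m2.predict(X_te)).mean()
print(f"Models disagree on {disagree:.1%} of test points despite similar accuracy")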

Drift

  • Ackerman, S., Dube, P., Farchi, E., Raz, O., & Zalmanovici, M. (2021, June). Machine learning model drift detection via weak data slices. In 2021 IEEE/ACM Third International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest) (pp. 1-8). IEEE. Article IBM
  • Ackerman, S., Raz, O., & Zalmanovici, M. (2020, February). FreaAI: Automated extraction of data slices to test machine learning models. In International Workshop on Engineering Dependable and Secure Machine Learning Systems (pp. 67-83). Cham: Springer International Publishing. Article IBM
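
A common baseline for the drift problem these papers address is a per-feature two-sample test between reference (training-time) data and production data. The sketch below uses a Kolmogorov-Smirnov test; it illustrates the task, not the slice-based methods of the papers above.

# A minimal per-feature drift check (Kolmogorov-Smirnov two-sample test);
# a common baseline, not the slice-based detection of the papers above.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))  # training-time data
production = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
production[:, 2] += 0.5                                     # simulate drift in one feature

for i in range(reference.shape[1]):
    result = ks_2samp(reference[:, i], production[:, i])
    flag = "DRIFT" if result.pvalue < 0.01 else "ok"
    print(f"feature {i}: KS={result.statistic:.3f}, p={result.pvalue:.4f} -> {flag}")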

Explainability

  • Dhurandhar, A., Chen, P. Y., Luss, R., Tu, C. C., Ting, P., Shanmugam, K., & Das, P. (2018). Explanations based on the missing: Towards contrastive explanations with pertinent negatives. Advances in neural information processing systems, 31. Article University of Michigan IBM Research
  • Dhurandhar, A., Shanmugam, K., Luss, R., & Olsen, P. A. (2018). Improving simple models with confidence profiles. Advances in Neural Information Processing Systems, 31. Article IBM Research
  • Gurumoorthy, K. S., Dhurandhar, A., Cecchi, G., & Aggarwal, C. (2019, November). Efficient data representation by selecting prototypes with importance weights. In 2019 IEEE International Conference on Data Mining (ICDM) (pp. 260-269). IEEE. Article Amazon Development Center IBM Research
  • Hind, M., Wei, D., Campbell, M., Codella, N. C., Dhurandhar, A., Mojsilović, A., ... & Varshney, K. R. (2019, January). TED: Teaching AI to explain its decisions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 123-129). Article IBM Research
  • Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30. Article, Github University of Washington (see the usage sketch after this list)
  • Luss, R., Chen, P. Y., Dhurandhar, A., Sattigeri, P., Zhang, Y., Shanmugam, K., & Tu, C. C. (2021, August). Leveraging latent features for local explanations. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining (pp. 1139-1149). Article IBM Research University of Michigan
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144). Article, Github University of Washington
  • Wei, D., Dash, S., Gao, T., & Gunluk, O. (2019, May). Generalized linear rule models. In International conference on machine learning (pp. 6687-6696). PMLR. Article IBM Research
  • Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019)
  • Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018) IBM Research
  • Towards Robust Interpretability with Self-Explaining Neural Networks (Alvarez-Melis et al., 2018) MIT
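
Both SHAP (Lundberg & Lee) and LIME (Ribeiro et al.) above ship as Python packages. A minimal SHAP usage sketch, assuming `pip install shap` and scikit-learn, following shap's documented TreeExplainer usage:

# A minimal SHAP sketch for a tree model; assumes `pip install shap`.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)             # exact Shapley values for tree models
shap_values = explainer.shap_values(data.data[:100])
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)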

Fairness

  • Caton, S., & Haas, C. (2024). Fairness in machine learning: A survey. ACM Computing Surveys, 56(7), 1-38. Article
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2), 153-163. Article
  • Coston, A., Mishler, A., Kennedy, E. H., & Chouldechova, A. (2020, January). Counterfactual risk assessments, evaluation, and fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 582-593). Article
  • Jesus, S., Saleiro, P., Jorge, B. M., Ribeiro, R. P., Gama, J., Bizarro, P., & Ghani, R. (2024). Aequitas Flow: Streamlining Fair ML Experimentation. arXiv preprint arXiv:2405.05809. Article
  • Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., ... & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577. Article (see the metrics sketch after this list)
  • Vasudevan, S., & Kenthapadi, K. (2020, October). Lift: A scalable framework for measuring fairness in ml applications. In Proceedings of the 29th ACM international conference on information & knowledge management (pp. 2773-2780). Article LinkedIn
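
The core group metrics these audit toolkits report (selection-rate parity, error-rate parity) reduce to simple group-by computations. A pandas sketch of the idea, not the Aequitas or LIFT APIs:

# Group fairness metrics computed directly with pandas, illustrating what
# audit toolkits such as Aequitas report; not the Aequitas API.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
    "actual":    [1,   0,   1,   1,   0,   1,   0,   0],
})

rates = df.groupby("group")["predicted"].mean()          # selection rate per group
print("Selection rates:\n", rates)
print("Disparate impact (min/max):", rates.min() / rates.max())

# False negative rate per group (a common error-rate parity check)
fnr = df[df["actual"] == 1].groupby("group")["predicted"].apply(lambda s: (s == 0).mean())
print("False negative rates:\n", fnr)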

Ethical Data Products

  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92. Article Google
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220-229). Article Google
  • Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022, June). Data cards: Purposeful and transparent dataset documentation for responsible AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1776-1826). Article Google
  • Rostamzadeh, N., Mincu, D., Roy, S., Smart, A., Wilcox, L., Pushkarna, M., ... & Heller, K. (2022, June). Healthsheet: development of a transparency artifact for health datasets. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1943-1961). Article Google
  • Saint-Jacques, G., Sepehri, A., Li, N., & Perisic, I. (2020). Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design. arXiv preprint arXiv:2002.05819. Article LinkedIn
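
Model cards and datasheets are, at bottom, structured documentation. A minimal machine-readable sketch with fields loosely following Mitchell et al. (2019); the schema and field names are illustrative, not a published standard:

# A minimal machine-readable model card; fields loosely follow
# Mitchell et al. (2019) and are illustrative, not a standard schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""
    evaluation_data: str = ""
    metrics: dict[str, float] = field(default_factory=dict)
    ethical_considerations: str = ""

card = ModelCard(
    model_name="credit-risk-v1",
    intended_use="Pre-screening support for loan officers; human review required.",
    out_of_scope_uses=["Fully automated denial decisions"],
    metrics={"auc": 0.87, "false_negative_rate_gap": 0.04},
    ethical_considerations="Audited for disparate impact across protected groups.",
)
print(json.dumps(asdict(card), indent=2))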

Sustainability

  • Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Article
  • Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv preprint arXiv:2304.03271. Article
  • Parcollet, T., & Ravanelli, M. (2021). The energy and carbon footprint of training end-to-end speech recognizers. Article
  • Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.M., Rothchild, D., So, D., Texier, M. and Dean, J. (2021). Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350. Article
  • Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., ... & Dennison, D. (2015). Hidden technical debt in machine learning systems. Advances in neural information processing systems, 28. Article Google
  • Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., ... & Young, M. (2014, December). Machine learning: The high interest credit card of technical debt. In SE4ML: software engineering for machine learning (NIPS 2014 Workshop) (Vol. 111, p. 112). Article Google
  • Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243. Article
  • van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1-6.
  • Lannelongue, L., et al. (2020). Green Algorithms: Quantifying the carbon emissions of computation.
  • Wu, C.-J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C., Gschwind, M., Gupta, A., Ott, M., Melnikov, A., Candido, S., Brooks, D., Chauhan, G., Lee, B., Lee, H.-H., & Hazelwood, K. (2022). Sustainable AI: Environmental implications, challenges and opportunities. In Proceedings of the 5th Conference on Machine Learning and Systems (MLSys), 4, 795-813. Article
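
In the spirit of Lacoste et al.'s emissions quantification, the codecarbon package estimates the CO2 footprint of a block of code from measured energy draw and regional grid intensity. A minimal sketch, assuming `pip install codecarbon`:

# Measuring training-time emissions with codecarbon; assumes the
# codecarbon package is installed (`pip install codecarbon`).
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)

tracker = EmissionsTracker(project_name="rai-demo")
tracker.start()
try:
    model = GradientBoostingClassifier().fit(X, y)   # the workload being measured
finally:
    emissions_kg = tracker.stop()                    # estimated kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")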

Collections

Reproducible/Non-Reproducible Research

Books

Open Access

  • Barrett, M., Gerke, T. & D’Agostino McGowan, L. (2024). Causal Inference in R Book Causal Inference R
  • Biecek, P., & Burzykowski, T. (2021). Explanatory model analysis: explore, explain, and examine predictive models. Chapman and Hall/CRC. Book Explainability Interpretability Transparency R
  • Biecek, P. (2024). Adversarial Model Analysis. Book Safety Red Teaming
  • Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press. Book Causal Inference
  • Huntington-Klein, N. (2021). The Effect: An Introduction to Research Design and Causality. Chapman and Hall/CRC. Book Causal Inference
  • Matloff, N., et al. (2024). Data Science Looks at Discrimination. Book Fairness R
  • Molnar, C. (2020). Interpretable Machine Learning. Lulu.com. Interpretable Machine Learning Book Explainability Interpretability Transparency R

Commercial / Proprietary / Closed Access

Code of Ethics

Courses

Explainability/Interpretability

Causality

Data/AI Ethics

Data Privacy

Ethical Design

Safety

Data Sets

Frameworks

Institutes

Newsletters

Principles

Additional:

Podcasts

Reports

(AI) Incident databases

Market Analysis

Other

Tools

Assessments

Benchmarks

Bias

Causal Inference

Drift

Fairness

Interpretability/Explicability

Interpretable Models

LLM Regulation Compliance

  • COMPL-AI Python ETH Zurich INSAIT LatticeFlow AI

LLM Evaluation

Performance (& Automated ML)

(AI/Data) Poisoning

Privacy

Reliability Evaluation (of post hoc explanation methods)

Robustness

Safety

Security

For consumers:

Sustainability

(RAI) Toolkit

(AI) Watermarking

Regulations

Definition

What are regulations?

Regulations are requirements established by governments.

Interesting resources

Canada

European Union

Short Name Code Description Status Website Legal text
Data Act EU/2023/2854 It enables a fair distribution of the value of data by establishing clear and fair rules for accessing and using data within the European data economy. Published Website Source
Data Governance Act EU/2022/868 It supports the setup and development of Common European Data Spaces in strategic domains, involving both private and public players, in sectors such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration and skills. Published Website Source
Digital Market Act EU/2022/1925 It establishes a set of clearly defined objective criteria to identify “gatekeepers”: large digital platforms providing so-called core platform services, such as online search engines, app stores, and messenger services. Gatekeepers must comply with the do’s (i.e. obligations) and don’ts (i.e. prohibitions) listed in the DMA. Published Website Source
Digital Services Act EU/2022/2065 It regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. Its main goal is to prevent illegal and harmful activities online and the spread of disinformation. It ensures user safety, protects fundamental rights, and creates a fair and open online platform environment. Published Website Source
DSM Directive EU/2019/790 It is intended to ensure a well-functioning marketplace for copyright. Published Website Source
Energy Efficiency Directive EU/2023/1791 It establishes ‘energy efficiency first’ as a fundamental principle of EU energy policy, giving it legal standing for the first time. In practical terms, this means that energy efficiency must be considered by EU countries in all relevant policy and major investment decisions taken in the energy and non-energy sectors. Published Website Source
EU AI Act EU/2024/1689 It assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk are banned. Second, high-risk applications are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated. Published Website Source
General Data Protection Regulation (GDPR) EU/2016/679 It strengthens individuals' fundamental rights in the digital age and facilitates business by clarifying rules for companies and public bodies in the digital single market. Published Website Source

Singapore

United States

  • State consumer privacy laws: California (CCPA and its amendment, CPRA), Virginia (VCDPA), and Colorado (ColoPA).
  • Specific and limited privacy data laws: HIPAA, FCRA, FERPA, GLBA, ECPA, COPPA, VPPA and FTC.
  • EU-U.S. and Swiss-U.S. Privacy Shield Frameworks - The EU-U.S. and Swiss-U.S. Privacy Shield Frameworks were designed by the U.S. Department of Commerce and the European Commission and Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the European Union and Switzerland to the United States in support of transatlantic commerce.
  • Executive Order on Maintaining American Leadership in AI - Official mandate by the President of the US to sustain and promote American leadership in AI.
  • Privacy Act of 1974 - The Privacy Act of 1974 establishes a code of fair information practices that governs the collection, maintenance, use and dissemination of information about individuals that is maintained in systems of records by federal agencies.
  • Privacy Protection Act of 1980 - The Privacy Protection Act of 1980 protects journalists from being required to turn over to law enforcement any work product and documentary materials, including sources, before it is disseminated to the public.
  • AI Bill of Rights - The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from AI threats, based on five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback.

Standards

Definition

What are standards?

Standards are voluntary, consensus solutions. They document an agreement on how a material, product, process, or service should be specified, performed or delivered. They keep people safe and ensure things work. They create confidence and provide security for investment.

Standards can be understood as formal specifications of best practices as well.

IEEE Standards

Domain Standard Status URL
IEEE Guide for an Architectural Framework for Explainable Artificial Intelligence IEEE 2894-2024 Published Source
IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems IEEE 7014-2024 Published Source

UNE/ISO Standards

Domain Standard Status URL
Data quality (Calidad del dato) UNE 0079:2023 Published Source
Data management (Gestión del dato) UNE 0078:2023 Published Source
Data governance (Gobierno del dato) UNE 0077:2023 Published Source
Data set quality assessment guide (Guía de evaluación de la Calidad de un Conjunto de Datos) UNE 0081:2023 Published Source
Data governance, management, and quality management assessment guide (Guía de evaluación del Gobierno, Gestión y Gestión de la Calidad del Dato) UNE 0080:2023 Published Source

ISO/IEC Standards

Domain Standard Status URL
AI Concepts and Terminology ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology Published https://www.iso.org/standard/74296.html
AI Risk Management ISO/IEC 23894:2023 Information technology - Artificial intelligence - Guidance on risk management Published https://www.iso.org/standard/77304.html
AI Management System ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system Published https://www.iso.org/standard/81230.html
Biases in AI ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making Published https://www.iso.org/standard/77607.html
AI Performance ISO/IEC TS 4213:2022 Information technology — Artificial intelligence — Assessment of machine learning classification performance Published https://www.iso.org/standard/79799.html
Ethical and societal concerns ISO/IEC TR 24368:2022 Information technology — Artificial intelligence — Overview of ethical and societal concerns Published https://www.iso.org/standard/78507.html
Explainability ISO/IEC AWI TS 6254 Information technology — Artificial intelligence — Objectives and approaches for explainability of ML models and AI systems Under Development https://www.iso.org/standard/82148.html
AI Sustainability ISO/IEC AWI TR 20226 Information technology — Artificial intelligence — Environmental sustainability aspects of AI systems Under Development https://www.iso.org/standard/86177.html
AI Verification and Validation ISO/IEC AWI TS 17847 Information technology — Artificial intelligence — Verification and validation analysis of AI systems Under Development https://www.iso.org/standard/85072.html
AI Controllability ISO/IEC TS 8200:2024 Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems Published https://www.iso.org/standard/83012.html
Biases in AI ISO/IEC CD TS 12791 Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks Under Publication https://www.iso.org/standard/84110.html
AI Impact Assessment ISO/IEC AWI 42005 Information technology — Artificial intelligence — AI system impact assessment Under Development https://www.iso.org/standard/44545.html
Data Quality for AI/ML ISO/IEC 5259:2024 Artificial intelligence — Data quality for analytics and machine learning (ML) (parts 1 to 6) Published https://www.iso.org/standard/81088.html
Data Lifecycle ISO/IEC 8183:2023 Information technology — Artificial intelligence — Data life cycle framework Published https://www.iso.org/standard/83002.html
Audit and Certification ISO/IEC CD 42006 Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of artificial intelligence management systems Under Development https://www.iso.org/standard/44546.html
Transparency ISO/IEC AWI 12792 Information technology — Artificial intelligence — Transparency taxonomy of AI systems Under Development https://www.iso.org/standard/84111.html
AI Quality ISO/IEC AWI TR 42106 Information technology — Artificial intelligence — Overview of differentiated benchmarking of AI system quality characteristics Under Development https://www.iso.org/standard/86903.html
Trustworthy AI ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence Published https://www.iso.org/standard/77608.html
Synthetic Data ISO/IEC AWI TR 42103 Information technology — Artificial intelligence — Overview of synthetic data in the context of AI systems Under Development https://www.iso.org/standard/86899.html
AI Security ISO/IEC AWI 27090 Cybersecurity — Artificial Intelligence — Guidance for addressing security threats and failures in artificial intelligence systems Under Development https://www.iso.org/standard/56581.html
AI Privacy ISO/IEC AWI 27091 Cybersecurity and Privacy — Artificial Intelligence — Privacy protection Under Development https://www.iso.org/standard/56582.html
AI Governance ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations Published https://www.iso.org/standard/56641.html
AI Safety ISO/IEC TR 5469:2024 Artificial intelligence — Functional safety and AI systems Published https://www.iso.org/standard/81283.html
Beneficial AI Systems ISO/IEC AWI TR 21221 Information technology – Artificial intelligence – Beneficial AI systems Under Development https://www.iso.org/standard/86690.html

NIST Standards

Additional standards can be found using the Standards Database.

Citing this repository

Contributors with over 50 edits are named as coauthors in the citation. All other contributors are covered by "et al."

Bibtex

@misc{arai_repo,
  author={Josep Curto et al.},
  title={Awesome Responsible Artificial Intelligence},
  year={2024},
  note={\url{https://github.com/AthenaCore/AwesomeResponsibleAI}}
}

ACM, APA, Chicago, and MLA

ACM (Association for Computing Machinery)

Curto, J., et al. 2024. Awesome Responsible Artificial Intelligence. GitHub. https://github.com/AthenaCore/AwesomeResponsibleAI.

APA (American Psychological Association) 7th Edition

Curto, J., et al. (2024). Awesome Responsible Artificial Intelligence. GitHub. https://github.com/AthenaCore/AwesomeResponsibleAI.

Chicago Manual of Style 17th Edition

Curto, J., et al. "Awesome Responsible Artificial Intelligence." GitHub. Last modified 2024. https://github.com/AthenaCore/AwesomeResponsibleAI.

MLA (Modern Language Association) 9th Edition

Curto, J., et al. "Awesome Responsible Artificial Intelligence". GitHub, 2024, https://github.com/AthenaCore/AwesomeResponsibleAI. Accessed 15 Oct 2024.
