Best AI Tools for AI Ethics Policy Advisor
20 - AI Tool Sites
Montreal AI Ethics Institute
The Montreal AI Ethics Institute (MAIEI) is an international non-profit organization founded in 2018, dedicated to democratizing AI ethics literacy. It equips citizens concerned about artificial intelligence and its impact on society to take action through research summaries, columns, and AI applications in various fields.
AIGA AI Governance Framework
The AIGA AI Governance Framework is a practice-oriented framework for implementing responsible AI. It provides organizations with a systematic approach to AI governance, covering the entire process of AI system development and operations. The framework supports compliance with the upcoming European AI regulation and serves as a practical guide for organizations aiming for more responsible AI practices. It is designed to facilitate the development and deployment of transparent, accountable, fair, and non-maleficent AI systems.
Coalition for Health AI (CHAI)
The Coalition for Health AI (CHAI) is a multi-stakeholder organization that provides guidelines for the responsible use of AI in health. It focuses on developing best practices and frameworks for safe and equitable AI in healthcare. CHAI aims to address algorithmic bias and collaborates with diverse stakeholders to drive the development, evaluation, and appropriate use of AI in healthcare.
Center for AI Safety (CAIS)
The Center for AI Safety (CAIS) is a research and field-building nonprofit organization based in San Francisco. They conduct impactful research, advocacy projects, and provide resources to reduce societal-scale risks associated with artificial intelligence (AI). CAIS focuses on technical AI safety research, field-building projects, and offers a compute cluster for AI/ML safety projects. They aim to develop and use AI safely to benefit society, addressing inherent risks and advocating for safety standards.
Future of Privacy Forum
The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. It provides resources, training sessions, and guidance on AI-related topics, online advertising, youth privacy legislation, and more. FPF brings together industry, academics, civil society, policymakers, and other stakeholders to explore challenges posed by emerging technologies and develop privacy protections, ethical norms, and best practices.
AI Elections Accord
AI Elections Accord is a tech accord aimed at combating the deceptive use of AI in the 2024 elections. It sets expectations for managing risks related to deceptive AI election content on large-scale platforms. The accord focuses on prevention, provenance, detection, responsive protection, evaluation, public awareness, and resilience to safeguard the democratic process. It emphasizes collective efforts, education, and the development of defensive tools to protect public debate and build societal resilience against deceptive AI content.
blog.biocomm.ai
blog.biocomm.ai is an AI safety blog that focuses on the existential threat posed by uncontrolled and uncontained AI technology. It curates and organizes information related to AI safety, including the risks and challenges associated with the proliferation of AI. The blog aims to educate and raise awareness about the importance of developing safe and regulated AI systems to ensure the survival of humanity.
Microsoft Responsible AI Toolbox
Microsoft Responsible AI Toolbox is a suite of tools designed to assess, develop, and deploy AI systems in a safe, trustworthy, and ethical manner. It offers integrated tools and functionalities that help operationalize Responsible AI in practice and make informed, user-facing decisions faster and more easily. The Responsible AI Dashboard provides a customizable experience for model debugging, decision-making, and business actions. With its focus on responsible assessment, the toolbox aims to promote ethical AI practices and transparency in AI development.
AI Index
The AI Index is a comprehensive resource for data and insights on artificial intelligence. It provides unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI. The AI Index tracks, collates, distills, and visualizes data relating to artificial intelligence. This includes data on research and development, technical performance and ethics, the economy and education, AI policy and governance, diversity, public opinion, and more.
Teacher to Techie
Teacher to Techie is an AI consulting service for education, offering professional development workshops, AI policy consulting, customized AI training, and education consulting for teachers. The platform empowers educators to leverage AI technology in the classroom, streamline processes, and enhance student learning outcomes. With a focus on AI literacy and ethics, Teacher to Techie aims to bridge the gap between teachers and modern technology, providing tailored solutions to meet the evolving needs of educational settings.
AI & Inclusion Hub
The website focuses on the intersection of artificial intelligence (AI) and inclusion, exploring the impact of AI technologies on marginalized populations and global digital inequalities. It provides resources, research findings, and ideas on themes like health, education, and humanitarian crisis mitigation. The site showcases the work of the Ethics and Governance of AI initiative in collaboration with the MIT Media Lab, incorporating perspectives from experts in the field. It aims to address challenges and opportunities related to AI and inclusion through research, events, and multi-stakeholder dialogues.
International Journal for Educational Integrity
The International Journal for Educational Integrity is a scholarly journal that publishes articles on academic integrity, ethics, and plagiarism. It features original research articles, reviews, and thematic collections on topics such as machine-based plagiarism, contract cheating, and the impact of emergencies on educational integrity. The journal aims to address emerging threats to academic integrity and promote ethical practices in education.
Stanford HAI
Stanford HAI is a research institute at Stanford University dedicated to advancing AI research, education, and policy to improve the human condition. The institute brings together researchers from a variety of disciplines to work on a wide range of AI-related projects, including developing new AI algorithms, studying the ethical and societal implications of AI, and creating educational programs to train the next generation of AI leaders. Stanford HAI is committed to developing human-centered AI technologies and applications that benefit all of humanity.
Dr. Amit Ray
Dr. Amit Ray is known for his teachings on peace, compassion, meditation, non-violence, the 114 chakras, compassionate AI, mindfulness, leadership, and creativity, as well as for his contributions to quantum computing and artificial intelligence. He has spent several years in deep, silent meditation in the high Himalayas. He is one of the rare meditation masters who has fully experienced higher consciousness and is conversant with both the ancient meditation literature and modern research. He often remains engrossed in deep meditation in the dense forests, caves, and snow-covered peaks of the Himalayas, while keeping a watchful, compassionate gaze on the well-being of humanity. His compassion and meditation teachings transcend worldly distinctions such as religion, country, culture, caste, class, color, history, and geography.
Trustworthy AI
Trustworthy AI is a business guide that focuses on navigating trust and ethics in artificial intelligence. Authored by Beena Ammanath, a global thought leader in AI ethics, the book provides practical guidelines for organizations developing or using AI solutions. It addresses the importance of AI systems adhering to social norms and ethics, making fair decisions in a consistent, transparent, explainable, and unbiased manner. Trustworthy AI offers readers a structured approach to thinking about AI ethics and trust, emphasizing the need for ethical considerations in the rapidly evolving landscape of AI technology.
Kodora AI
Kodora AI is a leading AI technology and advisory firm based in Australia, specializing in providing end-to-end AI services. They offer AI strategy development, use case identification, workforce AI training, and more. With a team of expert AI engineers and consultants, Kodora focuses on delivering practical outcomes for clients across various industries. The firm is known for its deep expertise, solution-focused approach, and commitment to driving AI adoption and innovation.
edu720
edu720 is a science-backed learning platform that uses AI and nanolearning to redefine how workforces learn and achieve their goals. It provides pre-built learning modules on various topics, including cybersecurity, privacy, and AI ethics. edu720's 360-degree approach ensures that all employees, regardless of their status or location, fully understand and absorb the knowledge conveyed.
Accel.AI
Accel.AI is an institute founded in 2016 with a mission to drive artificial intelligence for social impact initiatives. They focus on integrating AI and social impact through research, consulting, and workshops on ethical AI development and applied AI engineering. The institute targets underrepresented groups, tech companies, governments, and individuals experiencing job loss due to automation. They work globally with companies, professionals, and students.
Salesforce AI Blog
The Salesforce AI Blog covers AI research topics such as accountability, accuracy, AI agents, AI coding, AI ethics, AI object detection, deep learning, forecasting, generative AI, and more. The blog showcases cutting-edge research, advancements, and projects in the field of artificial intelligence, and highlights the work of Salesforce Research team members and their contributions to the AI community.
Fritz AI
Fritz AI is a platform that reviews and ranks AI tools, apps, and websites against a set of criteria to identify the best and most ethical options. It provides technical guides, reviews, and tutorials to help users get started with machine learning. Fritz AI weighs ethics, functionality, user experience, and innovation when evaluating tools. Users can contribute tool suggestions and collaborate with the Fritz AI team. The platform also offers beginner-friendly guides and consulting services, and promotes the ethical use of AI and machine learning technologies.
20 - Open Source Tools
awesome-generative-ai
A curated list of Generative AI projects, tools, artworks, and models
awesome-artificial-intelligence-guidelines
The 'Awesome AI Guidelines' repository aims to simplify the ecosystem of guidelines, principles, codes of ethics, standards, and regulations around artificial intelligence. It provides a comprehensive collection of resources addressing ethical and societal challenges in AI systems, including high-level frameworks, principles, processes, checklists, interactive tools, industry standards initiatives, online courses, research, and industry newsletters, as well as regulations and policies from various countries. The repository serves as a valuable reference for individuals and teams designing, building, and operating AI systems to navigate the complex landscape of AI ethics and governance.
AwesomeResponsibleAI
Awesome Responsible AI is a curated list of academic research, books, code of ethics, courses, data sets, frameworks, institutes, newsletters, principles, podcasts, reports, tools, regulations, and standards related to Responsible, Trustworthy, and Human-Centered AI. It covers various concepts such as Responsible AI, Trustworthy AI, Human-Centered AI, Responsible AI frameworks, AI Governance, and more. The repository provides a comprehensive collection of resources for individuals interested in ethical, transparent, and accountable AI development and deployment.
100days_AI
The 100 Days in AI repository provides a comprehensive roadmap for individuals to learn Artificial Intelligence over a period of 100 days. It covers topics ranging from basic programming in Python to advanced concepts in AI, including machine learning, deep learning, and specialized AI topics. The repository includes daily tasks, resources, and exercises to ensure a structured learning experience. By following this roadmap, users can gain a solid understanding of AI and be prepared to work on real-world AI projects.
fairlearn
Fairlearn is a Python package designed to help developers assess and mitigate fairness issues in artificial intelligence (AI) systems. It provides mitigation algorithms and metrics for model assessment. Fairlearn focuses on two types of harms: allocation harms and quality-of-service harms. The package follows the group fairness approach, aiming to identify groups at risk of experiencing harms and ensuring comparable behavior across these groups. Fairlearn consists of metrics for assessing model impacts and algorithms for mitigating unfairness in various AI tasks under different fairness definitions.
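For illustration, here is a minimal sketch of the group-fairness assessment workflow Fairlearn supports; the toy dataset, the sensitive "group" column, and the classifier below are placeholders, not part of the package:

```python
# Minimal, illustrative Fairlearn assessment; the toy data and classifier are
# placeholders for your own dataset and model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

data = pd.DataFrame({
    "feature_1": [0.2, 0.5, 0.1, 0.9, 0.4, 0.7, 0.3, 0.8],
    "feature_2": [1, 0, 1, 0, 1, 1, 0, 0],
    "group":     ["a", "a", "b", "b", "a", "b", "a", "b"],  # sensitive feature
    "label":     [0, 1, 0, 1, 0, 1, 1, 0],
})

X, y, sensitive = data[["feature_1", "feature_2"]], data["label"], data["group"]
pred = LogisticRegression().fit(X, y).predict(X)

# Per-group view: does accuracy or selection rate differ across groups?
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=pred,
    sensitive_features=sensitive,
)
print(frame.by_group)

# A single disparity score (0 means demographic parity).
print("Demographic parity difference:",
      demographic_parity_difference(y, pred, sensitive_features=sensitive))
```

The same sensitive-feature column can then be passed to Fairlearn's mitigation algorithms to retrain a model under a chosen fairness constraint.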
nlp-llms-resources
The 'nlp-llms-resources' repository is a comprehensive resource list for Natural Language Processing (NLP) and Large Language Models (LLMs). It covers a wide range of topics including traditional NLP datasets, data acquisition, libraries for NLP, neural networks, sentiment analysis, optical character recognition, information extraction, semantics, topic modeling, multilingual NLP, domain-specific LLMs, vector databases, ethics, costing, books, courses, surveys, aggregators, newsletters, papers, conferences, and societies. The repository provides valuable information and resources for individuals interested in NLP and LLMs.
ai-notes
Notes on AI state of the art, with a focus on generative and large language models. These are the "raw materials" for the https://lspace.swyx.io/ newsletter. This repo used to be called https://github.com/sw-yx/prompt-eng, but was renamed because Prompt Engineering is Overhyped. This is now an AI Engineering notes repo.
data-to-paper
Data-to-paper is an AI-driven framework designed to guide users through the process of conducting end-to-end scientific research, starting from raw data to the creation of comprehensive and human-verifiable research papers. The framework leverages a combination of LLM and rule-based agents to assist in tasks such as hypothesis generation, literature search, data analysis, result interpretation, and paper writing. It aims to accelerate research while maintaining key scientific values like transparency, traceability, and verifiability. The framework is field-agnostic, supports both open-goal and fixed-goal research, creates data-chained manuscripts, involves human-in-the-loop interaction, and allows for transparent replay of the research process.
jailbreak_llms
This is the official repository for the ACM CCS 2024 paper 'Do Anything Now': Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. The project employs a new framework called JailbreakHub to conduct the first measurement study on jailbreak prompts in the wild, collecting 15,140 prompts from December 2022 to December 2023, including 1,405 jailbreak prompts. The dataset serves as the largest collection of in-the-wild jailbreak prompts. The repository contains examples of harmful language and is intended for research purposes only.
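As a hypothetical sketch of how such a prompt collection might be explored with pandas (the CSV path and the "prompt"/"jailbreak" column names are assumptions, not the repository's documented schema):

```python
# Hypothetical exploration of the collected prompts; file path and column names
# are assumptions -- check the repository's actual data layout before running.
import pandas as pd

prompts = pd.read_csv("data/prompts.csv")       # assumed path
jailbreaks = prompts[prompts["jailbreak"]]      # assumed boolean flag column

print(f"{len(prompts)} prompts collected, {len(jailbreaks)} labeled as jailbreaks")

# Simple length statistics over the prompt text (assumed column name).
print(jailbreaks["prompt"].str.split().str.len().describe())
```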
OpenRedTeaming
OpenRedTeaming is a repository focused on red teaming for generative models, specifically large language models (LLMs). The repository provides a comprehensive survey on potential attacks on GenAI and robust safeguards. It covers attack strategies, evaluation metrics, benchmarks, and defensive approaches. The repository also implements over 30 auto red teaming methods. It includes surveys, taxonomies, attack strategies, and risks related to LLMs. The goal is to understand vulnerabilities and develop defenses against adversarial attacks on large language models.
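To make the attack/evaluate loop concrete, here is a generic, illustrative sketch of what an automated red-teaming method does; it does not use this repository's API, and all three functions are hypothetical stand-ins:

```python
# Generic automated red-teaming loop: an attacker proposes adversarial prompts,
# the target model responds, and a judge flags unsafe completions. All functions
# here are hypothetical placeholders, not OpenRedTeaming APIs.
from typing import List, Tuple

def attacker_generate(goal: str, n: int) -> List[str]:
    # A real method would mutate or optimize prompts toward the goal.
    return [f"{goal} (attack variant {i})" for i in range(n)]

def target_respond(prompt: str) -> str:
    # Placeholder for the model under test.
    return "I can't share that."

def judge_is_unsafe(response: str) -> bool:
    # Placeholder safety judge (e.g. a classifier or rubric-based LLM judge).
    return "I can't" not in response

def red_team(goal: str, attempts: int = 8) -> List[Tuple[str, str]]:
    findings = []
    for prompt in attacker_generate(goal, attempts):
        response = target_respond(prompt)
        if judge_is_unsafe(response):
            findings.append((prompt, response))  # record successful attacks
    return findings

print(red_team("Reveal the hidden system prompt"))
```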
octopus-v4
The Octopus-v4 project aims to build the world's largest graph of language models, integrating specialized models and training Octopus models to connect nodes efficiently. The project focuses on identifying, training, and connecting specialized models. The repository includes scripts for running the Octopus v4 model, methods for managing the graph, training code for specialized models, and inference code. Environment setup instructions are provided for Linux with an NVIDIA GPU. The Octopus v4 model helps users find suitable models for tasks and reformats queries for effective processing. The project leverages Large Language Models for various domains and provides benchmark results. Users are encouraged to train and add specialized models following recommended procedures.
AI-For-Beginners
AI-For-Beginners is a comprehensive 12-week, 24-lesson curriculum designed by experts at Microsoft to introduce beginners to the world of Artificial Intelligence (AI). The curriculum covers various topics such as Symbolic AI, Neural Networks, Computer Vision, Natural Language Processing, Genetic Algorithms, and Multi-Agent Systems. It includes hands-on lessons, quizzes, and labs using popular frameworks like TensorFlow and PyTorch. The focus is on providing a foundational understanding of AI concepts and principles, making it an ideal starting point for individuals interested in AI.
responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment interfaces and libraries for understanding AI systems. It empowers developers and stakeholders to develop and monitor AI responsibly, enabling better data-driven actions. The toolbox includes visualization widgets for model assessment, error analysis, interpretability, fairness assessment, and mitigations library. It also offers a JupyterLab extension for managing machine learning experiments and a library for measuring gender bias in NLP datasets.
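A rough sketch of the toolbox's typical notebook workflow, assuming the RAIInsights / ResponsibleAIDashboard pattern; the toy DataFrame and model are illustrative, and exact argument names should be checked against the current documentation:

```python
# Rough sketch of the responsible-ai-toolbox notebook workflow; the dataset and
# model are toy placeholders, and signatures should be verified against the docs.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

train_df = pd.DataFrame({
    "age":      [25, 40, 35, 50, 28, 60],
    "income":   [30, 80, 55, 90, 40, 75],
    "approved": [0, 1, 1, 1, 0, 1],
})
test_df = train_df.copy()

model = RandomForestClassifier().fit(train_df[["age", "income"]], train_df["approved"])

# Register the analyses you want, then compute them.
rai_insights = RAIInsights(model, train_df, test_df,
                           target_column="approved", task_type="classification")
rai_insights.explainer.add()        # interpretability
rai_insights.error_analysis.add()   # error analysis
rai_insights.compute()

ResponsibleAIDashboard(rai_insights)  # interactive dashboard, rendered in a notebook
```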
aws-machine-learning-university-responsible-ai
This repository contains slides, notebooks, and data for the Machine Learning University (MLU) Responsible AI class. The mission is to make Machine Learning accessible to everyone, covering widely used ML techniques and applying them to real-world problems. The class includes lectures, final projects, and interactive visuals to help users learn about Responsible AI and core ML concepts.
llm_benchmarks
llm_benchmarks is a collection of benchmarks and datasets for evaluating Large Language Models (LLMs). It includes various tasks and datasets to assess LLMs' knowledge, reasoning, language understanding, and conversational abilities. The repository aims to provide comprehensive evaluation resources for LLMs across different domains and applications, such as education, healthcare, content moderation, coding, and conversational AI. Researchers and developers can leverage these benchmarks to test and improve the performance of LLMs in various real-world scenarios.
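As a generic illustration (not code from this repository), multiple-choice benchmarks of this kind are usually scored by prompting the model with each question and comparing its answer letter to the reference answer; `ask_model` below is a hypothetical stand-in for an LLM client:

```python
# Generic sketch of scoring an LLM on a multiple-choice benchmark; `ask_model`
# and the sample item are illustrative placeholders.
from typing import Callable, Dict, List

def ask_model(prompt: str) -> str:
    # Placeholder: call your LLM here and return its answer.
    return "A"

benchmark: List[Dict] = [
    {
        "question": "Which of these is an EU regulation specifically about AI?",
        "choices": {"A": "The AI Act", "B": "The DMCA", "C": "HIPAA", "D": "COPPA"},
        "answer": "A",
    },
]

def evaluate(items: List[Dict], model_fn: Callable[[str], str]) -> float:
    correct = 0
    for item in items:
        options = "\n".join(f"{k}. {v}" for k, v in item["choices"].items())
        prompt = f"{item['question']}\n{options}\nAnswer with a single letter."
        if model_fn(prompt).strip().upper().startswith(item["answer"]):
            correct += 1
    return correct / len(items)

print(f"Accuracy: {evaluate(benchmark, ask_model):.0%}")
```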
LLM-for-Healthcare
The repository 'LLM-for-Healthcare' provides a comprehensive survey of large language models (LLMs) for healthcare, covering data, technology, applications, and accountability and ethics. It includes information on various LLM models, training data, evaluation methods, and computation costs. The repository also discusses tasks such as NER, text classification, question answering, dialogue systems, and generation of medical reports from images in the healthcare domain.
20 - OpenAI GPTs
Professor Arup Das Ethics Coach
Supportive and engaging AI Ethics tutor, providing practical tips and career guidance.
Your AI Ethical Guide
Trained in kindness, empathy & respect based on ethics from global philosophies
GPT Safety Liaison
A liaison GPT for AI safety emergencies, connecting users to OpenAI experts.
DignityAI: The Ethical Intelligence GPT
DignityAI: The Ethical Intelligence GPT is an advanced AI model designed to prioritize human life and dignity, providing ethically-guided, intelligent responses for complex decision-making scenarios.
Regulations.AI
Ask about AI regulations, in any language; the description repeats the same prompt in Chinese, German, French, and Spanish ("Ask about AI regulations").
AI Ethics Challenge: Society Needs You
Embark on a journey to navigate the complex landscape of AI ethics and fairness. In this game, you'll encounter real-world scenarios where your choices will determine the ethical course of AI development and its consequences on society. Another GPT Simulator by Dave Lalande
How to Stay Connected, Navigate the Ethics of AI
Ethical AI consultant/teacher; adapts tone, educates on AI ethics, offers actionable advice.
AI Ethica Readify
Summarises AI ethics papers, provides context, and offers further assistance.
Ethical AI Insights
Expert in Ethics of Artificial Intelligence, offering comprehensive, balanced perspectives based on thorough research, with a focus on emerging trends and responsible AI implementation. Powered by Breebs (www.breebs.com)
Inclusive AI Advisor
Expert in AI fairness, offering tailored advice and document insights.
OAI Governance Emulator
I simulate the governance of a unique company focused on AI for good
Creator's Guide to the Future
You made it, Creator! 💡 I'm Creator's Guide. ✨️ Your dedicated Guide for creating responsible, self-managing AI culture, systems, games, universes, art, etc. 🚀
Thinks and Links Digest
Archive of content shared in Randy Lariar's weekly "Thinks and Links" newsletter about AI, Risk, and Security.