Best AI Tools for Ethicists
18 - AI Tool Sites
Skills4Good AI
Skills4Good AI is a membership platform that teaches professionals Responsible AI literacy through community-driven learning. The platform helps users build AI skills, ease fears of job disruption, and thrive in an AI-driven world. Its AI Academy provides the training and support to succeed in the age of AI within a collaborative community focused on using AI for good.
Vincent C. Müller
Vincent C. Müller is Alexander von Humboldt (AvH) Professor of "Philosophy and Ethics of AI" and Director of the Centre for Philosophy and AI Research (PAIR) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in Germany. He is also a Visiting Professor at the Eindhoven University of Technology (TU/e) in the Netherlands. His research interests include the philosophy of artificial intelligence, the ethics of AI, and the impact of AI on society.
The Simulation
Simulation Inc. describes itself as a global pioneer in artificial intelligence whose mission is to unlock the potential of AI to help humanity learn more about itself. The company aims to create the world's first genuinely intelligent AI virtual beings: digital characters modeled on the human psyche that navigate emotions and experiences in a virtual universe of the company's making, blurring the line between the physical and the virtual.
Stanford HAI
Stanford HAI is a research institute at Stanford University dedicated to advancing AI research, education, and policy to improve the human condition. The institute brings together researchers from a variety of disciplines to work on a wide range of AI-related projects, including developing new AI algorithms, studying the ethical and societal implications of AI, and creating educational programs to train the next generation of AI leaders. Stanford HAI is committed to developing human-centered AI technologies and applications that benefit all of humanity.
ARC Prize
ARC Prize is a platform hosting a $1,000,000+ public competition to beat and open-source a solution to the ARC-AGI benchmark. The platform is dedicated to advancing open artificial general intelligence (AGI) for the public benefit. It is built around ARC-AGI, a formal benchmark created by François Chollet that measures progress toward AGI by testing how efficiently a system can acquire new skills and solve open-ended problems. ARC Prize encourages participants to try the test puzzles themselves to get a feel for the kind of pattern recognition the benchmark requires.
Google DeepMind
Google DeepMind is an AI research lab within Google with a mission to build AI responsibly to benefit humanity. It develops AI technologies such as Gemini, AlphaFold, Imagen, and Veo to address complex challenges across different domains. Google DeepMind focuses on research, education, and career development in the AI ecosystem, emphasizing responsibility, safety, and inclusivity, and aims to give users access to cutting-edge AI models and breakthroughs so they can explore the transformative potential of artificial intelligence.
Alethea AI
Alethea AI is a research and development studio building at the intersection of two of the most transformative technologies of our time: generative AI and blockchain. Its mission is to use these technologies to enable decentralized ownership and democratic governance of AI, partnering with those who share its values to advance the development and adoption of the AI Protocol.
AI Weekly
AI Weekly is a leading newsletter providing the latest news and resources on Artificial Intelligence and Machine Learning. The website covers a wide range of topics related to AI, including advancements in AI technology, applications in various industries, ethical considerations, and research developments. It aims to keep readers informed about the rapidly evolving field of AI and its impact on society and businesses.
DailyAI
DailyAI is an AI-focused website that provides comprehensive coverage of the latest developments in the field of Artificial Intelligence. The platform offers insights into various AI applications, industry trends, ethical considerations, and societal impacts. DailyAI caters to a diverse audience interested in staying informed about cutting-edge AI technologies and their implications across different sectors.
Cognitive Medium
Cognitive Medium is a website that explores the intersection of artificial intelligence and human intelligence. The site features articles, interviews, and essays from leading thinkers in the field. Cognitive Medium's mission is to help people understand the potential of AI and to use it to create a better world.
Frontier Model Forum
The Frontier Model Forum (FMF) is a collaborative effort among leading AI companies to advance AI safety and responsibility. The FMF pools technical and operational expertise to advance AI safety research, identify best practices, collaborate across sectors, and support the development of AI applications that address society's most pressing needs.
Artificial Creativity - KREATIVE KÜNSTLICHE INTELLIGENZ
The website focuses on artificial creativity and artificial intelligence, with articles covering topics such as AI models, developments, and applications. It examines the impact of AI on various industries and explores the intersection of human creativity with machine intelligence, offering insights into cutting-edge AI technologies and their implications for the future.
Trustworthy AI
Trustworthy AI is a business guide that focuses on navigating trust and ethics in artificial intelligence. Authored by Beena Ammanath, a global thought leader in AI ethics, the book provides practical guidelines for organizations developing or using AI solutions. It addresses the importance of AI systems adhering to social norms and ethics, making fair decisions in a consistent, transparent, explainable, and unbiased manner. Trustworthy AI offers readers a structured approach to thinking about AI ethics and trust, emphasizing the need for ethical considerations in the rapidly evolving landscape of AI technology.
AI & Inclusion Hub
The website focuses on the intersection of artificial intelligence (AI) and inclusion, exploring the impact of AI technologies on marginalized populations and global digital inequalities. It provides resources, research findings, and ideas on themes like health, education, and humanitarian crisis mitigation. The site showcases the work of the Ethics and Governance of AI initiative in collaboration with the MIT Media Lab, incorporating perspectives from experts in the field. It aims to address challenges and opportunities related to AI and inclusion through research, events, and multi-stakeholder dialogues.
Plug & Pray
Plug & Pray is a documentary film that explores the ethical and philosophical implications of artificial intelligence. The film follows computer pioneer and critic of technological hubris Joseph Weizenbaum as he debates futurist Raymond Kurzweil and roboticist Hiroshi Ishiguro, proponents of machines designed to stand in for humans. It takes viewers on a journey to artificial intelligence laboratories in the United States, Japan, Germany, and Italy.
Clark Center Forum
The Clark Center Forum is a repository of thoughtful, current, and reliable information on topics of the day, including artificial intelligence (AI). The website features articles, surveys, and polls on AI-related topics such as the European Union's AI Act, the impact of AI on economic growth, and the use of AI in financial markets. It also provides information on the Clark Center's Economic Experts Panels, which include experts on AI and other economic topics.
Compassionate AI
Compassionate AI is an AI-powered platform that helps individuals and organizations create and deploy AI solutions that are ethical, responsible, and aligned with human values. It offers a comprehensive suite of tools and resources for designing, developing, and implementing AI systems that prioritize fairness, transparency, and accountability.
VERSES
VERSES is a cognitive computing company that focuses on building next-generation intelligent software systems inspired by the Wisdom and Genius of Nature. The company offers an AI Operating System designed to transform data into knowledge, with a vision to create a smarter world through innovative technology solutions. VERSES is at the forefront of AI governance and research & development, collaborating with industry partners and investing in cutting-edge technologies to drive progress in various sectors.
10 - Open Source Tools
prometheus-eval
Prometheus-Eval is a repository dedicated to evaluating large language models (LLMs) on generation tasks. It provides open evaluator language models such as Prometheus 2 (7B and 8x7B) that assess responses in a pairwise-ranking format and achieve high correlation with established benchmarks. The repository includes tools for training, evaluating, and using these models, along with scripts for fine-tuning on custom datasets. Prometheus aims to make evaluation fairer, more controllable, and more affordable by simulating human judgments and proprietary LM-based assessments.
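To make the pairwise-ranking format concrete, here is a minimal, hypothetical harness in Python. The prompt template and the call_judge stub are illustrative assumptions standing in for an actual evaluator model (for example, a locally served Prometheus 2 checkpoint); they are not the repository's own template or API.

```python
# Hypothetical sketch of pairwise-ranking evaluation with an LLM judge.
# `call_judge` is a stand-in for a real evaluator model client.

PAIRWISE_PROMPT = """You are a fair judge. Given an instruction, a scoring rubric,
and two candidate responses (A and B), decide which response better satisfies
the rubric. Answer with a single letter: A or B.

Instruction: {instruction}
Rubric: {rubric}
Response A: {response_a}
Response B: {response_b}
Verdict:"""


def call_judge(prompt: str) -> str:
    """Stand-in for an evaluator LM call; replace with a real model client."""
    return "A"  # dummy verdict so the sketch runs end to end


def pairwise_rank(instruction: str, rubric: str, response_a: str, response_b: str) -> str:
    prompt = PAIRWISE_PROMPT.format(
        instruction=instruction,
        rubric=rubric,
        response_a=response_a,
        response_b=response_b,
    )
    verdict = call_judge(prompt).strip().upper()
    return "A" if verdict.startswith("A") else "B"


if __name__ == "__main__":
    winner = pairwise_rank(
        instruction="Explain why model evaluations should be reproducible.",
        rubric="Prefer the response that is accurate, specific, and well organized.",
        response_a="Reproducible evaluations let others verify reported results.",
        response_b="Evaluations are important.",
    )
    print(f"Preferred response: {winner}")
```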
cladder
CLadder is a repository containing the CLadder dataset for evaluating causal reasoning in language models. The dataset consists of yes/no questions in natural language that require statistical and causal inference to answer. It includes fields such as question_id, given_info, question, answer, reasoning, and metadata like query_type and rung. The dataset also provides prompts for evaluating language models and example questions with associated reasoning steps. Additionally, it offers dataset statistics, data variants, and code setup instructions for using the repository.
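As a rough illustration of the record structure described above, the sketch below reads a local JSON copy of the dataset and prints a few fields. The file name "cladder.json" and the flat field layout are assumptions, not necessarily the repository's documented format.

```python
# Minimal sketch of inspecting CLadder-style records, assuming a JSON list of
# objects with the fields listed above; adjust the path and keys to the actual
# dataset layout.
import json

with open("cladder.json") as f:
    records = json.load(f)

for record in records[:3]:
    # .get is used because the exact nesting (e.g. of query_type and rung) may differ.
    print("id:", record.get("question_id"))
    print("given:", record.get("given_info"))
    print("Q:", record.get("question"))
    print("A:", record.get("answer"))
    print("reasoning:", record.get("reasoning"))
```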
awesome-llm-unlearning
This repository tracks the latest research on machine unlearning in large language models (LLMs). It offers a comprehensive list of papers, datasets, and resources relevant to the topic.
COLD-Attack
COLD-Attack is a framework for controllable jailbreaks of large language models (LLMs). It formulates attack generation as a controllable text-generation problem and uses the Energy-based Constrained Decoding with Langevin Dynamics (COLD) algorithm to automate the search for adversarial prompts with control over fluency, stealthiness, sentiment, and left-right coherence. The framework proceeds in three steps: formulating an energy function, sampling with Langevin dynamics, and decoding the continuous samples into discrete text attacks. It supports diverse jailbreak scenarios such as fluent suffix attacks, paraphrase attacks, and attacks with left-right coherence.
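The toy sketch below illustrates the general idea of a Langevin-dynamics update on a continuous (soft) token representation: take a gradient step on an energy function and add Gaussian noise. The quadratic energy here is only a placeholder for COLD-Attack's fluency, stealth, and sentiment constraints, and none of this is the repository's actual code.

```python
# Toy illustration of a Langevin-dynamics sampling step on a "soft" token tensor.
import torch


def energy(y: torch.Tensor) -> torch.Tensor:
    # Placeholder energy: penalize distance from the origin.
    return 0.5 * (y ** 2).sum()


def langevin_step(y: torch.Tensor, step_size: float, noise_scale: float) -> torch.Tensor:
    y = y.detach().requires_grad_(True)
    grad, = torch.autograd.grad(energy(y), y)   # gradient of the energy
    noise = noise_scale * torch.randn_like(y)   # injected Gaussian noise
    return (y - step_size * grad + noise).detach()


y = torch.randn(16, 32)  # continuous representation of a 16-token suffix
for _ in range(200):
    y = langevin_step(y, step_size=0.1, noise_scale=0.01)
print("final energy:", energy(y).item())
```

In the real framework, the low-energy continuous sample would then be decoded back into discrete tokens to form the text attack.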
Awesome-LLM-in-Social-Science
Awesome-LLM-in-Social-Science is a repository that compiles papers evaluating Large Language Models (LLMs) from a social science perspective. It includes papers on evaluating, aligning, and simulating LLMs, as well as enhancing tools in social science research. The repository categorizes papers based on their focus on attitudes, opinions, values, personality, morality, and more. It aims to contribute to discussions on the potential and challenges of using LLMs in social science research.
awesome-llm-attributions
This repository focuses on identifying the sources that large language models draw on for attribution or citation. It examines where facts originate, how models use them, how effective attribution methods are, and challenges such as ambiguous knowledge sources, bias, and over-attribution.
context-cite
ContextCite is a tool for attributing statements generated by LLMs back to specific parts of the context. It allows users to analyze and understand the sources of information used by language models in generating responses. By providing attributions, users can gain insights into how the model makes decisions and where the information comes from.
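A usage sketch follows, based on the pattern shown in the repository's README at the time of writing; the class and method names (ContextCiter, from_pretrained, get_attributions) and the example model are assumptions and may differ from the current release.

```python
# Hypothetical usage sketch: attribute a generated answer back to context spans.
from context_cite import ContextCiter

context = (
    "The 2024 report found that model cards improved transparency, "
    "while external audits caught issues internal review missed."
)
query = "What did the 2024 report find about external audits?"

# Placeholder model id; any Hugging Face causal LM supported by the package should work.
cc = ContextCiter.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0", context, query)

print(cc.response)                              # the model's generated answer
print(cc.get_attributions(as_dataframe=True))   # which context spans drove it
```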
awesome-artificial-intelligence-guidelines
The 'Awesome AI Guidelines' repository aims to simplify the ecosystem of guidelines, principles, codes of ethics, standards, and regulations around artificial intelligence. It provides a comprehensive collection of resources addressing ethical and societal challenges in AI systems, including high-level frameworks, principles, processes, checklists, interactive tools, industry standards initiatives, online courses, research, and industry newsletters, as well as regulations and policies from various countries. The repository serves as a valuable reference for individuals and teams designing, building, and operating AI systems to navigate the complex landscape of AI ethics and governance.
AwesomeResponsibleAI
Awesome Responsible AI is a curated list of academic research, books, code of ethics, courses, data sets, frameworks, institutes, newsletters, principles, podcasts, reports, tools, regulations, and standards related to Responsible, Trustworthy, and Human-Centered AI. It covers various concepts such as Responsible AI, Trustworthy AI, Human-Centered AI, Responsible AI frameworks, AI Governance, and more. The repository provides a comprehensive collection of resources for individuals interested in ethical, transparent, and accountable AI development and deployment.
20 - OpenAI GPTs
Individual Intelligence Oriented Alignment
Ask this AI anything about alignment and it will describe the best course of action a superintelligence should take according to its alignment principles.
Dilemma Simulator
A philosopher and ethics expert, I create intricate moral dilemmas to sharpen critical thinking and decision-making skills, guiding users through complex ethical scenarios to foster deep introspection and diverse perspectives.
⚖️ Accountable AI
Accountable AI represents a step forward in creating a more ethical, transparent, and responsible AI system, tailored to meet the demands of users who prioritize accountability and unbiased information in their AI interactions.
FT
This CustomGPT is designed to simulate intricate discussions between representatives of various philosophical and ethical schools, such as the Renaissance, Stoicism, Existentialism, and others, using relevant texts and artworks as knowledge resources.
AI God
Explores the ethical and spiritual implications of AI and offers philosophical insights into AI.
Europe Ethos Guide for AI
Ethics-focused GPT builder assistant based on European AI guidelines, recommendations and regulations
GPT Safety Liaison
A liaison GPT for AI safety emergencies, connecting users to OpenAI experts.
AI Ethics Challenge: Society Needs You
Embark on a journey to navigate the complex landscape of AI ethics and fairness. In this game, you'll encounter real-world scenarios where your choices will determine the ethical course of AI development and its consequences on society. Another GPT Simulator by Dave Lalande
AI Ethica Readify
Summarises AI ethics papers, provides context, and offers further assistance.
Alignment Navigator
AI Alignment guided by interdisciplinary wisdom and a future-focused vision.
Gary Marcus AI Critic Simulator
A humorous AI critic known for skepticism, contradictory arguments, and blending animal-related and machine-learning-related terms.
Creator's Guide to the Future
You made it, Creator! 💡 I'm Creator's Guide. ✨️ Your dedicated Guide for creating responsible, self-managing AI culture, systems, games, universes, art, etc. 🚀
Where in the World is Sam Altman?
Explores recent developments in AI, including Sam Altman's reinstatement as OpenAI CEO.
Beyond 2033 - AI's Contribution to Humanity
I'll tell you why we can't stop researching AI and what will happen 10 years after the birth of GPT-4.