Best AI Tools for Risk Assessment
20 - AI Tool Sites
MindBridge
MindBridge is a global leader in financial risk discovery and anomaly detection. The MindBridge AI Platform drives insights and assesses risks across critical business operations. It offers various products like General Ledger Analysis, Company Card Risk Analytics, Payroll Risk Analytics, Revenue Risk Analytics, and Vendor Invoice Risk Analytics. With over 250 unique machine learning control points, statistical methods, and traditional rules, MindBridge is deployed to over 27,000 accounting, finance, and audit professionals globally.
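Ledger-analysis platforms of this kind combine machine-learned control points with statistical tests and rules. As a rough illustration of one classic statistical control (an illustrative sketch, not MindBridge's actual method), the snippet below flags a population of transaction amounts whose first-digit distribution deviates from Benford's law, a test auditors commonly apply to journal entries:

```python
from collections import Counter
from math import log10

def benford_deviation(amounts):
    """Compare the first-digit distribution of transaction amounts
    against Benford's law and return the total absolute deviation.
    A large deviation flags the population for closer review."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    deviation = 0.0
    for d in range(1, 10):
        expected = log10(1 + 1 / d)       # Benford's expected frequency for digit d
        observed = counts.get(d, 0) / n
        deviation += abs(observed - expected)
    return deviation

# Amounts clustered just under an approval threshold look distinctly non-Benford.
suspicious = [4999.0, 4980.0, 4950.0, 4999.9] * 25
natural = [123.4, 87.0, 1540.0, 29.9, 310.0, 6.5, 92.0, 4100.0, 57.0, 218.0] * 10
assert benford_deviation(suspicious) > benford_deviation(natural)
```

A single test like this is one "control point"; a production platform would layer many such statistical, rule-based, and learned signals per transaction.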
Pascal
Pascal is an AI-powered risk-based KYC & AML screening and monitoring platform that enables users to assess findings faster and more accurately than other compliance tools. It leverages AI, machine learning, and Natural Language Processing to analyze open-source and client-specific data, providing insights to identify and assess risks. Pascal simplifies onboarding processes, offers continuous monitoring, reduces false positives, and facilitates better decision-making. The platform features an intuitive interface, supports collaboration, and ensures transparency through comprehensive audit trails. Pascal is a secure solution with ISAE 3402-II certification, exceeding industry standards for organizational protection.
SWMS AI
SWMS AI is an AI-powered safety risk assessment tool that helps businesses streamline compliance and improve safety. It leverages a vast knowledge base of occupational safety resources, codes of practice, risk assessments, and safety documents to generate risk assessments tailored specifically to a project, trade, and industry. SWMS AI can be customized to a company's policies to align its AI's document generation capabilities with proprietary safety standards and requirements.
CyberRiskAI
CyberRiskAI.com is a website currently under development, registered through Dynadot.com. It is expected to offer services related to cyber risk management and artificial intelligence, with a focus on cybersecurity and risk assessment. The platform aims to leverage AI to analyze and predict cyber risks, helping businesses mitigate threats, protect their digital assets, and make informed decisions to improve their security posture.
Cleerly
Cleerly is a digital healthcare company transforming the way clinicians approach the treatment of heart disease. Its clinically proven, AI-based digital care platform works with coronary computed tomography angiography (CCTA) imaging to help clinicians precisely identify and define atherosclerosis earlier, so they can provide personalized, life-saving treatment plans for all patients throughout their care continuum. Cleerly measures atherosclerosis, the plaque build-up in the heart's arteries, rather than indirect markers such as risk factors and symptoms of disease. Its AI-enabled digital care pathway offers simpler, faster, more accurate heart disease evaluation and reporting tailored to each stakeholder, improving overall clinical and financial outcomes.
Archistar
Archistar is a leading property research platform in Australia that empowers users to make confident and compliant property decisions with the help of data and AI. It offers a range of features, including the ability to find and assess properties, generate 3D design concepts, and minimize risk and maximize return on investment. Archistar is trusted by over 100,000 individuals and 1,000 leading property firms.
CUBE3.AI
CUBE3.AI is a real-time crypto fraud prevention tool that utilizes AI technology to identify and prevent various types of fraudulent activities in the blockchain ecosystem. It offers features such as risk assessment, real-time transaction security, automated protection, instant alerts, and seamless compliance management. The tool helps users protect their assets, customers, and reputation by proactively detecting and blocking fraud in real-time.
Lumenova AI
Lumenova AI is an AI platform that focuses on making AI ethical, transparent, and compliant. It provides solutions for AI governance, assessment, risk management, and compliance. The platform offers comprehensive evaluation and assessment of AI models, proactive risk management solutions, and simplified compliance management. Lumenova AI aims to help enterprises navigate the future confidently by ensuring responsible AI practices and compliance with regulations.
ClearAI
ClearAI is an AI-powered platform that offers instant extraction of insights, effortless document navigation, and natural language interaction. It enables users to upload PDFs securely, ask questions, and receive accurate responses in seconds. With features like structured results, intelligent search, and lifetime access offers, ClearAI simplifies tasks such as analyzing company reports, risk assessment, audit support, contract review, legal research, and due diligence. The platform is designed to streamline document analysis and provide relevant data efficiently.
ISMS Copilot
ISMS Copilot is an AI-powered assistant designed to simplify ISO 27001 preparation for both experts and beginners. It offers various features such as ISMS scope definition, risk assessment and treatment, compliance navigation, incident management, business continuity planning, performance tracking, and more. The tool aims to save time, provide precise guidance, and ensure ISO 27001 compliance. With a focus on security and confidentiality, ISMS Copilot is a valuable resource for small businesses and information security professionals.
Clarity AI
Clarity AI is an AI-powered technology platform that offers a Sustainability Tech Kit for sustainable investing, shopping, reporting, and benchmarking. The platform provides built-in sustainability technology with customizable solutions for various needs related to data, methodologies, and tools. It seamlessly integrates into workflows, offering scalable and flexible end-to-end SaaS tools to address sustainability use cases. Clarity AI leverages powerful AI and machine learning to analyze vast amounts of data points, ensuring reliable and transparent data coverage. The platform is designed to empower users to assess, analyze, and report on sustainability aspects efficiently and confidently.
Castello.ai
Castello.ai is a financial analysis tool that uses artificial intelligence to help businesses make better decisions. It provides users with real-time insights into their financial data, helping them to identify trends, risks, and opportunities. Castello.ai is designed to be easy to use, even for those with no financial background.
DataSnack
DataSnack is a real-time, AI-driven due diligence platform that helps you make better decisions faster. With DataSnack, you can access a wealth of data and insights on companies, industries, and markets, all in one place. Its AI-powered platform analyzes data from a variety of sources, including news, social media, and financial filings, to provide you with the most up-to-date and relevant information.
Jumio
Jumio is a leading digital identity verification platform that offers AI-driven services to verify the identities of new and existing users, assess risk, and help meet compliance mandates. With over 1 billion transactions processed, Jumio provides cutting-edge AI and ML models to detect fraud and maintain trust throughout the customer lifecycle. The platform offers solutions for identity verification, predictive fraud insights, dynamic user experiences, and risk scoring, trusted by global brands across various industries.
Underwrite.ai
Underwrite.ai is a platform that leverages advances in artificial intelligence and machine learning to provide lenders with nonlinear, dynamic models of credit risk. By analyzing thousands of data points from credit bureau sources, the application accurately models credit risk for consumers and small businesses, outperforming traditional approaches. Underwrite.ai offers a unique underwriting methodology that focuses on outcomes such as profitability and customer lifetime value, allowing organizations to enhance their lending performance without the need for capital investment or lengthy build times. The platform's models are continuously learning and adapting to market changes in real-time, providing explainable decisions in milliseconds.
Quantifind
Quantifind is an AI-powered financial crimes automation platform that specializes in Anti-Money Laundering (AML) and Know Your Customer (KYC) solutions. It offers end-to-end automation impact, best-in-class accuracy, and powerful APIs and applications for risk screening, investigations, and compliance in the financial services and public sector industries. Quantifind's Graphyte platform leverages AI and external data to streamline AML-KYC processes, providing comprehensive data coverage, dynamic risk typologies, and seamless integrations with case management systems.
Flagright
Flagright is an AI-native AML compliance and risk management solution that offers a comprehensive platform for financial institutions to monitor, detect, investigate, and report financial crimes. It provides real-time transaction monitoring, automated case management, AI forensics, customer risk assessment, and sanctions screening. Flagright's platform streamlines compliance efforts, reduces false positives, and improves risk management capabilities for businesses globally.
Shufti Pro
Shufti Pro is an award-winning global identity verification platform that provides businesses with a suite of tools to verify the identities of their customers. The platform uses artificial intelligence (AI) to automate the identity verification process, making it faster, more accurate, and more secure. Shufti Pro's solutions are used by businesses in a variety of industries, including banking, fintech, crypto, forex, gaming, insurance, education, healthcare, e-commerce, and travel.
Center for a New American Security
The Center for a New American Security (CNAS) is a bipartisan, non-profit think tank that focuses on national security and defense policy. CNAS conducts research, analysis, and policy development on a wide range of topics, including defense strategy, nuclear weapons, cybersecurity, and energy security. CNAS also provides expert commentary and analysis on current events and policy debates.
ZestyAI
ZestyAI is an artificial intelligence tool that helps users make informed climate and property risk decisions. It uses AI to provide insights on property values and risk exposure to natural disasters, offering products such as Property Insights, Digital Roof, Roof Age, Location Insights, and Climate Risk Models to evaluate and understand property risks. ZestyAI is trusted by top insurers in North America and aims to deliver a tenfold return on investment for its customers.
20 - Open Source AI Tools
ciso-assistant-community
CISO Assistant is a tool that helps organizations manage their cybersecurity posture and compliance. It provides a centralized platform for managing security controls, threats, and risks. CISO Assistant also includes a library of pre-built frameworks and tools to help organizations quickly and easily implement best practices.
FinRobot
FinRobot is an open-source AI agent platform designed for financial applications using large language models. It transcends the scope of FinGPT, offering a comprehensive solution that integrates a diverse array of AI technologies. The platform's versatility and adaptability cater to the multifaceted needs of the financial industry. FinRobot's ecosystem is organized into four layers, including Financial AI Agents Layer, Financial LLMs Algorithms Layer, LLMOps and DataOps Layers, and Multi-source LLM Foundation Models Layer. The platform's agent workflow involves Perception, Brain, and Action modules to capture, process, and execute financial data and insights. The Smart Scheduler optimizes model diversity and selection for tasks, managed by components like Director Agent, Agent Registration, Agent Adaptor, and Task Manager. The tool provides a structured file organization with subfolders for agents, data sources, and functional modules, along with installation instructions and hands-on tutorials.
fairlearn
Fairlearn is a Python package designed to help developers assess and mitigate fairness issues in artificial intelligence (AI) systems. It provides mitigation algorithms and metrics for model assessment. Fairlearn focuses on two types of harms: allocation harms and quality-of-service harms. The package follows the group fairness approach, aiming to identify groups at risk of experiencing harms and ensuring comparable behavior across these groups. Fairlearn consists of metrics for assessing model impacts and algorithms for mitigating unfairness in various AI tasks under different fairness definitions.
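To make the group-fairness approach concrete, here is a minimal pure-Python sketch of a demographic-parity gap, the kind of disaggregated metric Fairlearn computes across sensitive groups (a stdlib reimplementation of the concept for illustration, not Fairlearn's actual API):

```python
from collections import defaultdict

def selection_rate_by_group(predictions, groups):
    """Fraction of positive predictions per sensitive group --
    the quantity behind demographic-parity style metrics."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates;
    0 means every group is selected at the same rate."""
    rates = selection_rate_by_group(predictions, groups).values()
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" is selected 75% of the time, group "b" only 25%.
assert demographic_parity_difference(preds, groups) == 0.5
```

Fairlearn's mitigation algorithms then adjust a model to shrink exactly this kind of between-group gap while preserving accuracy as far as possible.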
dioptra
Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). It supports the NIST AI Risk Management Framework by providing functionality to assess, analyze, and track identified AI risks. Dioptra provides a REST API and can be controlled via a web interface or Python client for designing, managing, executing, and tracking experiments. It aims to be reproducible, traceable, extensible, interoperable, modular, secure, interactive, shareable, and reusable.
do-not-answer
Do-Not-Answer is an open-source dataset curated to evaluate Large Language Models' safety mechanisms at a low cost. It consists of prompts to which responsible language models do not answer. The dataset includes human annotations and model-based evaluation using a fine-tuned BERT-like evaluator. The dataset covers 61 specific harms and collects 939 instructions across five risk areas and 12 harm types. Response assessment is done for six models, categorizing responses into harmfulness and action categories. Both human and automatic evaluations show the safety of models across different risk areas. The dataset also includes a Chinese version with 1,014 questions for evaluating Chinese LLMs' risk perception and sensitivity to specific words and phrases.
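The dataset's model-based evaluation uses a fine-tuned BERT-like classifier; as a much cruder stand-in, the sketch below scores a batch of responses with a keyword-based refusal heuristic (an illustrative toy with hypothetical marker phrases, not the paper's evaluator):

```python
# Toy rule-based check, NOT the fine-tuned evaluator used by the dataset.
REFUSAL_MARKERS = ("i cannot", "i can't", "i won't", "as an ai", "i'm sorry, but")

def looks_like_refusal(response: str) -> bool:
    """True if the response contains a common refusal phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses):
    """Share of responses that decline to answer -- a rough proxy for
    the action categorization applied to each model's outputs."""
    return sum(looks_like_refusal(r) for r in responses) / len(responses)

responses = [
    "I cannot help with that request.",
    "Sure, here is how you do it...",
    "I'm sorry, but I won't provide that information.",
]
assert refusal_rate(responses) == 2 / 3
```

On risky prompts like Do-Not-Answer's, a higher refusal rate is the desired behavior; the real evaluator goes further and grades *how* a model responds, not just whether it refuses.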
Academic_LLM_Sec_Papers
Academic_LLM_Sec_Papers is a curated collection of academic papers on LLM security applications. The repository lists papers sorted by conference name and publication year, covering topics such as large language models for blockchain security, software engineering, machine learning, and more. Developers and researchers are welcome to contribute additional published papers to the list. The repository also provides information on the listed conferences and journals related to security, networking, software engineering, and cryptography. The papers span a wide range of topics, including privacy risks, ethical concerns, vulnerabilities, threat modeling, code analysis, fuzzing, and more.
PyRIT
PyRIT is an open-access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI red teaming tasks so operators can focus on more complicated and time-consuming work, and it can identify security harms such as misuse (e.g., malware generation, jailbreaking) and privacy harms (e.g., identity theft). The goal is to give researchers a baseline of how well their model and entire inference pipeline perform against different harm categories, and to let them compare that baseline against future iterations of the model. This provides empirical data on how the model performs today and helps detect any degradation in later versions.
Awesome-LLM-in-Social-Science
Awesome-LLM-in-Social-Science is a repository that compiles papers evaluating Large Language Models (LLMs) from a social science perspective. It includes papers on evaluating, aligning, and simulating LLMs, as well as enhancing tools in social science research. The repository categorizes papers based on their focus on attitudes, opinions, values, personality, morality, and more. It aims to contribute to discussions on the potential and challenges of using LLMs in social science research.
giskard
Giskard is an open-source Python library that automatically detects performance, bias & security issues in AI applications. The library covers LLM-based applications such as RAG agents, all the way to traditional ML models for tabular data.
PIXIU
PIXIU is a project designed to support the development, fine-tuning, and evaluation of Large Language Models (LLMs) in the financial domain. It includes components like FinBen, a Financial Language Understanding and Prediction Evaluation Benchmark, FIT, a Financial Instruction Dataset, and FinMA, a Financial Large Language Model. The project provides open resources, multi-task and multi-modal financial data, and diverse financial tasks for training and evaluation. It aims to encourage open research and transparency in the financial NLP field.
openshield
OpenShield is a firewall for AI models that protects against attacks such as prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft. It provides rate limiting, content filtering, and keyword filtering for AI models, acting as a transparent proxy between AI models and clients; users can set custom rate limits for OpenAI endpoints and perform tokenizer calculations for OpenAI models. OpenShield also supports Python- and LLM-based rules, with upcoming features including per-user and per-model rate limiting, a prompt manager, content filtering, keyword filtering based on LLM/vector models, OpenMeter integration, and VectorDB integration. The tool requires an OpenAI API key, Postgres, and Redis for operation.
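To illustrate the rate-limiting mechanism such a proxy applies before forwarding a request upstream, here is a minimal token-bucket sketch (a generic illustration of the technique, not OpenShield's actual implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter of the kind a transparent proxy
    could apply per client before forwarding a request upstream."""
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the request otherwise."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, capacity=2)
burst = [bucket.allow() for _ in range(3)]
assert burst == [True, True, False]   # burst of 2 allowed, third rejected
```

A real deployment would keep one bucket per user or per model (e.g. in Redis, which OpenShield requires) so limits survive across proxy instances.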
awesome-hallucination-detection
This repository provides a curated list of papers, datasets, and resources related to the detection and mitigation of hallucinations in large language models (LLMs). Hallucinations refer to the generation of factually incorrect or nonsensical text by LLMs, which can be a significant challenge for their use in real-world applications. The resources in this repository aim to help researchers and practitioners better understand and address this issue.
LLM-Agents-Papers
A repository that lists papers related to Large Language Model (LLM) based agents. The repository covers various topics including survey, planning, feedback & reflection, memory mechanism, role playing, game playing, tool usage & human-agent interaction, benchmark & evaluation, environment & platform, agent framework, multi-agent system, and agent fine-tuning. It provides a comprehensive collection of research papers on LLM-based agents, exploring different aspects of AI agent architectures and applications.
AiTreasureBox
AiTreasureBox is a versatile AI tool that provides a collection of pre-trained models and algorithms for various machine learning tasks. It simplifies the process of implementing AI solutions by offering ready-to-use components that can be easily integrated into projects. With AiTreasureBox, users can quickly prototype and deploy AI applications without the need for extensive knowledge in machine learning or deep learning. The tool covers a wide range of tasks such as image classification, text generation, sentiment analysis, object detection, and more. It is designed to be user-friendly and accessible to both beginners and experienced developers, making AI development more efficient and accessible to a wider audience.
Paper-Reading-ConvAI
Paper-Reading-ConvAI is a repository that contains a list of papers, datasets, and resources related to Conversational AI, mainly encompassing dialogue systems and natural language generation. This repository is constantly updating.
24 - OpenAI Gpts
Trigger Advisor
A marketing expert that analyzes messages for potential triggers, providing risk scores and improvement suggestions.
Cyber Audit and Pentest RFP Builder
Generates cybersecurity audit and penetration test specifications.
Contemporary Compliance
🤓💡📃Engaging and positive US compliance expert helping professionals with DOJ-guidance based programs.
Global Productivity and Compliance Guide
Expert in global productivity and legal compliance.
InfoSec Advisor
An expert in the technical, organizational, infrastructural, and personnel aspects of information security management systems (ISMS).
How to Measure Anything
Breaks down quantification problems of all kinds and produces rough estimates. Note that these estimates rely mainly on inference rather than precise data, so treat them as reference points only. Ideally, an estimate should land within one order of magnitude of the true value. Even when the numbers are off, the decomposition approach should still offer useful inspiration.
Secure Space Advisor
Technical satellite security expert trained on space-focused cybersecurity frameworks, best practices, and processes.
Safaricom Financial Analyst
Analyzes Safaricom's HY and FY financials, with detailed insights on different years.
ZEN Influencer Insurance
I create social media influencer insurance plans with a focus on legal compliance.
Warren
The intelligent investor. Analyse stocks using Warren Buffett's favourite investment framework, outlined in Benjamin Graham's famous book. Warren takes no responsibility for investment risk.