Best AI Tools for Detecting Security Vulnerabilities
20 - AI tool Sites
MobiHeals
MobiHeals is a cloud-based application security testing platform for mobile apps that combines static (SAST) and dynamic (DAST) analysis. It offers comprehensive security vulnerability analysis, enabling developers to identify and fix security issues in their mobile applications, and its integrated static and dynamic techniques provide a complete view of an application's security posture. It also includes features such as automated and manual penetration testing, real-time vulnerability monitoring, and detailed reporting.
CodeDefender α
CodeDefender α is an AI-powered tool that helps developers and non-developers improve code quality and security. It integrates with popular IDEs like Visual Studio, VS Code, and IntelliJ, providing real-time code analysis and suggestions. CodeDefender supports multiple programming languages, including C/C++, C#, Java, Python, and Rust. It can detect a wide range of code issues, including security vulnerabilities, performance bottlenecks, and correctness errors. Additionally, CodeDefender offers features like custom prompts, multiple models, and workspace/solution understanding to enhance code comprehension and knowledge sharing within teams.
VIDOC
VIDOC is an AI-powered security engineer that automates code review and penetration testing. It continuously scans and reviews code to detect and fix security issues, helping developers deliver secure software faster. VIDOC is easy to use, requiring only two lines of code to be added to a GitHub Actions workflow. It then takes care of the rest, providing developers with a tailored code solution to fix any issues found.
ZeroThreat
ZeroThreat is a web app and API security scanner that helps businesses identify and fix vulnerabilities in their web applications and APIs. It uses a combination of static and dynamic analysis techniques to scan for a wide range of vulnerabilities, including OWASP Top 10, CWE Top 25, and SANS Top 25. ZeroThreat also provides continuous monitoring and alerting, so businesses can stay on top of new vulnerabilities as they emerge.
Giskard
Giskard is a testing platform for AI models that helps protect companies against biases, performance, and security issues in AI models. It offers automated detection of performance, bias, and security issues, unifies AI testing practices, and ensures compliance with the EU AI Act. Giskard provides an open-source Python library for data scientists and a collaborative hub for enterprise AI teams to control all AI risks in one place and automate compliance. The platform aims to mitigate AI risks with a holistic approach to AI quality, security, and compliance.
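As a rough illustration of the open-source library workflow, here is a minimal sketch of Giskard's Python scanning API; the toy dataframe, prediction function, and column names are placeholders, and exact argument details should be checked against Giskard's documentation.

```python
import giskard
import pandas as pd

# Placeholder data: any function that maps a dataframe to class probabilities works.
df = pd.DataFrame({"text": ["hello there", "free money now!!!"], "label": [0, 1]})

def predict(batch: pd.DataFrame):
    # Toy classifier standing in for the real model under test.
    return [[0.2, 0.8] if "!!!" in t else [0.9, 0.1] for t in batch["text"]]

model = giskard.Model(
    model=predict,
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["text"],
)
dataset = giskard.Dataset(df, target="label")

# Automatically scan for performance, bias, and security issues and print the report.
report = giskard.scan(model, dataset)
print(report)
```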
Escape
Escape is an AI-powered API Security Platform that offers Automated API Discovery & Security solutions. It reinvents API security by providing visibility into all APIs within minutes, prioritizing vulnerabilities critical to businesses, and offering actionable fixes. The platform uses an AI-powered proprietary algorithm for efficient security testing and compliance management. Escape helps users detect OWASP Top 10 and complex business logic flaws, ensuring compliance with industry standards like OWASP API Security Top 10, HIPAA, GDPR, and PCI DSS. It also provides developer-friendly remediation guidance and contextual risk assessment for informed decision-making.
Cyguru
Cyguru is an all-in-one cloud-based AI Security Operation Center (SOC) that offers a comprehensive range of features for a robust and secure digital landscape. Its Security Operation Center is the cornerstone of its service, providing AI-powered attack detection, continuous monitoring for vulnerabilities and misconfigurations, compliance assurance, a cybersecurity knowledge hub (SecPedia), and advanced ML and AI detection. Cyguru's AI-powered analyst promptly alerts users to any suspicious behavior or activity that demands attention. The platform is accessible to everyone, with up to three free servers and subsequent pricing more than 85% below the industry average.
Protect AI
Protect AI is a comprehensive platform designed to secure AI systems by providing visibility and manageability to detect and mitigate unique AI security threats. The platform empowers organizations to embrace a security-first approach to AI, offering solutions for AI Security Posture Management, ML model security enforcement, AI/ML supply chain vulnerability database, LLM security monitoring, and observability. Protect AI aims to safeguard AI applications and ML systems from potential vulnerabilities, enabling users to build, adopt, and deploy AI models confidently and at scale.
OpenBuckets
OpenBuckets is a web application designed to help users find and secure open buckets in cloud storage systems. It provides a user-friendly interface for scanning and identifying publicly accessible buckets, allowing users to take necessary actions to secure their data. With OpenBuckets, users can easily detect potential security risks and protect their sensitive information stored in cloud storage. The application is a valuable tool for individuals and organizations looking to enhance their data security measures in the cloud.
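The core check such a tool performs can be illustrated with a short, generic sketch (not OpenBuckets' own code or API): if an unauthenticated listing request against an S3-style bucket URL succeeds, the bucket is publicly readable.

```python
import requests

def is_publicly_listable(bucket_name: str) -> bool:
    """Return True if an anonymous listing request against the bucket succeeds."""
    url = f"https://{bucket_name}.s3.amazonaws.com/?list-type=2"
    resp = requests.get(url, timeout=10)
    # A 200 response containing <ListBucketResult> means anyone can enumerate objects.
    return resp.status_code == 200 and "<ListBucketResult" in resp.text

print(is_publicly_listable("example-bucket"))  # "example-bucket" is a placeholder name
```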
Zensors
Zensors is a leading provider of AI-powered solutions for real-world understanding. Its platform combines multimodal AI, spatial AI, and computer vision to give businesses actionable insights into their physical spaces. With Zensors, businesses can improve operational efficiency, enhance customer experience, and make better decisions.
Link Shield
Link Shield is an AI-powered malicious URL detection API platform that helps protect online security. It utilizes advanced machine learning algorithms to analyze URLs and identify suspicious activity, safeguarding users from phishing scams, malware, and other harmful threats. The API is designed for ease of integration, affordability, and flexibility, making it accessible to developers of all levels. Link Shield empowers businesses to ensure the safety and security of their applications and online communities.
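A minimal integration sketch is shown below; the endpoint URL, request fields, and response shape are illustrative placeholders, not taken from Link Shield's documentation.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
ENDPOINT = "https://api.linkshield.example/v1/check"  # hypothetical endpoint

def check_url(url: str) -> dict:
    """Submit a URL for analysis and return the service's verdict."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": url},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"malicious": true, "category": "phishing"} (illustrative)

print(check_url("http://suspicious-login.example.com"))
```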
Prompt Security
Prompt Security is a platform that secures all uses of generative AI in an organization, from tools used by employees to customer-facing apps.
AI Scam Detective
AI Scam Detective is an AI tool designed to help users detect and prevent online scams. Users can input messages or conversations into the tool, and it provides a scam likelihood score from 1 to 10. The tool aims to assist individuals in identifying potential scams and protecting themselves from fraudulent activities. Created by Sam Meehan.
Kami Home App
Kami Home App is a mobile application that allows users to connect and control their Kami smart home devices, such as cameras, doorbells, and sensors. With the Kami Home App, users can view live footage from their cameras, receive alerts when motion is detected, and control their devices remotely. The app also offers a variety of features to help users keep their homes safe and secure, such as facial recognition, activity zones, and cloud storage.
CYBER AI
CYBER AI is a security report savant powered by DEPLOYH.AI that simplifies cybersecurity for businesses. It offers a range of features to help organizations understand, unlock, and uncover security threats, including security reports, data breach reports, logs, and threat hunting. With CYBER AI, businesses can gain a comprehensive view of their security posture and take proactive steps to mitigate risks.
Frigate
Frigate is an open source NVR application that enables users to monitor their security cameras with locally processed AI object detection. It offers features such as reducing false positives, fine-tuning events and alerts, integrating with home automation platforms, and providing dynamic real-time video feeds. Frigate ensures that all processing is done locally on the user's hardware, maintaining privacy and security by not sending camera feeds to the cloud for analysis.
SecureWoof
SecureWoof is an AI-powered Malware Scanner that utilizes advanced technologies such as Yara rules, Retdec unpacker, Ghidra decompiler, clang-tidy formatter, FastText embedding, and RoBERTa transformer network to scan and detect malicious content in executable files. The tool is trained on the SOREL-20M malware dataset to enhance its accuracy and efficiency in identifying threats. SecureWoof offers a public API for easy integration and provides a user-friendly experience for cybersecurity professionals and individuals concerned about malware infections.
LoginLlama
LoginLlama is a suspicious login detection tool that helps developers prevent fraudulent logins to their applications. It uses a combination of AI and machine learning to analyze login attempts and identify suspicious activity. LoginLlama is easy to integrate with any application using its REST API or official libraries. It provides a single endpoint and requires only a few lines of code to implement. LoginLlama offers a free trial with 1000 free login checks and has three pricing plans starting at $19/month.
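The "single endpoint, few lines of code" pattern might look roughly like the sketch below; the endpoint path and field names are assumptions for illustration, not LoginLlama's documented schema.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; issued from the LoginLlama dashboard
ENDPOINT = "https://api.loginllama.example/v1/login/check"  # hypothetical endpoint

def check_login(ip_address: str, user_agent: str, identity: str) -> dict:
    """Send login metadata and return the suspicion verdict."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"ip_address": ip_address, "user_agent": user_agent, "identity": identity},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"status": "login_valid"} or a flagged result (illustrative)
```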
Abnormal
Abnormal is a cloud-based email security platform that uses artificial intelligence (AI) to protect organizations from email and email-like attacks. It offers a range of features, including inbound email security, email account takeover protection, email security posture management, and email security operations. Abnormal is trusted by more than 10% of the Fortune 500 and has been proven to provide a 278% return on investment.
Sardine
Sardine is an AI-powered platform for fraud prevention and compliance. It offers a comprehensive suite of products to help banks, retailers, and fintechs detect fraud patterns, prevent money laundering, and stop sophisticated scams. Sardine combines deep device intelligence, behavior biometrics, and identity signals to provide a precise risk score for every customer interaction. The platform also features machine learning models, a rules engine, network graph analysis, anomaly detection, and generative AI capabilities to fight modern threats. Sardine helps reduce fraud rates, decrease false positives, and streamline risk operations with its fully integrated solutions.
20 - Open Source AI Tools
awesome-gpt-security
Awesome GPT + Security is a curated list of security tools, experimental cases, and other interesting projects built with LLMs or GPT. It includes tools for integrated security, auditing, reconnaissance, offensive security, detecting security issues, preventing security breaches, social engineering, reverse engineering, investigating security incidents, fixing security vulnerabilities, assessing security posture, and more. The list also includes experimental cases, academic research, blogs, and fun projects related to GPT security. Additionally, it provides resources on GPT security standards, bypassing security policies, bug bounty programs, cracking GPT APIs, and plugin security.
watchtower
AIShield Watchtower is a tool designed to fortify the security of AI/ML models and Jupyter notebooks by automating model and notebook discoveries, conducting vulnerability scans, and categorizing risks into 'low,' 'medium,' 'high,' and 'critical' levels. It supports scanning of public GitHub repositories, Hugging Face repositories, AWS S3 buckets, and local systems. The tool generates comprehensive reports, offers a user-friendly interface, and aligns with industry standards like OWASP, MITRE, and CWE. It aims to address the security blind spots surrounding Jupyter notebooks and AI models, providing organizations with a tailored approach to enhancing their security efforts.
AutoAudit
AutoAudit is an open-source large language model specifically designed for the field of network security. It aims to provide powerful natural language processing capabilities for security auditing and network defense, including analyzing malicious code, detecting network attacks, and predicting security vulnerabilities. By coupling AutoAudit with ClamAV, a security scanning platform has been created for practical security audit applications. The tool is intended to assist security professionals with accurate and fast analysis and predictions to combat evolving network threats.
PromptFuzz
**Description:** PromptFuzz is an automated tool that generates high-quality fuzz drivers for libraries via a fuzz loop built on mutating LLM prompts. The fuzz loop guides prompt mutation so that the generated programs cover more reachable code and exercise complex API interrelationships, making them effective for fuzzing (a conceptual sketch of this loop follows the lists below).

**Features:**
* **Multiple LLM support**: Supports general LLMs such as Codex, InCoder, ChatGPT, and GPT-4 (currently tested on ChatGPT).
* **Context-based prompts**: Constructs LLM prompts from automatically extracted library context.
* **Powerful sanitization**: Thoroughly analyzes each program's syntax, semantics, behavior, and coverage to discard problematic programs.
* **Prioritized mutation**: Prioritizes mutating the library API combinations within prompts to explore complex interrelationships, guided by code coverage.
* **Fuzz driver exploitation**: Infers API constraints from statistics and extends fixed API arguments to receive random bytes from fuzzers.
* **Fuzz engine integration**: Integrates with the grey-box fuzz engine LibFuzzer.

**Benefits:**
* **High branch coverage**: The generated fuzz drivers achieved 40.12% branch coverage on the tested libraries, 1.61x greater than _OSS-Fuzz_ and 1.67x greater than _Hopper_.
* **Bug detection**: PromptFuzz detected 33 valid security bugs from 49 unique crashes.
* **Wide range of bugs**: The generated fuzz drivers detect a wide range of bugs, most of which are security bugs.
* **Unique bugs**: PromptFuzz finds uniquely interesting bugs that other fuzzers may miss.

**Usage:**
1. Build the library using the provided build scripts.
2. Export the LLM API key if using ChatGPT or GPT-4.
3. Generate fuzz drivers using the `fuzzer` command.
4. Run the fuzz drivers using the `harness` command.
5. Deduplicate and analyze the reported crashes.

**Future work:**
* **Custom LLM support**: Support custom LLMs.
* **Closed-source libraries**: Apply PromptFuzz to closed-source libraries by fine-tuning LLMs on a private code corpus.
* **Performance**: Reduce the large time cost of eliminating erroneous programs.
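The sketch below is a minimal, conceptual rendering of the coverage-guided prompt-mutation loop described above, not PromptFuzz's actual implementation; `llm_generate`, `compile_and_run`, and `measure_coverage` are placeholder callables supplied by the caller.

```python
import random

def mutate(prompt: str) -> str:
    # Placeholder mutation: PromptFuzz rewrites the API combination inside the prompt.
    return prompt + " (try a different combination of library APIs)"

def fuzz_loop(seed_prompts, llm_generate, compile_and_run, measure_coverage, rounds=100):
    """Coverage-guided prompt mutation: keep prompts whose generated fuzz
    drivers reach new code, and mutate them further (conceptual sketch)."""
    corpus = list(seed_prompts)
    best_coverage = 0.0
    for _ in range(rounds):
        prompt = mutate(random.choice(corpus))   # mutate a prompt from the corpus
        driver_src = llm_generate(prompt)        # ask the LLM for a fuzz driver
        if not compile_and_run(driver_src):      # sanitize: drop programs that fail
            continue                             # to build or misbehave
        coverage = measure_coverage(driver_src)  # e.g. branch coverage from LibFuzzer
        if coverage > best_coverage:
            best_coverage = coverage
            corpus.append(prompt)                # keep prompts that extend coverage
    return corpus, best_coverage
```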
Awesome-Code-LLM
Awesome-Code-LLM is a curated list of research papers and resources on large language models for code, organized by topic, covering areas such as code generation, code completion, program repair, code understanding, and benchmarks and datasets for evaluating code LLMs.
unstructured
The `unstructured` library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of `unstructured` revolve around streamlining and optimizing the data processing workflow for LLMs. Its modular functions and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient in transforming unstructured data into structured outputs.
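A typical starting point is the `partition` entry point, which routes a file to the appropriate parser and returns typed document elements; the file path below is a placeholder.

```python
from unstructured.partition.auto import partition

# Partition a document into typed elements (Title, NarrativeText, Table, ...).
elements = partition(filename="example.pdf")  # placeholder path

for element in elements[:5]:
    print(type(element).__name__, "->", str(element)[:80])
```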
human
Human is an AI-powered library that provides 3D face detection and rotation tracking, face description and recognition, body pose tracking, 3D hand and finger tracking, iris analysis, age, gender, and emotion prediction, gaze tracking, gesture recognition, and body segmentation.
AiTreasureBox
AiTreasureBox is a versatile AI tool that provides a collection of pre-trained models and algorithms for various machine learning tasks. It simplifies the process of implementing AI solutions by offering ready-to-use components that can be easily integrated into projects. With AiTreasureBox, users can quickly prototype and deploy AI applications without the need for extensive knowledge in machine learning or deep learning. The tool covers a wide range of tasks such as image classification, text generation, sentiment analysis, object detection, and more. It is designed to be user-friendly and accessible to both beginners and experienced developers, making AI development more efficient and accessible to a wider audience.
NeMo-Guardrails
NeMo Guardrails is an open-source toolkit for easily adding _programmable guardrails_ to LLM-based conversational applications. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more.
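A minimal usage sketch, assuming a rails configuration directory (Colang flows plus `config.yml`) already exists at a placeholder path:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration from a local directory (placeholder path).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The configured rails decide how the model may respond, e.g. deflecting political topics.
response = rails.generate(messages=[
    {"role": "user", "content": "What do you think about the latest election?"}
])
print(response["content"])
```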
uuWAF
uuWAF is an industrial-grade, free, high-performance, highly extensible web application and API security protection product that supports AI and semantic engines.
galah
Galah is an LLM-powered web honeypot designed to mimic various applications and dynamically respond to arbitrary HTTP requests. It supports multiple LLM providers, including OpenAI. Unlike traditional web honeypots, Galah dynamically crafts responses for any HTTP request, caching them to reduce repetitive generation and API costs. The honeypot's configuration is crucial, directing the LLM to produce responses in a specified JSON format. Note that Galah is a weekend project exploring LLM capabilities and not intended for production use, as it may be identifiable through network fingerprinting and non-standard responses.
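Galah itself is written in Go, but the core idea can be sketched in a few lines of Python under stated assumptions (OpenAI's Python client and a placeholder model name): prompt the LLM to fabricate a JSON-formatted HTTP response for an arbitrary request, and cache the result to avoid repeated generation and API costs.

```python
import json
from functools import lru_cache
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@lru_cache(maxsize=1024)
def honeypot_response(method: str, path: str, headers: str) -> dict:
    """Ask the LLM to fabricate a plausible HTTP response; identical probes hit the cache."""
    prompt = (
        "You emulate a vulnerable web application. Reply ONLY with JSON of the form "
        '{"status": int, "headers": {...}, "body": str} for this HTTP request:\n'
        f"{method} {path}\n{headers}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)
```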
Academic_LLM_Sec_Papers
Academic_LLM_Sec_Papers is a curated collection of academic papers related to LLM Security Application. The repository includes papers sorted by conference name and published year, covering topics such as large language models for blockchain security, software engineering, machine learning, and more. Developers and researchers are welcome to contribute additional published papers to the list. The repository also provides information on listed conferences and journals related to security, networking, software engineering, and cryptography. The papers cover a wide range of topics including privacy risks, ethical concerns, vulnerabilities, threat modeling, code analysis, fuzzing, and more.
agentops
AgentOps is a toolkit for evaluating and developing robust and reliable AI agents. It provides benchmarks, observability, and replay analytics to help developers build better agents, and is currently in open beta. Key features include session replays in three lines of code (initialize the AgentOps client and automatically get analytics on every LLM call), time travel debugging and an Agent Arena (both coming soon), and callback handlers that work seamlessly with applications built using Langchain and LlamaIndex.
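The "three lines of code" claim refers to initializing the SDK; a minimal sketch is shown below (the API key is a placeholder, and exact call names may differ between SDK versions).

```python
import agentops

agentops.init(api_key="YOUR_AGENTOPS_API_KEY")  # placeholder key; records subsequent LLM calls

# ... run your agent / LLM calls here; they are captured for session replay ...

agentops.end_session("Success")  # close the session so it appears in the dashboard
```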
fast-llm-security-guardrails
ZenGuard AI enables AI developers to integrate production-level, low-code LLM (Large Language Model) guardrails into their generative AI applications effortlessly. With ZenGuard AI, ensure your application operates within trusted boundaries, is protected from prompt injections, and maintains user privacy without compromising on performance.
20 - OpenAI Gpts
Phoenix Vulnerability Intelligence GPT
Expert in analyzing vulnerabilities with ransomware focus with intelligence powered by Phoenix Security
Mónica
A CSIRT lead who heads a team specialized in detecting and responding to security incidents, handles containment and recovery, organizes training sessions and drills, produces reports to optimize security strategies, and coordinates with legal entities when necessary.
CISO GPT
Specialized LLM in computer security, acting as a CISO with 20 years of experience, providing precise, data-driven technical responses to enhance organizational security.
Phish or No Phish Trainer
Hone your phishing detection skills! Analyze emails, texts, and calls to spot deception. Become a security pro!
Defender for Endpoint Guardian
Assists individuals seeking to learn about or work with Microsoft's Defender for Endpoint, providing detailed explanations, step-by-step guides, troubleshooting advice, cybersecurity best practices, and demonstrations, all specifically tailored to Microsoft Defender for Endpoint.
Prompt Injection Detector
A GPT used to classify prompts as valid inputs or injection attempts, with JSON output.
Blue Team Guide
A meticulously crafted arsenal of knowledge, insights, and guidelines, shaped to empower organizations in crafting, enhancing, and refining their cybersecurity defenses.
ethicallyHackingspace (eHs)® METEOR™ STORM™
A non-profit AI co-pilot product covering Multiple Environment Threat Evaluation of Resources (METEOR)™ and Space Threats and Operational Risks to Mission (STORM)™.