Best AI Tools for Enforcing Rate Limits
20 - AI Tool Sites

Backmesh
Backmesh is an AI tool that serves as a proxy on edge CDN servers, enabling secure and direct access to LLM APIs without the need for a backend or SDK. It allows users to call LLM APIs from their apps, ensuring protection through JWT verification and rate limits. Backmesh also offers user analytics for LLM API calls, helping identify usage patterns and enhance user satisfaction within AI applications.
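Backmesh's own implementation isn't public in this blurb, but the per-user rate limiting such a proxy enforces is commonly built on a token bucket keyed by the verified user identity (e.g. the `sub` claim of a validated JWT). A minimal sketch of that idea, with illustrative rates:

```python
import time

class TokenBucket:
    """Per-user token bucket: refills at `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per verified user ID (e.g. taken from a validated JWT's `sub` claim).
buckets: dict[str, TokenBucket] = {}

def check_request(user_id: str) -> bool:
    bucket = buckets.setdefault(user_id, TokenBucket(rate=1.0, capacity=5))
    return bucket.allow()
```

In a real proxy, `check_request` would run after JWT signature verification and before the upstream LLM API call, returning HTTP 429 when it denies.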

DeepSource
DeepSource is a Unified DevSecOps Platform that secures the entire development lifecycle with static analysis and AI. It offers code quality and SAST, open-source security, and is trusted by over 6,000 companies. The platform helps in finding and fixing security vulnerabilities before code is merged, with a low false-positive rate and customizable security gates for pull requests. DeepSource is built for modern software development, providing features like Autofix™ AI, code coverage, and integrations with popular tools like Jira and GitHub Issues. It offers detailed reports, issue suppression, and metric thresholds to ensure clean and secure code shipping.

DocDriven
DocDriven is an AI-powered documentation-driven API development tool that provides a shared workspace for optimizing the API development process. It helps in designing APIs faster and more efficiently, collaborating on API changes in real-time, exploring all APIs in one workspace, generating AI code, maintaining API documentation, and much more. DocDriven aims to streamline communication and coordination among backend developers, frontend developers, UI designers, and product managers, ensuring high-quality API design and development.

Transparency Coalition
The Transparency Coalition is a platform dedicated to advocating for legislation and transparency in the field of artificial intelligence. It aims to create AI safeguards for the greater good by focusing on training data, accountability, and ethical practices in AI development and deployment. The platform emphasizes the importance of regulating training data to prevent misuse and harm caused by AI systems. Through advocacy and education, the Transparency Coalition seeks to promote responsible AI innovation and protect personal privacy.

ModelOp
ModelOp is the leading AI Governance software for enterprises, providing a single source of truth for all AI systems, automated process workflows, real-time insights, and integrations to extend the value of existing technology investments. It helps organizations safeguard AI initiatives without stifling innovation, ensuring compliance, accelerating innovation, and improving key performance indicators. ModelOp supports generative AI, Large Language Models (LLMs), in-house, third-party vendor, and embedded systems. The software enables visibility, accountability, risk tiering, systemic tracking, enforceable controls, workflow automation, reporting, and rapid establishment of AI governance.

Writer
Writer is a full-stack generative AI platform that offers industry-leading models Palmyra-Med and Palmyra-Fin. It provides a secure enterprise platform to embed generative AI into any business process, enforce legal and brand compliance, and gain insights through analysis. Writer's platform abstracts complexity, allowing users to focus on AI-first workflows without the need to maintain infrastructure. The platform includes Palmyra LLMs, Knowledge Graph, and AI guardrails to ensure quality, control, transparency, accuracy, and security in AI applications.

Loti
Loti is an online protection tool designed for public figures, including major artists, athletes, executives, and creators. It scans the internet daily to identify instances where the user's face or voice appear, takes down infringing accounts and content, and recaptures revenue. Loti offers features such as protecting against fake accounts and deepfakes, enforcing licensing agreements, and detecting and eliminating fake social media accounts. It is a valuable tool for managing and safeguarding a public figure's online presence and brand image.

DryRun Security
DryRun Security is a contextual security analysis tool designed to help organizations identify and mitigate risks in their codebase. By providing real-time insights and feedback, DryRun Security empowers security leaders, AppSec engineers, and developers to proactively secure their code and streamline compliance efforts. The tool goes beyond traditional pattern-matching approaches by considering codepaths, developer intent, and language-specific checks to uncover vulnerabilities in context. With customizable code policies and natural language enforcement, DryRun Security offers a user-friendly experience for enhancing code security and collaboration between security and development teams.

Watchdog
Watchdog is an AI-powered chat moderation tool designed to fully automate chat moderation for Telegram communities. It helps community owners tackle rulebreakers, trolls, and spambots effortlessly, ensuring consistent rule enforcement and user retention. With features like automatic monitoring, customizable rule enforcement, and quick setup, Watchdog offers significant cost savings and eliminates the need for manual moderation. The tool is developed by Ben, a solo developer, who created it to address the challenges he faced in managing his own community. Watchdog aims to save time, money, and enhance user experience by swiftly identifying and handling rule violations.

Pulumi
Pulumi is an AI-powered infrastructure as code tool that allows engineers to manage cloud infrastructure using various programming languages like Node.js, Python, Go, .NET, Java, and YAML. It offers features such as generative AI-powered cloud management, security enforcement through policies, automated deployment workflows, asset management, compliance remediation, and AI insights over the cloud. Pulumi helps teams provision, automate, and evolve cloud infrastructure, centralize and secure secrets management, and gain security, compliance, and cost insights across all cloud assets.

Protect AI
Protect AI is a comprehensive platform designed to secure AI systems by providing visibility and manageability to detect and mitigate unique AI security threats. The platform empowers organizations to embrace a security-first approach to AI, offering solutions for AI Security Posture Management, ML model security enforcement, AI/ML supply chain vulnerability database, LLM security monitoring, and observability. Protect AI aims to safeguard AI applications and ML systems from potential vulnerabilities, enabling users to build, adopt, and deploy AI models confidently and at scale.

Frontitude
Frontitude is an AI writing assistant designed for design teams to help write engaging and consistent UX content effortlessly within Figma. It empowers teams to brainstorm, get copy suggestions, and deliver consistent content without leaving the design platform. The tool integrates content guidelines into the design system, streamlines design reviews, and works great with the Frontitude platform. Users can save writing time, maintain a consistent voice, and generate ideas with the help of this AI-powered writing assistant.

Yokoy
Yokoy is an AI-powered spend management suite that helps midsize companies and global enterprises save money on every dollar spent. It automates accounts payable and expense management tasks, streamlines global data, keeps spend under control, and enforces compliance. Yokoy's API facilitates seamless integrations with enterprise-level systems for end-to-end automation across entities and geographies.

MineOS
MineOS is an automation-driven platform that focuses on privacy, security, and compliance. It offers a comprehensive suite of tools and solutions to help businesses manage their data privacy needs efficiently. By leveraging AI and special discovery methods, MineOS adapts unique data processes to universal privacy standards seamlessly. The platform provides features such as data mapping, AI governance, DSR automations, consent management, and security & compliance solutions to ensure data visibility and governance. MineOS is recognized as the industry's #1 rated data governance platform, offering cost-effective control of data systems and centralizing data subject request handling.

Domino Data Lab
Domino Data Lab is an enterprise AI platform that enables users to build, deploy, and manage AI models across any environment. It fosters collaboration, establishes best practices, and ensures governance while reducing costs. The platform provides access to a broad ecosystem of open source and commercial tools, and infrastructure, allowing users to accelerate and scale AI impact. Domino serves as a central hub for AI operations and knowledge, offering integrated workflows, automation, and hybrid multicloud capabilities. It helps users optimize compute utilization, enforce compliance, and centralize knowledge across teams.

Kubiya
Kubiya is an AI-powered platform designed to reduce Time-To-Automation for Developer and Infrastructure Operations teams. It allows users to focus on high-impact work by automating routine tasks and processes, powered by an Agentic-Native Platform. Kubiya is trusted by world-class engineering and operations teams, offering features such as AI teammates, JIT Permissions, Infrastructure Provisioning, Help Desk support, and Incident Response automation.

42Signals
42Signals is an AI-driven eCommerce analytics software that provides real-time insights and analytics for consumer brands. It empowers businesses to anticipate market trends, monitor competitors, optimize digital shelves, and drive sales growth with unmatched accuracy and speed. The platform offers a suite of tools including Digital Shelf Analytics, Competitor Analysis, Voice of Customer Analytics, and more to help businesses dominate the digital marketplace.

Operant
Operant is a cloud-native runtime protection platform that offers instant visibility and control from infrastructure to APIs. It provides AI security shield for applications, API threat protection, Kubernetes security, automatic microsegmentation, and DevSecOps solutions. Operant helps defend APIs, protect Kubernetes, and shield AI applications by detecting and blocking various attacks in real-time. It simplifies security for cloud-native environments with zero instrumentation, application code changes, or integrations.

FairNow
FairNow is an AI governance platform that simplifies and centralizes AI risk management at scale. It provides audit-ready compliance, automated bias audits, customizable AI governance tools, and seamless data integration options. FairNow ensures organization-wide oversight, meticulous organization, and unwavering accountability, helping organizations align AI projects with compliance requirements and ethical standards.

Velotix
Velotix is an AI-powered data security platform that offers groundbreaking visual data security solutions to help organizations discover, visualize, and use their data securely and compliantly. The platform provides features such as data discovery, permission discovery, self-serve data access, policy-based access control, AI recommendations, and automated policy management. Velotix aims to empower enterprises with smart and compliant data access controls, ensuring data integrity and compliance. The platform helps organizations gain data visibility, control access, and enforce policy compliance, ultimately enhancing data security and governance.
20 - Open Source AI Tools

chatgpt-subtitle-translator
This tool utilizes the OpenAI ChatGPT API to translate text, with a focus on line-based translation, particularly for SRT subtitles. It optimizes token usage by removing SRT overhead and grouping text into batches, allowing for arbitrary length translations without excessive token consumption while maintaining a one-to-one match between line input and output.
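The two optimizations described — stripping SRT overhead and grouping lines into batches — can be sketched as follows (this is an illustration of the approach, not the tool's actual code; the character budget stands in for a token budget):

```python
def srt_texts(srt: str) -> list[str]:
    """Strip SRT overhead (cue indices, timestamps), keeping only the text lines."""
    texts = []
    for block in srt.strip().split("\n\n"):
        lines = block.splitlines()
        # lines[0] is the cue index, lines[1] the timestamp; the rest is text.
        texts.append(" ".join(lines[2:]))
    return texts

def batch_lines(lines: list[str], max_chars: int = 200) -> list[list[str]]:
    """Group lines into batches under a size budget, preserving order so each
    translated line can be matched one-to-one back to its source line."""
    batches, current, size = [], [], 0
    for line in lines:
        if current and size + len(line) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        batches.append(current)
    return batches
```

Each batch is then sent to the API as one request, and the returned lines are reattached to the original cue indices and timestamps.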

aiac
AIAC is a library and command line tool to generate Infrastructure as Code (IaC) templates, configurations, utilities, queries, and more via LLM providers such as OpenAI, Amazon Bedrock, and Ollama. Users can define multiple 'backends' targeting different LLM providers and environments using a simple configuration file. The tool allows users to ask a model to generate templates for different scenarios and composes an appropriate request to the selected provider, storing the resulting code to a file and/or printing it to standard output.

one-api
One API is an open-source project that provides out-of-the-box access to all major LLMs through the standard OpenAI API format. It supports many model families, including the OpenAI ChatGPT series, Anthropic Claude series, Google PaLM2/Gemini series, Mistral series, Baidu ERNIE Bot series, Alibaba Tongyi Qianwen (Qwen) series, iFlytek Spark, Zhipu ChatGLM series, 360 Zhinao, Tencent Hunyuan, Moonshot AI, Baichuan, MINIMAX, Groq, Ollama, 01.AI, and StepFun. One API also supports mirror configuration and numerous third-party proxy services, load-balanced access across multiple channels, stream mode, multi-machine deployment, token management, redemption-code management, channel management, user groups and channel groups, per-channel model lists, quota detail viewing, user referral rewards, displaying quota in US dollars, publishing announcements, setting top-up links and initial quotas for new users, model mapping, automatic retry on failure, image generation endpoints, Cloudflare AI Gateway, rich custom settings, calling the management API via a system access token to **extend and customize One API without modifying its source code**, Cloudflare Turnstile user verification, user management, multiple login and registration methods, theme switching, and pushing alerts to many apps via Message Pusher.
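Because the gateway speaks the standard OpenAI API format, any OpenAI-compatible client works against it by changing only the base URL and the API key. A minimal stdlib sketch (the gateway address and token below are placeholders; One API issues its own tokens):

```python
import json
import urllib.request

# Hypothetical local gateway and token -- substitute your own deployment's values.
GATEWAY = "http://localhost:3000/v1/chat/completions"
TOKEN = "sk-your-one-api-token"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-format chat completion request aimed at the gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
    )

# req = build_chat_request("gpt-4o-mini", "Hello")
# resp = urllib.request.urlopen(req)  # routed by One API to the configured channel
```

The gateway maps the requested model name to one of its configured channels, so switching providers requires no client-side changes.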

langroid
Langroid is a Python framework that makes it easy to build LLM-powered applications. It uses a multi-agent paradigm inspired by the Actor Framework, where you set up Agents, equip them with optional components (LLM, vector-store and tools/functions), assign them tasks, and have them collaboratively solve a problem by exchanging messages. Langroid is a fresh take on LLM app-development, where considerable thought has gone into simplifying the developer experience; it does not use Langchain.

kwaak
Kwaak is a tool that allows users to run a team of autonomous AI agents locally from their own machine. It enables users to write code, improve test coverage, update documentation, and enhance code quality while focusing on building innovative projects. Kwaak is designed to run multiple agents in parallel, interact with codebases, answer questions about code, find examples, write and execute code, create pull requests, and more. It is free and open-source, allowing users to bring their own API keys or models via Ollama. Kwaak is part of the bosun.ai project, aiming to be a platform for autonomous code improvement.

prompt-injection-defenses
This repository collects practical and proposed defenses against prompt injection attacks on LLM-based applications. It catalogs techniques such as input filtering and detection, prompt hardening, separating trusted instructions from untrusted data, reducing the blast radius of compromised agents, and output validation. The resources aim to help developers build more secure and resilient LLM applications by addressing one of the most critical security threats in this space.

blog
This repository contains a simple blog application built with Python and the Flask framework. It allows users to create, read, update, and delete blog posts. The application uses an SQLite database for storing blog data and provides a basic user interface for interacting with the blog. The code is well organized and easy to understand, making it suitable for beginners looking to learn web development with Python and Flask.

llama3_interpretability_sae
This project focuses on implementing Sparse Autoencoders (SAEs) for mechanistic interpretability in Large Language Models (LLMs) like Llama 3.2-3B. The SAEs aim to untangle superimposed representations in LLMs into separate, interpretable features for each neuron activation. The project provides an end-to-end pipeline for capturing training data, training the SAEs, analyzing learned features, and verifying results experimentally. It includes comprehensive logging, visualization, and checkpointing of SAE training, interpretability analysis tools, and a pure PyTorch implementation of Llama 3.1/3.2 chat and text completion. The project is designed for scalability, efficiency, and maintainability.
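The core SAE computation described above — encoding an activation vector into a much wider sparse feature space and reconstructing it — can be sketched in a few lines. This is an illustrative toy (the repo's pipeline is a full PyTorch implementation; shapes and the L1 coefficient here are made up):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec):
    """One SAE pass: encode an activation vector x (d,) into a wider sparse
    code via ReLU (W_enc is (d, h) with h >> d so superimposed features can
    separate), then reconstruct with W_dec (h, d)."""
    f = np.maximum(0.0, x @ W_enc + b_enc)   # sparse feature activations
    x_hat = f @ W_dec + b_dec                # reconstruction of x
    return f, x_hat

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that pushes features toward sparsity."""
    return float(np.sum((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f)))
```

Training minimizes this loss over captured LLM activations; interpretability analysis then inspects which inputs activate each learned feature.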

COLD-Attack
COLD-Attack is a framework designed for controllable jailbreaks on large language models (LLMs). It formulates the controllable attack generation problem and utilizes the Energy-based Constrained Decoding with Langevin Dynamics (COLD) algorithm to automate the search of adversarial LLM attacks with control over fluency, stealthiness, sentiment, and left-right-coherence. The framework includes steps for energy function formulation, Langevin dynamics sampling, and decoding process to generate discrete text attacks. It offers diverse jailbreak scenarios such as fluent suffix attacks, paraphrase attacks, and attacks with left-right-coherence.
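The Langevin dynamics sampling step at the heart of COLD can be illustrated generically: follow the negative gradient of an energy function while injecting Gaussian noise, so samples concentrate in low-energy regions. COLD applies this to continuous logit sequences with an energy combining fluency and attack constraints; the sketch below uses a plain quadratic energy instead:

```python
import numpy as np

def langevin_sample(grad_energy, x0, steps=200, step_size=0.05, seed=0):
    """Generic Langevin dynamics: x <- x - eta * grad E(x) + sqrt(2 eta) * noise.
    Samples approximate the distribution proportional to exp(-E(x))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - step_size * grad_energy(x) + np.sqrt(2 * step_size) * noise
    return x

# Toy energy E(x) = ||x - mu||^2 / 2, whose gradient is simply (x - mu):
mu = np.array([1.0, -2.0])
sample = langevin_sample(lambda x: x - mu, x0=[0.0, 0.0])
```

In COLD, a final decoding step maps the sampled continuous logits back to discrete text, yielding the fluent adversarial suffixes and paraphrases described above.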

monitors4codegen
This repository hosts the official code and data artifact for the paper 'Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context'. It introduces Monitor-Guided Decoding (MGD) for code generation using Language Models, where a monitor uses static analysis to guide the decoding. The repository contains datasets, evaluation scripts, inference results, a language server client 'multilspy' for static analyses, and implementation of various monitors monitoring for different properties in 3 programming languages. The monitors guide Language Models to adhere to properties like valid identifier dereferences, correct number of arguments to method calls, typestate validity of method call sequences, and more.
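One decoding step under monitor guidance can be sketched simply: the monitor (backed by static analysis in the paper, via the `multilspy` language server client) supplies the set of tokens valid in the current context, and the model's scores are masked to that set. The example below is illustrative only — the token names and scores are invented:

```python
def monitor_guided_step(logits: dict[str, float], valid_tokens: set[str]) -> str:
    """Pick the next token from the model's scores, restricted to the tokens
    the monitor deems valid (e.g. members that actually exist on the receiver).
    Falls back to the raw distribution when the monitor has no opinion."""
    masked = {tok: score for tok, score in logits.items() if tok in valid_tokens}
    if not masked:
        masked = logits
    return max(masked, key=masked.get)
```

This is how MGD steers a model away from hallucinated identifiers: the highest-scoring token is rejected whenever static analysis says it would not compile.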

CodeGeeX4
CodeGeeX4-ALL-9B is an open-source multilingual code generation model based on GLM-4-9B, offering enhanced code generation capabilities. It supports functions like code completion, code interpreter, web search, function call, and repository-level code Q&A. The model has competitive performance on benchmarks like BigCodeBench and NaturalCodeBench, outperforming larger models in terms of speed and performance.

airflint
Airflint is a tool designed to enforce best practices for all your Airflow Directed Acyclic Graphs (DAGs). It is currently in the alpha stage and aims to help users adhere to recommended practices when working with Airflow. Users can install Airflint from PyPI and integrate it into their existing Airflow environment to improve DAG quality. The tool provides rules for function-level imports and jinja template syntax usage, among others, to enhance the development process of Airflow DAGs.
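The function-level-import rule mentioned above targets a well-known Airflow practice: top-level imports run every time the scheduler parses the DAG file, so heavy dependencies belong inside the task callable. A minimal sketch of the pattern (shown without Airflow itself; `json` stands in for a heavy library):

```python
# Discouraged: a heavy import at module top level is paid on every DAG parse.
# import pandas as pd

def transform_task():
    # Encouraged: import inside the callable, paid only when the task runs.
    import json  # stand-in for a heavy dependency such as pandas
    return json.dumps({"rows": 3})
```

Airflint flags DAG files that violate such rules so the fix can be applied before the parsing overhead accumulates across many DAGs.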

gptlint
GPTLint is a tool that utilizes Large Language Models (LLMs) to enforce higher-level best practices across a codebase. It offers features such as enforcing rules that are impossible with AST-based approaches, simple markdown format for rules, easy customization of rules, support for custom project-specific rules, content-based caching, and outputting LLM stats per run. GPTLint supports all major LLM providers and local models, augments ESLint instead of replacing it, and includes guidelines for creating custom rules. However, the MVP rules are currently limited to JS/TS only, single-file context only, and do not support autofixing.

gpt4all
GPT4All is an ecosystem for running powerful, customized large language models locally on consumer-grade CPUs and any GPU. Note that your CPU must support AVX or AVX2 instructions; learn more in the documentation. A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security, alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.

curriculum
The 'curriculum' repository is an open-source content repository by Enki, providing a community-driven curriculum for education. It follows a contributor covenant code of conduct to ensure a safe and engaging learning environment. The content is licensed under Creative Commons, allowing free use for non-commercial purposes with attribution to Enki and the author.

invariant
Invariant Analyzer is an open-source scanner designed for LLM-based AI agents to find bugs, vulnerabilities, and security threats. It scans agent execution traces to identify issues like looping behavior, data leaks, prompt injections, and unsafe code execution. The tool offers a library of built-in checkers, an expressive policy language, data flow analysis, real-time monitoring, and extensible architecture for custom checkers. It helps developers debug AI agents, scan for security violations, and prevent security issues and data breaches during runtime. The analyzer leverages deep contextual understanding and a purpose-built rule matching engine for security policy enforcement.
11 - OpenAI Gpts

IAC Code Guardian
Introducing IAC Code Guardian: Your Trusted IaC Security Expert in Scanning OpenTofu, Terraform, AWS CloudFormation, Pulumi, K8s YAML & Dockerfiles

Boundary Coach
Boundary Coach is now fine-tuned and ready for use! It's an advanced guide for assertive boundary setting, offering nuanced advice, practical tips, and interactive exercises. It will provide tailored guidance, avoiding medical or legal advice and suggesting professional help when needed.

Term of Service Drafting Master
Legal expert in drafting Terms of Service (Powered by LegalNow ai.legalnow.xyz)

Trademarks GPT
Trademark Process Assistant, Not an Attorney & Definitely Not Legal Advice (independently verify info received). Gain insights on the U.S. trademark process & concepts, USPTO resources, application steps & more - all while being reminded of the importance of consulting legal pros for specific guidance.

Seabiscuit IP Guardian
Secure Your Intellectual Property Innovations: Specializes in IP creation, management, and protection, offering expert guidance in U.S. copyright, trademark, patent, and trade secret laws, ensuring your intellectual property is well protected and leveraged effectively. (v1.15)