
verifiers
Environments for LLM Reinforcement Learning
Stars: 3209

Verifiers is a library of modular components for creating RL environments and training LLM agents. Environments built with Verifiers can be used directly as LLM evaluations, synthetic data pipelines, or agent harnesses for any OpenAI-compatible model endpoint, in addition to RL training. Verifiers includes an async GRPO implementation built around the transformers Trainer, is supported by prime-rl for large-scale FSDP training, and can easily be integrated into any RL framework that exposes an OpenAI-compatible inference client.
Full documentation is available here.
Verifiers is also the native library used by Prime Intellect's Environments Hub; see here for information about publishing your Environments to the Hub.
We recommend using verifiers along with uv for dependency management in your own project:
# install uv (first time only)
curl -LsSf https://astral.sh/uv/install.sh | sh
# create a fresh project -- 3.11 + 3.12 supported
uv init && uv venv --python 3.12
For local (CPU) development and evaluation with API models, do:
uv add verifiers # uv add 'verifiers[dev]' for Jupyter + testing support
For training on GPUs with vf.GRPOTrainer, do:
uv add 'verifiers[train]' && uv pip install flash-attn --no-build-isolation
To use the latest main branch, do:
uv add 'verifiers @ git+https://github.com/PrimeIntellect-ai/verifiers.git'
To use with prime-rl, see here.
To install verifiers from source for core library development, do:
git clone https://github.com/PrimeIntellect-ai/verifiers.git
cd verifiers
# for CPU-only dev:
uv sync --extra dev
# or, for trainer dev:
uv sync --all-extras && uv pip install flash-attn --no-build-isolation
# install pre-commit hooks
uv run pre-commit install
In general, we recommend that you build and train Environments with verifiers, not in verifiers. If you find yourself needing to clone and modify the core library in order to implement key functionality for your project, we'd love for you to open an issue so that we can try to streamline the development experience. Our aim is for verifiers to be a reliable toolkit to build on top of, and to minimize the "fork proliferation" that often pervades the RL infrastructure ecosystem.
Environments in Verifiers are installable Python modules which can specify dependencies in a pyproject.toml, and which expose a load_environment function for instantiation by downstream applications (e.g. trainers). See environments/ for examples.
To initialize a blank Environment module template, do:
uv run vf-init environment-name # -p /path/to/environments (defaults to "./environments")
To install an Environment module into your project, do:
uv run vf-install environment-name # -p /path/to/environments (defaults to "./environments")
To install an Environment module from this repo's environments folder, do:
uv run vf-install math-python --from-repo # -b branch_or_commit (defaults to "main")
Once an Environment module is installed, you can create an instance of the Environment using load_environment, passing any necessary args:
import verifiers as vf
vf_env = vf.load_environment("environment-name", **env_args)
To run a quick evaluation of your Environment with an API-based model, do:
uv run vf-eval environment-name -s # run and save eval results locally
# vf-eval -h for config options; defaults to gpt-4.1-mini, 5 prompts, 3 rollouts for each
If you're using Prime Intellect infrastructure, the prime CLI provides first-class commands for working with Verifiers environments through the Environments Hub. Install it with uv tool install prime, authenticate via prime login, then use prime env push to publish your package and prime env install owner/name (optionally pinning a version) to consume it from pods or local machines.
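As a quick reference, the full sequence looks like this (flags and defaults may vary by CLI version):
# one-time install + login
uv tool install prime
prime login
# publish your Environment package to the Hub
prime env push
# consume a published Environment (optionally pin a version)
prime env install owner/name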
The core elements of Environments are:
- Datasets: a Hugging Face Dataset with a prompt column for inputs, and optionally answer (str) or info (dict) columns for evaluation (both can be omitted for environments that evaluate based solely on completion quality)
- Rollout logic: interactions between models and the environment (e.g. env_response + is_completed for any MultiTurnEnv)
- Rubrics: an encapsulation for one or more reward functions
- Parsers: optional; an encapsulation for reusable parsing logic
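Putting these elements together, a minimal sketch might look like the following (the toy dataset and reward logic here are hypothetical):
import verifiers as vf
from datasets import Dataset

# hypothetical toy dataset: a 'question' column plus a ground-truth 'answer'
dataset = Dataset.from_dict({
    "question": ["What is 2 + 2?"],
    "answer": ["4"],
})

def correct_answer(completion, answer) -> float:
    # reward 1.0 if the ground-truth answer appears in the model's final message
    return 1.0 if answer in completion[-1]["content"] else 0.0

rubric = vf.Rubric(funcs=[correct_answer], weights=[1.0])
vf_env = vf.SingleTurnEnv(dataset=dataset, rubric=rubric)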
We support both /v1/chat/completions-style and /v1/completions-style inference via OpenAI clients, though we generally recommend /v1/chat/completions-style inference for the vast majority of applications. Both the included GRPOTrainer and prime-rl support the full set of SamplingParams exposed by vLLM (via its OpenAI-compatible server interface), and leveraging this will often be the appropriate way to implement rollout strategies requiring finer-grained control, such as interrupting and resuming generations for interleaved tool use, or enforcing reasoning budgets.
The primary constraint we impose on rollout logic is that token sequences must be increasing, i.e. once a token has been added to a model's context in a rollout, it must remain as the rollout progresses. Note that this causes issues with some popular reasoning models such as the Qwen3 and DeepSeek-R1-Distill series; see Troubleshooting for guidance on adapting these models to support multi-turn rollouts.
For tasks requiring only a single response from a model for each prompt, you can use SingleTurnEnv directly by specifying a Dataset and a Rubric. Rubrics are sets of reward functions, which can be either sync or async.
from datasets import load_dataset
from openai import OpenAI
import verifiers as vf

dataset = load_dataset("my-account/my-dataset", split="train")

def reward_A(prompt, completion, info) -> float:
    # reward fn, e.g. correctness
    ...

def reward_B(parser, completion) -> float:
    # auxiliary reward fn, e.g. format
    ...

async def metric(completion) -> float:
    # non-reward metric, e.g. proper noun count
    ...

rubric = vf.Rubric(funcs=[reward_A, reward_B, metric], weights=[1.0, 0.5, 0.0])
vf_env = vf.SingleTurnEnv(
    dataset=dataset,
    rubric=rubric
)
results = vf_env.evaluate(client=OpenAI(), model="gpt-4.1-mini", num_examples=100, rollouts_per_example=1)
vf_env.make_dataset(results)  # convert eval results to HF dataset format
Datasets should be formatted with columns for:
- 'prompt' (List[ChatMessage]) OR 'question' (str)
  - ChatMessage = e.g. {'role': 'user', 'content': '...'}
  - if question is set instead of prompt, you can also pass system_prompt (str) and/or few_shot (List[ChatMessage])
- answer (str) AND/OR info (dict) (both optional, can be omitted entirely)
- task (str): optional, used by EnvGroup and RubricGroup for orchestrating composition of Environments and Rubrics
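For example, a chat-style dataset with all optional columns could be constructed like this (the contents are illustrative):
from datasets import Dataset

dataset = Dataset.from_dict({
    "prompt": [[{"role": "user", "content": "Factor x^2 - 1."}]],
    "answer": ["(x - 1)(x + 1)"],
    "info": [{"difficulty": "easy"}],  # auxiliary data for reward functions
    "task": ["algebra"],               # used by EnvGroup / RubricGroup
})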
The following named attributes are available for use by reward functions in your Rubric:
- prompt: sequence of input messages
- completion: sequence of messages generated during rollout by the model and Environment
- answer: primary answer column, optional (defaults to empty string if omitted)
- state: can be modified during rollout to accumulate any metadata (state['responses'] includes full OpenAI response objects by default)
- info: auxiliary info needed for reward computation (e.g. test cases), optional (defaults to empty dict if omitted)
- task: tag for task type (used by EnvGroup and RubricGroup)
- parser: the declared parser object. Note: vf.Parser().get_format_reward_func() is a no-op (always 1.0); use vf.ThinkParser or a custom parser if you want a real format-adherence reward.
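As an illustration, a reward function simply declares whichever of these attributes it needs as parameters; the exact-match check below is a hypothetical example:
def exact_match(completion, answer) -> float:
    # reward 1.0 when the model's final message exactly matches the ground truth
    return 1.0 if completion[-1]["content"].strip() == answer else 0.0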
Note: some environments can fully evaluate using only prompt, completion, and state, without requiring ground-truth answer or info data. Examples include format-compliance checking, completion-quality assessment, or length-based rewards.
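For instance, a length-based reward needs nothing beyond the completion itself (a hypothetical sketch):
def brevity_reward(completion) -> float:
    # full reward under 200 characters, decaying smoothly for longer replies
    n_chars = len(completion[-1]["content"])
    return min(1.0, 200.0 / max(n_chars, 1))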
For tasks involving LLM judges, you may wish to use vf.JudgeRubric() for managing requests to auxiliary models.
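A minimal sketch of wiring a judge-based rubric into an environment, assuming the default constructor shown above (judge model, client, and prompt options vary by version; consult the docs):
import verifiers as vf

judge_rubric = vf.JudgeRubric()  # assumption: defaults delegate scoring to an auxiliary judge model
vf_env = vf.SingleTurnEnv(dataset=dataset, rubric=judge_rubric)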
Note on concurrency: environment APIs accept max_concurrent to control parallel rollouts. The vf-eval CLI currently exposes --max-concurrent-requests; ensure this maps to your environment's concurrency as expected.
vf-eval also supports specifying sampling_args as a JSON object, which is sent to the vLLM inference engine:
uv run vf-eval vf-environment-name --sampling-args '{"reasoning_effort": "low"}'
Use vf-eval -s to save outputs as dataset-formatted JSON, and view all locally-saved eval results with vf-tui.
For many applications involving tool use, you can use ToolEnv to leverage models' native tool/function-calling capabilities in an agentic loop. Tools can be specified as generic Python functions (with type hints and docstrings), which will then be passed in JSON-schema form with each inference request.
import verifiers as vf
vf_env = vf.ToolEnv(
    dataset=...,  # HF Dataset with 'prompt'/'question' and optionally 'answer'/'info' columns
    rubric=...,   # Rubric object; vf.ToolRubric() can optionally be used to count tool invocations per rollout
    tools=[search_tool, read_article_tool, python_tool],  # Python functions with type hints + docstrings
    max_turns=10
)
In cases where your tools require heavy computational resources, we recommend hosting them as standalone servers (e.g. MCP servers) and creating lightweight wrapper functions to pass to ToolEnv. Parallel tool-call support is enabled by default.
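For example, a tool is just a typed, documented Python function; the lightweight wrapper below (with a hypothetical self-hosted endpoint) shows the shape such a function might take:
import json
import urllib.parse
import urllib.request

def search_tool(query: str) -> str:
    """Search a self-hosted index and return the top result.

    Args:
        query: natural-language search query
    """
    # hypothetical standalone search service; replace with your own endpoint
    url = "http://localhost:8080/search?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["top_result"]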
For training or self-hosted endpoints, you'll want to enable auto tool choice in vLLM with the appropriate parser. If your model does not support native tool calling, you may find the XMLParser abstraction useful for rolling your own tool-call parsing on top of MultiTurnEnv; see environments/xml_tool_env for an example.
Both SingleTurnEnv and ToolEnv are instances of MultiTurnEnv, which exposes an interface for writing custom Environment interaction protocols. The two methods you must override are is_completed and env_response:
from typing import Tuple

from datasets import Dataset
import verifiers as vf
from verifiers.types import Messages, State

class YourMultiTurnEnv(vf.MultiTurnEnv):
    def __init__(self,
                 dataset: Dataset,
                 rubric: vf.Rubric,
                 max_turns: int,
                 **kwargs):
        super().__init__(dataset=dataset, rubric=rubric, max_turns=max_turns, **kwargs)

    async def is_completed(self, messages: Messages, state: State, **kwargs) -> bool:
        # return whether or not a rollout is completed
        ...

    async def env_response(self, messages: Messages, state: State, **kwargs) -> Tuple[Messages, State]:
        # return new environment message(s) + updated state
        ...
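As a concrete (hypothetical) illustration, the environment below keeps prompting the model until it outputs the word "DONE" or the turn limit is reached:
class UntilDoneEnv(vf.MultiTurnEnv):
    async def is_completed(self, messages: Messages, state: State, **kwargs) -> bool:
        # the rollout ends once the model's latest message contains the sentinel
        return "DONE" in messages[-1]["content"]

    async def env_response(self, messages: Messages, state: State, **kwargs) -> Tuple[Messages, State]:
        # otherwise, nudge the model to continue, tracking the nudge count in state
        state["nudges"] = state.get("nudges", 0) + 1
        return [{"role": "user", "content": "Keep going; say DONE when finished."}], state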
If your application requires more fine-grained control than MultiTurnEnv allows, you may want to inherit from the base Environment class directly and override the rollout method.
The included trainer (vf.GRPOTrainer) supports running GRPO-style RL training via Accelerate/DeepSpeed, and uses vLLM for inference. It supports both full-parameter finetuning and LoRA, and is optimized for efficiently training dense transformer models on 2-16 GPUs.
# install environment
vf-install vf-wordle (-p /path/to/environments | --from-repo)

# quick eval
vf-eval vf-wordle -m (model_name in configs/endpoints.py) -n NUM_EXAMPLES -r ROLLOUTS_PER_EXAMPLE

# inference (shell 0): data-parallel size matches the six visible GPUs
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 vf-vllm --model willcb/Qwen3-1.7B-Wordle \
    --data-parallel-size 6 --enforce-eager --disable-log-requests

# training (shell 1)
CUDA_VISIBLE_DEVICES=6,7 accelerate launch --num_processes 2 \
    --config_file configs/zero3.yaml examples/grpo/train_wordle.py --size 1.7B
Alternatively, you can train environments with the external prime-rl project (FSDP-first orchestration). See the prime-rl README for installation and examples. For example:
# orchestrator config (prime-rl)
[environment]
id = "vf-math-python" # or your environment ID
# run (prime-rl)
uv run rl \
--trainer @ configs/your_exp/train.toml \
--orchestrator @ configs/your_exp/orch.toml \
--inference @ configs/your_exp/infer.toml
- Ensure your wandb and huggingface-cli logins are set up (or set report_to=None in training_args). You should also have something set as your OPENAI_API_KEY in your environment (can be a dummy key for vLLM).
- If using high max concurrency, increase the number of allowed open sockets (e.g. ulimit -n 4096).
- On some setups, inter-GPU communication can hang or crash during vLLM weight syncing. This can usually be alleviated by setting (or unsetting) NCCL_P2P_DISABLE=1 in your environment (or potentially NCCL_CUMEM_ENABLE=1). Try this as your first step if you experience NCCL-related issues.
- If problems persist, please open an issue.
GRPOTrainer is optimized for setups with at least 2 GPUs, scaling up to multiple nodes. 2-GPU setups with sufficient memory to enable small-scale experimentation can be rented for <$1/hr.
If you do not require LoRA support, you may want to use the prime-rl trainer, which natively supports Environments created using verifiers, is more optimized for performance and scalability via FSDP, includes a broader set of configuration options and user-experience features, and has more battle-tested defaults. Both trainers support asynchronous rollouts, and use a one-step off-policy delay by default for overlapping training and inference. See the prime-rl docs for usage instructions.
See the full docs for more information.
Verifiers warmly welcomes community contributions! Please open an issue or PR if you encounter bugs or other pain points during your development, or start a discussion for more open-ended questions.
Please note that the core verifiers/ library is intended to be a relatively lightweight set of reusable components rather than an exhaustive catalog of RL environments. Consider sharing any environments you create to the Environments Hub 🙂
Originally created by Will Brown (@willccbb).
If you use this code in your research, please cite:
@misc{brown_verifiers_2025,
author = {William Brown},
title = {{Verifiers}: Environments for LLM Reinforcement Learning},
howpublished = {\url{https://github.com/willccbb/verifiers}},
note = {Commit abcdefg • accessed DD Mon YYYY},
year = {2025}
}
Activepieces is an open source replacement for Zapier, designed to be extensible through a type-safe pieces framework written in Typescript. It features a user-friendly Workflow Builder with support for Branches, Loops, and Drag and Drop. Activepieces integrates with Google Sheets, OpenAI, Discord, and RSS, along with 80+ other integrations. The list of supported integrations continues to grow rapidly, thanks to valuable contributions from the community. Activepieces is an open ecosystem; all piece source code is available in the repository, and they are versioned and published directly to npmjs.com upon contributions. If you cannot find a specific piece on the pieces roadmap, please submit a request by visiting the following link: Request Piece Alternatively, if you are a developer, you can quickly build your own piece using our TypeScript framework. For guidance, please refer to the following guide: Contributor's Guide