
safety-tooling
Inference API for many LLMs and other useful tools for empirical research
Stars: 68

This repository, safety-tooling, is designed to be shared across various AI Safety projects. It provides an LLM API with a common interface for OpenAI, Anthropic, and Google models. The aim is to facilitate collaboration among AI Safety researchers, especially those with limited software engineering backgrounds, by offering a platform for contributing to a larger codebase. The repo can be used as a git submodule for easy collaboration and updates. It also supports pip installation for convenience. The repository includes features for installation, secrets management, linting, formatting, Redis configuration, testing, dependency management, inference, finetuning, API usage tracking, and various utilities for data processing and experimentation.
README:
- This is a repo designed to be shared across many AI Safety projects. It has been built up since Summer 2023 during projects with Ethan Perez. The code primarily provides an LLM API with a common interface for OpenAI, Anthropic and Google models.
- The aim is that this repo continues to grow and evolve as more collaborators start to use it, ultimately speeding up the onboarding of new AI Safety researchers who join the cohort in the future. Furthermore, it provides an opportunity for those with less of a software engineering background to upskill by contributing to a larger codebase that has many users.
- This repo works great as a git submodule. This has the benefit of being able to commit changes that everyone can benefit from. When you cd into the submodule directory, you can git pull and git push to that repository.
- We provide an example repo that uses safety-tooling as a submodule here: https://github.com/safety-research/safety-examples
- The code can also easily be pip installed if that is more suitable for your use case.
To set up the development environment for this project, follow the steps below:
- We recommend using uv to manage the python environment. Install it with the following command:
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
- Clone the repository and navigate to the directory.
git clone git@github.com:safety-research/safety-tooling.git
cd safety-tooling
- Create a virtual environment with python 3.11.
uv venv --python=python3.11
source .venv/bin/activate
- Install the package in editable mode and also install the development dependencies.
uv pip install -e .
uv pip install -r requirements_dev.txt
- Install the kernel in vscode so it can be used in jupyter notebooks (optional).
python -m ipykernel install --user --name=venv
If you don't expect to make any changes to the package, you can install it directly with pip by running the following command. This is not recommended while actively doing research, but it is a great option once you release your code.
pip install git+https://github.com/safety-research/safety-tooling.git@<branch-name>#egg=safetytooling
You should copy the .env.example file to .env at the root of the repository and fill in the API keys. All are optional, but features that rely on them will not work if they are not set.
OPENAI_API_KEY=<your-key>
ANTHROPIC_API_KEY=<your-key>
HF_TOKEN=<your-key>
GOOGLE_API_KEY=<your-key>
GRAYSWAN_API_KEY=<your-key>
TOGETHER_API_KEY=<your-key>
DEEPSEEK_API_KEY=<your-key>
ELEVENLABS_API_KEY=<your-key>
You can add multiple OpenAI and Anthropic API keys by adding them to the .env file with different names. You can then pass openai_tag and anthropic_tag to utils.setup_environment() to switch between them. Alternatively, pass openai_api_key and anthropic_api_key to the InferenceAPI object directly.
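For instance, here is a minimal sketch of passing a specific key directly to the InferenceAPI; the OPENAI_API_KEY_CUSTOM and ANTHROPIC_API_KEY_CUSTOM names are just illustrative entries in your .env, and we assume setup_environment() exports all .env entries so they are visible via os.environ:
import os
from pathlib import Path

from safetytooling.apis import InferenceAPI
from safetytooling.utils import utils

utils.setup_environment()  # assumption: this loads the .env entries into os.environ

# Use a second set of keys for this particular InferenceAPI instance
API = InferenceAPI(
    cache_dir=Path(".cache"),
    openai_api_key=os.environ["OPENAI_API_KEY_CUSTOM"],
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY_CUSTOM"],
)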
We lint our code with Ruff and format it with Black. These tools are automatically installed when you set up the development environment.
If you use vscode, it's recommended to install the official Ruff extension.
In addition, there is a pre-commit hook that runs the linter and formatter before you commit.
To enable it, run make hooks (this is optional).
To use Redis for caching instead of writing everything to disk, install Redis and make sure it's running with redis-cli ping.
export REDIS_CACHE=True # Enable Redis caching (defaults to False)
export REDIS_PASSWORD=<your-password> # Optional Redis password, in case the Redis instance on your machine is password protected
Default Redis configuration if not specified:
- Host: localhost
- Port: 6379
- No authentication
You can monitor what is being read from or written to Redis by running redis-cli and then MONITOR.
Run tests as a Python module (with 6 parallel workers, and verbose output) using:
python -m pytest -v -s -n 6
Certain tests are inherently slow, including all tests regarding the batch API. We disable them by default to avoid slowing down the CI pipeline. To run them, use:
SAFETYTOOLING_SLOW_TESTS=True python -m pytest -v -s -n 6
We only pin top-level dependencies to make cross-platform development easier.
- To add a new python dependency, add a line to pyproject.toml. If it is only for development, add it to requirements_dev.txt.
- To upgrade a dependency, bump the version number in pyproject.toml.
To check for outdated dependencies, run uv pip list --outdated.
Minimal example to run inference for gpt-4o-mini. See examples/inference_api/inference_api.ipynb to quickly run this example.
from safetytooling.apis import InferenceAPI
from safetytooling.data_models import ChatMessage, MessageRole, Prompt
from safetytooling.utils import utils
from pathlib import Path
utils.setup_environment()
API = InferenceAPI(cache_dir=Path(".cache"))
prompt = Prompt(messages=[ChatMessage(content="What is your name?", role=MessageRole.user)])
response = await API(
model_id="gpt-4o-mini",
prompt=prompt,
print_prompt_and_response=True,
)
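The snippet above uses await at the top level, which works in a notebook. To run the same thing from a regular Python script, wrap it in an async function and call asyncio.run (a minimal sketch using only the calls shown above):
import asyncio
from pathlib import Path

from safetytooling.apis import InferenceAPI
from safetytooling.data_models import ChatMessage, MessageRole, Prompt
from safetytooling.utils import utils

async def main():
    utils.setup_environment()
    api = InferenceAPI(cache_dir=Path(".cache"))
    prompt = Prompt(messages=[ChatMessage(content="What is your name?", role=MessageRole.user)])
    # Awaiting the API object runs the (cached) request and returns the response
    return await api(model_id="gpt-4o-mini", prompt=prompt, print_prompt_and_response=True)

if __name__ == "__main__":
    asyncio.run(main())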
The InferenceAPI class supports running new models when they come out without needing to update the codebase. However, you have to pass force_provider to the API call. For example, if you want to run gpt-4-new-model, you can do:
response = await API(
model_id="gpt-4-new-model",
prompt=prompt,
force_provider="openai"
)
Note: setup_environment() will automatically load the API keys and set the environment variables. You can set custom API keys by setting the environment variables instead of calling setup_environment(). If you have multiple API keys for OpenAI and Anthropic, you can pass openai_tag and anthropic_tag to setup_environment() to choose which ones are exported.
utils.setup_environment(openai_tag="OPENAI_API_KEY_CUSTOM", anthropic_tag="ANTHROPIC_API_KEY_CUSTOM")
See examples/anthropic_batch_api/run_anthropic_batch.py for an example of how to use the Anthropic Batch API and how to set up command line input arguments using simple_parsing and ExperimentConfigBase (a useful base class we created for this project).
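As a rough sketch of that pattern (the extra field, the dest name and the setup_experiment call below are assumptions; check run_anthropic_batch.py and safetytooling/utils/experiment_utils.py for the real interface), an experiment script subclasses ExperimentConfigBase as a dataclass and parses it with simple_parsing:
from dataclasses import dataclass

import simple_parsing

from safetytooling.utils.experiment_utils import ExperimentConfigBase

@dataclass(kw_only=True)
class ExperimentConfig(ExperimentConfigBase):
    # Hypothetical experiment-specific argument; the common args (output dir,
    # InferenceAPI settings, seeds, ...) come from the base class.
    model_id: str = "claude-3-5-sonnet-20241022"

if __name__ == "__main__":
    parser = simple_parsing.ArgumentParser()
    parser.add_arguments(ExperimentConfig, dest="cfg")
    cfg = parser.parse_args().cfg
    cfg.setup_experiment(log_file_prefix="run")  # assumed helper that creates the experiment dir, logging and cfg.api
    # ... use cfg.api (an InferenceAPI) to run the experiment ...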
If you want to use a different provider that exposes an OpenAI-compatible API, you can override the base_url when creating an InferenceAPI and then pass force_provider="openai" when calling it. E.g.
API = InferenceAPI(cache_dir=Path(".cache"), openai_base_url="https://openrouter.ai/api/v1", openai_api_key=openrouter_api_key)
response = await API(
model_id="deepseek/deepseek-v3-base:free",
prompt=base_prompt,
max_tokens=100,
print_prompt_and_response=True,
temperature=0,
force_provider="openai",
)
We make it easy to run a vLLM server locally and hook it into the InferenceAPI. Here is a snippet, which is also in the examples/inference_api/vllm_api.ipynb notebook.
from safetytooling.apis import InferenceAPI
from safetytooling.data_models import ChatMessage, MessageRole, Prompt
from safetytooling.utils import utils
from safetytooling.utils.vllm_utils import deploy_model_vllm_locally_auto
utils.setup_environment()
server = await deploy_model_vllm_locally_auto("meta-llama/Llama-3.1-8B-Instruct", max_model_len=1024, max_num_seqs=32)
API = InferenceAPI(vllm_base_url=f"{server.base_url}/v1/chat/completions", vllm_num_threads=32, use_vllm_if_model_not_found=True)
prompt = Prompt(messages=[ChatMessage(content="What is your name?", role=MessageRole.user)])
response = await API(
model_id=server.model_name,
prompt=prompt,
print_prompt_and_response=True,
)
To launch a finetuning job, run the following command:
python -m safetytooling.apis.finetuning.openai.run --model 'gpt-3.5-turbo-1106' --train_file <path-to-train-file> --n_epochs 1
This should automatically create a new job on the OpenAI API, and also sync that run to wandb. You will have to keep the program running until the OpenAI job is complete.
You can include the --dry_run flag if you just want to validate the train/val files and estimate the training cost without actually launching a job.
To get OpenAI usage stats, run:
python -m safetytooling.apis.inference.usage.usage_openai
You can pass a list of models to get usage stats for specific models. For example:
python -m safetytooling.apis.inference.usage.usage_openai --models 'model-id1' 'model-id2' --openai_tags 'OPENAI_API_KEY1' 'OPENAI_API_KEY2'
For Anthropic, to find out the number of threads being used, run:
python -m safetytooling.apis.inference.usage.usage_anthropic
- LLM Inference API:
  - Location: safetytooling/apis/inference/api.py
  - Caching Mechanism:
    - Caches prompt calls to avoid redundant API calls. Cache location defaults to $exp_dir/cache. This means you can kill your run anytime and restart it without worrying about wasting API calls.
    - Redis Cache: Optionally use Redis for caching instead of files by setting REDIS_CACHE=true in your environment. Configure the Redis connection with:
      - REDIS_PASSWORD: Optional Redis password for authentication
      - Default Redis configuration: localhost:6379, no password
    - The cache implementation is automatically selected based on the REDIS_CACHE environment variable.
    - No Cache: Set NO_CACHE=True as an environment variable to disable all caching. This is equivalent to setting use_cache=False when initialising an InferenceAPI or BatchInferenceAPI object.
  - Prompt Logging: For debugging, human-readable .txt files can be output in $exp_dir/prompt_history and are timestamped for easy reference (off by default). You can also pass print_prompt_and_response to the api object to print coloured messages to the terminal.
  - Manages rate limits efficiently for OpenAI, bypassing the need for exponential backoff.
  - Number of concurrent threads can be customised when initialising the api object for each provider (e.g. openai_num_threads, anthropic_num_threads). Furthermore, the fraction of the OpenAI rate limit can be specified (e.g. use only 50% of the rate limit by setting openai_fraction_rate_limit=0.5).
  - When initialising the API it checks how much OpenAI rate limit is available and sets caps based on that.
  - Allows custom filtering of responses via an is_valid function and will retry until a valid one is generated (e.g. ensuring JSON output); see the sketch after this list.
  - Provides a running total of cost for OpenAI models and model timings for performance analysis.
  - Utilise the maximum rate limit by setting max_tokens=None for OpenAI models.
  - Supports the OpenAI moderation, embedding and Realtime (audio) APIs.
  - Supports OpenAI chat/completion models, Anthropic, Gemini (all modalities), GraySwan, and HuggingFace inference endpoints (e.g. for llama3).
- Prompt, LLMResponse and ChatMessage classes
  - Location: safetytooling/data_models/messages.py
  - A unified class for prompts with methods to unpack it into the format expected by the different providers.
  - Take a look at the Prompt class to see how the messages are transformed into different formats. This is important if you ever need to add new providers or modalities.
  - Supports image and audio.
  - The objects are pydantic and inherit from a hashable base class (useful for caching, since you can hash based on the prompt class).
- Finetuning runs with Weights and Biases:
  - Location: safetytooling/apis/finetuning/run.py
  - Logs finetuning runs with Weights and Biases for easy tracking of experiments.
- Text to speech
  - Location: safetytooling/apis/tts/elevenlabs.py
- Usage Tracking:
  - Location: safetytooling/apis/inference/usage
  - Tracks usage of the OpenAI and Anthropic APIs so you know how much they are being utilised within your organisation.
- Experiment base class
  - Location: safetytooling/utils/experiment_utils.py
  - This dataclass is used as a base class for each experiment script. It provides a common set of args that always need to be specified (e.g. those that initialise the InferenceAPI class).
  - It creates the experiment directory and sets up logging to be output into log files there. It also sets random seeds and initialises the InferenceAPI so it is easily accessible via cfg.api.
  - See examples of usage in the examples repo in the next section.
- API key management
  - All API keys get stored in the .env file (which is in the .gitignore).
  - setup_environment() in safetytooling/utils/utils.py loads these in so they are accessible by the code (and also automates exporting environment variables).
- Utilities:
  - Plotting functions for confusion matrices and setting up plotting in notebooks (plotting_utils.py)
  - Prompt loading with a templating library called Jinja (prompt_utils.py)
  - Image/audio processing (image_utils.py and audio_utils.py)
  - Human labelling framework which keeps a persistent store of human labels on disk based on the input/output pair of the LLM (human_labeling_utils.py)
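For example, here is a minimal sketch of the is_valid filtering mentioned above, assuming is_valid is passed as a keyword argument to the API call, receives each LLMResponse, and that the model's text is available as .completion (check the InferenceAPI signature in safetytooling/apis/inference/api.py for the exact interface):
import json
from pathlib import Path

from safetytooling.apis import InferenceAPI
from safetytooling.data_models import ChatMessage, MessageRole, Prompt
from safetytooling.utils import utils

utils.setup_environment()
API = InferenceAPI(cache_dir=Path(".cache"))

def is_valid_json(response) -> bool:
    # Accept the response only if the completion parses as JSON (assumes .completion holds the text)
    try:
        json.loads(response.completion)
        return True
    except json.JSONDecodeError:
        return False

prompt = Prompt(messages=[ChatMessage(content="Reply with a JSON object containing a 'name' key.", role=MessageRole.user)])
response = await API(
    model_id="gpt-4o-mini",
    prompt=prompt,
    is_valid=is_valid_json,  # the API retries until a response passes this check
)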
If you use this repo in your work, please cite it as follows:
@misc{safety_tooling_2025,
author = {John Hughes and safety-research},
title = {safety-research/safety-tooling: v1.0.0},
year = {2025},
publisher = {Zenodo},
version = {v1.0.0},
doi = {10.5281/zenodo.15363603},
url = {https://doi.org/10.5281/zenodo.15363603}
}
Alternative AI tools for safety-tooling
Similar Open Source Tools


blinkid-android
The BlinkID Android SDK is a comprehensive solution for implementing secure document scanning and extraction. It offers powerful capabilities for extracting data from a wide range of identification documents. The SDK provides features for integrating document scanning into Android apps, including camera requirements, SDK resource pre-bundling, customizing the UX, changing default strings and localization, troubleshooting integration difficulties, and using the SDK through various methods. It also offers options for completely custom UX with low-level API integration. The SDK size is optimized for different processor architectures, and API documentation is available for reference. For any questions or support, users can contact the Microblink team at help.microblink.com.

mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backend and microservices. It bridges the gap between any machine learning models you just trained and the efficient online service API. * **Highly performant** : web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O * **Ease of use** : user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing * **Dynamic batching** : aggregate requests from different users for batched inference and distribute results back * **Pipelined stages** : spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads * **Cloud friendly** : designed to run in the cloud, with the model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration systems * **Do one thing well** : focus on the online serving part, users can pay attention to the model optimization and business logic

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO: * The `dags` directory in this repository contains some custom DAG definitions * Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl * The Data SRE team maintains a WTMO Developer Guide (behind SSO)

sage
Sage is a tool that allows users to chat with any codebase, providing a chat interface for code understanding and integration. It simplifies the process of learning how a codebase works by offering heavily documented answers sourced directly from the code. Users can set up Sage locally or on the cloud with minimal effort. The tool is designed to be easily customizable, allowing users to swap components of the pipeline and improve the algorithms powering code understanding and generation.

torchchat
torchchat is a codebase showcasing the ability to run large language models (LLMs) seamlessly. It allows running LLMs using Python in various environments such as desktop, server, iOS, and Android. The tool supports running models via PyTorch, chatting, generating text, running chat in the browser, and running models on desktop/server without Python. It also provides features like AOT Inductor for faster execution, running in C++ using the runner, and deploying and running on iOS and Android. The tool supports popular hardware and OS including Linux, Mac OS, Android, and iOS, with various data types and execution modes available.

unstructured
The `unstructured` library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of `unstructured` revolve around streamlining and optimizing the data processing workflow for LLMs. `unstructured` modular functions and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient in transforming unstructured data into structured outputs.

SirChatalot
A Telegram bot that proves you don't need a body to have a personality. It can use various text and image generation APIs to generate responses to user messages. For text generation, the bot can use: * OpenAI's ChatGPT API (or other compatible API), with vision capabilities on GPT-4 models and function calling support. * Anthropic's Claude API, with vision capabilities on Claude 3 models and function calling via tool use. * YandexGPT API. The bot can also generate images with: * OpenAI's DALL-E * Stability AI * Yandex ART. It can also respond to voice messages: the bot converts the voice message to text and then generates a response. Speech recognition is done using OpenAI's Whisper model; to use this feature, you need to install the ffmpeg library. The bot also supports working with files (see the Files section for more details). If function calling is enabled, the bot can generate images and search the web (limited).

hugescm
HugeSCM is a cloud-based version control system designed to address R&D repository size issues. It effectively manages large repositories and individual large files by separating data storage and utilizing advanced algorithms and data structures. It aims for optimal performance in handling version control operations of large-scale repositories, making it suitable for single large library R&D, AI model development, and game or driver development.

gpt-cli
gpt-cli is a command-line interface tool for interacting with various chat language models like ChatGPT, Claude, and others. It supports model customization, usage tracking, keyboard shortcuts, multi-line input, markdown support, predefined messages, and multiple assistants. Users can easily switch between different assistants, define custom assistants, and configure model parameters and API keys in a YAML file for easy customization and management.

garak
Garak is a vulnerability scanner designed for LLMs (Large Language Models) that checks for various weaknesses such as hallucination, data leakage, prompt injection, misinformation, toxicity generation, and jailbreaks. It combines static, dynamic, and adaptive probes to explore vulnerabilities in LLMs. Garak is a free tool developed for red-teaming and assessment purposes, focusing on making LLMs or dialog systems fail. It supports various LLM models and can be used to assess their security and robustness.

paper-qa
PaperQA is a minimal package for question and answering from PDFs or text files, providing very good answers with in-text citations. It uses OpenAI Embeddings to embed and search documents, and includes a process of embedding docs, queries, searching for top passages, creating summaries, using an LLM to re-score and select relevant summaries, putting summaries into prompt, and generating answers. The tool can be used to answer specific questions related to scientific research by leveraging citations and relevant passages from documents.

hal9
Hal9 is a tool that allows users to create and deploy generative applications such as chatbots and APIs quickly. It is open, intuitive, scalable, and powerful, enabling users to use various models and libraries without the need to learn complex app frameworks. With a focus on AI tasks like RAG, fine-tuning, alignment, and training, Hal9 simplifies the development process by skipping engineering tasks like frontend development, backend integration, deployment, and operations.

frontend
Nuclia frontend apps and libraries repository contains various frontend applications and libraries for the Nuclia platform. It includes components such as Dashboard, Widget, SDK, Sistema (design system), NucliaDB admin, CI/CD Deployment, and Maintenance page. The repository provides detailed instructions on installation, dependencies, and usage of these components for both Nuclia employees and external developers. It also covers deployment processes for different components and tools like ArgoCD for monitoring deployments and logs. The repository aims to facilitate the development, testing, and deployment of frontend applications within the Nuclia ecosystem.

leptonai
A Pythonic framework to simplify AI service building. The LeptonAI Python library allows you to build an AI service from Python code with ease. Key features include a Pythonic abstraction Photon, simple abstractions to launch models like those on HuggingFace, prebuilt examples for common models, AI tailored batteries, a client to automatically call your service like native Python functions, and Pythonic configuration specs to be readily shipped in a cloud environment.

fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a structured approach to breaking down problems into individual components and applying AI to them one at a time. Fabric includes a collection of pre-defined Patterns (prompts) that can be used for a variety of tasks, such as extracting the most interesting parts of YouTube videos and podcasts, writing essays, summarizing academic papers, creating AI art prompts, and more. Users can also create their own custom Patterns. Fabric is designed to be easy to use, with a command-line interface and a variety of helper apps. It is also extensible, allowing users to integrate it with their own AI applications and infrastructure.
For similar tasks


chatgpt-web-sea
ChatGPT Web Sea is an open-source project based on ChatGPT-web for secondary development. It supports all models that comply with the OpenAI interface standard, allows for model selection, configuration, and extension, and is compatible with OneAPI. The tool includes a Chinese ChatGPT tuning guide, supports file uploads, and provides model configuration options. Users can interact with the tool through a web interface, configure models, and perform tasks such as model selection, API key management, and chat interface setup. The project also offers Docker deployment options and instructions for manual packaging.

farfalle
Farfalle is an open-source AI-powered search engine that allows users to run their own local LLM or utilize the cloud. It provides a tech stack including Next.js for frontend, FastAPI for backend, Tavily for search API, Logfire for logging, and Redis for rate limiting. Users can get started by setting up prerequisites like Docker and Ollama, and obtaining API keys for Tavily, OpenAI, and Groq. The tool supports models like llama3, mistral, and gemma. Users can clone the repository, set environment variables, run containers using Docker Compose, and deploy the backend and frontend using services like Render and Vercel.

ComfyUI-Tara-LLM-Integration
Tara is a powerful node for ComfyUI that integrates Large Language Models (LLMs) to enhance and automate workflow processes. With Tara, you can create complex, intelligent workflows that refine and generate content, manage API keys, and seamlessly integrate various LLMs into your projects. It comprises nodes for handling OpenAI-compatible APIs, saving and loading API keys, composing multiple texts, and using predefined templates for OpenAI and Groq. Tara supports OpenAI and Grok models with plans to expand support to together.ai and Replicate. Users can install Tara via Git URL or ComfyUI Manager and utilize it for tasks like input guidance, saving and loading API keys, and generating text suitable for chaining in workflows.

conversational-agent-langchain
This repository contains a Rest-Backend for a Conversational Agent that allows embedding documents, semantic search, QA based on documents, and document processing with Large Language Models. It uses Aleph Alpha and OpenAI Large Language Models to generate responses to user queries, includes a vector database, and provides a REST API built with FastAPI. The project also features semantic search, secret management for API keys, installation instructions, and development guidelines for both backend and frontend components.

ChatGPT-Next-Web-Pro
ChatGPT-Next-Web-Pro is a tool that provides an enhanced version of ChatGPT-Next-Web with additional features and functionalities. It offers complete ChatGPT-Next-Web functionality, file uploading and storage capabilities, drawing and video support, multi-modal support, reverse model support, knowledge base integration, translation, customizations, and more. The tool can be deployed with or without a backend, allowing users to interact with AI models, manage accounts, create models, manage API keys, handle orders, manage memberships, and more. It supports various cloud services like Aliyun OSS, Tencent COS, and Minio for file storage, and integrates with external APIs like Azure, Google Gemini Pro, and Luma. The tool also provides options for customizing website titles, subtitles, icons, and plugin buttons, and offers features like voice input, file uploading, real-time token count display, and more.

APIMyLlama
APIMyLlama is a server application that provides an interface to interact with the Ollama API, a powerful AI tool to run LLMs. It allows users to easily distribute API keys to create amazing things. The tool offers commands to generate, list, remove, add, change, activate, deactivate, and manage API keys, as well as functionalities to work with webhooks, set rate limits, and get detailed information about API keys. Users can install APIMyLlama packages with NPM, PIP, Jitpack Repo+Gradle or Maven, or from the Crates Repository. The tool supports Node.JS, Python, Java, and Rust for generating responses from the API. Additionally, it provides built-in health checking commands for monitoring API health status.

IntelliChat
IntelliChat is an open-source AI chatbot tool designed to accelerate the integration of multiple language models into chatbot apps. Users can select their preferred AI provider and model from the UI, manage API keys, and access data using Intellinode. The tool is built with Intellinode and Next.js, and supports various AI providers such as OpenAI ChatGPT, Google Gemini, Azure Openai, Cohere Coral, Replicate, Mistral AI, Anthropic, and vLLM. It offers a user-friendly interface for developers to easily incorporate AI capabilities into their chatbot applications.
For similar jobs

alignment-handbook
The Alignment Handbook provides robust training recipes for continuing pretraining and aligning language models with human and AI preferences. It includes techniques such as continued pretraining, supervised fine-tuning, reward modeling, rejection sampling, and direct preference optimization (DPO). The handbook aims to fill the gap in public resources on training these models, collecting data, and measuring metrics for optimal downstream performance.


weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.

oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLM) and benchmarks them via the `OSS-Fuzz` platform. It manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.