
fiftyone
Refine high-quality datasets and visual AI models
Stars: 9897

FiftyOne is an open-source tool designed for building high-quality datasets and computer vision models. It supercharges machine learning workflows by enabling users to visualize datasets, interpret models faster, and improve efficiency. With FiftyOne, users can explore scenarios, identify failure modes, visualize complex labels, evaluate models, find annotation mistakes, and much more. The tool aims to streamline the process of improving machine learning models by providing a comprehensive set of features for data analysis and model interpretation.
README:
The open-source tool for building high-quality datasets and computer vision models
Website • Docs • Try it Now • Getting Started Guides • Tutorials • Blog • Community
We created FiftyOne to supercharge your visual AI projects by enabling you to visualize datasets, analyze models, and improve data quality more efficiently than ever before 🤝
If you're looking to scale to production-grade, collaborative, cloud-native enterprise workloads, check out FiftyOne Enterprise 🚀
As simple as:
pip install fiftyone
More details
FiftyOne supports Python 3.9 - 3.12.
For most users, we recommend installing the latest release version of FiftyOne via pip as shown above.
If you want to contribute to FiftyOne or install the latest development version, then you can also perform a source install.
See the prerequisites section for system-specific setup information.
We strongly recommend that you install FiftyOne in a virtual environment to maintain a clean workspace.
Consult the installation guide for troubleshooting and other information about getting up-and-running with FiftyOne.
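As a quick sanity check after installing (a minimal example; assumes the package exposes __version__ as in recent releases), you can import FiftyOne in Python:
import fiftyone as fo
# Print the installed version to confirm the import works
print(fo.__version__)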
Install from source
Follow the instructions below to install FiftyOne from source and build the App.
You'll need the following tools installed:
- Python (3.9 - 3.12)
- Node.js - on Linux, we recommend using nvm to install an up-to-date version.
- Yarn - once Node.js is installed, you can enable Yarn via corepack enable
We strongly recommend that you install FiftyOne in a virtual environment to maintain a clean workspace.
If you are working in Google Colab, skip to here.
First, clone the repository:
git clone https://github.com/voxel51/fiftyone
cd fiftyone
Then run the install script:
# Mac or Linux
bash install.bash
# Windows
.\install.bat
If you run into issues importing FiftyOne, you may need to add the path to the cloned repository to your PYTHONPATH:
export PYTHONPATH=$PYTHONPATH:/path/to/fiftyone
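To confirm which copy of FiftyOne Python is importing (your clone vs. a pip-installed release), a quick check is:
import fiftyone
# Should point into your cloned repository after the PYTHONPATH update above
print(fiftyone.__file__)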
Note that the install script adds to your nvm settings in your ~/.bashrc or ~/.bash_profile, which is needed for installing and building the App.
To upgrade an existing source installation to the bleeding edge, simply pull the latest develop branch and rerun the install script:
git checkout develop
git pull
# Mac or Linux
bash install.bash
# Windows
.\install.bat
When you pull in new changes to the App, you will need to rebuild it, which you can do either by rerunning the install script or by running yarn build in the ./app directory.
If you would like to contribute to FiftyOne, you should perform a developer installation using the -d flag of the install script:
# Mac or Linux
bash install.bash -d
# Windows
.\install.bat -d
Although not required, developers typically prefer to configure their FiftyOne installation to connect to a self-installed and managed instance of MongoDB, which you can do by following these simple steps.
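For illustration, a minimal sketch of pointing FiftyOne at a self-managed MongoDB uses the database_uri setting via the FIFTYONE_DATABASE_URI environment variable; this assumes a local mongod listening on the default port, and the variable must be set before the package is imported:
import os
# Hypothetical local MongoDB instance; adjust the host/port to your deployment
os.environ["FIFTYONE_DATABASE_URI"] = "mongodb://localhost:27017"
import fiftyone as fo
# Confirm the URI that FiftyOne picked up from the environment
print(fo.config.database_uri)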
You can install from source in Google Colab by running the following in a cell and then restarting the runtime:
%%shell
git clone --depth 1 https://github.com/voxel51/fiftyone.git
cd fiftyone
bash install.bash
See the docs guide for information on building and contributing to the documentation.
You can uninstall FiftyOne as follows:
pip uninstall fiftyone fiftyone-brain fiftyone-db
Prerequisites for beginners
Follow the instructions for your operating system or environment to perform basic system setup before installing FiftyOne.
If you're an experienced developer, you've likely already done this.
Linux
These steps work on a clean install of Ubuntu Desktop 24.04, and should also work on Ubuntu 22.04 and on Ubuntu Server:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-venv python3-dev build-essential git-all libgl1-mesa-dev
- On Linux, you will need at least the openssl and libcurl packages
- On Debian-based distributions, you will need to install libcurl4 or libcurl3 instead of libcurl, depending on the age of your distribution
# Ubuntu
sudo apt install libcurl4 openssl
# Fedora
sudo dnf install libcurl openssl
Next, create and activate a virtual environment:
python3 -m venv fiftyone_env
source fiftyone_env/bin/activate
If you plan to work with video datasets, you'll need to install FFmpeg:
sudo apt-get install ffmpeg
macOS
Install the Xcode command line tools:
xcode-select --install
Install Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
After running the above command, follow the instructions in your terminal to complete the Homebrew installation.
Install Python (any supported 3.9 - 3.12 version) and protobuf:
brew install python@3.12
brew install protobuf
Then create and activate a virtual environment:
python3 -m venv fiftyone_env
source fiftyone_env/bin/activate
If you plan to work with video datasets, you'll need to install FFmpeg:
brew install ffmpeg
Windows
Download a Python 3.9 - 3.12 installer from python.org. Make sure to pick a 64-bit version. For example, this Python 3.10.11 installer.
Double-click on the installer to run it, and follow the steps in the installer.
- Check the box to add Python to your PATH
- At the end of the installer, there is an option to disable the PATH length limit. It is recommended to select this option
Download Microsoft Visual C++ Redistributable. Double-click on the installer to run it, and follow the steps in the installer.
Download Git from this link. Double-click on the installer to run it, and follow the steps in the installer.
- Press Win + R, type cmd, and press Enter. Alternatively, search for Command Prompt in the Start Menu.
- Navigate to your project:
cd C:\path\to\your\project
- Create the environment:
python -m venv fiftyone_env
- Activate the environment by typing this in the command line window:
fiftyone_env\Scripts\activate
- After activation, your command prompt should change and show the name of the virtual environment:
(fiftyone_env) C:\path\to\your\project
If you plan to work with video datasets, you'll need to install FFmpeg.
Download an FFmpeg binary from here. Add FFmpeg's path (e.g., C:\ffmpeg\bin) to your PATH environment variable.
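As a quick check that FFmpeg is discoverable after updating PATH (open a new command prompt so the change takes effect), you can run a small standard-library snippet in Python:
import shutil
# Prints the resolved path to ffmpeg, or None if it is not on PATH
print(shutil.which("ffmpeg"))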
Docker
Refer to these instructions to see how to build and run Docker images containing release or source builds of FiftyOne.
Dive right into FiftyOne by opening a Python shell and running the snippet below, which downloads a small dataset and launches the FiftyOne App so you can explore it:
import fiftyone as fo
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("quickstart")
session = fo.launch_app(dataset)
Then check out this Colab notebook to see some common workflows on the quickstart dataset.
Note that if you are running the above code in a script, you must include session.wait() to block execution until you close the App. See this page for more information.
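For example, a minimal script version of the snippet above looks like this:
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")
session = fo.launch_app(dataset)

# Block until the App is closed so the script doesn't exit immediately
session.wait()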
- Visualize Complex Datasets: Easily explore images, videos, and associated labels in a powerful visual interface.
https://github.com/user-attachments/assets/9dc2db88-967d-43fa-bda0-85e4d5ab6a7a
- Explore Embeddings: Select points of interest and view the corresponding samples/labels.
https://github.com/user-attachments/assets/246faeb7-dcab-4e01-9357-e50f6b106da7
- Analyze and Improve Models: Evaluate model performance, identify failure modes, and fine-tune your models (see the sketch after this list).
https://github.com/user-attachments/assets/8c32d6c4-51e7-4fea-9a3c-2ffd9690f5d6
- Advanced Data Curation: Quickly find and fix data issues, annotation errors, and edge cases.
https://github.com/user-attachments/assets/24fa1960-c2dd-46ae-ae5f-d58b3b84cfe4
- Rich Integrations: Works with popular deep learning libraries like PyTorch, Hugging Face, Ultralytics, and more.
https://github.com/user-attachments/assets/de5f25e1-a967-4362-9e04-616449e745e5
- Open and Extensible: Customize and extend FiftyOne to fit your specific needs.
https://github.com/user-attachments/assets/c7ed496d-0cf7-45d6-9853-e349f1abd6f8
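To make the model-analysis and embeddings workflows above concrete, here is a hedged sketch on the quickstart dataset; it assumes the dataset's built-in predictions and ground_truth fields and, for the embeddings step, that the optional umap-learn dependency is installed for the default dimensionality reduction:
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# Evaluate the sample predictions against the ground truth labels
results = dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)
results.print_report()

# Surface likely failure modes by sorting samples by false positive count
view = dataset.sort_by("eval_fp", reverse=True)

# Index the images by embedding for the App's embeddings panel
fob.compute_visualization(dataset, brain_key="img_viz")

session = fo.launch_app(view)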
Check out these resources to get up and running with FiftyOne:
Getting Started Guides • Tutorials • Recipes • User Guide • Examples • API Reference • CLI Reference
Full documentation is available at fiftyone.ai.
Want to securely collaborate on billions of samples in the cloud and connect to your compute resources to automate your workflows? Check out FiftyOne Enterprise.
Refer to our common issues page to troubleshoot installation issues. If you're still stuck, check our frequently asked questions page for more answers.
If you encounter an issue that the above resources don't help you resolve, feel free to open an issue on GitHub or contact us on Discord.
Connect with us through your preferred channels:
🎊 Share how FiftyOne makes your visual AI projects a reality on social media and tag us with @Voxel51 and #FiftyOne 🎊
FiftyOne and FiftyOne Brain are open source and community contributions are welcome! Check out the contribution guide to learn how to get involved.
Special thanks to these amazing people for contributing to FiftyOne!
If you use FiftyOne in your research, feel free to cite the project (but only if you love it 😊):
@article{moore2020fiftyone,
title={FiftyOne},
author={Moore, B. E. and Corso, J. J.},
journal={GitHub. Note: https://github.com/voxel51/fiftyone},
year={2020}
}
Alternative AI tools for fiftyone
Similar Open Source Tools


Unity-MCP
Unity-MCP is an AI helper designed for game developers using Unity. It facilitates a wide range of tasks in Unity Editor and running games on any platform by connecting to AI via TCP connection. The tool allows users to chat with AI like with a human, supports local and remote usage, and offers various default AI tools. Users can provide detailed information for classes, fields, properties, and methods using the 'Description' attribute in C# code. Unity-MCP enables instant C# code compilation and execution, provides access to assets and C# scripts, and offers tools for proper issue understanding and project data manipulation. It also allows users to find and call methods in the codebase, work with Unity API, and access human-readable descriptions of code elements.

yolo-flutter-app
Ultralytics YOLO for Flutter is a Flutter plugin that allows you to integrate Ultralytics YOLO computer vision models into your mobile apps. It supports both Android and iOS platforms, providing APIs for object detection and image classification. The plugin leverages Flutter Platform Channels for seamless communication between the client and host, handling all processing natively. Before using the plugin, you need to export the required models in `.tflite` and `.mlmodel` formats. The plugin provides support for tasks like detection and classification, with specific instructions for Android and iOS platforms. It also includes features like camera preview and methods for object detection and image classification on images. Ultralytics YOLO thrives on community collaboration and offers different licensing paths for open-source and commercial use cases.

amica
Amica is an application that allows you to easily converse with 3D characters in your browser. You can import VRM files, adjust the voice to fit the character, and generate response text that includes emotional expressions.

AutoRAG
AutoRAG is an AutoML tool designed to automatically find the optimal RAG pipeline for your data. It simplifies the process of evaluating various RAG modules to identify the best pipeline for your specific use-case. The tool supports easy evaluation of different module combinations, making it efficient to find the most suitable RAG pipeline for your needs. AutoRAG also offers a cloud beta version to assist users in running and optimizing the tool, along with building RAG evaluation datasets for a starting price of $9.99 per optimization.

fiftyone-brain
FiftyOne Brain contains the open source AI/ML capabilities for the FiftyOne ecosystem, enabling users to automatically analyze and manipulate their datasets and models. Features include visual similarity search, query by text, finding unique and representative samples, finding media quality problems and annotation mistakes, and more.

ragflow
RAGFlow is an open-source Retrieval-Augmented Generation (RAG) engine that combines deep document understanding with Large Language Models (LLMs) to provide accurate question-answering capabilities. It offers a streamlined RAG workflow for businesses of all sizes, enabling them to extract knowledge from unstructured data in various formats, including Word documents, slides, Excel files, images, and more. RAGFlow's key features include deep document understanding, template-based chunking, grounded citations with reduced hallucinations, compatibility with heterogeneous data sources, and an automated and effortless RAG workflow. It supports multiple recall paired with fused re-ranking, configurable LLMs and embedding models, and intuitive APIs for seamless integration with business applications.

mLoRA
mLoRA (Multi-LoRA Fine-Tune) is an open-source framework for efficient fine-tuning of multiple Large Language Models (LLMs) using LoRA and its variants. It allows concurrent fine-tuning of multiple LoRA adapters with a shared base model, efficient pipeline parallelism algorithm, support for various LoRA variant algorithms, and reinforcement learning preference alignment algorithms. mLoRA helps save computational and memory resources when training multiple adapters simultaneously, achieving high performance on consumer hardware.

ai-toolkit
The AI Toolkit by Ostris is a collection of tools for machine learning, specifically designed for image generation, LoRA (Low-Rank Adaptation) extraction and manipulation, and model training. It provides a user-friendly interface and extensive documentation to make it accessible to both developers and non-developers. The toolkit is actively under development, with new features and improvements being added regularly. Some of the key features of the AI Toolkit include: - Batch Image Generation: Allows users to generate a batch of images based on prompts or text files, using a configuration file to specify the desired settings. - LoRA (lierla), LoCON (LyCORIS) Extractor: Facilitates the extraction of LoRA and LoCON representations from pre-trained models, enabling users to modify and manipulate these representations for various purposes. - LoRA Rescale: Provides a tool to rescale LoRA weights, allowing users to adjust the influence of specific attributes in the generated images. - LoRA Slider Trainer: Enables the training of LoRA sliders, which can be used to control and adjust specific attributes in the generated images, offering a powerful tool for fine-tuning and customization. - Extensions: Supports the creation and sharing of custom extensions, allowing users to extend the functionality of the toolkit with their own tools and scripts. - VAE (Variational Auto Encoder) Trainer: Facilitates the training of VAEs for image generation, providing users with a tool to explore and improve the quality of generated images. The AI Toolkit is a valuable resource for anyone interested in exploring and utilizing machine learning for image generation and manipulation. Its user-friendly interface, extensive documentation, and active development make it an accessible and powerful tool for both beginners and experienced users.

cognee
Cognee is an open-source framework designed for creating self-improving deterministic outputs for Large Language Models (LLMs) using graphs, LLMs, and vector retrieval. It provides a platform for AI engineers to enhance their models and generate more accurate results. Users can leverage Cognee to add new information, utilize LLMs for knowledge creation, and query the system for relevant knowledge. The tool supports various LLM providers and offers flexibility in adding different data types, such as text files or directories. Cognee aims to streamline the process of working with LLMs and improving AI models for better performance and efficiency.

quickvid
QuickVid is an open-source video summarization tool that uses AI to generate summaries of YouTube videos. It is built with Whisper, GPT, LangChain, and Supabase. QuickVid can be used to save time and get the essence of any YouTube video with intelligent summarization.

pebblo
Pebblo enables developers to safely load data and promote their Gen AI app to deployment without worrying about the organization’s compliance and security requirements. The project identifies semantic topics and entities found in the loaded data and summarizes them on the UI or a PDF report.

airflint
Airflint is a tool designed to enforce best practices for all your Airflow Directed Acyclic Graphs (DAGs). It is currently in the alpha stage and aims to help users adhere to recommended practices when working with Airflow. Users can install Airflint from PyPI and integrate it into their existing Airflow environment to improve DAG quality. The tool provides rules for function-level imports and jinja template syntax usage, among others, to enhance the development process of Airflow DAGs.

modelscope-agent
ModelScope-Agent is a customizable and scalable Agent framework. A single agent has abilities such as role-playing, LLM calling, tool usage, planning, and memory. It mainly has the following characteristics: - **Simple Agent Implementation Process**: Simply specify the role instruction, LLM name, and tool name list to implement an Agent application. The framework automatically arranges workflows for tool usage, planning, and memory. - **Rich models and tools**: The framework is equipped with rich LLM interfaces, such as Dashscope and Modelscope model interfaces, OpenAI model interfaces, etc. Built in rich tools, such as **code interpreter**, **weather query**, **text to image**, **web browsing**, etc., make it easy to customize exclusive agents. - **Unified interface and high scalability**: The framework has clear tools and LLM registration mechanism, making it convenient for users to expand more diverse Agent applications. - **Low coupling**: Developers can easily use built-in tools, LLM, memory, and other components without the need to bind higher-level agents.

langroid
Langroid is a Python framework that makes it easy to build LLM-powered applications. It uses a multi-agent paradigm inspired by the Actor Framework, where you set up Agents, equip them with optional components (LLM, vector-store and tools/functions), assign them tasks, and have them collaboratively solve a problem by exchanging messages. Langroid is a fresh take on LLM app-development, where considerable thought has gone into simplifying the developer experience; it does not use Langchain.

RD-Agent
RD-Agent is a tool designed to automate critical aspects of industrial R&D processes, focusing on data-driven scenarios to streamline model and data development. It aims to propose new ideas ('R') and implement them ('D') automatically, leading to solutions of significant industrial value. The tool supports scenarios like Automated Quantitative Trading, Data Mining Agent, Research Copilot, and more, with a framework to push the boundaries of research in data science. Users can create a Conda environment, install the RDAgent package from PyPI, configure GPT model, and run various applications for tasks like quantitative trading, model evolution, medical prediction, and more. The tool is intended to enhance R&D processes and boost productivity in industrial settings.
For similar tasks

labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.

promptfoo
Promptfoo is a tool for testing and evaluating LLM output quality. With promptfoo, you can build reliable prompts, models, and RAGs with benchmarks specific to your use-case, speed up evaluations with caching, concurrency, and live reloading, score outputs automatically by defining metrics, use as a CLI, library, or in CI/CD, and use OpenAI, Anthropic, Azure, Google, HuggingFace, open-source models like Llama, or integrate custom API providers for any LLM API.

vespa
Vespa is a platform that performs operations such as selecting a subset of data in a large corpus, evaluating machine-learned models over the selected data, organizing and aggregating it, and returning it, typically in less than 100 milliseconds, all while the data corpus is continuously changing. It has been in development for many years and is used on a number of large internet services and apps which serve hundreds of thousands of queries from Vespa per second.

python-aiplatform
The Vertex AI SDK for Python is a library that provides a convenient way to use the Vertex AI API. It offers a high-level interface for creating and managing Vertex AI resources, such as datasets, models, and endpoints. The SDK also provides support for training and deploying custom models, as well as using AutoML models. With the Vertex AI SDK for Python, you can quickly and easily build and deploy machine learning models on Vertex AI.

ScandEval
ScandEval is a framework for evaluating pretrained language models on mono- or multilingual language tasks. It provides a unified interface for benchmarking models on a variety of tasks, including sentiment analysis, question answering, and machine translation. ScandEval is designed to be easy to use and extensible, making it a valuable tool for researchers and practitioners alike.

opencompass
OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include: * Comprehensive support for models and datasets: Pre-support for 20+ HuggingFace and API models, a model evaluation scheme of 70+ datasets with about 400,000 questions, comprehensively evaluating the capabilities of the models in five dimensions. * Efficient distributed evaluation: One line command to implement task division and distributed evaluation, completing the full evaluation of billion-scale models in just a few hours. * Diversified evaluation paradigms: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily stimulate the maximum performance of various models. * Modular design with high extensibility: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded! * Experiment management and reporting mechanism: Use config files to fully record each experiment, and support real-time reporting of results.

flower
Flower is a framework for building federated learning systems. It is designed to be customizable, extensible, framework-agnostic, and understandable. Flower can be used with any machine learning framework, for example, PyTorch, TensorFlow, Hugging Face Transformers, PyTorch Lightning, scikit-learn, JAX, TFLite, MONAI, fastai, MLX, XGBoost, Pandas for federated analytics, or even raw NumPy for users who enjoy computing gradients by hand.

thinc
Thinc is a lightweight deep learning library that offers an elegant, type-checked, functional-programming API for composing models, with support for layers defined in other frameworks such as PyTorch, TensorFlow and MXNet. You can use Thinc as an interface layer, a standalone toolkit or a flexible way to develop new models.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE). * Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.