neptune-client

📘 The experiment tracker for foundation model training

Neptune is a scalable experiment tracker for teams training foundation models. Log millions of runs, monitor and visualize model training, and deploy on your own infrastructure. Track 100% of your metadata to accelerate AI breakthroughs: log any framework and metadata type from any ML pipeline, organize experiments with nested structures and custom dashboards, compare results and optimize models quicker, version models and review stages, share results, manage users and projects, and integrate with 25+ frameworks.

README:

neptune.ai

Quickstart   •   Website   •   Docs   •   Examples   •   Resource center   •   Blog  

What is neptune.ai?

Neptune is the most scalable experiment tracker for teams that train foundation models.

Log millions of runs, view and compare them all in seconds. Effortlessly monitor and visualize months-long model training with multiple steps and branches.

Deploy Neptune on your infrastructure from day one, track 100% of your metadata, and get to the next big AI breakthrough faster.

Watch a 3min explainer video →  

Watch a 20min product demo →  

Play with a live example project in the Neptune app →  

Getting started

Step 1: Create a free account

Step 2: Install the Neptune client library

pip install neptune

Step 3: Add an experiment tracking snippet to your code

import neptune

# Connect to your project (the API token can also be set via the
# NEPTUNE_API_TOKEN environment variable)
run = neptune.init_run(project="workspace-name/project-name")

# Log hyperparameters and metrics as fields of the run
run["parameters"] = {"lr": 0.1, "dropout": 0.4}
run["test_accuracy"] = 0.84

# Wait for the data to sync and close the run
run.stop()

Open in Colab  

 

Core features

Log and display

Add a snippet to any step of your ML pipeline once. Decide what and how you want to log. Run a million times.

  • Any framework: any code, fastai, PyTorch, Lightning, TensorFlow/Keras, scikit-learn, 🤗 Transformers, XGBoost, Optuna.

  • Any metadata type: metrics, parameters, dataset and model versions, images, interactive plots, videos, hardware (GPU, CPU, memory), code state.

  • From anywhere in your ML pipeline: multinode pipelines, distributed computing, log during or after execution, log offline, and sync when you are back online.  
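The logging model behind these bullets is a single run object addressed with slash-separated field paths: assign once for single values, append repeatedly for step-by-step series. A minimal in-memory stand-in (a toy class, not the real client) sketches that shape:

```python
from collections import defaultdict

class ToyRun:
    """In-memory stand-in for a run object: single-value fields are
    assigned once; series fields accumulate appended values."""

    def __init__(self):
        self.fields = {}                 # e.g. "parameters/lr" -> 0.1
        self.series = defaultdict(list)  # e.g. "train/loss" -> [0.9, ...]

    def assign(self, path, value):
        self.fields[path] = value

    def append(self, path, value):
        self.series[path].append(value)

run = ToyRun()
run.assign("parameters/lr", 0.1)          # a hyperparameter, logged once
for loss in [0.9, 0.6, 0.4]:
    run.append("train/loss", loss)        # one point per training step

print(run.fields["parameters/lr"])        # 0.1
print(run.series["train/loss"])           # [0.9, 0.6, 0.4]
```

The same two call shapes cover metrics, parameters, file references, and hardware series; the client adds buffering and offline sync on top.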

 

[Screenshot: all metadata and metrics in one place]

 

Organize experiments

Organize logs in a fully customizable nested structure. Display model metadata in user-defined dashboard templates.

  • Nested metadata structure: the flexible API lets you customize the metadata logging structure however you want. Organize nested parameter configs or the results of k-fold validation splits the way they should be.

  • Custom dashboards: combine different metadata types in one view. Define it for one run, then use it anywhere. Look at GPU usage, memory consumption, and load times to debug training speed. See learning curves, image predictions, and confusion matrices to debug model quality.

  • Table views: create different views of the runs table and save them for later. You can have separate table views for debugging, comparing parameter sets, or browsing your best experiments.  
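The nested structure in the first bullet falls out of the slash-separated paths themselves: each path segment becomes a folder in the app. A small illustrative helper (not part of the client API) shows how flat paths expand into that tree:

```python
def to_tree(flat):
    """Expand slash-separated field paths into a nested folder
    structure (illustrative helper, not client API)."""
    tree = {}
    for path, value in flat.items():
        node = tree
        *parents, leaf = path.split("/")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return tree

# Hypothetical logged fields: nested parameter config + k-fold results
logged = {
    "params/optimizer/lr": 0.01,
    "params/optimizer/name": "adam",
    "cv/fold-0/accuracy": 0.81,
    "cv/fold-1/accuracy": 0.84,
}
tree = to_tree(logged)
print(tree["params"]["optimizer"])   # {'lr': 0.01, 'name': 'adam'}
```

This is why k-fold results stay grouped per fold in the app instead of flattening into one long list of fields.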

 

[Screenshot: organizing runs with custom dashboards]

 

Compare results

Visualize training live in the neptune.ai web app. See how different parameters and configs affect the results. Optimize models quicker.

  • Compare: learning curves, parameters, images, datasets.

  • Search, sort, and filter: experiments by any field you logged. Use our query language to filter runs based on parameter values, metrics, execution times, or anything else.

  • Visualize and display: runs table, interactive display, folder structure, dashboards.

  • Monitor live: hardware consumption metrics, GPU, CPU, memory.

  • Group by: dataset versions, parameters.  
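Conceptually, searching and sorting treat the runs table as a list of records keyed by the fields you logged. A self-contained sketch of that filter-then-sort workflow (run IDs and column names here are made up for illustration):

```python
# Toy runs table: one record per run, keyed by logged field paths
runs = [
    {"id": "RUN-1", "params/lr": 0.1,  "test_accuracy": 0.78},
    {"id": "RUN-2", "params/lr": 0.01, "test_accuracy": 0.84},
    {"id": "RUN-3", "params/lr": 0.01, "test_accuracy": 0.81},
]

# Filter by a parameter value, then sort by a metric, best first
matching = [r for r in runs if r["params/lr"] == 0.01]
best_first = sorted(matching, key=lambda r: r["test_accuracy"], reverse=True)

print([r["id"] for r in best_first])  # ['RUN-2', 'RUN-3']
```

In the app, the query language expresses the same predicates over any logged field, so the filter logic lives server-side rather than in your script.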

 

[Screenshot: comparing, searching, and filtering runs]

 

Version models

Version, review, and access production-ready models and metadata associated with them in a single place.
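Reviewing a model version typically means moving it through lifecycle stages until it is production-ready. A minimal sketch of that bookkeeping, with hypothetical stage names (the actual stage set is defined by your workflow):

```python
# Hypothetical lifecycle stages for a model version
STAGES = ["none", "staging", "production", "archived"]

class ModelVersion:
    """Toy model-version record tracking its review stage."""

    def __init__(self, name):
        self.name = name
        self.stage = "none"

    def change_stage(self, new_stage):
        if new_stage not in STAGES:
            raise ValueError(f"unknown stage: {new_stage}")
        self.stage = new_stage

mv = ModelVersion("classifier-v3")
mv.change_stage("staging")     # under review
mv.change_stage("production")  # approved for serving
print(mv.name, mv.stage)       # classifier-v3 production
```

Keeping the stage next to the version's logged metadata is what lets teams pull "the current production model" from one place.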

 

Share results

Have a single place where your team can see the results and access all models and experiments.

  • Send a link: share every chart, dashboard, table view, or anything else you see in the neptune.ai app by copying and sending persistent URLs.

  • Query API: access all model metadata via the neptune.ai API. Whatever you logged, you can query it the same way.

  • Manage users and projects: create different projects, add users to them, and grant different permission levels.

  • Add your entire org: you can collaborate with a team on every plan, even the Free one. So invite your entire organization, including product managers and subject matter experts, to increase visibility from the very beginning.  
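Persistent links work because a workspace, project, and run ID uniquely address what you are looking at. The URL shape below is an illustrative assumption, not the documented format:

```python
def run_url(workspace, project, run_id, base="https://app.neptune.ai"):
    """Build a shareable link to a run (illustrative URL shape,
    not necessarily the exact format the app uses)."""
    return f"{base}/{workspace}/{project}/e/{run_id}"

url = run_url("common", "pytorch-lightning-integration", "PTL-1")
print(url)  # https://app.neptune.ai/common/pytorch-lightning-integration/e/PTL-1
```

Because the address is stable, a link pasted in a code review or Slack thread keeps pointing at the same run months later.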

 

[Screenshot: sharing a persistent link]

 

Integrate with any MLOps stack

neptune.ai integrates with 25+ frameworks: PyTorch, Lightning, TensorFlow/Keras, LightGBM, scikit-learn, XGBoost, Optuna, Kedro, 🤗 Transformers, fastai, Prophet, detectron2, Airflow, and more.



PyTorch Lightning

Example:

from lightning.pytorch import Trainer
from lightning.pytorch.loggers import NeptuneLogger
from neptune import ANONYMOUS_API_TOKEN

# Create the NeptuneLogger instance
neptune_logger = NeptuneLogger(
    api_key=ANONYMOUS_API_TOKEN,  # replace with your own API token
    project="common/pytorch-lightning-integration",
    tags=["training", "resnet"],  # optional
)

# Pass the logger to the Trainer
trainer = Trainer(max_epochs=10, logger=neptune_logger)

# Run training
trainer.fit(my_model, my_dataloader)

[Screenshot: Neptune + PyTorch Lightning integration]

Open in Colab  

 

neptune.ai is trusted by great companies

 

Read how various customers use Neptune to improve their workflows.  

 

Support

If you get stuck or simply want to talk to us about something, here are your options:

 

People behind

Created with ❤️ by the neptune.ai team
