vertex-ai-mlops
Google Cloud Platform Vertex AI end-to-end workflows for machine learning operations
Stars: 611
Vertex AI is a platform for end-to-end model development. It consists of core components that make MLOps processes possible for design patterns of all types.
README:
2025 UPDATE: The repository is progressing toward an MLOps-focused approach for predictive and generative AI operations.
The main focus of the repository is new content developed in:
- MLOps
- Frameworks
- Applied *
  - mostly Applied GenAI and Applied Forecasting
What to expect in 2025:
- Much more MLOps content in the `/MLOps` folder
- A detailed review of GenAI tooling in the `Applied GenAI` folder
- Many framework specific workflows in the `Frameworks` folder
- Migration of all the numbered folders to these main folders
- Collapsing all the Applied * folders into a single 'Applied Workflows' folder with subfolders by topic: GenAI, Forecasting, ...
- A complete rewrite of this readme.md file
- Content that is deemed stale is moved into subfolders named `legacy`, and links from/to are updated.
2024 UPDATE: This repository is evolving from end-to-end workflows for various frameworks into an MLOps-focused approach for the development of predictive and generative AI operations. The new approach is being developed in the MLOps folder. Once it nears completion, the content in this repository will be rearranged into the following structure:
- MLOps
  - Pipelines
  - Experiments
  - Feature Store
  - Model Monitoring
  - ...
- Applied Examples
  - Forecasting
  - GenAI
  - ...
- Framework Workflows
  - BigQuery ML
  - TensorFlow
  - scikit-learn
  - ...
- ...
This is the original readme from before the shift in this repository. After the content rearrangement is complete and this information is incorporated above, it will be removed.
I want to share and enable Vertex AI from Google Cloud with you. The goal here is to share a comprehensive set of end-to-end workflows for machine learning that each cover the range of data to model to serving and managing - even automating the flow. Regardless of your data type, skill level or framework preferences you will find something helpful here. You can even ask for what you need and I might be able to work it into updates!
Click to watch on YouTube
Click here to see current playlist for this repository
To better understand which content is most helpful to users, this repository uses tracking pixels in each markdown (.md) and notebook (.ipynb) file. No user or location data is collected. The only information captured is that the content was rendered/viewed, which gives us a daily count of usage. Please share any concerns you have with this in the repository's discussion board, and I am happy to also provide a branch without the tracking.
A script is provided to remove this tracking from your local copy of this repository: the file pixel_remove.py in the folder pixel. This readme also has the complete code for creating the tracking in case you want to replicate it or just understand it in greater detail.
This repository is presented as workflows using, primarily, interactive Python notebooks (.ipynb). Why? These are easy to review, share, and move. They contain elements for both code and narrative. The narrative can be written with plain text, Markdown and/or HTML, which makes providing visual explanations easy. This reinforces the goal of this repository: information that is easily accessible, portable, and great as starting points for your own work.
In notebooks, execution is driven from the locally attached compute. In this repository that means the Python code is currently running in the notebook's compute. The code in this repository leans heavily on orchestrating services in GCP rather than doing data computation in the environment local to the notebook. That means these notebooks are designed to run on minimal machine sizes, even n1-standard-2. The heavy work of training and serving is done on Vertex AI, BigQuery, and other Google Cloud services. You will even find notebooks that author code and then deploy that code in services like Vertex AI Custom Training and Vertex AI Pipelines.
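As a hedged illustration of that orchestration pattern (the project, script path, display name, and container image below are placeholders, not values from this repository), authoring a training script in the notebook and handing the heavy lifting to Vertex AI Custom Training with the `aiplatform` client looks roughly like this:

```python
from google.cloud import aiplatform

aiplatform.init(project='your-project-id', location='us-central1')  # placeholders

# The notebook only authors and submits the job; training runs on Vertex AI managed compute.
job = aiplatform.CustomTrainingJob(
    display_name='example-training-job',   # hypothetical job name
    script_path='trainer/task.py',         # hypothetical script authored from the notebook
    container_uri='us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12:latest',  # example prebuilt training image; check the current list
    requirements=['pandas'],
)

job.run(
    machine_type='n1-standard-4',  # the heavy compute happens here, not on the notebook VM
    replica_count=1,
)
```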
There are sections that use other languages, like R, as well as creating files that are external to the notebooks: dockerfile, .py scripts and modules, etc.
The code in this repository is opinionated. It is neither completely production-ready nor simply ad-hoc exploration. It aims to sit toward the right of the continuum from exploration to deployment: 'hello-world' to CI/CD/CT. In our daily data science work we might think of the process as:
In explore, everything is code as you go. At some point in this exploration ideas find value and need to be developed.
In develop, the approach is usually something like:
- make it work
  - get a working end-to-end flow
- clean it up
  - revisit the code, remove parts that are no longer needed, and reorder based on what was learned
- generalize it (see the sketch after this list)
  - parameterize
  - use functions
  - control flow: start using logic to check for out-of-bound conditions
- optimize it
  - better use of data structures to handle data usage during execution
  - consider execution timing and optimize for the simultaneous goals of readability (= maintainability) and compute time
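As a minimal, hypothetical sketch of the 'generalize it' step (the function, parameters, and bounds check below are illustrative and not taken from the repository), an exploratory one-liner like `rate = total / count` might become:

```python
def compute_rate(total: float, count: int, default: float = 0.0) -> float:
    """Parameterized version of an exploratory one-liner, wrapped in a function."""
    # Control flow: guard against the out-of-bound condition (count <= 0)
    # that code-as-you-go exploration typically ignores.
    if count <= 0:
        return default
    return total / count

print(compute_rate(12.0, 4))   # 3.0
print(compute_rate(12.0, 0))   # returns the default instead of raising ZeroDivisionError
```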
In many cases, getting from development to deployment is simple:
- schedule a notebook - a lot like skipping the develop stage
- deploy a pipeline
- create a cloud function
But, inevitably, as a workflow proves value it requires more effort before you deploy:
- error handling
- unit testing
- move from specialized code to generalized code:
  - use classes
  - control environment handling
So where does the code in the repository fall? In the late develop phase, with strong readability and adaptability.
- Tables: Tabular, structured data in rows and columns
- Language: Text for translation and/or understanding
- Vision: Images
- Video
- Use Pre-Trained APIs
- Automate building Custom Models
- End-to-end Custom ML with core tools in the framework of your choice
This is a series of workflow demonstrations that use the same data source to build and deploy the same machine learning model with different frameworks and automation. These are meant to help get started in understanding and learning Vertex AI and provide starting points for new projects.
The demonstrations focus on workflows and don't delve into the specifics of ML frameworks other than how to integrate and automate with Vertex AI. Let me know if you have ideas for more workflows or details to include!
To understand the contents of this repository, the following charts outline the groupings of the content.
*(Chart: Direction)*
**Pre-Trained Models**

| Data Type | Pre-Trained Model | Prediction Types | Related Solutions |
|---|---|---|---|
| Text | Cloud Translation API | Detect, Translate | Cloud Text-to-Speech, AutoML Translation |
| Text | Cloud Natural Language API | Entities (Identify and label), Sentiment, Entity Sentiment, Syntax, Content Classification | Healthcare Natural Language API, AutoML Text |
| Image | Cloud Vision API | Crop Hint, OCR, Face Detect, Image Properties, Label Detect, Landmark Detect, Logo Detect, Object Localization, Safe Search, Web Detect | AutoML Image |
| Audio | Cloud Media Translation API | Real-time speech translation | Cloud Speech-to-Text |
| Video | Cloud Video Intelligence API | Label Detect*, Shot Detect*, Explicit Content Detect*, Speech Transcription, Object Tracking*, Text Detect, Logo Detect, Face Detect, Person Detect, Celebrity Recognition | Vertex AI Vision, AutoML Video |
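To make the pre-trained rows above concrete, here is a minimal sketch of calling one of these APIs from Python. It uses the Cloud Natural Language client (`pip install google-cloud-language`); the sample text is arbitrary and the API must be enabled in your project:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="Vertex AI makes MLOps workflows much easier to manage.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Sentiment is one of the prediction types listed for the Cloud Natural Language API above.
response = client.analyze_sentiment(document=document)
print(response.document_sentiment.score, response.document_sentiment.magnitude)
```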
**AutoML**

| Data Type | AutoML | Prediction Types |
|---|---|---|
| Table | AutoML Tables | |
| Image | AutoML Image | |
| Video | AutoML Video | |
| Text | AutoML Text | |
| Text | AutoML Translation | Translation |
This work focuses on cases where you have training data:
*(Chart: Overview)*

*(Charts: AutoML | BigQuery ML | Vertex AI | Forecasting with AutoML, BigQuery ML, OSS Prophet)*
Vertex AI is a platform for end-to-end model development. It consists of core components that make MLOps processes possible for design patterns of all types.
Many Vertex AI resources can be viewed and monitored directly in the GCP Console. Vertex AI resources are primarily created and modified with the Vertex AI API.
The API is accessible from:
- the command line with `gcloud ai`
- REST
- gRPC
- the client libraries (built on top of gRPC) for several languages, including Python
The notebooks in this repository primarily use the Python client aiplatform. There is occasional use of aiplatform.gapic, aiplatform_v1 and aiplatform_v1beta1.
For full details on the API versions and layers and how/when to use each, see this helpful note.
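As a hedged sketch of the difference between these layers (the project and region values are placeholders), the listing example in the next section uses the high-level `aiplatform` client; the lower-level `aiplatform_v1` GAPIC layer exposes the same operation more verbosely, roughly like this:

```python
from google.cloud import aiplatform_v1

PROJECT = 'your-project-id'   # placeholder
REGION = 'us-central1'

# The GAPIC layer requires the regional API endpoint and fully qualified resource names.
client = aiplatform_v1.ModelServiceClient(
    client_options={'api_endpoint': f'{REGION}-aiplatform.googleapis.com'}
)

for model in client.list_models(parent=f'projects/{PROJECT}/locations/{REGION}'):
    print(model.display_name)
```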
Install the Vertex AI Python Client
```shell
pip install google-cloud-aiplatform
```

Example Usage: Listing all Models in Vertex AI Model Registry

```python
PROJECT = 'statmike-mlops-349915'
REGION = 'us-central1'

# List all models for project in region with: aiplatform
from google.cloud import aiplatform
aiplatform.init(project = PROJECT, location = REGION)
model_list = aiplatform.Model.list()
```

The demonstrations are presented in a series of notebooks that are best run in JupyterLab. These can be reviewed directly in this repository on GitHub or cloned to your Jupyter instance on Vertex AI Workbench Instances.
Select the files and review them directly in the browser or IDE of your choice. This can be helpful for general understanding and for selecting sections to copy/paste into your project. Some options to get a local copy of this repository's content:
- use git: `git clone https://github.com/statmike/vertex-ai-mlops`
- use `wget` to copy individual files directly from GitHub (a Python alternative is sketched after this list):
  - Go to the notebook on GitHub.com and right-click the download link. Then select copy link address.
  - Alternatively, click the Raw button on GitHub and then copy the URL that loads.
  - Run the following from a notebook cell or directly from a terminal (without the !). Note the slightly different address that points directly to raw content on GitHub.
    - `!wget "https://raw.githubusercontent.com/statmike/vertex-ai-mlops/main/<path and filename>.ipynb"`
- Use Colab (and soon Vertex AI Enterprise Colab) to open the notebooks. Many of the notebooks have a section at the top with buttons for opening directly in Colab. Some notebooks don't yet have this feature and some use local Docker, which is not available on Colab.
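If you would rather fetch a file from inside Python than shell out to `wget`, a minimal standard-library sketch (substitute the raw-content URL you copied above; the output filename is arbitrary):

```python
import urllib.request

# Placeholder: replace with the raw.githubusercontent.com URL for the notebook you want.
url = "https://raw.githubusercontent.com/statmike/vertex-ai-mlops/main/<path and filename>.ipynb"

urllib.request.urlretrieve(url, "downloaded_notebook.ipynb")
print("saved downloaded_notebook.ipynb")
```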
TL;DR
- In the Google Cloud Console, select/create a Project, then go to Vertex AI > Workbench > Instances
- Create a new notebook and Open JupyterLab
- Clone this repository using the Git menu, then open and run `00 - Environment Setup.ipynb`
- Create a Project
  - Link
  - Alternatively, go to: Console > IAM & Admin > Manage Resources
    - Click "+ Create Project"
    - Provide: name, billing account, organization, location
    - Click "Create"
- Enable the APIs: Vertex AI API and Notebooks API
  - Link
  - Alternatively, go to:
    - Console > Vertex AI, then enable the API
    - Then Console > Vertex AI > Workbench, then enable the API
- Create a Notebook with Vertex AI Workbench Instances:
  - Go to: Console > Vertex AI > Workbench > Instances - direct link
  - Create a new instance - instructions
  - Once it is started, click the `Open JupyterLab` link
- Clone this repository to the JupyterLab instance:
  - Either:
    - Go to the `Git` menu and choose `Clone a Repository`, or
    - Choose the Git icon on the left toolbar and click `Clone a Repository`
  - Provide the Clone URI of this repository: https://github.com/statmike/vertex-ai-mlops.git
  - In the File Browser you will now have the folder "vertex-ai-mlops" that contains the files from this repository
- Setup the Notebook Environment for these workflows
  - Open the notebook vertex-ai-mlops/00 - Environment Setup
  - Follow the instructions and run the cells
Resources on these items:
- Google Cloud Projects
- Vertex AI environment
- Introduction to Vertex AI Workbench
- Create a Vertex AI Workbench Instance
- Learning Machine Learning
  - I often get asked "How do I learn about ML?". There are lots of good answers. ....
- Explorations
  - This is a series of projects for exploring new, new-to-me, and emerging tools in the ML world!
- Tips
  - Tips for using the repository and notebooks with examples of core skills like building containers, parameterizing jobs, and interacting with other GCP services. These tips help with scaling jobs and developing them with a focus on CI/CD.
This is my personal repository of demonstrations I use for learning and sharing Vertex AI. There are many more resources available. Within each notebook I have included a resources section and a related training section.
Alternative AI tools for vertex-ai-mlops
Similar Open Source Tools
vertex-ai-mlops
Vertex AI is a platform for end-to-end model development. It consists of core components that make MLOps processes possible for design patterns of all types.
cognee
Cognee is an open-source framework designed for creating self-improving deterministic outputs for Large Language Models (LLMs) using graphs, LLMs, and vector retrieval. It provides a platform for AI engineers to enhance their models and generate more accurate results. Users can leverage Cognee to add new information, utilize LLMs for knowledge creation, and query the system for relevant knowledge. The tool supports various LLM providers and offers flexibility in adding different data types, such as text files or directories. Cognee aims to streamline the process of working with LLMs and improving AI models for better performance and efficiency.
RD-Agent
RD-Agent is a tool designed to automate critical aspects of industrial R&D processes, focusing on data-driven scenarios to streamline model and data development. It aims to propose new ideas ('R') and implement them ('D') automatically, leading to solutions of significant industrial value. The tool supports scenarios like Automated Quantitative Trading, Data Mining Agent, Research Copilot, and more, with a framework to push the boundaries of research in data science. Users can create a Conda environment, install the RDAgent package from PyPI, configure GPT model, and run various applications for tasks like quantitative trading, model evolution, medical prediction, and more. The tool is intended to enhance R&D processes and boost productivity in industrial settings.
transformers
Transformers is a state-of-the-art pretrained models library that acts as the model-definition framework for machine learning models in text, computer vision, audio, video, and multimodal tasks. It centralizes model definition for compatibility across various training frameworks, inference engines, and modeling libraries. The library simplifies the usage of new models by providing simple, customizable, and efficient model definitions. With over 1M+ Transformers model checkpoints available, users can easily find and utilize models for their tasks.
EmbodiedScan
EmbodiedScan is a holistic multi-modal 3D perception suite designed for embodied AI. It introduces a multi-modal, ego-centric 3D perception dataset and benchmark for holistic 3D scene understanding. The dataset includes over 5k scans with 1M ego-centric RGB-D views, 1M language prompts, 160k 3D-oriented boxes spanning 760 categories, and dense semantic occupancy with 80 common categories. The suite includes a baseline framework named Embodied Perceptron, capable of processing multi-modal inputs for 3D perception tasks and language-grounded tasks.
liboai
liboai is a simple C++17 library for the OpenAI API, providing developers with access to OpenAI endpoints through a collection of methods and classes. It serves as a spiritual port of OpenAI's Python library, 'openai', with similar structure and features. The library supports various functionalities such as ChatGPT, Audio, Azure, Functions, Image DALL·E, Models, Completions, Edit, Embeddings, Files, Fine-tunes, Moderation, and Asynchronous Support. Users can easily integrate the library into their C++ projects to interact with OpenAI services.
Alpaca
Alpaca is an Ollama client for managing and chatting with multiple AI models. It offers a user-friendly way to interact with local AI and third-party providers like Gemini and ChatGPT. The open-source tool supports features such as multiple model conversations, image and document recognition, code highlighting, notifications, import/export chats, and more.
modelscope-agent
ModelScope-Agent is a customizable and scalable Agent framework. A single agent has abilities such as role-playing, LLM calling, tool usage, planning, and memory. It mainly has the following characteristics: - **Simple Agent Implementation Process**: Simply specify the role instruction, LLM name, and tool name list to implement an Agent application. The framework automatically arranges workflows for tool usage, planning, and memory. - **Rich models and tools**: The framework is equipped with rich LLM interfaces, such as Dashscope and Modelscope model interfaces, OpenAI model interfaces, etc. Built in rich tools, such as **code interpreter**, **weather query**, **text to image**, **web browsing**, etc., make it easy to customize exclusive agents. - **Unified interface and high scalability**: The framework has clear tools and LLM registration mechanism, making it convenient for users to expand more diverse Agent applications. - **Low coupling**: Developers can easily use built-in tools, LLM, memory, and other components without the need to bind higher-level agents.
glide
Glide is a cloud-native LLM gateway that provides a unified REST API for accessing various large language models (LLMs) from different providers. It handles LLMOps tasks such as model failover, caching, key management, and more, making it easy to integrate LLMs into applications. Glide supports popular LLM providers like OpenAI, Anthropic, Azure OpenAI, AWS Bedrock (Titan), Cohere, Google Gemini, OctoML, and Ollama. It offers high availability, performance, and observability, and provides SDKs for Python and NodeJS to simplify integration.
airflint
Airflint is a tool designed to enforce best practices for all your Airflow Directed Acyclic Graphs (DAGs). It is currently in the alpha stage and aims to help users adhere to recommended practices when working with Airflow. Users can install Airflint from PyPI and integrate it into their existing Airflow environment to improve DAG quality. The tool provides rules for function-level imports and jinja template syntax usage, among others, to enhance the development process of Airflow DAGs.
Scrapegraph-demo
ScrapeGraphAI is a web scraping Python library that utilizes LangChain, LLM, and direct graph logic to create scraping pipelines. Users can specify the information they want to extract, and the library will handle the extraction process. This repository contains an official demo/trial for the ScrapeGraphAI library, showcasing its capabilities in web scraping tasks. The tool is designed to simplify the process of extracting data from websites by providing a user-friendly interface and powerful scraping functionalities.
AgentBench
AgentBench is a benchmark designed to evaluate Large Language Models (LLMs) as autonomous agents in various environments. It includes 8 distinct environments such as Operating System, Database, Knowledge Graph, Digital Card Game, and Lateral Thinking Puzzles. The tool provides a comprehensive evaluation of LLMs' ability to operate as agents by offering Dev and Test sets for each environment. Users can quickly start using the tool by following the provided steps, configuring the agent, starting task servers, and assigning tasks. AgentBench aims to bridge the gap between LLMs' proficiency as agents and their practical usability.
cocoindex
CocoIndex is the world's first open-source engine that supports both custom transformation logic and incremental updates specialized for data indexing. Users declare the transformation, CocoIndex creates & maintains an index, and keeps the derived index up to date based on source update, with minimal computation and changes. It provides a Python library for data indexing with features like text embedding, code embedding, PDF parsing, and more. The tool is designed to simplify the process of indexing data for semantic search and structured information extraction.
yolo-flutter-app
Ultralytics YOLO for Flutter is a Flutter plugin that allows you to integrate Ultralytics YOLO computer vision models into your mobile apps. It supports both Android and iOS platforms, providing APIs for object detection and image classification. The plugin leverages Flutter Platform Channels for seamless communication between the client and host, handling all processing natively. Before using the plugin, you need to export the required models in `.tflite` and `.mlmodel` formats. The plugin provides support for tasks like detection and classification, with specific instructions for Android and iOS platforms. It also includes features like camera preview and methods for object detection and image classification on images. Ultralytics YOLO thrives on community collaboration and offers different licensing paths for open-source and commercial use cases.
superduperdb
SuperDuperDB is a Python framework for integrating AI models, APIs, and vector search engines directly with your existing databases, including hosting of your own models, streaming inference, and scalable model training/fine-tuning. Build, deploy, and manage any AI application without the need for complex pipelines, specialized infrastructure or vector databases, or moving your data elsewhere, by integrating AI at your data's source: generative AI, LLMs, RAG, vector search; standard machine learning use-cases (classification, segmentation, regression, forecasting, recommendation, etc.); custom AI use-cases involving specialized models; even the most complex applications/workflows in which different models work together. SuperDuperDB is **not** a database. Think `db = superduper(db)`: SuperDuperDB transforms your databases into an intelligent platform that allows you to leverage the full AI and Python ecosystem. A single development and deployment environment for all your AI applications in one place, fully scalable and easy to manage.
amplication
Amplication is a robust, open-source development platform designed to revolutionize the creation of scalable and secure .NET and Node.js applications. It automates backend applications development, ensuring consistency, predictability, and adherence to the highest standards with code that's built to scale. The user-friendly interface fosters seamless integration of APIs, data models, databases, authentication, and authorization. Built on a flexible, plugin-based architecture, Amplication allows effortless customization of the code and offers a diverse range of integrations. With a strong focus on collaboration, Amplication streamlines team-oriented development, making it an ideal choice for groups of all sizes, from startups to large enterprises. It enables users to concentrate on business logic while handling the heavy lifting of development. Experience the fastest way to develop .NET and Node.js applications with Amplication.
For similar tasks
vertex-ai-mlops
Vertex AI is a platform for end-to-end model development. It consists of core components that make MLOps processes possible for design patterns of all types.
ai-on-gke
This repository contains assets related to AI/ML workloads on Google Kubernetes Engine (GKE). Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities. A robust AI/ML platform considers the following layers: Infrastructure orchestration that support GPUs and TPUs for training and serving workloads at scale Flexible integration with distributed computing and data processing frameworks Support for multiple teams on the same infrastructure to maximize utilization of resources
ray
Ray is a unified framework for scaling AI and Python applications. It consists of a core distributed runtime and a set of AI libraries for simplifying ML compute, including Data, Train, Tune, RLlib, and Serve. Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations. With Ray, you can seamlessly scale the same code from a laptop to a cluster, making it easy to meet the compute-intensive demands of modern ML workloads.
labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.
djl
Deep Java Library (DJL) is an open-source, high-level, engine-agnostic Java framework for deep learning. It is designed to be easy to get started with and simple to use for Java developers. DJL provides a native Java development experience and allows users to integrate machine learning and deep learning models with their Java applications. The framework is deep learning engine agnostic, enabling users to switch engines at any point for optimal performance. DJL's ergonomic API interface guides users with best practices to accomplish deep learning tasks, such as running inference and training neural networks.
mlflow
MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc.), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud). MLflow's current components are:
- MLflow Tracking
tt-metal
TT-NN is a Python & C++ neural network op library. It provides a low-level programming model, TT-Metalium, enabling kernel development for Tenstorrent hardware.
burn
Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
For similar jobs
lollms-webui
LoLLMs WebUI (Lord of Large Language Multimodal Systems: One tool to rule them all) is a user-friendly interface to access and utilize various LLM (Large Language Models) and other AI models for a wide range of tasks. With over 500 AI expert conditionings across diverse domains and more than 2500 fine tuned models over multiple domains, LoLLMs WebUI provides an immediate resource for any problem, from car repair to coding assistance, legal matters, medical diagnosis, entertainment, and more. The easy-to-use UI with light and dark mode options, integration with GitHub repository, support for different personalities, and features like thumb up/down rating, copy, edit, and remove messages, local database storage, search, export, and delete multiple discussions, make LoLLMs WebUI a powerful and versatile tool.
Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.
minio
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
mage-ai
Mage is an open-source data pipeline tool for transforming and integrating data. It offers an easy developer experience, engineering best practices built-in, and data as a first-class citizen. Mage makes it easy to build, preview, and launch data pipelines, and provides observability and scaling capabilities. It supports data integrations, streaming pipelines, and dbt integration.
AiTreasureBox
AiTreasureBox is a versatile AI tool that provides a collection of pre-trained models and algorithms for various machine learning tasks. It simplifies the process of implementing AI solutions by offering ready-to-use components that can be easily integrated into projects. With AiTreasureBox, users can quickly prototype and deploy AI applications without the need for extensive knowledge in machine learning or deep learning. The tool covers a wide range of tasks such as image classification, text generation, sentiment analysis, object detection, and more. It is designed to be user-friendly and accessible to both beginners and experienced developers, making AI development more efficient and accessible to a wider audience.
tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.
airbyte
Airbyte is an open-source data integration platform that makes it easy to move data from any source to any destination. With Airbyte, you can build and manage data pipelines without writing any code. Airbyte provides a library of pre-built connectors that make it easy to connect to popular data sources and destinations. You can also create your own connectors using Airbyte's no-code Connector Builder or low-code CDK. Airbyte is used by data engineers and analysts at companies of all sizes to build and manage their data pipelines.
labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.













