Model Monitoring & Drift Detection#

In this tutorial, we will use MLRun's model monitoring capabilities to deploy a model to a live endpoint and calculate data drift.

Make sure you have reviewed the basics in MLRun Quick Start Tutorial.

MLRun Installation and Configuration#

Before running this notebook, make sure MLRun is installed and that you have configured access to the MLRun service.

# install MLRun if not installed, run this only once (restart the notebook after the install !!!)
%pip install mlrun

Setup Project#

First, we will import the dependencies and create an MLRun project. The project will contain all of our models, functions, datasets, and other artifacts:

import mlrun
import pandas as pd

project = mlrun.get_or_create_project(name="tutorial", context="./", user_project=True)
> 2022-09-21 08:58:03,005 [info] loaded project tutorial from MLRun DB

Note: This tutorial will not focus on training a model, but rather will start from the point of already having a trained model with a corresponding training dataset.

We will log the following model file and dataset, then deploy the model and calculate data drift. The model is a RandomForestClassifier from sklearn, and the dataset is in CSV format.

model_path = mlrun.get_sample_path('models/model-monitoring/model.pkl')
training_set_path = mlrun.get_sample_path('data/model-monitoring/iris_dataset.csv')

Log Model with Training Data#

Next, we will log the model using MLRun experiment tracking. This is usually done in a training pipeline, but you can also bring in your pre-trained models from other sources. See Working with data and model artifacts and Automated experiment tracking for more information.
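For context, a model like this one could come out of a short training script. The sketch below is purely illustrative (the sample model.pkl used in this tutorial was trained separately and is simply downloaded above); it only shows the general shape of producing a pickled sklearn classifier:

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a classifier on the Iris dataset (illustrative only -- the
# sample model.pkl used in this tutorial was trained separately)
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Serialize the model the same way a training pipeline would before logging it
with open("model.pkl", "wb") as f:
    pickle.dump(clf, f)
```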

model_name = "RandomForestClassifier"
model_artifact = project.log_model(
    key=model_name,
    model_file=model_path,
    framework="sklearn",
    training_set=pd.read_csv(training_set_path),
    label_column="label",
)

# The model artifact's unique URI
model_artifact.uri

Import and Deploy Serving Function#

Then, we will import the model server function from the MLRun function hub. Additionally, we will mount the filesystem, add the model that was logged via experiment tracking, and enable drift detection.

The core line here is serving_fn.set_tracking(), which creates the required infrastructure behind the scenes to perform drift detection. See the Model monitoring overview for more information on what is deployed.

# Import the serving function from the function hub and mount the filesystem
serving_fn = mlrun.import_function('hub://v2_model_server', new_name="serving").apply(mlrun.auto_mount())

# Add the model to the serving function's routing spec
serving_fn.add_model(model_name, model_path=model_artifact.uri)

# Enable model monitoring
serving_fn.set_tracking()

Deploy Serving Function with Drift Detection#

Finally, we can deploy our serving function with drift detection enabled in a single line of code:

serving_fn.deploy()

> 2022-09-21 08:58:08,053 [info] Starting remote function deploy
2022-09-21 08:58:09  (info) Deploying function
2022-09-21 08:58:09  (info) Building
2022-09-21 08:58:10  (info) Staging files and preparing base images
2022-09-21 08:58:10  (info) Building processor image
2022-09-21 08:58:55  (info) Build complete
2022-09-21 08:59:03  (info) Function deploy complete
> 2022-09-21 08:59:04,232 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-tutorial-nick-serving.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['']}
DeployStatus(state=ready, outputs={'endpoint': '', 'name': 'tutorial-nick-serving'})

View Deployed Resources#

At this point, you should see the newly deployed model server as well as a model-monitoring-stream and a scheduled job (in yellow). The model-monitoring-stream will collect, process, and save the incoming requests to the model server. The scheduled job will do the actual drift calculation (by default, every hour).

Note: You will not see model-monitoring-batch jobs listed until they actually run (by default every hour)


Simulate Production Traffic#

Next, we will use the following code to simulate incoming production data using elements from the training set. Because the data comes from the same training set we logged, we do not expect any data drift.

Note: By default, the drift calculation will start via the scheduled hourly batch job after receiving 10,000 incoming requests

import json
import logging
from random import choice, uniform
from time import sleep

from tqdm import tqdm

# Suppress print messages
logging.getLogger(name="mlrun").setLevel(logging.WARNING)

# Get training set as list
iris_data = pd.read_csv(training_set_path).drop("label", axis=1).to_dict(orient="split")["data"]

# Simulate traffic using random elements from training set
for i in tqdm(range(12_000)):
    data_point = choice(iris_data)
    serving_fn.invoke(f'v2/models/{model_name}/infer', json.dumps({'inputs': [data_point]}))
# Resume normal logging
logging.getLogger(name="mlrun").setLevel(logging.INFO)
100%|██████████| 12000/12000 [06:45<00:00, 29.63it/s]
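Each `invoke` call above sends a V2-style inference request whose body is a JSON object with an `inputs` list of feature rows. As a minimal standalone sketch, this is how a single request body is built from a DataFrame (toy values here, not the real Iris data):

```python
import json

import pandas as pd

# A toy frame standing in for the Iris training set (hypothetical values)
df = pd.DataFrame(
    {"sepal_len": [5.1, 4.9], "sepal_wid": [3.5, 3.0], "label": [0, 0]}
)

# Drop the label column and keep only the raw feature rows, as done above
rows = df.drop("label", axis=1).to_dict(orient="split")["data"]

# The body sent for a single simulated request
body = json.dumps({"inputs": [rows[0]]})
print(body)  # {"inputs": [[5.1, 3.5]]}
```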

View Drift Calculations and Status#

Once data drift has been calculated, you can view it in the MLRun UI. This includes a high-level overview of the model status:


A more detailed view on model information and overall drift metrics:


As well as a view for feature-level distributions and drift metrics:


View Detailed Drift Dashboards#

Finally, there are also more detailed Grafana dashboards that show additional information on each model in the project:

For more information on accessing these dashboards, see Model monitoring using Grafana dashboards.


Graphs of individual features over time:


As well as drift and operational metrics over time: