Enable model monitoring


This is currently a beta feature.

To see tracking results, model monitoring needs to be enabled in each model.

To utilize drift measurement, supply the train set in the training step.

Enabling model monitoring

Model activities can be tracked into a real-time stream and a time-series DB. The monitoring data is used to create real-time dashboards and to track model accuracy and drift. To set the tracking stream options, specify the following function spec attributes:

fn.set_tracking(stream_path, batch, sample)

  • stream_path

    • Enterprise: the v3io stream path (e.g. v3io:///users/..)

    • CE: a valid Kafka stream (e.g. kafka://kafka.default.svc.cluster.local:9092)

  • sample — optional, sample every N requests

  • batch — optional, send micro-batches every N requests
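The `batch` and `sample` options trade monitoring fidelity for overhead. A pure-Python sketch of these semantics (illustrative only; this is not mlrun's implementation):

```python
def track(events, batch=3, sample=2):
    """Keep every `sample`-th event, then flush in micro-batches of `batch`.

    Illustrative sketch of the batch/sample semantics only; mlrun's
    actual stream writer is not implemented this way.
    """
    sampled = events[::sample]  # sample every N requests
    return [sampled[i : i + batch] for i in range(0, len(sampled), batch)]


print(track(list(range(10))))  # [[0, 2, 4], [6, 8]]
```

With `sample=2`, only every second request is recorded; with `batch=3`, the sampled events are flushed to the stream three at a time.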

Model monitoring demo

Use the following code to test and explore model monitoring.

# Set project name
project_name = "demo-project"

Deploy model servers

Use the following code to deploy a model server on the Iguazio platform.

import os
import pandas as pd
from sklearn.datasets import load_iris
import sys

import mlrun
from mlrun import import_function, get_dataitem, get_or_create_project
from mlrun.platforms import auto_mount

project = get_or_create_project(project_name, context="./")

# Download the pre-trained Iris model
# We choose the correct model to avoid pickle warnings
suffix = (
    mlrun.__version__.split("-")[0].replace(".", "_")
    if sys.version_info[1] > 9
    else "3.9"
)
model_path = mlrun.get_sample_path(f"models/model-monitoring/model-{suffix}.pkl")
get_dataitem(model_path).download("model.pkl")


iris = load_iris()
train_set = pd.DataFrame(
    iris["data"],
    columns=["sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm"],
)

# Import the serving function from the Function Hub
serving_fn = import_function("hub://v2_model_server", project=project_name).apply(
    auto_mount()
)

model_name = "RandomForestClassifier"

# Log the model through the projects API so that it is available through the feature store API
project.log_model(model_name, model_file="model.pkl", training_set=train_set)

# Add the model to the serving function's routing spec
serving_fn.add_model(
    model_name, model_path=f"store://models/{project_name}/{model_name}:latest"
)

# Enable model monitoring
serving_fn.set_tracking()

# Deploy the function
serving_fn.deploy()
Simulating requests

Use the following code to simulate production data.

import json
from time import sleep
from random import choice, uniform

iris_data = iris["data"].tolist()

while True:
    data_point = choice(iris_data)
    serving_fn.invoke(
        f"v2/models/{model_name}/infer", json.dumps({"inputs": [data_point]})
    )
    sleep(uniform(0.2, 1.7))
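Each request body sent to the `infer` endpoint is a JSON object with an `inputs` list holding one or more feature vectors. A stdlib-only check of the payload shape (the feature values are illustrative):

```python
import json

# One Iris sample: sepal length/width, petal length/width (illustrative values)
data_point = [5.1, 3.5, 1.4, 0.2]

payload = json.dumps({"inputs": [data_point]})
assert json.loads(payload)["inputs"] == [[5.1, 3.5, 1.4, 0.2]]
print(payload)  # {"inputs": [[5.1, 3.5, 1.4, 0.2]]}
```

Because `inputs` is a list of vectors, several data points can be batched into a single request by appending more vectors to it.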