Enable model monitoring (beta)

To see tracking results, model monitoring needs to be enabled for each model.

To enable model monitoring, include serving_fn.set_tracking() in the model server.

To utilize drift measurement, supply the training set when logging the model (the training_set parameter of log_model in the demo below).
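Conceptually, drift measurement compares the distribution of live inputs against the supplied training set. The following is an illustrative, self-contained sketch of a per-feature drift score (total variation distance over shared histogram bins) — it is not MLRun's implementation, just a picture of what the comparison measures:

```python
import numpy as np

def tv_drift(train_col, live_col, bins=10):
    """Total variation distance between two samples of one feature.
    0 = identical distributions, 1 = completely disjoint."""
    lo = min(train_col.min(), live_col.min())
    hi = max(train_col.max(), live_col.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(train_col, bins=edges)
    q, _ = np.histogram(live_col, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 1000)          # stand-in for a training-set column
live_ok = rng.normal(0, 1, 1000)        # production data, same distribution
live_drifted = rng.normal(2, 1, 1000)   # production data, shifted mean

print(tv_drift(train, live_ok))         # small score: no drift
print(tv_drift(train, live_drifted))    # large score: drift detected
```

MLRun computes richer statistics per feature, but the principle is the same: the logged training set serves as the reference distribution that live inputs are compared against.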

Model monitoring demo

Use the following code blocks to test and explore model monitoring.

# Set project name
project_name = "demo-project"

Deploy model servers

Use the following code to deploy a model server in the Iguazio instance.

import pandas as pd
from sklearn.datasets import load_iris

from mlrun import import_function, get_or_create_project
from mlrun.platforms import auto_mount

project = get_or_create_project(project_name, context="./")

# Prepare the Iris training set; a pre-trained model file (model.pkl)
# is assumed to be available in the working directory

iris = load_iris()
train_set = pd.DataFrame(iris['data'],
                         columns=['sepal_length_cm', 'sepal_width_cm',
                                  'petal_length_cm', 'petal_width_cm'])

# Import the serving function from the function hub
serving_fn = import_function('hub://v2_model_server', project=project_name).apply(auto_mount())

model_name = "RandomForestClassifier"

# Log the model through the projects API so that it is available through the feature store API
project.log_model(model_name, model_file="model.pkl", training_set=train_set)

# Add the model to the serving function's routing spec
serving_fn.add_model(model_name, model_path=f"store://models/{project_name}/{model_name}:latest")

# Enable model monitoring
serving_fn.set_tracking()

# Deploy the function
serving_fn.deploy()

Simulate requests

Use the following code to simulate production data.

import json
from time import sleep
from random import choice, uniform

iris_data = iris['data'].tolist()

# Continuously send random Iris samples to the model endpoint
# (runs until interrupted)
while True:
    data_point = choice(iris_data)
    serving_fn.invoke(f'v2/models/{model_name}/infer', json.dumps({'inputs': [data_point]}))
    sleep(uniform(0.2, 1.7))
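The invoke call above sends requests in the v2 model-serving format: a JSON body whose inputs field holds a batch of feature vectors. The payload shape can be checked without a running server (sample values below are a hypothetical Iris data point):

```python
import json

# One Iris sample: sepal length/width and petal length/width, in cm
data_point = [5.1, 3.5, 1.4, 0.2]

# Same body shape the simulation loop sends to v2/models/<name>/infer
payload = json.dumps({'inputs': [data_point]})
print(payload)  # {"inputs": [[5.1, 3.5, 1.4, 0.2]]}
```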