Model Monitoring Overview (Beta)

Note

Model Monitoring is based on Iguazio’s streaming technology. Contact Iguazio to enable this feature.

Introduction

MLRun provides a model monitoring service that tracks the performance of models in production to help identify potential issues with concept drift and prediction accuracy before they impact business goals. Typically, model monitoring is used by DevOps to track model performance and by data scientists to track model drift. Two monitoring types are supported:

  1. Model operational performance (latency, requests per second, etc.)

  2. Drift detection—identifies potential issues with the model. See Drift Analysis for more details.

Model Monitoring provides warning alerts that can be sent to stakeholders for processing.

The Model Monitoring data can be viewed using Iguazio’s user interface or through Grafana dashboards. Grafana is an interactive, web-based visualization tool that can be added as a service in the Iguazio platform. See Model Monitoring Using Grafana Dashboards for more details.

Architecture

The Model Monitoring process flow starts with collecting operational data. The operational data is converted to vectors, which are posted to the Model Server. The Model Server wraps a machine learning model and uses a function to calculate predictions based on the available vectors. Next, the Model Server logs the input and output of each vector, and the entries are written to the production data stream (a v3io stream). While the Model Server is processing the vectors, a Nuclio function monitors the data stream and is triggered when a new log entry is detected. The Nuclio function examines the log entry and processes it into statistics, which are then written to the statistics databases (a parquet file, a time series database, and a key-value database). In parallel, a scheduled MLRun job reads the parquet files and performs drift analysis. The drift analysis data is stored so that the user can retrieve it in the Iguazio UI or in a Grafana dashboard.
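For orientation, this flow is enabled from the MLRun side when a serving function is deployed with tracking turned on. The following is a minimal sketch, assuming an MLRun version whose serving functions expose set_tracking(); the project name, file name, class name, and model URI are placeholders:

    import mlrun

    # Placeholder project; in practice use your own project name and context.
    project = mlrun.get_or_create_project("my-project", context="./")

    # serving.py is assumed to define a model class (here called "ClassifierModel").
    serving_fn = mlrun.code_to_function(
        name="model-serving",
        filename="serving.py",
        kind="serving",
        image="mlrun/mlrun",
    )
    serving_fn.add_model(
        "my-model",
        model_path="store://models/my-project/my-model:latest",
        class_name="ClassifierModel",
    )

    # Enable model monitoring: inputs and outputs are logged to the production
    # data stream described above and picked up by the monitoring Nuclio function.
    serving_fn.set_tracking()
    serving_fn.deploy()

Once requests start flowing to the deployed endpoint, the statistics and drift analysis described in the following sections are produced automatically.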

Architecture

Drift Analysis

The Model Monitoring feature provides drift analysis monitoring. Model drift in machine learning is a situation where the statistical properties of the target variable (what the model is trying to predict) change over time. In other words, the production data has changed significantly over time and no longer matches the input data used to train the model, so the accuracy of the model’s predictions on this new data is low. Drift analysis statistics are computed once an hour. For more information see Concept Drift.
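The drift status shown in the screens below (no drift, possible drift, drift detected) is derived from distance metrics between the training data distribution and the production data distribution (see Common Terminology below). The following is a purely illustrative decision-rule sketch; the threshold values are placeholders, not the platform’s configured defaults:

    # Illustrative only: one way to map bounded distance metrics to a drift status.
    def drift_status(tvd: float, hellinger: float,
                     possible_drift: float = 0.5,
                     drift_detected: float = 0.7) -> str:
        """Combine two bounded distance metrics into one score and bucket it."""
        score = (tvd + hellinger) / 2          # both metrics lie in [0, 1]
        if score >= drift_detected:
            return "DRIFT_DETECTED"            # shown red in the UI
        if score >= possible_drift:
            return "POSSIBLE_DRIFT"            # shown yellow
        return "NO_DRIFT"                      # shown green

    print(drift_status(tvd=0.12, hellinger=0.08))   # -> NO_DRIFT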

Common Terminology

The following are terms you will see in all the model monitoring screens (an illustrative sketch of how the three distance metrics are computed follows this list):

  • Total Variation Distance (TVD)—the statistical difference between the actual predictions and the model’s trained predictions

  • Hellinger Distance—a type of f-divergence that quantifies the similarity between the actual predictions and the model’s trained predictions.

  • Kullback–Leibler Divergence (KLD)—a measure of how the probability distribution of the actual predictions differs from the model’s trained (reference) probability distribution.

  • Model Endpoint—a combination of a deployed Nuclio function and the models themselves. One function can run multiple endpoints; however, statistics are saved per endpoint.
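The three distance metrics above are computed per feature, between a histogram of the training data and a histogram of the live production data. The sketch below illustrates the calculations on synthetic data; it is not the platform’s internal implementation:

    import numpy as np

    def to_distribution(values, bins):
        """Histogram the values over shared bin edges and normalize to probabilities."""
        counts, _ = np.histogram(values, bins=bins)
        return counts / counts.sum()

    def tvd(p, q):
        """Total Variation Distance: half the L1 distance between the distributions."""
        return 0.5 * np.abs(p - q).sum()

    def hellinger(p, q):
        """Hellinger distance between two discrete distributions."""
        return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

    def kld(p, q, eps=1e-10):
        """Kullback-Leibler divergence of q from p (not symmetric)."""
        p, q = p + eps, q + eps   # avoid log(0) and division by zero
        return (p * np.log(p / q)).sum()

    # Compare a feature's training-set sample with its live production sample.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)
    live = rng.normal(0.3, 1.2, 10_000)      # slightly shifted distribution
    bins = np.histogram_bin_edges(train, bins=20)
    p, q = to_distribution(train, bins), to_distribution(live, bins)
    print(tvd(p, q), hellinger(p, q), kld(p, q))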

Model Monitoring Using the Iguazio Platform Interface

Iguazio’s Model Monitoring data is available for viewing through the regular platform interface. The platform provides four information screens with model monitoring data.

Select a project from the project tiles screen. From the project dashboard, press the Models tile to view the models currently deployed. Click Model Endpoints from the menu to display a list of monitored endpoints. If the Model Monitoring feature is not enabled, the endpoints list will be empty.

Model Endpoint Summary List

The Model Endpoints summary list provides a quick view of the Model Monitoring data.

Model Monitoring Summary List

The summary page contains the following fields:

  • Name—the name of the model endpoint

  • Version—user-configured version taken from the model deployment

  • Class—the implementation class that is used by the endpoint

  • Model—user-defined name for the model

  • Labels—user-configurable tags that are searchable

  • Uptime—the time since the first request for production data

  • Last Prediction—most recent request for production data

  • Error Count—includes prediction process errors, such as operational issues (for example, a function in a failed state), as well as data processing errors (for example, invalid timestamps, request IDs, type mismatches, etc.)

  • Drift—indication of drift status (no drift (green), possible drift (yellow), drift detected (red))

  • Accuracy—a numeric value representing the accuracy of model predictions (N/A)

Note

Model Accuracy is currently under development.
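The endpoints shown in this list can also be retrieved programmatically through the MLRun database client. The following is a minimal sketch, assuming an MLRun version that exposes list_model_endpoints on the run database client; the project name is a placeholder and the exact return type varies between versions:

    import mlrun

    # Query the monitored endpoints of a project ("my-project" is a placeholder).
    db = mlrun.get_run_db()
    endpoints = db.list_model_endpoints(project="my-project")

    # Depending on the MLRun version, this is a list of endpoint objects or a
    # wrapper object holding one; each record carries roughly the fields shown
    # in the summary list (model, function, error count, drift status, and so on).
    print(endpoints)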

Model Endpoint Overview

The Model Endpoints Overview screen displays general information about the selected model.

Model Endpoints Overview

The Overview page contains the following fields:

  • UUID—the ID of the deployed model

  • Model Class—the implementation class that is used by the endpoint

  • Model Artifact—reference to the model’s file location

  • Function URI—the MLRun function to access the model

  • Last Prediction—most recent request for production data

  • Error Count—includes prediction process errors, such as operational issues (for example, a function in a failed state), as well as data processing errors (for example, invalid timestamps, request IDs, type mismatches, etc.)

  • Accuracy—a numeric value representing the accuracy of model predictions (N/A)

  • Stream path—the input and output stream of the selected model

Use the ellipsis to view the YAML resource file for details about the monitored resource.

Model Drift Analysis

The Model Endpoints Drift Analysis screen provides performance statistics for the currently selected model.

Model Endpoints Drift Analysis

Each of the following fields displays both sum and mean values. For definitions of the terms, see Common Terminology.

  • TVD

  • Hellinger

  • KLD

Use the ellipsis to view the YAML resource file for details about the monitored resource.

Model Features Analysis

The Features Analysis pane provides details of the drift analysis in a table format with each feature in the selected model on its own line.

Model Endpoints Features Analysis

The table is broken down into columns with both expected and actual performance results. The expected column displays the results from the model training phase, and the actual column displays the results that came from live production data. The following fields are available:

  • Mean

  • STD (Standard deviation)

  • Min

  • Max

  • TVD

  • Hellinger

  • KLD

  • Histograms—the approximate representation of the distribution of the data. Hover over the bars in the graph for details.

Use the ellipsis to view the YAML resource file for details about the monitored resource.
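If you keep a sample of the training data and a sample of the live requests, the expected/actual split in this table can be reproduced offline for a single feature. The sketch below is illustrative only (not the platform’s implementation) and uses synthetic data:

    import numpy as np

    def feature_stats(values, bins):
        """Summary statistics plus a normalized histogram for one feature."""
        counts, edges = np.histogram(values, bins=bins)
        return {
            "mean": float(np.mean(values)),
            "std": float(np.std(values)),
            "min": float(np.min(values)),
            "max": float(np.max(values)),
            "hist": counts / counts.sum(),   # feeds TVD / Hellinger / KLD
            "edges": edges,
        }

    rng = np.random.default_rng(1)
    expected = feature_stats(rng.normal(50, 5, 10_000), bins=20)              # training data
    actual = feature_stats(rng.normal(53, 6, 2_000), bins=expected["edges"])  # live data, same bin edges
    for key in ("mean", "std", "min", "max"):
        print(f"{key}: expected={expected[key]:.2f} actual={actual[key]:.2f}")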

Model Monitoring Using Grafana Dashboards

You can deploy a Grafana service in your Iguazio instance and use Grafana Dashboards to view Model Monitoring details. There are three dashboards available:

Model Endpoints Overview Dashboard

The Overview dashboard displays the model endpoint IDs of a specific project. Only deployed models with the Model Monitoring option enabled are displayed. Endpoint IDs are URIs that provide access to the performance data and drift detection statistics of a deployed model.

Model Endpoints Overview Dashboard

The Overview screen provides details about the performance of all the deployed and monitored models within a project. You can change projects by choosing a new project from the Project dropdown. The Overview dashboard displays the number of endpoints in the project, the average predictions per second (using a 5-minute rolling average), the average latency (using a 1-hour rolling average), and the total error count in the project.

Additional details include:

  • Endpoint ID—the ID of the deployed model. Use this link to drill down to the model performance and details screens.

  • Function—the MLRun function to access the model

  • Model—user defined name for the model

  • Model Class—the implementation class that is used by the endpoint

  • First Request—first request for production data

  • Last Request—most recent request for production data

  • Error Count—includes prediction process errors, such as operational issues (for example, a function in a failed state), as well as data processing errors (for example, invalid timestamps, request IDs, type mismatches, etc.)

  • Accuracy—a numeric value representing the accuracy of model predictions (N/A)

  • Drift Status—no drift (green), possible drift (yellow), drift detected (red)

At the bottom of the dashboard are heat maps for Predictions per second, Average Latency, and Errors. The heat maps display data based on 15-minute intervals. See How to Read a Heat Map for more details.

Click an endpoint ID to drill down to the performance details of that model.

How to Read a Heat Map

Heat maps are used to analyze trends and to instantly transform and enhance data through visualizations. This helps identify areas of interest quickly and empowers users to explore the data to pinpoint potential issues. A heat map uses a matrix layout with color and shading to show the relationship between two categories of values (x and y axes), so the darker the cell, the higher the value. The values presented along each axis correspond to a cell that is color-coded to represent the relationship between the two categories. The Predictions per second heat map shows the relationship between time and the predictions per second, and the Average Latency per hour heat map shows the relationship between time and the latency.

To properly read the heat maps, follow the hierarchy of shades from the darkest (the highest values) to the lightest (the lowest values).

Note

The exact quantitative values represented by the colors may be difficult to determine. Use the Performance Dashboard to see detailed results.

Model Endpoint Details Dashboard

The model endpoint details dashboard displays the real-time performance data of the selected model in detail. The model performance data provided is rich and can be used to fine-tune or diagnose potential performance issues that may affect business goals. The data in this dashboard changes based on the selection of the project and model.

This dashboard is broken down into three panes:

  1. Project and model summary

  2. Analysis panes

    1. Overall drift analysis

    2. Features analysis

  3. Incoming features graph

Model Endpoint Details Dashboard

Project and Model Summary

Use the dropdown to change the project and model. The dashboard presents the following information about the project:

  • Endpoint ID—the ID of the deployed model

  • Model—user defined name for the model

  • Function URI—the MLRun function to access the model

  • Model Class—the implementation class that is used by the endpoint

  • Prediction/s—the average number of predictions per second over a rolling 5-minute period

  • Average Latency—the average latency over a rolling 1-hour period

  • First Request—first request for production data

  • Last Request—most recent request for production data

Use the Performance and Overview buttons to view those dashboards.

Analysis Panes

This pane is broken down into two sections: Overall Drift Analysis and Features Analysis. The Overall Drift Analysis section provides performance statistics for the currently selected model:

  • TVD (sum and mean)

  • Hellinger (sum and mean)

  • KLD (sum and mean)

The Features Analysis pane provides details of the drift analysis for each feature in the selected model. This pane includes five types of statistics:

  • Actual (min, mean and max)—results based on actual live data stream

  • Expected (min, mean and max)—results based on training data

  • TVD

  • Hellinger

  • KLD

Incoming Features Graph

This graph displays the performance of the features that are in the selected model based on sampled data points from actual feature production data. The graph displays the values of the features in the model over time.

Model Endpoint Performance Dashboard

The Model Endpoint Performance dashboard displays performance details in graphical format.

Model Endpoint Performance Dashboard

This dashboard contains the following graphs:

  • Drift Measures—the overall drift over time for each of the endpoints in the selected model

  • Average Latency—the average latency of the model in 5-minute intervals, for 5-minute and 1-hour rolling windows

  • Predictions/s—the model predictions per second, displayed in 5-second intervals over a rolling 5-minute window

  • Predictions Count—the number of predictions the model makes, for 5-minute and 1-hour rolling windows

Configuring Grafana Dashboards

Make sure you have a Grafana service running in your Iguazio instance. If you do not have a Grafana service running, see Creating a New Service to create and configure one.

  1. Make sure you have the mlrun-api data source configured in your Grafana service. If not, add it as follows:

    1. Open your Grafana service.

    2. Navigate to Configuration -> Data Sources.

    3. Press Add data source.

    4. Select the SimpleJson datasource and configure the following parameters.

      URL: http://mlrun-api:8080/api/grafana-proxy/model-endpoints
      Access: Server (default)

      Add a custom header:
          If working with Iguazio 3.0.x:
              X-V3io-Session-Key: <YOUR ACCESS KEY>
          If working with Iguazio 3.2.x:
              cookie: session=j:{"sid": "<YOUR ACCESS KEY>"}
      
    5. Press Save & Test for verification. You will receive either a success or a failure message. (A small connectivity sketch follows these steps.)

  2. Download the three monitoring dashboard files: Model Endpoints Overview, Model Endpoint Details, and Model Endpoint Performance.

  3. Import the downloaded dashboards into your Grafana service:

    1. Navigate to your Grafana service in the Services list and press it.

    2. Press the dashboards icon in the left menu.

    3. In the dashboard management screen, press the IMPORT button and select one file to import. Repeat this step for each dashboard.
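Before importing the dashboards, you can optionally sanity-check that the grafana-proxy URL configured in step 1 is reachable. The following is a minimal sketch using Python’s requests package, run from an environment that can resolve the mlrun-api service (for example, a platform Jupyter service); the access key is a placeholder and the response body is not asserted:

    import requests

    URL = "http://mlrun-api:8080/api/grafana-proxy/model-endpoints"

    # Pick the header that matches your Iguazio version (see the data source step above).
    headers = {"X-V3io-Session-Key": "<YOUR ACCESS KEY>"}              # Iguazio 3.0.x
    # headers = {"cookie": 'session=j:{"sid": "<YOUR ACCESS KEY>"}'}   # Iguazio 3.2.x

    response = requests.get(URL, headers=headers, timeout=10)
    print(response.status_code)   # a 2xx status indicates the proxy is reachable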

Note

You will need to train and deploy a model to see results in the dashboards. The dashboards will immediately display data if you already have a model trained and running with production data.