Installation and setup guide #

This guide outlines the steps for installing and running MLRun.

MLRun comprises two parts: the MLRun server and the MLRun client.

Deployment options#

There are several deployment options:

  • Local deployment: Deploy MLRun with Docker on your laptop or on a single server. This option is good for testing the waters or for small-scale environments. It is limited in terms of computing resources and scale, but it is the simplest to deploy.

  • Kubernetes cluster: Deploy an MLRun server on Kubernetes. This option supports elastic scaling, but it is more complex to install because you must set up and manage the Kubernetes cluster yourself.

  • Iguazio’s Managed Service: A commercial offering by Iguazio. This is the fastest way to explore the full set of MLRun functionalities.
    Note that Iguazio provides a 14-day free trial.
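For the local option, a minimal sketch of getting started (assuming Python and pip are available on the machine; pin the package version to match the MLRun server you deploy):

```shell
# Install the MLRun Python SDK locally
# (the version should match your MLRun server's version)
pip install mlrun
```

The Docker and Kubernetes deployment paths are covered in their respective deployment guides.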

Non-root user support#

By default, MLRun assigns the root user to MLRun runtimes and pods. You can improve the security context by changing the security mode, which is set by Iguazio during installation and applied system-wide:

  • Override: Use the user ID of the user that triggered the current run, or use the nogroupid for the group ID. Requires Iguazio v3.5.1.

  • Disabled: The security context is not applied automatically (the system applies the root user). This is the default.

Security context#

If your system is configured in disabled mode, you can apply a security context to individual runtimes/pods by using function.with_security_context; the job is then assigned to the user, or to the user’s group, that ran the job.
(You cannot override the user of individual jobs if the system is configured in override mode.) For example:

```python
from kubernetes import client as k8s_client

# Run the function's pods as a specific (non-root) user and group.
security_context = k8s_client.V1SecurityContext(
    run_as_user=1000,   # example non-root user ID
    run_as_group=3000,  # example group ID
)
function.with_security_context(security_context)
```

See the full definition of the V1SecurityContext object.

Some services do not support security context yet:

  • Infrastructure services

    • Kubeflow pipelines core services

  • Services created by MLRun

    • Kaniko, used for building images. (To avoid using Kaniko, use prebuilt images that contain all the requirements.)

    • Spark services

Set up your client#

  • You can work with your favorite IDE (e.g., PyCharm, VS Code, Jupyter, Colab). Read how to configure your client against the deployed MLRun server in How to configure your client.
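Whichever IDE you use, the client needs to know the address of the MLRun API server. A minimal sketch using the MLRUN_DBPATH environment variable (the address below is a placeholder; use your deployment’s actual endpoint):

```shell
# Point the MLRun client at the remote MLRun API service
# (replace the address with your actual MLRun API endpoint)
export MLRUN_DBPATH="http://mlrun-api.example.com:8080"
```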

Once you have installed and configured MLRun, follow the Quick Start tutorial and additional Tutorials and Examples to learn how to use MLRun to develop and deploy machine learning applications to production.

For interactive installation and usage tutorials, try the MLRun Katacoda Scenarios.

MLRun client backward compatibility#

Starting with MLRun 0.10.0, the MLRun client and images are compatible with minor MLRun releases published during the following six months. For example, when you upgrade to 0.11.0, you can continue to use your 0.10-based images.


  • Images from 0.9.0 are not compatible with 0.10.0. Backward compatibility starts from 0.10.0.

  • When you upgrade the MLRun major version, for example 0.10.x to 1.0.x, there is no backward compatibility.

  • The feature store is not backward compatible.

  • When you upgrade the platform, for example from 3.2 to 3.3, upgrade the clients as well. Compatibility with an older MLRun client is not guaranteed after a platform upgrade.
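The version rules above (ignoring the six-month window) can be sketched as a small helper. Note that client_compatible is a hypothetical illustration, not part of the MLRun API:

```python
def client_compatible(client: str, server: str) -> bool:
    """Illustrate the stated rule: clients/images work with later minor
    releases of the same major version (from 0.10.0 on), but not across
    major-version upgrades."""
    cmaj, cmin = (int(p) for p in client.split(".")[:2])
    smaj, smin = (int(p) for p in server.split(".")[:2])
    if (cmaj, cmin) < (0, 10):
        return False  # backward compatibility starts at 0.10.0
    if cmaj != smaj:
        return False  # e.g. 0.10.x images do not work with 1.0.x
    return cmin <= smin  # a 0.10 client keeps working with a 0.11 server


print(client_compatible("0.10.0", "0.11.0"))  # True
print(client_compatible("0.9.0", "0.10.0"))   # False
print(client_compatible("0.10.2", "1.0.0"))   # False
```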

See also Images and their usage in MLRun.