Install MLRun locally using Docker
You can install and run MLRun and Nuclio locally on your computer. This setup does not include all the services and elastic scaling capabilities of the Kubernetes-based deployment, but it is much simpler to start with.
Note
Using Docker is limited to the local, Nuclio, and Serving runtimes, and to local pipelines.
Use docker compose to install MLRun. It deploys the MLRun service, the MLRun UI, the Nuclio serverless engine, and, optionally, a Jupyter server.
There are two installation options:
Use MLRun with your own client (IDE or notebook)
Use MLRun with the MLRun Jupyter image
In both cases you need to set the SHARED_DIR environment variable to point to a host path for storing MLRun artifacts and the DB, for example export SHARED_DIR=~/mlrun-data (or set SHARED_DIR=c:\mlrun-data on Windows). Make sure the directory exists.
It is recommended to set the HOST_IP variable to your computer's IP address (required for the Nuclio dashboard). You can select a specific MLRun version with the TAG variable and a Nuclio version with the NUCLIO_TAG variable.
Add the -d flag to docker-compose to run in detached mode (in the background).
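For example, on Linux or macOS the variables described above might be set like this before launching the stack. The tag values shown are illustrative examples, not required versions, and the docker-compose line is left commented so you can review the variables first:

```shell
# Example environment setup for the docker-compose deployment.
# TAG and NUCLIO_TAG values below are illustrative, not required.
export SHARED_DIR=~/mlrun-data
mkdir -p "$SHARED_DIR"             # the directory must exist
export HOST_IP=127.0.0.1           # replace with your real host IP
export TAG=1.0.2                   # pin a specific MLRun version
export NUCLIO_TAG=stable-amd64     # pin a specific Nuclio version
# docker-compose -f compose.yaml up -d   # -d runs in the background
```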
Use MLRun with your own client
The following commands install MLRun and Nuclio for use with your own IDE or notebook.
[Download here] the compose.yaml file, save it to your working directory, and run:
services:
  mlrun-api:
    image: "mlrun/mlrun-api:${TAG:-1.0.2}"
    ports:
      - "8080:8080"
    environment:
      MLRUN_ARTIFACT_PATH: "${SHARED_DIR}/{{project}}"
      MLRUN_HTTPDB__REAL_PATH: /data
      MLRUN_HTTPDB__DATA_VOLUME: "${SHARED_DIR}"
      MLRUN_LOG_LEVEL: DEBUG
      MLRUN_NUCLIO_DASHBOARD_URL: http://nuclio:8070
      MLRUN_HTTPDB__DSN: "sqlite:////data/mlrun.db?check_same_thread=false"
      MLRUN_UI__URL: http://localhost:8060
    volumes:
      - "${SHARED_DIR:?err}:/data"
    networks:
      - mlrun
  mlrun-ui:
    image: "mlrun/mlrun-ui:${TAG:-1.0.2}"
    ports:
      - "8060:80"
    environment:
      MLRUN_API_PROXY_URL: http://mlrun-api:8080
      MLRUN_NUCLIO_MODE: enable
      MLRUN_NUCLIO_API_URL: http://nuclio:8070
      MLRUN_NUCLIO_UI_URL: http://localhost:8070
    networks:
      - mlrun
  nuclio:
    image: "quay.io/nuclio/dashboard:${NUCLIO_TAG:-stable-amd64}"
    ports:
      - "8070:8070"
    environment:
      NUCLIO_DASHBOARD_EXTERNAL_IP_ADDRESSES: "${HOST_IP:-127.0.0.1}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - mlrun
networks:
  mlrun: {}
export HOST_IP=<your host IP address>
export SHARED_DIR=~/mlrun-data
mkdir -p $SHARED_DIR
docker-compose -f compose.yaml up
Your HOST_IP address can be found using the ip addr or ifconfig commands. It is recommended to select an address that does not change dynamically (for example, the IP of the bridge interface).
set HOST_IP=<your host IP address>
set SHARED_DIR=c:\mlrun-data
mkdir %SHARED_DIR%
docker-compose -f compose.yaml up
Your HOST_IP address can be found using the ipconfig shell command. It is recommended to select an address that does not change dynamically (for example, the IP of the vEthernet interface).
This creates 3 services:
MLRun API (at http://localhost:8080)
MLRun UI (at http://localhost:8060)
Nuclio Dashboard/controller (at http://localhost:8070)
After installing the MLRun service, set your client environment to work with it, either by setting the MLRUN_DBPATH environment variable to http://localhost:8080 or by using .env files (see setting the client environment).
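As a minimal sketch, such a .env file can be created from the shell. The file name mlrun.env is just an example; point your client at whatever path you choose:

```shell
# Write a minimal client-side env file for the local MLRun service.
# Only MLRUN_DBPATH is strictly needed; the file name is an example.
cat > mlrun.env <<'EOF'
MLRUN_DBPATH=http://localhost:8080
EOF
cat mlrun.env   # prints MLRUN_DBPATH=http://localhost:8080
```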
Use MLRun with the MLRun Jupyter image
For the quickest experience with MLRun, you can deploy it with a pre-integrated Jupyter server loaded with various ready-to-use MLRun examples.
[Download here] the compose.with-jupyter.yaml file, save it to your working directory, and run:
services:
  jupyter:
    image: "mlrun/jupyter:${TAG:-1.0.2}"
    ports:
      - "8080:8080"
      - "8888:8888"
    environment:
      MLRUN_ARTIFACT_PATH: "/home/jovyan/data/{{project}}"
      MLRUN_LOG_LEVEL: DEBUG
      MLRUN_NUCLIO_DASHBOARD_URL: http://nuclio:8070
      MLRUN_HTTPDB__DSN: "sqlite:////home/jovyan/data/mlrun.db?check_same_thread=false"
      MLRUN_UI__URL: http://localhost:8060
    volumes:
      - "${SHARED_DIR:?err}:/home/jovyan/data"
    networks:
      - mlrun
  mlrun-ui:
    image: "mlrun/mlrun-ui:${TAG:-1.0.2}"
    ports:
      - "8060:80"
    environment:
      MLRUN_API_PROXY_URL: http://jupyter:8080
      MLRUN_NUCLIO_MODE: enable
      MLRUN_NUCLIO_API_URL: http://nuclio:8070
      MLRUN_NUCLIO_UI_URL: http://localhost:8070
    networks:
      - mlrun
  nuclio:
    image: "quay.io/nuclio/dashboard:${NUCLIO_TAG:-stable-amd64}"
    ports:
      - "8070:8070"
    environment:
      NUCLIO_DASHBOARD_EXTERNAL_IP_ADDRESSES: "${HOST_IP:-127.0.0.1}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - mlrun
networks:
  mlrun: {}
export HOST_IP=<your host IP address>
export SHARED_DIR=~/mlrun-data
mkdir -p $SHARED_DIR
docker-compose -f compose.with-jupyter.yaml up
Your HOST_IP address can be found using the ip addr or ifconfig commands. It is recommended to select an address that does not change dynamically (for example, the IP of the bridge interface).
set HOST_IP=<your host IP address>
set SHARED_DIR=c:\mlrun-data
mkdir %SHARED_DIR%
docker-compose -f compose.with-jupyter.yaml up
Your HOST_IP address can be found using the ipconfig shell command. It is recommended to select an address that does not change dynamically (for example, the IP of the vEthernet interface).
This creates 4 services:
Jupyter Lab (at http://localhost:8888)
MLRun API (at http://localhost:8080), running on the Jupyter container
MLRun UI (at http://localhost:8060)
Nuclio Dashboard/controller (at http://localhost:8070)
After the installation, access the Jupyter server (at http://localhost:8888) and run through the quick-start tutorial and demos. You can see the projects, tasks, and artifacts in the MLRun UI (at http://localhost:8060).
The Jupyter environment is pre-configured to work with the local MLRun and Nuclio services.
You can switch to a remote or managed MLRun cluster by editing the mlrun.env file in the Jupyter files tree. The artifacts and DB are stored under /home/jovyan/data (/data in the Jupyter tree).
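For illustration, switching the mlrun.env file to a remote cluster could look like the following sketch. The URL is a placeholder, and the V3IO_* entries apply only to Iguazio-managed clusters:

```shell
# Hypothetical mlrun.env pointing at a remote MLRun cluster.
# Replace the URL and credentials with your cluster's real values;
# the V3IO_* variables are only relevant for managed (Iguazio) setups.
cat > mlrun.env <<'EOF'
MLRUN_DBPATH=https://mlrun-api.example.com
V3IO_USERNAME=admin
V3IO_ACCESS_KEY=<your-access-key>
EOF
```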