Bases: object

static all()[source]#
error = 'Error'#
failed = 'Failed'#
running = 'Running'#
skipped = 'Skipped'#
static stable_statuses()[source]#
succeeded = 'Succeeded'#
static transient_statuses()[source]#

mlrun.run.code_to_function(name: str = '', project: str = '', tag: str = '', filename: str = '', handler: str = '', kind: str = '', image: str | None = None, code_output: str = '', embed_code: bool = True, description: str = '', requirements: str | List[str] | None = None, categories: List[str] | None = None, labels: Dict[str, str] | None = None, with_doc: bool = True, ignored_tags=None, requirements_file: str = '') → MpiRuntimeV1Alpha1 | MpiRuntimeV1 | RemoteRuntime | ServingRuntime | DaskCluster | KubejobRuntime | LocalRuntime | Spark3Runtime | RemoteSparkRuntime | DatabricksRuntime[source]#

Convenience function to insert code and configure an mlrun runtime.

Easiest way to construct a runtime type object. Provides the most often used configuration options for all runtimes as parameters.

Instantiated runtimes are considered 'functions' in mlrun, but they can be anything from nuclio functions to generic kubernetes pods to spark jobs. Functions are meant to be focused, and as such limited in scope and size. Typically, a function can be expressed in a single python module with added support from custom docker images and commands for the environment. The returned runtime object can be further configured if more customization is required.

One of the most important parameters is 'kind'. This is what is used to specify the chosen runtimes. The options are:

  • local: execute a local python or shell script

  • job: insert the code into a Kubernetes pod and execute it

  • nuclio: insert the code into a real-time serverless nuclio function

  • serving: insert code into orchestrated nuclio function(s) forming a DAG

  • dask: run the specified python code / script as Dask Distributed job

  • mpijob: run distributed Horovod jobs over the MPI job operator

  • spark: run distributed Spark job using Spark Kubernetes Operator

  • remote-spark: run distributed Spark job on remote Spark service

Learn more about [Kinds of function (runtimes)](../concepts/functions-overview.html).

  • name -- function name, typically best to use hyphen-case

  • project -- project used to namespace the function, defaults to 'default'

  • tag -- function tag to track multiple versions of the same function, defaults to 'latest'

  • filename -- path to .py/.ipynb file, defaults to current jupyter notebook

  • handler -- the default function handler to call for the job or nuclio function; in batch functions (job, mpijob, ..) the handler can also be specified in the .run() command, and when not specified the entire file is executed (as main). For nuclio functions the handler is in the form module:function, defaults to 'main:handler'

  • kind -- function runtime type string - nuclio, job, etc. (see docstring for all options)

  • image -- base docker image to use for building the function container, defaults to None

  • code_output -- specify '.' to generate python module from the current jupyter notebook

  • embed_code -- indicates whether or not to inject the code directly into the function runtime spec, defaults to True

  • description -- short function description, defaults to ''

  • requirements -- a list of python packages, defaults to None

  • requirements_file -- path to a python requirements file

  • categories -- list of categories for mlrun Function Hub, defaults to None

  • labels -- immutable name/value pairs to tag the function with useful metadata, defaults to None

  • with_doc -- indicates whether to document the function parameters, defaults to True

  • ignored_tags -- notebook cells to ignore when converting notebooks to py code (separated by ';')


pre-configured function object from an mlrun runtime class


import mlrun

# create job function object from notebook code and add doc/metadata
fn = mlrun.code_to_function("file_utils", kind="job",
                            handler="open_archive", image="mlrun/mlrun",
                            description="this function opens a zip archive into a local/mounted folder",
                            categories=["fileutils"],
                            labels={"author": "me"})


import mlrun
from pathlib import Path

# create file

# create nuclio function object from python module call
fn = mlrun.code_to_function("nuclio-mover", kind="nuclio",
                            filename="", image="python:3.7",
                            description="this function moves files from one system to another",
                            requirements=["pandas"],
                            labels={"author": "me"})

mlrun.run.download_object(url, target, secrets=None)[source]#

download mlrun dataitem (from path/url to target path)

mlrun.run.function_to_module(code='', workdir=None, secrets=None, silent=False)[source]#

Load code, notebook, or mlrun function as a .py module. This function can import a local/remote py file or notebook, or load an mlrun function object as a module; you can use this from your code, a notebook, or another function (for common libs).

Note: the function may have package requirements which must be satisfied


mod = mlrun.function_to_module('./examples/')
task = mlrun.new_task(inputs={'infile.txt': '../examples/infile.txt'})
context = mlrun.get_or_create_ctx('myfunc', spec=task)
mod.my_job(context, p1=1, p2='x')

fn = mlrun.import_function('hub://open-archive')
mod = mlrun.function_to_module(fn)
data = ""
context = mlrun.get_or_create_ctx('myfunc')
mod.open_archive(context, archive_url=data)
  • code -- path/url to function (.py or .ipynb or .yaml) OR function object

  • workdir -- code workdir

  • secrets -- secrets needed to access the URL (e.g. s3, v3io, ..)

  • silent -- do not raise on errors


python module

mlrun.run.get_dataitem(url, secrets=None, db=None) → DataItem[source]#

get mlrun dataitem object (from path/url)

mlrun.run.get_object(url, secrets=None, size=None, offset=0, db=None)[source]#

get mlrun dataitem body (from path/url)

mlrun.run.get_or_create_ctx(name: str, event=None, spec=None, with_env: bool = True, rundb: str = '', project: str = '', upload_artifacts=False, labels: dict | None = None)[source]#

called from within the user program to obtain a run context

the run context is an interface for receiving parameters, data, and logging run results; the run context is read from the event, spec, or environment (in that order). A user can also work without a context (local defaults mode).

all results are automatically stored in the "rundb" or artifact store; the path to the rundb can be specified in the call or obtained from the environment.

  • name -- run name (will be overridden by context)

  • event -- function (nuclio Event object)

  • spec -- dictionary holding run spec

  • with_env -- look for context in environment vars, default True

  • rundb -- path/url to the metadata and artifact database

  • project -- project to initiate the context in (by default mlrun.mlctx.default_project)

  • upload_artifacts -- when using local context (not as part of a job/run), upload artifacts to the system default artifact path location

  • labels -- dict of the context labels


execution context


# load MLRUN runtime context (will be set by the runtime framework e.g. KubeFlow)
context = get_or_create_ctx('train')

# get parameters from the runtime context (or use defaults)
p1 = context.get_param('p1', 1)
p2 = context.get_param('p2', 'a-string')

# access input metadata, values, files, and secrets (passwords)
print(f'Run: {} (uid={context.uid})')
print(f'Params: p1={p1}, p2={p2}')
print(f'accesskey = {context.get_secret("ACCESS_KEY")}')
input_str = context.get_input('infile.txt').get()
print(f'file: {input_str}')

# RUN some useful code e.g. ML training, data prep, etc.

# log scalar result values (job result metrics)
context.log_result('accuracy', p1 * 2)
context.log_result('loss', p1 * 3)
context.set_label('framework', 'sklearn')

# log various types of artifacts (file, web page, table), will be versioned and visible in the UI
context.log_artifact('model.txt', body=b'abc is 123', labels={'framework': 'xgboost'})
context.log_artifact('results.html', body=b'<b> Some HTML <b>', viewer='web-app')

mlrun.run.get_pipeline(run_id, namespace=None, format_: str | PipelinesFormat = PipelinesFormat.summary, project: str | None = None, remote: bool = True)[source]#

Get Pipeline status

  • run_id -- id of pipelines run

  • namespace -- k8s namespace if not default

  • format -- Format of the results. Possible values are: summary (default) - return a summary of the object data; full - return the full pipeline object.

  • project -- the project of the pipeline run

  • remote -- read kfp data from mlrun service (default=True)


kfp run dict

mlrun.run.import_function(url='', secrets=None, db='', project=None, new_name=None)[source]#

Create function object from DB or local/remote YAML file

Functions can be imported from function repositories (mlrun Function Hub (formerly Marketplace) or local db), or be read from a remote URL (http(s), s3, git, v3io, ..) containing the function YAML

special URLs:

function hub:       hub://[{source}/]{name}[:{tag}]
local mlrun db:     db://{project-name}/{name}[:{tag}]

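The special URL schemes above can be recognized with a small helper (a pure-Python illustration of the dispatch described here, not mlrun code; the helper name is hypothetical):

```python
def classify_function_url(url: str) -> str:
    """Map an import_function URL to the source type it will be read from."""
    if url.startswith("hub://"):
        return "function hub"      # hub://[{source}/]{name}[:{tag}]
    if url.startswith("db://"):
        return "local mlrun db"    # db://{project-name}/{name}[:{tag}]
    if url.startswith(("http://", "https://", "s3://", "v3io://", "git://")):
        return "remote URL"        # function YAML fetched remotely
    return "local file"            # e.g. ./func.yaml

print(classify_function_url("hub://auto-trainer"))  # function hub
print(classify_function_url("./func.yaml"))         # local file
```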

function = mlrun.import_function("hub://auto-trainer")
function = mlrun.import_function("./func.yaml")
function = mlrun.import_function("")
  • url -- path/url to Function Hub, db or function YAML file

  • secrets -- optional, credentials dict for DB or URL (s3, v3io, ...)

  • db -- optional, mlrun api/db path

  • project -- optional, target project for the function

  • new_name -- optional, override the imported function name


function object

mlrun.run.import_function_to_dict(url, secrets=None)[source]#

Load function spec from local/remote YAML file

mlrun.run.list_pipelines(full=False, page_token='', page_size=None, sort_by='', filter_='', namespace=None, project='*', format_: PipelinesFormat = PipelinesFormat.metadata_only) → Tuple[int, int | None, List[dict]][source]#

List pipelines

  • full -- Deprecated, use format_ instead. if True will set format_ to full, otherwise format_ will be used

  • page_token -- A page token to request the next page of results. The token is acquired from the nextPageToken field of the response from the previous call or can be omitted when fetching the first page.

  • page_size -- The number of pipelines to be listed per page. If there are more pipelines than this number, the response message will contain a nextPageToken field you can use to fetch the next page.

  • sort_by -- Can be format of "field_name", "field_name asc" or "field_name desc" (Example, "name asc" or "id desc"). Ascending by default.

  • filter_ -- A url-encoded, JSON-serialized Filter protocol buffer; see filter.proto in the kubeflow/pipelines repository (backend/api/filter.proto).

  • namespace -- Kubernetes namespace if other than default

  • project -- Can be used to retrieve only specific project pipelines. "*" for all projects. Note that filtering by project can't be used together with pagination, sorting, or custom filter.

  • format -- Control what will be returned (full/metadata_only/name_only)

mlrun.run.load_func_code(command='', workdir=None, secrets=None, name='name')[source]#

mlrun.run.new_function(name: str = '', project: str = '', tag: str = '', kind: str = '', command: str = '', image: str = '', args: list | None = None, runtime=None, mode=None, handler: str | None = None, source: str | None = None, requirements: str | List[str] | None = None, kfp=None, requirements_file: str = '')[source]#

Create a new ML function from base properties


# define a container based function (the command script must exist in the container workdir)
f = new_function(command=' -x {x}', image='myrepo/image:latest', kind='job'), params={"x": 5})

# define a container based function which reads its source from a git archive
f = new_function(command=' -x {x}', image='myrepo/image:latest', kind='job',
                 source='git://'), params={"x": 5})

# define a local handler function (execute a local function handler)
f = new_function().run(task, handler=myfunction)
  • name -- function name

  • project -- function project (none for 'default')

  • tag -- function version tag (none for 'latest')

  • kind -- runtime type (local, job, nuclio, spark, mpijob, dask, ..)

  • command -- command/url + args (e.g.: --verbose)

  • image -- container image (start with '.' for default registry)

  • args -- command line arguments (override the ones in command)

  • runtime -- runtime (job, nuclio, spark, dask ..) object/dict store runtime specific details and preferences

  • mode -- runtime mode: 'pass' runs the command as-is in the container (not wrapped by mlrun); the command can use parameter substitutions like {xparam}, which are replaced with the value of the xparam parameter. If a command is not specified, the image entrypoint is used.

  • handler -- the default function handler to call for the job or nuclio function; in batch functions (job, mpijob, ..) the handler can also be specified in the .run() command, and when not specified the entire file is executed (as main). For nuclio functions the handler is in the form module:function, defaults to "main:handler"

  • source -- valid absolute path or URL to a git, zip, or tar file, e.g. git://, http://some/url/ Note: the source path must exist on the image or exist locally when the run is local (it is recommended to use 'function.spec.workdir' when source is a filepath instead)

  • requirements -- a list of python packages, defaults to None

  • requirements_file -- path to a python requirements file

  • kfp -- reserved, flag indicating running within kubeflow pipeline

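The `{param}` substitution performed in 'pass' mode can be illustrated generically (a pure-Python sketch of the substitution contract described above, not mlrun internals; the command string and parameter names are hypothetical):

```python
def substitute_params(command: str, params: dict) -> str:
    """Replace {name} placeholders in a command line with parameter values."""
    for key, value in params.items():
        command = command.replace("{" + key + "}", str(value))
    return command

print(substitute_params("train --lr {lr} --epochs {epochs}", {"lr": 0.1, "epochs": 3}))
# train --lr 0.1 --epochs 3
```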

function object

mlrun.run.wait_for_pipeline_completion(run_id, timeout=3600, expected_statuses: List[str] | None = None, namespace=None, remote=True, project: str | None = None)[source]#

Wait for Pipeline status, timeout in sec

  • run_id -- id of pipelines run

  • timeout -- wait timeout in sec

  • expected_statuses -- list of expected statuses, one of [ Succeeded | Failed | Skipped | Error ], by default [ Succeeded ]

  • namespace -- k8s namespace if not default

  • remote -- read kfp data from mlrun service (default=True)

  • project -- the project of the pipeline

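The waiting contract can be sketched in plain Python (an illustration only, not mlrun's implementation; `get_status` is a hypothetical callable that reports the current pipeline status):

```python
import time

def wait_for_status(get_status, expected=("Succeeded",), timeout=3600, interval=5):
    """Poll until the status is stable (terminal), then verify it is expected."""
    stable = {"Succeeded", "Failed", "Skipped", "Error"}  # cf. RunStatuses above
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in stable:
            if status not in expected:
                raise RuntimeError(f"run ended with unexpected status: {status}")
            return status
        time.sleep(interval)
    raise TimeoutError(f"run still not stable after {timeout} seconds")
```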

kfp run dict

mlrun.run.wait_for_runs_completion(runs: list | ValuesView, sleep=3, timeout=0, silent=False)[source]#

wait for multiple runs to complete

Note: need to use watch=False in .run() so the run will not wait for completion


# run two training functions in parallel and wait for the results
inputs = {'dataset': cleaned_data}
run1 ='train_lr', inputs=inputs, watch=False,
                 params={'model_pkg_class': 'sklearn.linear_model.LogisticRegression',
                         'label_column': 'label'})
run2 ='train_lr', inputs=inputs, watch=False,
                 params={'model_pkg_class': 'sklearn.ensemble.RandomForestClassifier',
                         'label_column': 'label'})
completed = wait_for_runs_completion([run1, run2])
  • runs -- list of run objects (the returned values of .run())

  • sleep -- time to sleep between checks (in seconds)

  • timeout -- maximum time to wait in seconds (0 for unlimited)

  • silent -- set to True for silent exit on timeout


list of completed runs
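The timeout/silent semantics described above can be sketched generically (a pure-Python illustration, not mlrun's implementation; `pending` is a hypothetical predicate telling whether a run is still in progress):

```python
import time

def wait_for_all(runs, pending, sleep=3, timeout=0, silent=False):
    """Wait until no run satisfies pending(); timeout=0 means wait forever."""
    start = time.monotonic()
    while any(pending(r) for r in runs):
        if timeout and time.monotonic() - start > timeout:
            if silent:
                break  # silent exit on timeout, mirroring silent=True
            raise TimeoutError("runs did not complete within the timeout")
        time.sleep(sleep)
    return [r for r in runs if not pending(r)]
```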