mlrun#

class mlrun.ArtifactType[source]#

Possible artifact types to pack objects as and log using an mlrun.Packager.
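
For instance, a minimal sketch of referencing ArtifactType members in @mlrun.handler log hints; the member names DATASET and RESULT are assumptions and may differ across mlrun versions:

import mlrun
import pandas as pd

# Use ArtifactType members (assumed here: DATASET and RESULT) as the
# artifact-type part of a "{key}:{artifact_type}" log hint
@mlrun.handler(
    outputs=[
        f"my_table:{mlrun.ArtifactType.DATASET.value}",
        f"row_count:{mlrun.ArtifactType.RESULT.value}",
    ]
)
def summarize():
    table = pd.DataFrame({"a": [1, 2, 3]})
    return table, len(table)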

mlrun.code_to_function(name: str = '', project: str = '', tag: str = '', filename: str = '', handler: str = '', kind: str = '', image: str | None = None, code_output: str = '', embed_code: bool = True, description: str = '', requirements: str | List[str] | None = None, categories: List[str] | None = None, labels: Dict[str, str] | None = None, with_doc: bool = True, ignored_tags=None, requirements_file: str = '') MpiRuntimeV1Alpha1 | MpiRuntimeV1 | RemoteRuntime | ServingRuntime | DaskCluster | KubejobRuntime | LocalRuntime | Spark3Runtime | RemoteSparkRuntime | DatabricksRuntime[source]#

Convenience function to insert code and configure an mlrun runtime.

Easiest way to construct a runtime type object. Provides the most often used configuration options for all runtimes as parameters.

Instantiated runtimes are considered 'functions' in mlrun, but they can be anything from nuclio functions to generic Kubernetes pods to Spark jobs. Functions are meant to be focused and, as such, limited in scope and size. Typically, a function can be expressed in a single python module with added support from custom docker images and commands for the environment. The returned runtime object can be further configured if more customization is required.

One of the most important parameters is 'kind'. It specifies the chosen runtime. The options are:

  • local: execute a local python or shell script

  • job: insert the code into a Kubernetes pod and execute it

  • nuclio: insert the code into a real-time serverless nuclio function

  • serving: insert code into orchestrated nuclio function(s) forming a DAG

  • dask: run the specified python code/script as a Dask Distributed job

  • mpijob: run distributed Horovod jobs over the MPI job operator

  • spark: run distributed Spark job using Spark Kubernetes Operator

  • remote-spark: run distributed Spark job on remote Spark service

Learn more about [Kinds of function (runtimes)](../concepts/functions-overview.html).

Parameters:
  • name -- function name, typically best to use hyphen-case

  • project -- project used to namespace the function, defaults to 'default'

  • tag -- function tag to track multiple versions of the same function, defaults to 'latest'

  • filename -- path to .py/.ipynb file, defaults to current jupyter notebook

  • handler -- The default function handler to call for the job or nuclio function. In batch functions (job, mpijob, ..) the handler can also be specified in the .run() command; when not specified, the entire file is executed (as main). For nuclio functions the handler is in the form module:function, defaults to 'main:handler'

  • kind -- function runtime type string - nuclio, job, etc. (see docstring for all options)

  • image -- base docker image to use for building the function container, defaults to None

  • code_output -- specify '.' to generate python module from the current jupyter notebook

  • embed_code -- indicates whether to inject the code directly into the function runtime spec, defaults to True

  • description -- short function description, defaults to ''

  • requirements -- a list of python packages or a path to a pip requirements file, defaults to None

  • requirements_file -- path to a python requirements file

  • categories -- list of categories for mlrun Function Hub, defaults to None

  • labels -- immutable name/value pairs to tag the function with useful metadata, defaults to None

  • with_doc -- indicates whether to document the function parameters, defaults to True

  • ignored_tags -- notebook cells to ignore when converting notebooks to py code (separated by ';')

Returns:

pre-configured function object from a mlrun runtime class

example:

import mlrun

# create job function object from notebook code and add doc/metadata
fn = mlrun.code_to_function("file_utils", kind="job",
                            handler="open_archive", image="mlrun/mlrun",
                            description = "this function opens a zip archive into a local/mounted folder",
                            categories = ["fileutils"],
                            labels = {"author": "me"})

example:

import mlrun
from pathlib import Path

# create file
Path("mover.py").touch()

# create nuclio function object from python module called mover.py
fn = mlrun.code_to_function("nuclio-mover", kind="nuclio",
                            filename="mover.py", image="python:3.7",
                            description="this function moves files from one system to another",
                            requirements=["pandas"],
                            labels={"author": "me"})
mlrun.get_secret_or_env(key: str, secret_provider: Dict | SecretsStore | Callable | None = None, default: str | None = None, prefix: str | None = None) str[source]#

Retrieve value of a secret, either from a user-provided secret store, or from environment variables. The function will retrieve a secret value, attempting to find it according to the following order:

  1. If a secret_provider was provided, attempt to retrieve the secret from it

  2. If an MLRun SecretsStore was provided, query it for the secret key

  3. An environment variable with the same key

  4. An MLRun-generated env. variable, mounted from a project secret (to be used in MLRun runtimes)

  5. The default value

Example:

secrets = { "KEY1": "VALUE1" }
secret = get_secret_or_env("KEY1", secret_provider=secrets)

# Using a function to retrieve a secret
def my_secret_provider(key):
    # some internal logic to retrieve the secret value
    return "VALUE-FROM-PROVIDER"

secret = get_secret_or_env("KEY1", secret_provider=my_secret_provider, default="TOO-MANY-SECRETS")
Parameters:
  • key -- Secret key to look for

  • secret_provider -- Dictionary, callable or SecretsStore to extract the secret value from. If using a callable, it must use the signature callable(key: str)

  • default -- Default value to return if secret was not available through any other means

  • prefix -- When passed, the prefix is added to the secret key.

Returns:

The secret value if found in any of the sources, or default if provided.
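
A further sketch of the environment-variable fallback (step 3 in the order above); the key and value are illustrative:

import os
from mlrun import get_secret_or_env

# With no secret provider given, the lookup falls back to an environment
# variable with the same key
os.environ["KEY2"] = "VALUE2"
secret = get_secret_or_env("KEY2")  # -> "VALUE2"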

mlrun.get_version()[source]#

Get the current mlrun version.
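
example:

import mlrun

# prints the installed mlrun package version string
print(mlrun.get_version())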

mlrun.handler(labels: Dict[str, str] | None = None, outputs: List[str | Dict[str, str]] | None = None, inputs: bool | Dict[str, str | Type] = True)[source]#

MLRun's handler is a decorator that wraps a function and enables setting labels, parsing inputs (mlrun.DataItem) using type hints, and logging returned outputs using log hints.

Notice: this decorator is now applied automatically with the release of mlrun.package. It should not be used manually.

Parameters:
  • labels -- Labels to add to the run. Expecting a dictionary with the label names as keys. Default: None.

  • outputs --

    Log hints (logging configurations) for the function's returned values. Expecting a list of the following values:

    • str - A string in the format of '{key}:{artifact_type}'. If a string was given without ':' it will indicate the key, and the artifact type will be according to the returned value type's default artifact type. The artifact types supported are listed in the relevant type packager.

    • Dict[str, str] - A dictionary of logging configuration. The key 'key' is mandatory for the logged artifact key.

    • None - Do not log the output.

    If the list length is not equal to the total number of returned values from the function, those without log hints will be ignored.

    Default: None - meaning no outputs will be logged.

  • inputs --

    Type hints (parsing configurations) for the arguments passed as inputs via the run method of an MLRun function. Can be passed as a boolean value or a dictionary:

    • True - Parse all found inputs to the assigned type hint in the function's signature. If there is no type hint assigned, the value will remain an mlrun.DataItem.

    • False - Do not parse inputs, leaving the inputs as mlrun.DataItem.

    • Dict[str, Union[Type, str]] - A dictionary with the argument name as key and the expected type to parse the mlrun.DataItem to. The expected type can also be a string, indicating the full module path.

    Default: True - meaning inputs will be parsed from DataItems as long as they are type hinted (see the sketch after this list).
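
A minimal sketch of the inputs option, assuming pandas is installed; the "data" input name and count_rows handler are illustrative:

import mlrun
import pandas as pd

# Parse only the "data" input into a pandas DataFrame; any other inputs
# would stay as mlrun.DataItem objects.
@mlrun.handler(inputs={"data": pd.DataFrame})
def count_rows(data: pd.DataFrame) -> int:
    return len(data)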

Example:

import mlrun
import numpy as np

@mlrun.handler(
    outputs=[
        "my_string",
        None,
        {"key": "my_array", "artifact_type": "file", "file_format": "npy"},
        "my_multiplier: reuslt"
    ]
)
def my_handler(array: np.ndarray, m: int):
    m += 1
    array = array * m
    return "I will be logged", "I won't be logged", array, m

>>> mlrun_function = mlrun.code_to_function("my-code", filename="my_code.py", kind="job")
>>> run_object = mlrun_function.run(
...     handler="my_handler",
...     inputs={"array": "store://my_array_Artifact"},
...     params={"m": 2}
... )
>>> run_object.outputs
{'my_string': 'I will be logged', 'my_array': 'store://...', 'my_multiplier': 3}
mlrun.import_function(url='', secrets=None, db='', project=None, new_name=None)[source]#

Create function object from DB or local/remote YAML file

Functions can be imported from function repositories (mlrun Function Hub (formerly Marketplace) or local db), or be read from a remote URL (http(s), s3, git, v3io, ..) containing the function YAML

special URLs:

function hub:       hub://[{source}/]{name}[:{tag}]
local mlrun db:     db://{project-name}/{name}[:{tag}]

examples:

function = mlrun.import_function("hub://auto-trainer")
function = mlrun.import_function("./func.yaml")
function = mlrun.import_function("https://raw.githubusercontent.com/org/repo/func.yaml")
Parameters:
  • url -- path/url to Function Hub, db or function YAML file

  • secrets -- optional, credentials dict for DB or URL (s3, v3io, ...)

  • db -- optional, mlrun api/db path

  • project -- optional, target project for the function

  • new_name -- optional, override the imported function name

Returns:

function object
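
An additional hedged sketch showing the project and new_name parameters; the project name below is illustrative:

import mlrun

# import a hub function into a specific project under a new name
fn = mlrun.import_function(
    "hub://auto-trainer", project="my-project", new_name="trainer"
)
print(fn.metadata.name)  # -> "trainer"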

mlrun.set_environment(api_path: str | None = None, artifact_path: str = '', access_key: str | None = None, username: str | None = None, env_file: str | None = None, mock_functions: str | None = None)[source]#

Set and test the default config for: api path, artifact_path, and project.

This function will try to read the configuration from the environment/API and merge it with the user-provided project name, artifact path, or api path/access_key. It returns the configured artifact path, which can be used to define sub-paths.

Note: the artifact path is an mlrun data uri (e.g. s3://bucket/path) and cannot be used with file utils.

example:

import mlrun

project_name, artifact_path = mlrun.set_environment()
mlrun.set_environment("http://localhost:8080", artifact_path="./")
mlrun.set_environment(env_file="mlrun.env")
mlrun.set_environment("<remote-service-url>", access_key="xyz", username="joe")
Parameters:
  • api_path -- location/url of mlrun api service

  • artifact_path -- path/url for storing experiment artifacts

  • access_key -- set the remote cluster access key (V3IO_ACCESS_KEY)

  • username -- name of the user to authenticate

  • env_file -- path/url to .env file (holding MLRun config and other env vars), see: set_env_from_file()

  • mock_functions -- set to True to create local/mock functions instead of real containers; set to "auto" to auto-determine based on the presence of k8s/Nuclio

Returns:

default project name and the actual artifact path/url; these can be used to create sub-paths per task or group of artifacts
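
For the mock_functions option, a minimal hedged sketch (no Kubernetes or Nuclio backend is assumed):

import mlrun

# auto-determine whether to mock based on the presence of k8s/Nuclio;
# passing True would always create local/mock functions instead of real containers
mlrun.set_environment(mock_functions="auto")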