Run, build, and deploy functions#

There is a set of methods used to deploy and run project functions. They can be used interactively or inside a pipeline (e.g. Kubeflow). When used inside a pipeline, each method is automatically mapped to the relevant pipeline engine command.

  • run_function() — run a local or remote task as part of a local or remote batch/scheduled job

  • build_function() — deploy an ML function, build a container with its dependencies for use in runs

  • deploy_function() — deploy real-time/online (nuclio or serving based) functions

Use these methods as project methods. For example:

# run the "train" function in myproject
run = myproject.run_function("train", inputs={"data": data_url})  

The first parameter in all three methods is either the function name (in the project) or a function object. Use a function object when you want to run functions that you imported or created ad hoc, or when you want to modify the function spec. For example:

# import a serving function from the Function Hub and deploy a trained model over it
serving = import_function("hub://v2_model_server", new_name="serving")
serving.spec.replicas = 2
deploy = deploy_function(
    serving,
    models=[{"key": "mymodel", "model_path": train.outputs["model"]}],
)

You can use the get_function() method to get the function object and manipulate it, for example:

trainer = project.get_function("train")
trainer.with_limits(mem="2G", cpu=2, gpus=1)
run = project.run_function("train", inputs={"data": data_url}) 


run_function#

Use the run_function() method to run a local or remote batch/scheduled task. The run_function method accepts various parameters such as name, handler, params, inputs, schedule, etc. Alternatively, you can pass a Task object (see: new_task()) that holds all of the parameters and the advanced options.

Functions can host multiple methods (handlers). You can set the default handler per function. You need to specify which handler you intend to call in the run command. You can pass parameters (arguments) or data inputs (such as datasets, feature-vectors, models, or files) to the functions through the run_function method.
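For example, a function file can hold two handlers, and the run call names the one to execute. A minimal sketch (the module, handler, and parameter names below are illustrative, not part of the MLRun API):

```python
# my_funcs.py - a module served as an MLRun function with two handlers
# (all names here are illustrative)


def prep_data(factor: int = 2) -> int:
    """Stand-in data-prep step: scale a constant."""
    return factor * 10


def train_model(lr: float = 0.1) -> float:
    """Stand-in training step: echo the learning rate."""
    return lr


# pick the handler (and pass parameters) per run:
# run = project.run_function("my-func", handler="prep_data", params={"factor": 3})
```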

The run_function() command returns an MLRun RunObject object that you can use to track the job and its results. If you pass the parameter watch=True (default), the command blocks until the job completes.

MLRun also supports iterative jobs that can run and track multiple child jobs (for hyperparameter tasks, AutoML, etc.). See Hyperparameter tuning optimization for details and examples.
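As a sketch of what an iterative run expands to: the hyperparams argument is a grid of parameter lists, and the grid strategy launches one child run per combination (the grid values and selector below are illustrative):

```python
from itertools import product

# the parameter grid you would pass as hyperparams=...
grid = {"lr": [0.01, 0.1], "batch_size": [32, 64]}

# MLRun's grid strategy runs one child job per combination:
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(combos))  # 4 child runs

# the corresponding call (sketch; assumes a "train" function in the project):
# run = project.run_function(
#     "train",
#     hyperparams=grid,
#     hyper_param_options=mlrun.model.HyperParamOptions(selector="max.accuracy"),
# )
```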

Read further details on running tasks and getting their results.

Run/simulate functions locally:

Functions can also run and be debugged locally by using the local runtime or by setting the local=True parameter in the run() method (for batch functions).

Usage examples:

# create a project with two functions (local and from Function Hub)
project = mlrun.new_project(project_name, "./proj")
project.set_function("", "prep", image="mlrun/mlrun")
project.set_function("hub://auto_trainer", "train")

# run functions (refer to them by name)
run1 = project.run_function("prep", params={"x": 7}, inputs={'data': data_url})
run2 = project.run_function("train", inputs={"dataset": run1.outputs["data"]})

Example with new_task:

import mlrun
project = mlrun.get_or_create_project('example-project')

from mlrun import RunTemplate, new_task, mlconf
from os import path
artifact_path = path.join(mlconf.artifact_path, '{{run.uid}}')
def handler(context, param, model_names):
    context.logger.info("Running handler")
    context.set_label('category', 'tests')
    for model_name, file_name in model_names:
        context.log_artifact(model_name, body=param.encode(), local_path=file_name)
func = project.set_function("my-func", kind="job", image="mlrun/mlrun")
task = new_task(name='mytask', handler=handler, artifact_path=artifact_path, project='example-project')
run_object = project.run_function("my-func", local=True, base_task=task)

See mlrun.model.new_task() for a description of the new_task parameters.


build_function#

The build_function() method is used to deploy an ML function and build a container with its dependencies for use in runs.


# build the "trainer" function image (based on the specified requirements and code repo)
project.build_function("trainer")

The build_function() method accepts different parameters that can add to, or override, the function build spec. You can specify the target or base image, extra docker commands, the builder environment, and source credentials (builder_env), etc.

See further details and examples in Build function image.
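As a hedged sketch of the extra build options, the parameters take plain lists and dicts (the argument values below are illustrative; the parameter names are those of build_function()):

```python
# illustrative build inputs
requirements = ["pandas~=2.0", "scikit-learn"]           # extra python packages
commands = ["apt-get update && apt-get install -y git"]  # extra docker commands
builder_env = {"GIT_TOKEN": "<token>"}                   # source credentials

# the build call (sketch; assumes a "trainer" function in the project):
# project.build_function(
#     "trainer",
#     base_image="mlrun/mlrun",
#     requirements=requirements,
#     commands=commands,
#     builder_env=builder_env,
# )
```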


deploy_function#

The deploy_function() method is used to deploy real-time/online (nuclio or serving) functions and pipelines. Read more about Real-time serving pipelines.

Basic example:

# Deploy a real-time nuclio function ("myapi")
deployment = project.deploy_function("myapi")

# invoke the deployed function (using HTTP request) 
resp = deployment.function.invoke("/do")

You can provide the env dict with: extra environment variables; models list to specify specific models and their attributes (in the case of serving functions); builder environment; and source credentials (builder_env).
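The expected shapes can be sketched as plain Python values (the model path and env values below are illustrative):

```python
# extra environment variables for the deployed function
env = {"LOG_LEVEL": "debug"}

# model attributes for a serving function; model_path is a hypothetical store URI
models = [
    {
        "key": "mymodel",
        "model_path": "store://models/myproj/mymodel:latest",
        "class_name": "mlrun.frameworks.sklearn.SklearnModelServer",
    }
]

# the deploy call (sketch; assumes a "serving" function in the project):
# deployment = project.deploy_function("serving", models=models, env=env)
```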

Example of using deploy_function inside a pipeline, after the train step, to generate a model:

# Deploy the trained model (from the "train" step) as a serverless serving function
serving_fn = mlrun.new_function("serving", image="mlrun/mlrun", kind="serving")
mlrun.deploy_function(
    serving_fn,
    models=[
        {
            "key": model_name,
            "model_path": train.outputs["model"],
            "class_name": "mlrun.frameworks.sklearn.SklearnModelServer",
        }
    ],
)


If you want to create a simulated (mock) function instead of a real Kubernetes service, set the mock flag to True. See the deploy_function API.

Default image#

You can set a default image for the project. This image will be used for deploying and running any function that does not have an explicit image assigned, and replaces MLRun's default image of mlrun/mlrun. To set the default image use the set_default_image() method with the name of the image.

The default image is applied to the functions in the process of enriching the function prior to running or deploying. Functions will therefore use the default image set in the project at the time of their execution, not the image that was set when the function was added to the project.

For example:

 project = mlrun.new_project(project_name, "./proj")
 # use v1 of a pre-built image as default
 project.set_default_image("myrepo/my-prebuilt-image:v1")
 # set a function without an image, will use the project's default image
 project.set_function("", "prep")

 # function will run with the "myrepo/my-prebuilt-image:v1" image
 run1 = project.run_function("prep", params={"x": 7}, inputs={'data': data_url})


 # replace the default image with a newer v2
 project.set_default_image("myrepo/my-prebuilt-image:v2")
 # function will now run using the v2 version of the image
 run2 = project.run_function("prep", params={"x": 7}, inputs={'data': data_url})

Read more about Images and their usage in MLRun.

Image build configuration#

Use the set_default_image() function to configure a project to use an existing image. The configuration for building this default image can be contained within the project, by using the build_config() and build_image() functions.

The project build configuration is maintained in the project object. When saving, exporting, and importing the project, these configurations are carried over with it. This makes it simple to transport a project between systems while ensuring that the needed runtime images are built and ready for execution.

When using build_config(), build configurations can be passed along with the resulting image name, and these are used to build the image. The image name is assigned following these rules, based on the project configuration and provided parameters:

  1. If provided, the name passed in the image parameter of build_config().

  2. The project's default image name, if configured using set_default_image().

  3. The value set in MLRun's default_project_image_name config parameter - by default this value is .mlrun-project-image-{name} with the project name as template parameter.
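The resolution order above can be sketched as a small helper (the function below is illustrative, not MLRun internals):

```python
def resolve_image_name(param_image, project_default, project_name):
    """Mimic the three-step image-name resolution described above."""
    if param_image:                       # 1. explicit image parameter
        return param_image
    if project_default:                   # 2. project default image
        return project_default
    # 3. MLRun's default_project_image_name template
    return f".mlrun-project-image-{project_name}"


print(resolve_image_name(None, None, "myproj"))  # .mlrun-project-image-myproj
```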

For example:

 # Set image config for current project object, using base mlrun image with additional requirements.
 image_name = ".my-project-image"
 project.build_config(
     image=image_name,
     set_as_default=True,
     base_image="mlrun/mlrun",
     requirements=["pandas"],  # example requirement
 )

 # Export the project configuration. The yaml file will contain the build configuration
 proj_file_path = "~/mlrun/my-project/project.yaml"
 project.export(proj_file_path)

This project can then be imported and the default image can be built:

 # Import the project as a new project with a different name
 new_project = mlrun.load_project("~/mlrun/my-project", name="my-other-project")
 # Build the default image for the project, based on project build config
 new_project.build_image()

 # Set a new function and run it (new function uses the my-project-image image built previously)
 new_project.set_function("", name="scores", kind="job", handler="handler")
 new_project.run_function("scores")


The build_image() function builds an image using the existing build configuration. This method can also be used to set the build configuration and build the image based on it - in a single step.

If you set a source for the project (for example, a git source) and set pull_at_runtime=False, the generated image contains the project source in it. For example, this code builds the .some-project-image image with the source in it:

project = mlrun.get_or_create_project(
    name="project-name", context="./"
)
# illustrative git source; replace with your repository URL
project.set_source(source="git://github.com/myorg/myrepo.git#main", pull_at_runtime=False)
project.build_image(image=".some-project-image")


And now you can run a function based on the project code without having to specify an image:

func = project.set_function(handler="package.function", name="func", kind="job")
project.run_function("func", params={...})

When using set_as_default=False, any build config provided is still kept in the project object, but the generated image name is not set as the default image for this project. For example:

image_name = ".temporary-image"
project.build_image(image=image_name, set_as_default=False)

# Create a function using the temp image name
project.set_function("", name="scores", kind="job", handler="handler", image=image_name)