mlrun.db#
- class mlrun.db.httpdb.HTTPRunDB(url)[source]#
Bases: RunDBInterface
Interface for accessing and manipulating the mlrun persistent store, maintaining the full state and catalog of objects that MLRun uses. The HTTPRunDB class serves as a client-side proxy to the MLRun API service, which maintains the actual data-store; it accesses the server through REST APIs.
The class provides functions for accessing and modifying the various objects that are used by MLRun in its operation. The functions provided follow some standard guidelines, which are:
- Every object in MLRun exists in the context of a project (except projects themselves). When referencing an object through any API, a project name must be provided. The default for most APIs is an empty project name, which will be replaced by the name of the default project (usually default). Therefore, if performing an API call to list functions, for example, without providing a project name, the result will not be functions from all projects but rather from the default project.
- Many objects can be assigned labels, and listed/queried by label. The label parameter for query APIs allows for listing objects that:
  - Have a specific label, by asking for label="<label_name>". In this case the actual value of the label doesn't matter and every object with that label will be returned.
  - Have a label with a specific value, by specifying label="<label_name>=<label_value>". In this case only objects whose label matches the value will be returned.
- Most objects have a create method as well as a store method. Create can only be called when such an object does not exist yet, while store allows for either creating a new object or overwriting an existing object.
- Some objects have a versioned option, in which case overwriting the same object with a different version of it does not delete the previous version, but rather creates a new version of the object and keeps both versions. Versioned objects usually have a uid property which is based on their content and allows referencing a specific version of an object (in addition to tagging objects, which also allows for easy referencing).
- Many objects have both a store function and a patch function. These are used in the same way as the corresponding REST verbs - a store is passed a full object and will basically perform a PUT operation, replacing the full object (if it exists), while patch receives just a dictionary containing the differences to be applied to the object, and will merge those changes into the existing object. The patch operation also has a strategy assigned to it which determines how the merge logic should behave. The strategy can be either replace or additive. For further details on those strategies, refer to https://pypi.org/project/mergedeep/
- RETRIABLE_POST_PATHS = ['\\/?projects\\/.+\\/artifacts\\/.+\\/.+', '\\/?run\\/.+\\/.+']#
- abort_run(uid, project='', iter=0, timeout=45, status_text='')[source]#
Abort a running run: removes the run's runtime resources and marks its state as aborted.
- Returns:
BackgroundTask.
- api_call(method, path, error=None, params=None, body=None, json=None, headers=None, timeout=45, version=None) Response [source]#
Perform a direct REST API call on the mlrun API server.
Caution
For advanced usage only - prefer using the various APIs exposed through this class rather than directly invoking REST calls.
- Parameters:
method -- REST method (POST, GET, PUT...)
path -- Path to the endpoint to execute, for example "projects"
error -- Error to return if API invocation fails
params -- REST query parameters, passed as a dictionary: {"<param-name>": "<param-value>"}
body -- Payload to be passed in the call. If using JSON objects, prefer using the json param
json -- JSON payload to be passed in the call
headers -- REST headers, passed as a dictionary: {"<header-name>": "<header-value>"}
timeout -- API call timeout
version -- API version to use. None (the default) means using the default value from config; for an un-versioned API, pass an empty string.
- Returns:
requests.Response HTTP response object
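Example (a minimal sketch; db is an HTTPRunDB instance obtained via mlrun.get_run_db(), and the format query parameter is an illustrative assumption):
# Hypothetical raw REST call listing project names
resp = db.api_call("GET", "projects", params={"format": "name_only"})
print(resp.status_code, resp.json())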
- connect(secrets=None)[source]#
Connect to the MLRun API server. Must be called prior to executing any other method. The code uses the URL of the API server from the configuration - config.dbpath.
For example:
config.dbpath = config.dbpath or "http://mlrun-api:8080"
db = get_run_db().connect()
- create_feature_set(feature_set: dict | FeatureSet | FeatureSet, project='', versioned=True) dict [source]#
Create a new FeatureSet and save it in the mlrun DB. The feature-set must not previously exist in the DB.
- Parameters:
feature_set -- The new FeatureSet to create.
project -- Name of the project this feature-set belongs to.
versioned -- Whether to maintain versions for this feature-set. All versions of a versioned object will be kept in the DB and can be retrieved until explicitly deleted.
- Returns:
The FeatureSet object (as dict).
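Example (a minimal sketch, assuming db was obtained via mlrun.get_run_db(); the feature-set definition and project name are illustrative):
import mlrun.feature_store as fstore

# Define a feature-set with a single entity and store it as a new object
stocks_set = fstore.FeatureSet("stocks", entities=[fstore.Entity("ticker")])
result = db.create_feature_set(stocks_set, project="my-project", versioned=True)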
- create_feature_vector(feature_vector: dict | FeatureVector | FeatureVector, project='', versioned=True) dict [source]#
Create a new FeatureVector and save it in the mlrun DB.
- Parameters:
feature_vector -- The new FeatureVector to create.
project -- Name of the project this feature-vector belongs to.
versioned -- Whether to maintain versions for this feature-vector. All versions of a versioned object will be kept in the DB and can be retrieved until explicitly deleted.
- Returns:
The FeatureVector object (as dict).
- create_hub_source(source: dict | IndexedHubSource)[source]#
Add a new hub source.
MLRun maintains an ordered list of hub sources ("sources"). Each source has its details registered and its order within the list. When creating a new source, the special order -1 can be used to mark this source as last in the list. However, once the source is in the MLRun list, its order will always be > 0.
The global hub source always exists in the list, and is always the last source (order = -1). It cannot be modified nor can it be moved to another order in the list.
The source object may contain credentials which are needed to access the datastore where the source is stored. These credentials are not kept in the MLRun DB, but are stored inside a Kubernetes secret object maintained by MLRun. They are not returned through any API from MLRun.
Example:
import mlrun.common.schemas

# Add a private source as the last one (will be #1 in the list)
private_source = mlrun.common.schemas.IndexedHubSource(
    order=-1,
    source=mlrun.common.schemas.HubSource(
        metadata=mlrun.common.schemas.HubObjectMetadata(
            name="priv", description="a private source"
        ),
        spec=mlrun.common.schemas.HubSourceSpec(
            path="/local/path/to/source", channel="development"
        ),
    ),
)
db.create_hub_source(private_source)

# Add another source as 1st in the list - will push previous one to be #2
another_source = mlrun.common.schemas.IndexedHubSource(
    order=1,
    source=mlrun.common.schemas.HubSource(
        metadata=mlrun.common.schemas.HubObjectMetadata(
            name="priv-2", description="another source"
        ),
        spec=mlrun.common.schemas.HubSourceSpec(
            path="/local/path/to/source/2",
            channel="development",
            credentials={...},
        ),
    ),
)
db.create_hub_source(another_source)
- Parameters:
source -- The source and its order, of type IndexedHubSource, or in dictionary form.
- Returns:
The source object as inserted into the database, with credentials stripped.
- create_model_endpoint(project: str, endpoint_id: str, model_endpoint: ModelEndpoint | dict)[source]#
Creates a DB record with the given model_endpoint record.
- Parameters:
project -- The name of the project.
endpoint_id -- The id of the endpoint.
model_endpoint -- An object representing the model endpoint.
- create_project(project: dict | MlrunProject | Project) MlrunProject [source]#
Create a new project. A project with the same name must not exist prior to creation.
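Example (a minimal sketch; the project name is illustrative, and the dictionary form follows the metadata.name layout used by MLRun project objects):
project = db.create_project({"metadata": {"name": "my-new-project"}})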
- create_project_secrets(project: str, provider: str | SecretProviderName = SecretProviderName.kubernetes, secrets: dict | None = None)[source]#
Create project-context secrets using either the vault or kubernetes provider. When used with Vault, this will create the needed Vault structures for storing secrets in project-context, and store a set of secret values. The method generates a Kubernetes service-account and the Vault authentication structures that are required for function pods to authenticate with Vault and be able to extract secret values passed as part of their context.
Note
This method used with Vault is currently in technical preview, and requires a HashiCorp Vault infrastructure properly set up and connected to the MLRun API server.
When used with Kubernetes, this will make sure that the project-specific k8s secret is created, and will populate it with the secrets provided, replacing their values if they exist.
- Parameters:
project -- The project context for which to generate the infra and store secrets.
provider -- The name of the secrets-provider to work with. Accepts a SecretProviderName enum.
secrets -- A set of secret values to store. Example:
secrets = {"password": "myPassw0rd", "aws_key": "111222333"}
db.create_project_secrets(
    "project1",
    provider=mlrun.common.schemas.SecretProviderName.kubernetes,
    secrets=secrets,
)
- create_schedule(project: str, schedule: ScheduleInput)[source]#
Create a new schedule on the given project. The details of the actual object to schedule, as well as the schedule itself, are within the schedule object provided. The ScheduleCronTrigger follows the guidelines in https://apscheduler.readthedocs.io/en/3.x/modules/triggers/cron.html. It also supports a from_crontab() function that accepts a crontab-formatted string (see https://en.wikipedia.org/wiki/Cron for more information on the format; note that weekday 0 is always Monday).
Example:
from mlrun.common import schemas

# Execute the get_data_func function every Tuesday at 15:30
schedule = schemas.ScheduleInput(
    name="run_func_on_tuesdays",
    kind="job",
    scheduled_object=get_data_func,
    cron_trigger=schemas.ScheduleCronTrigger(day_of_week="tue", hour=15, minute=30),
)
db.create_schedule(project_name, schedule)
- create_user_secrets(user: str, provider: str | SecretProviderName = SecretProviderName.vault, secrets: dict | None = None)[source]#
Create a user-context secret in Vault. Please refer to create_project_secrets() for more details on the status of this functionality.
Note
This method is currently in technical preview, and requires a HashiCorp Vault infrastructure properly set up and connected to the MLRun API server.
- Parameters:
user -- The user context for which to generate the infra and store secrets.
provider -- The name of the secrets-provider to work with. Currently only vault is supported.
secrets -- A set of secret values to store within the Vault.
- del_artifact(key, tag=None, project='', tree=None, uid=None, deletion_strategy: ArtifactsDeletionStrategies = ArtifactsDeletionStrategies.metadata_only, secrets: dict | None = None, iter=None)[source]#
Delete an artifact.
- Parameters:
key -- Identifying key of the artifact.
tag -- Tag of the artifact.
project -- Project that the artifact belongs to.
tree -- The tree which generated this artifact.
uid -- A unique ID for this specific version of the artifact (the uid that was generated in the backend)
deletion_strategy -- The artifact deletion strategy type.
secrets -- Credentials needed to access the artifact data.
- del_artifacts(name=None, project=None, tag=None, labels=None, days_ago=0, tree=None)[source]#
Delete artifacts referenced by the parameters.
- Parameters:
name -- Name of artifacts to delete. Note that this is a like query, and is case-insensitive. See list_artifacts() for more details.
project -- Project that the artifacts belong to.
tag -- Choose artifacts that are assigned this tag.
labels -- Choose artifacts which are labeled.
days_ago -- This parameter is deprecated and not used.
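Example (an illustrative sketch; deletes artifacts matching the like query "model" in the "iris" project that carry the "latest" tag):
db.del_artifacts(name="model", project="iris", tag="latest")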
- del_run(uid, project='', iter=0)[source]#
Delete details of a specific run from DB.
- Parameters:
uid -- Unique ID for the specific run to delete.
project -- Project that the run belongs to.
iter -- Iteration within a specific task.
- del_runs(name=None, project=None, labels=None, state=None, days_ago=0)[source]#
Delete a group of runs identified by the parameters of the function.
Example:
db.del_runs(state="completed")
- Parameters:
name -- Name of the task which the runs belong to.
project -- Project to which the runs belong.
labels -- Filter runs that are labeled using these specific label values.
state -- Filter only runs which are in this state.
days_ago -- Filter runs whose start time is newer than this parameter.
- delete_alert_config(alert_name: str, project='')[source]#
Delete an alert.
- Parameters:
alert_name -- The name of the alert to delete.
project -- The project that the alert belongs to.
- delete_api_gateway(name, project=None)[source]#
Deletes an API gateway.
- Parameters:
name -- API gateway name
project -- Project name
- delete_artifacts_tags(artifacts, project: str, tag_name: str)[source]#
Delete tag from a list of artifacts.
- Parameters:
artifacts -- The artifacts to delete the tag from. Can be a list of Artifact objects or dictionaries, or a single object.
project -- Project which contains the artifacts.
tag_name -- The tag to delete from the artifacts.
- delete_feature_set(name, project='', tag=None, uid=None)[source]#
Delete a FeatureSet object from the DB. If tag or uid is specified, then just the version referenced by them will be deleted. Using both is not allowed. If neither is specified, then all instances of the object whose name is name will be deleted.
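Example (illustrative names; omitting both tag and uid would delete all versions of the object):
# Delete only the version of the "stocks" feature-set tagged "latest"
db.delete_feature_set("stocks", project="my-project", tag="latest")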
- delete_feature_vector(name, project='', tag=None, uid=None)[source]#
Delete a FeatureVector object from the DB. If tag or uid is specified, then just the version referenced by them will be deleted. Using both is not allowed. If neither is specified, then all instances of the object whose name is name will be deleted.
- delete_function(name: str, project: str = '')[source]#
Delete a function belonging to a specific project.
- delete_hub_source(source_name: str)[source]#
Delete a hub source from the DB. The source will be deleted from the list, and any following sources will be promoted - for example, if the 1st source is deleted, the 2nd source will become #1 in the list. The global hub source cannot be deleted.
- Parameters:
source_name -- Name of the hub source to delete.
- delete_model_endpoint(project: str, endpoint_id: str)[source]#
Deletes the DB record of a given model endpoint; project and endpoint_id are used for lookup.
- Parameters:
project -- The name of the project
endpoint_id -- The id of the endpoint
- delete_model_monitoring_function(project: str, functions: list[str]) bool [source]#
Delete a model monitoring application.
- Parameters:
functions -- List of the model monitoring functions to delete.
project -- Project name.
- Returns:
True if the deletion was successful, False otherwise.
- delete_objects_tag(project: str, tag_name: str, tag_objects: TagObjects | dict)[source]#
Delete a tag from a list of objects.
- Parameters:
project -- Project which contains the objects.
tag_name -- The tag to delete from the objects.
tag_objects -- The objects to delete the tag from.
- delete_project(name: str, deletion_strategy: str | DeletionStrategy = DeletionStrategy.restricted) None [source]#
Delete a project.
- Parameters:
name -- Name of the project to delete.
deletion_strategy --
How to treat resources related to the project. Possible values are:
restrict (default) - Project must not have any related resources when deleted. If using this mode while related resources exist, the operation will fail.
cascade - Automatically delete all related resources when deleting the project.
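Example (a sketch of a cascading delete; it assumes the DeletionStrategy enum is importable from mlrun.common.schemas, consistent with the other schema objects used on this page):
import mlrun.common.schemas

# Delete the project together with all of its related resources
db.delete_project(
    "my-old-project",
    deletion_strategy=mlrun.common.schemas.DeletionStrategy.cascade,
)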
- delete_project_secrets(project: str, provider: str | SecretProviderName = SecretProviderName.kubernetes, secrets: list[str] | None = None)[source]#
Delete project-context secrets from Kubernetes.
- Parameters:
project -- The project name.
provider -- The name of the secrets-provider to work with. Currently only kubernetes is supported.
secrets -- A list of secret names to delete. An empty list will delete all secrets assigned to this specific project.
- delete_runtime_resources(project: str | None = None, label_selector: str | None = None, kind: str | None = None, object_id: str | None = None, force: bool = False, grace_period: int | None = None) dict[str, dict[str, mlrun.common.schemas.runtime_resource.RuntimeResources]] [source]#
Delete all runtime resources which are in terminal state.
- Parameters:
project -- Delete only runtime resources of a specific project, by default None, which will delete only from the projects you're authorized to delete from.
label_selector -- Delete only runtime resources matching the label selector.
kind -- The kind of runtime to delete. May be one of ['dask', 'job', 'spark', 'remote-spark', 'mpijob']
object_id -- The identifier of the mlrun object whose runtime resources should be deleted. For most function runtimes, runtime resources are per run, for which the identifier is the run's UID. For the dask runtime, the runtime resources are per function, for which the identifier is the function's name.
force -- Force deletion - delete the runtime resource even if it's not in terminal state or if the grace period didn't pass.
grace_period -- Grace period given to the runtime resource before they are actually removed, counted from the moment they moved to terminal state (defaults to mlrun.mlconf.runtime_resources_deletion_grace_period).
- Returns:
GroupedByProjectRuntimeResourcesOutput listing the runtime resources that were removed.
- deploy_histogram_data_drift_app(project: str, image: str = 'mlrun/mlrun') None [source]#
Deploy the histogram data drift application.
- Parameters:
project -- Project name.
image -- The image on which the application will run.
- deploy_nuclio_function(func: RemoteRuntime, builder_env: dict | None = None)[source]#
Deploy a Nuclio function.
- Parameters:
func -- Function to build.
builder_env -- Kaniko builder pod env vars dict (for config/credentials)
- disable_model_monitoring(project: str, delete_resources: bool = True, delete_stream_function: bool = False, delete_histogram_data_drift_app: bool = True, delete_user_applications: bool = False, user_application_list: list[str] | None = None) bool [source]#
Disable the model monitoring application controller, writer, stream, histogram data drift application, and the user's application functions, according to the given params.
- Parameters:
project -- Project name.
delete_resources -- If True, delete the model monitoring controller & writer functions. Default True.
delete_stream_function -- If True, delete the model monitoring stream function. Use with care: deleting this function can cause data loss if you later re-enable the model monitoring capability for the project. Default False.
delete_histogram_data_drift_app -- If True, delete the default histogram-based data drift application. Default True.
delete_user_applications -- If True, delete the user's model monitoring applications according to user_application_list. Default False.
user_application_list -- List of the user's model monitoring applications to disable. Defaults to all applications. Note: you have to set delete_user_applications to True in order to delete the desired applications.
- Returns:
True if the deletion was successful, False otherwise.
- enable_model_monitoring(project: str, base_period: int = 10, image: str = 'mlrun/mlrun', deploy_histogram_data_drift_app: bool = True, rebuild_images: bool = False, fetch_credentials_from_sys_config: bool = False) None [source]#
Deploy model monitoring application controller, writer and stream functions. While the main goal of the controller function is to handle the monitoring processing and triggering applications, the goal of the model monitoring writer function is to write all the monitoring application results to the databases. The stream function goal is to monitor the log of the data stream. It is triggered when a new log entry is detected. It processes the new events into statistics that are then written to statistics databases.
- Parameters:
project -- Project name.
base_period -- The time period in minutes in which the model monitoring controller function triggers. By default, the base period is 10 minutes.
image -- The image of the model monitoring controller, writer & monitoring stream functions, which are real time nuclio functions. By default, the image is mlrun/mlrun.
deploy_histogram_data_drift_app -- If true, deploy the default histogram-based data drift application.
rebuild_images -- If true, force rebuild of model monitoring infrastructure images.
fetch_credentials_from_sys_config -- If true, fetch the credentials from the system configuration.
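Example (a minimal sketch; the project name and 5-minute period are illustrative):
# Deploy the controller, writer and stream functions with a 5-minute base period
db.enable_model_monitoring(project="my-project", base_period=5)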
- function_status(project, name, kind, selector)[source]#
Retrieve the status of a function being executed remotely (relevant to dask functions).
- Parameters:
project -- The project of the function
name -- The name of the function
kind -- The kind of the function; currently dask is supported.
selector -- Selector clause to be applied to the Kubernetes status query to filter the results.
- generate_event(name: str, event_data: dict | Event, project='')[source]#
Generate an event.
- Parameters:
name -- The name of the event.
event_data -- The data of the event.
project -- The project that the event belongs to.
- get_alert_config(alert_name: str, project='') AlertConfig [source]#
Retrieve an alert.
- Parameters:
alert_name -- The name of the alert to retrieve.
project -- The project that the alert belongs to.
- Returns:
The alert object.
- get_alert_template(template_name: str) AlertTemplate [source]#
Retrieve a specific alert template.
- Parameters:
template_name -- The name of the template to retrieve.
- Returns:
The template object.
- get_api_gateway(name, project=None) APIGateway [source]#
Returns an API gateway.
- Parameters:
name -- API gateway name
project -- Optional project name to filter by; if not passed, the default project is used.
- Returns:
APIGateway.
- static get_api_path_prefix(version: str | None = None) str [source]#
- Parameters:
version -- API version to use. None (the default) means using the default value from mlrun.config; for an un-versioned API, pass an empty string.
- get_background_task(name: str) BackgroundTask [source]#
Retrieve updated information on a background task being executed.
- get_builder_status(func: BaseRuntime, offset: int = 0, logs: bool = True, last_log_timestamp: float = 0.0, verbose: bool = False)[source]#
Retrieve the status of a build operation currently in progress.
- Parameters:
func -- Function object that is being built.
offset -- Offset into the build logs to retrieve logs from.
logs -- Whether to retrieve build logs.
last_log_timestamp -- Last timestamp of logs that were already retrieved. Function will return only logs later than this parameter.
verbose -- Add verbose logs into the output.
- Returns:
The following parameters:
Text of builder logs.
Timestamp of last log retrieved, to be used in subsequent calls to this function.
The function also updates internal members of the func object to reflect build process info.
- get_feature_set(name: str, project: str = '', tag: str | None = None, uid: str | None = None) FeatureSet [source]#
Retrieve a FeatureSet object. If neither tag nor uid is specified, then the object tagged latest will be retrieved.
- Parameters:
name -- Name of object to retrieve.
project -- Project the FeatureSet belongs to.
tag -- Tag of the specific object version to retrieve.
uid -- uid of the object to retrieve (can only be used for versioned objects).
- get_feature_vector(name: str, project: str = '', tag: str | None = None, uid: str | None = None) FeatureVector [source]#
Return a specific feature-vector referenced by its tag or uid. If none are provided, the latest tag will be used.
- get_function(name, project='', tag=None, hash_key='')[source]#
Retrieve details of a specific function, identified by its name and potentially a tag or function hash.
- get_hub_asset(source_name: str, item_name: str, asset_name: str, version: str | None = None, tag: str = 'latest')[source]#
Get hub asset from item.
- Parameters:
source_name -- Name of source.
item_name -- Name of the item which holds the asset.
asset_name -- Name of the asset to retrieve.
version -- Get a specific version of the item. Default is None.
tag -- Get a specific version of the item identified by tag. Default is latest.
- Returns:
HTTP response with the asset in the content attribute.
- get_hub_catalog(source_name: str, version: str | None = None, tag: str | None = None, force_refresh: bool = False)[source]#
Retrieve the item catalog for a specified hub source. The list of items can be filtered according to various criteria, using the items' metadata.
- Parameters:
source_name -- Name of the source.
version -- Filter items according to their version.
tag -- Filter items based on tag.
force_refresh -- Make the server fetch the catalog from the actual hub source, rather than rely on cached information which may exist from previous get requests. For example, if the source was re-built, this will make the server get the updated information. Default is False.
- Returns:
HubCatalog object, which is essentially a list of HubItem entries.
- get_hub_item(source_name: str, item_name: str, version: str | None = None, tag: str = 'latest', force_refresh: bool = False)[source]#
Retrieve a specific hub item.
- Parameters:
source_name -- Name of source.
item_name -- Name of the item to retrieve, as it appears in the catalog.
version -- Get a specific version of the item. Default is None.
tag -- Get a specific version of the item identified by tag. Default is latest.
force_refresh -- Make the server fetch the information from the actual hub source, rather than rely on cached information. Default is False.
- Returns:
HubItem.
- get_hub_source(source_name: str)[source]#
Retrieve a hub source from the DB.
- Parameters:
source_name -- Name of the hub source to retrieve.
- get_log(uid, project='', offset=0, size=None)[source]#
Retrieve 1 MB of log data.
- Parameters:
uid -- Log unique ID
project -- Project name to which the log belongs
offset -- Retrieve a partial log: get up to size bytes starting at offset offset from the beginning of the log (must be >= 0)
size -- If set to -1, will retrieve and print all data to the end of the log in chunks of 1 MB each.
- Returns:
The following objects:
state - The state of the runtime object which generates this log, if it exists. In case no known state exists, this will be unknown.
content - The actual log content.
In case size = -1, returns the state and the final offset.
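Example (a sketch of reading a log in chunks; the run UID and project name are illustrative, and it assumes content is returned as bytes):
offset = 0
while True:
    state, content = db.get_log("my-run-uid", project="my-project", offset=offset)
    if not content:
        break
    print(content.decode(errors="replace"), end="")
    offset += len(content)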
- get_log_size(uid, project='')[source]#
Retrieve log size in bytes.
- Parameters:
uid -- Run UID
project -- Project name to which the log belongs
- Returns:
The log file size in bytes for the given run UID.
- get_model_endpoint(project: str, endpoint_id: str, start: str | None = None, end: str | None = None, metrics: list[str] | None = None, feature_analysis: bool = False) ModelEndpoint [source]#
Returns a single ModelEndpoint object with additional metrics and feature related data.
- Parameters:
project -- The name of the project
endpoint_id -- The unique id of the model endpoint.
start -- The start time of the metrics. Can be represented by a string containing an RFC 3339 time, a Unix timestamp in milliseconds, a relative time ('now' or 'now-[0-9]+[mhd]', where m = minutes, h = hours, 'd' = days, and 's' = seconds), or 0 for the earliest time.
end -- The end time of the metrics. Can be represented by a string containing an RFC 3339 time, a Unix timestamp in milliseconds, a relative time ('now' or 'now-[0-9]+[mhd]', where m = minutes, h = hours, 'd' = days, and 's' = seconds), or 0 for the earliest time.
metrics -- A list of metrics to return for the model endpoint. There are pre-defined metrics for model endpoints such as predictions_per_second and latency_avg_5m, but also custom metrics defined by the user. Please note that these metrics are stored in the time series DB and the results will appear under model_endpoint.spec.metrics.
feature_analysis -- When True, the base feature statistics and current feature statistics will be added to the output of the resulting object.
- Returns:
A ModelEndpoint object.
- get_nuclio_deploy_status(func: RemoteRuntime, last_log_timestamp: float = 0.0, verbose: bool = False)[source]#
Retrieve the status of a deploy operation currently in progress.
- Parameters:
func -- Function object that is being built.
last_log_timestamp -- Last timestamp of logs that were already retrieved. Function will return only logs later than this parameter.
verbose -- Add verbose logs into the output.
- Returns:
The following parameters:
Text of builder logs.
Timestamp of last log retrieved, to be used in subsequent calls to this function.
- get_pipeline(run_id: str, namespace: str | None = None, timeout: int = 30, format_: str | PipelineFormat = PipelineFormat.summary, project: str | None = None)[source]#
Retrieve details of a specific pipeline using its run ID (as provided when the pipeline was executed).
- get_project(name: str) MlrunProject [source]#
Get details for a specific project.
- get_project_background_task(project: str, name: str) BackgroundTask [source]#
Retrieve updated information on a project background task being executed.
- get_schedule(project: str, name: str, include_last_run: bool = False) ScheduleOutput [source]#
Retrieve details of the schedule in question. Besides returning the details of the schedule object itself, this function also returns the next scheduled run for this specific schedule, as well as potentially the results of the last run executed through this schedule.
- Parameters:
project -- Project name.
name -- Name of the schedule object to query.
include_last_run -- Whether to include the results of the schedule's last run in the response.
- get_workflow_id(project: str, name: str, run_id: str, engine: str = '')[source]#
Retrieve workflow id from the uid of the workflow runner.
- Parameters:
project -- project name
name -- workflow name
run_id -- the id of the workflow runner - the job that runs the workflow
engine -- pipeline runner
- Returns:
GetWorkflowResponse.
- invoke_schedule(project: str, name: str)[source]#
Execute the object referenced by the schedule immediately.
- kind = 'http'#
- list_alert_templates() list[mlrun.common.schemas.alert.AlertTemplate] [source]#
Retrieve list of all alert templates.
- Returns:
All the alert template objects in the database.
- list_alerts_configs(project='') list[mlrun.alerts.alert.AlertConfig] [source]#
Retrieve list of alerts of a project.
- Parameters:
project -- The project name.
- Returns:
All the alert objects of the project.
- list_api_gateways(project=None) APIGatewaysOutput [source]#
Returns a list of Nuclio API gateways.
- Parameters:
project -- Optional project name to filter by; if not passed, the default project is used.
- Returns:
APIGateways.
- list_artifact_tags(project=None, category: str | ArtifactCategories | None = None) list[str] [source]#
Return a list of all the tags assigned to artifacts in the scope of the given project.
- list_artifacts(name=None, project=None, tag=None, labels: dict[str, str] | list[str] | None = None, since: datetime | None = None, until: datetime | None = None, iter: int | None = None, best_iteration: bool = False, kind: str | None = None, category: str | ArtifactCategories | None = None, tree: str | None = None, producer_uri: str | None = None, format_: ArtifactFormat = 'full', limit: int | None = None) ArtifactList [source]#
List artifacts filtered by various parameters.
Examples:
# Show the latest version of all artifacts in a project
latest_artifacts = db.list_artifacts("", tag="latest", project="iris")

# Check different artifact versions for a specific artifact
result_versions = db.list_artifacts("results", tag="*", project="iris")

# Show artifacts with label filters - both uploaded and of binary type
result_labels = db.list_artifacts(
    "results", tag="*", project="iris", labels=["uploaded", "type=binary"]
)
- Parameters:
name -- Name of artifacts to retrieve. Name with '~' prefix is used as a like query, and is not case-sensitive. This means that querying for ~name may return artifacts named my_Name_1 or surname.
project -- Project name.
tag -- Return artifacts assigned this tag.
labels -- Return artifacts that have these labels. Labels can either be a dictionary {"label": "value"} or a list of "label=value" (match label key and value) or "label" (match just label key) strings.
since -- Return artifacts updated after this date (as datetime object).
until -- Return artifacts updated before this date (as datetime object).
iter -- Return artifacts from a specific iteration (where iter=0 means the root iteration). If None (default), return artifacts from all iterations.
best_iteration -- Returns the artifact which belongs to the best iteration of a given run, in the case of artifacts generated from a hyper-param run. If only a single iteration exists, will return the artifact from that iteration. If using best_iteration, the iter parameter must not be used.
kind -- Return artifacts of the requested kind.
category -- Return artifacts of the requested category.
tree -- Return artifacts of the requested tree.
producer_uri -- Return artifacts produced by the requested producer URI. Producer URI usually points to a run and is used to filter artifacts by the run that produced them when the artifact producer id is a workflow id (artifact was created as part of a workflow).
format -- The format in which to return the artifacts. Default is 'full'.
limit -- Maximum number of artifacts to return.
- list_datastore_profiles(project: str) list[mlrun.common.schemas.datastore_profile.DatastoreProfile] [source]#
- list_entities(project: str, name: str | None = None, tag: str | None = None, labels: list[str] | None = None) list[dict] [source]#
Retrieve a list of entities and their mapping to the containing feature-sets. This function is similar to the list_features() function, and uses the same logic. However, the entities are matched against the name rather than the features.
- list_entities_v2(project: str, name: str | None = None, tag: str | None = None, labels: list[str] | None = None) dict[str, list[dict]] [source]#
Retrieve a list of entities and their mapping to the containing feature-sets. This function is similar to the list_features_v2() function, and uses the same logic. However, the entities are matched against the name rather than the features.
- list_feature_sets(project: str = '', name: str | None = None, tag: str | None = None, state: str | None = None, entities: list[str] | None = None, features: list[str] | None = None, labels: list[str] | None = None, partition_by: FeatureStorePartitionByField | str | None = None, rows_per_partition: int = 1, partition_sort_by: SortField | str | None = None, partition_order: OrderType | str = OrderType.desc, format_: str | FeatureSetFormat = 'full') list[mlrun.feature_store.feature_set.FeatureSet] [source]#
Retrieve a list of feature-sets matching the criteria provided.
- Parameters:
project -- Project name.
name -- Name of feature-set to match. This is a like query, and is case-insensitive.
tag -- Match feature-sets with specific tag.
state -- Match feature-sets with a specific state.
entities -- Match feature-sets which contain entities whose name is in this list.
features -- Match feature-sets which contain features whose name is in this list.
labels -- Match feature-sets which have these labels.
partition_by -- Field to group results by. Only allowed value is name. When partition_by is specified, the partition_sort_by parameter must be provided as well.
rows_per_partition -- How many top rows (per sorting defined by partition_sort_by and partition_order) to return per group. Default value is 1.
partition_sort_by -- What field to sort the results by, within each partition defined by partition_by. Currently the only allowed values are created and updated.
partition_order -- Order of sorting within partitions - asc or desc. Default is desc.
format -- Format of the results. Possible values are:
minimal - Return minimal feature set objects, not including stats and preview for each feature set.
full - Return full feature set objects.
- Returns:
List of matching FeatureSet objects.
- list_feature_vectors(project: str = '', name: str | None = None, tag: str | None = None, state: str | None = None, labels: list[str] | None = None, partition_by: FeatureStorePartitionByField | str | None = None, rows_per_partition: int = 1, partition_sort_by: SortField | str | None = None, partition_order: OrderType | str = OrderType.desc) list[mlrun.feature_store.feature_vector.FeatureVector] [source]#
Retrieve a list of feature-vectors matching the criteria provided.
- Parameters:
project -- Project name.
name -- Name of feature-vector to match. This is a like query, and is case-insensitive.
tag -- Match feature-vectors with specific tag.
state -- Match feature-vectors with a specific state.
labels -- Match feature-vectors which have these labels.
partition_by -- Field to group results by. Only allowed value is name. When partition_by is specified, the partition_sort_by parameter must be provided as well.
rows_per_partition -- How many top rows (per sorting defined by partition_sort_by and partition_order) to return per group. Default value is 1.
partition_sort_by -- What field to sort the results by, within each partition defined by partition_by. Currently the only allowed values are created and updated.
partition_order -- Order of sorting within partitions - asc or desc. Default is desc.
- Returns:
List of matching FeatureVector objects.
- list_features(project: str, name: str | None = None, tag: str | None = None, entities: list[str] | None = None, labels: list[str] | None = None) list[dict] [source]#
List feature-sets which contain specific features. This function may return multiple versions of the same feature-set if a specific tag is not requested. Note that the various filters of this function actually refer to the feature-set object containing the features, not to the features themselves.
- Parameters:
project -- Project which contains these features.
name -- Name of the feature to look for. The name is used in a like query, and is not case-sensitive. For example, looking for feat will return features named MyFeature as well as defeat.
tag -- Return feature-sets which contain the features looked for, and are tagged with the specific tag.
entities -- Return only feature-sets which contain an entity whose name is contained in this list.
labels -- Return only feature-sets which are labeled as requested.
- Returns:
A list of mapping from feature to a digest of the feature-set, which contains the feature-set meta-data. Multiple entries may be returned for any specific feature due to multiple tags or versions of the feature-set.
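Example (an illustrative sketch; the exact shape of each returned mapping is described above):
features = db.list_features("my-project", name="temperature", tag="latest")
for entry in features:
    print(entry)  # each entry maps a feature to a digest of its feature-set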
- list_features_v2(project: str, name: str | None = None, tag: str | None = None, entities: list[str] | None = None, labels: list[str] | None = None) dict[str, list[dict]] [source]#
List feature-sets which contain specific features. This function may return multiple versions of the same feature-set if a specific tag is not requested. Note that the various filters of this function actually refer to the feature-set object containing the features, not to the features themselves.
- Parameters:
project -- Project which contains these features.
name -- Name of the feature to look for. The name is used in a like query, and is not case-sensitive. For example, looking for feat will return features named MyFeature as well as defeat.
tag -- Return feature-sets which contain the features looked for, and are tagged with the specific tag.
entities -- Return only feature-sets which contain an entity whose name is contained in this list.
labels -- Return only feature-sets which are labeled as requested.
- Returns:
A list of features, and a list of their corresponding feature sets.
- list_functions(name=None, project=None, tag=None, labels=None, since=None, until=None)[source]#
Retrieve a list of functions, filtered by specific criteria.
- Parameters:
name -- Return only functions with a specific name.
project -- Return functions belonging to this project. If not specified, the default project is used.
tag -- Return function versions with specific tags. To return only tagged functions, set tag to "*".
labels -- Return functions that have specific labels assigned to them.
since -- Return functions updated after this date (as datetime object).
until -- Return functions updated before this date (as datetime object).
- Returns:
List of function objects (as dictionary).
- list_hub_sources(item_name: str | None = None, tag: str | None = None, version: str | None = None) list[mlrun.common.schemas.hub.IndexedHubSource] [source]#
List hub sources in the MLRun DB.
- Parameters:
item_name -- Only sources containing this item will be returned. If not provided, all sources will be returned.
tag -- Item tag to filter by, supported only if item name is provided.
version -- Item version to filter by, supported only if item name is provided and tag is not.
- Returns:
List of indexed hub sources.
- list_model_endpoints(project: str, model: str | None = None, function: str | None = None, labels: list[str] | None = None, start: str = 'now-1h', end: str = 'now', metrics: list[str] | None = None, top_level: bool = False, uids: list[str] | None = None) list[mlrun.model_monitoring.model_endpoint.ModelEndpoint] [source]#
Returns a list of ModelEndpoint objects. Each ModelEndpoint object represents the current state of a model endpoint. This function supports filtering by the following parameters: model, function, labels, top_level, and uids. By default, when no filters are applied, all available endpoints for the given project will be listed.
In addition, this function provides a facade for listing endpoint-related metrics. This facade is time-based and depends on the 'start' and 'end' parameters. By default, when the metrics parameter is None, no metrics are added to the output of this function.
- Parameters:
project -- The name of the project
model -- The name of the model to filter by
function -- The name of the function to filter by
labels -- A list of labels to filter by. Label filters work by either filtering a specific value of a label (i.e. list("key=value")) or by looking for the existence of a given key (i.e. "key")
metrics -- A list of metrics to return for each endpoint, read more in 'TimeMetric'
start -- The start time of the metrics. Can be represented by a string containing an RFC 3339 time, a Unix timestamp in milliseconds, a relative time ('now' or 'now-[0-9]+[mhd]', where m = minutes, h = hours, 'd' = days, and 's' = seconds), or 0 for the earliest time.
end -- The end time of the metrics. Can be represented by a string containing an RFC 3339 time, a Unix timestamp in milliseconds, a relative time ('now' or 'now-[0-9]+[mhd]', where m = minutes, h = hours, 'd' = days, and 's' = seconds), or 0 for the earliest time.
top_level -- If true, return only routers and endpoints that are NOT children of any router
uids -- If passed, return only ModelEndpoint objects whose uid is in uids
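Example (an illustrative sketch; the metric name follows the latency_avg_5m convention mentioned in get_model_endpoint(), and the project and model names are assumptions):
endpoints = db.list_model_endpoints(
    project="my-project",
    model="churn-model",
    top_level=True,
    start="now-1d",
    end="now",
    metrics=["latency_avg_5m"],
)
for endpoint in endpoints:
    print(endpoint)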
- list_pipelines(project: str, namespace: str | None = None, sort_by: str = '', page_token: str = '', filter_: str = '', format_: str | PipelineFormat = PipelineFormat.metadata_only, page_size: int | None = None) PipelinesOutput [source]#
Retrieve a list of KFP pipelines. This function can be invoked to get all pipelines from all projects, by specifying project=*, in which case pagination can be used and the various sorting and pagination properties can be applied. If a specific project is requested, then the pagination options cannot be used and pagination is not applied.
- Parameters:
project -- Project name. Can be * to query across all projects.
namespace -- Kubernetes namespace in which the pipelines are executing.
sort_by -- Field to sort the results by.
page_token -- Use for pagination, to retrieve next page.
filter -- Kubernetes filter to apply to the query, can be used to filter on specific object fields.
format --
Result format. Can be one of:
full - return the full objects.
metadata_only (default) - return just the metadata of the pipeline objects.
name_only - return just the names of the pipeline objects.
page_size -- Size of a single page when applying pagination.
- list_project_background_tasks(project: str | None = None, state: str | None = None, created_from: datetime | None = None, created_to: datetime | None = None, last_update_time_from: datetime | None = None, last_update_time_to: datetime | None = None) list[mlrun.common.schemas.background_task.BackgroundTask] [source]#
Retrieve updated information on project background tasks being executed. If no filter is provided, will return background tasks from the last week.
- Parameters:
project -- Project name (defaults to mlrun.mlconf.default_project).
state -- List only background tasks whose state is specified.
created_from -- Filter by background task created time in [created_from, created_to].
created_to -- Filter by background task created time in [created_from, created_to].
last_update_time_from -- Filter by background task last update time in (last_update_time_from, last_update_time_to).
last_update_time_to -- Filter by background task last update time in (last_update_time_from, last_update_time_to).
- list_project_secret_keys(project: str, provider: str | SecretProviderName = SecretProviderName.kubernetes, token: str | None = None) SecretKeysData [source]#
Retrieve project-context secret keys from Vault or Kubernetes.
Note
This method for Vault functionality is currently in technical preview, and requires a HashiCorp Vault infrastructure properly set up and connected to the MLRun API server.
- Parameters:
project -- The project name.
provider -- The name of the secrets-provider to work with. Accepts a SecretProviderName enum.
token -- Vault token to use for retrieving secrets. Only in use if provider is vault. Must be a valid Vault token, with permissions to retrieve secrets of the project in question.
- list_project_secrets(project: str, token: str | None = None, provider: str | SecretProviderName = SecretProviderName.kubernetes, secrets: list[str] | None = None) SecretsData [source]#
Retrieve project-context secrets from Vault.
Note
This method for Vault functionality is currently in technical preview, and requires a HashiCorp Vault infrastructure properly set up and connected to the MLRun API server.
- Parameters:
project -- The project name.
token -- Vault token to use for retrieving secrets. Must be a valid Vault token, with permissions to retrieve secrets of the project in question.
provider -- The name of the secrets-provider to work with. Currently only vault is accepted.
secrets -- A list of secret names to retrieve. An empty list [] will retrieve all secrets assigned to this specific project. The kubernetes provider only supports an empty list.
- list_projects(owner: str | None = None, format_: str | ProjectFormat = ProjectFormat.name_only, labels: list[str] | None = None, state: str | ProjectState | None = None) list[Union[mlrun.projects.project.MlrunProject, str]] [source]#
Return a list of the existing projects, potentially filtered by specific criteria.
- Parameters:
owner -- List only projects belonging to this specific owner.
format --
Format of the results. Possible values are:
name_only (default value) - Return just the names of the projects.
minimal - Return minimal project objects (minimization happens in the BE).
full - Return full project objects.
labels -- Filter by labels attached to the project.
state -- Filter by project's state. Can be either online or archived.
- list_runs(name: str | None = None, uid: str | list[str] | None = None, project: str | None = None, labels: str | list[str] | None = None, state: RunStates | None = None, states: list[mlrun.common.runtimes.constants.RunStates] | None = None, sort: bool = True, last: int = 0, iter: bool = False, start_time_from: datetime | None = None, start_time_to: datetime | None = None, last_update_time_from: datetime | None = None, last_update_time_to: datetime | None = None, partition_by: RunPartitionByField | str | None = None, rows_per_partition: int = 1, partition_sort_by: SortField | str | None = None, partition_order: OrderType | str = OrderType.desc, max_partitions: int = 0, with_notifications: bool = False) RunList [source]#
Retrieve a list of runs, filtered by various options. If no filter is provided, will return runs from the last week.
Example:
runs = db.list_runs(
    name="download", project="iris", labels=["owner=admin", "kind=job"]
)

# If running in Jupyter, can use the .show() function to display the results
db.list_runs(name="", project=project_name).show()
- Parameters:
name -- Name of the run to retrieve.
uid -- Unique ID of the run, or a list of run UIDs.
project -- Project that the runs belongs to.
labels -- A list of labels to filter by. Label filters work by either filtering a specific value of a label (i.e. list("key=value")) or by looking for the existence of a given key (i.e. "key").
state -- Deprecated - List only runs whose state is specified (will be removed in 1.9.0)
states -- List only runs whose state is one of the provided states.
sort -- Whether to sort the result according to their start time. Otherwise, results will be returned by their internal order in the DB (order will not be guaranteed).
last -- Deprecated - currently not used (will be removed in 1.8.0).
iter -- If True, return runs from all iterations. Otherwise, return only runs whose iter is 0.
start_time_from -- Filter by run start time in [start_time_from, start_time_to].
start_time_to -- Filter by run start time in [start_time_from, start_time_to].
last_update_time_from -- Filter by run last update time in (last_update_time_from, last_update_time_to).
last_update_time_to -- Filter by run last update time in (last_update_time_from, last_update_time_to).
partition_by -- Field to group results by. Only allowed value is name. When partition_by is specified, the partition_sort_by parameter must be provided as well.
rows_per_partition -- How many top rows (per sorting defined by partition_sort_by and partition_order) to return per group. Default value is 1.
partition_sort_by -- What field to sort the results by, within each partition defined by partition_by. Currently the only allowed values are created and updated.
partition_order -- Order of sorting within partitions - asc or desc. Default is desc.
max_partitions -- Maximal number of partitions to include in the result. Default is 0 which means no limit.
with_notifications -- Return runs with notifications, and join them to the response. Default is False.
- list_runtime_resources(project: str | None = None, label_selector: str | None = None, kind: str | None = None, object_id: str | None = None, group_by: ListRuntimeResourcesGroupByField | None = None) list[mlrun.common.schemas.runtime_resource.KindRuntimeResources] | dict[str, dict[str, mlrun.common.schemas.runtime_resource.RuntimeResources]] [source]#
List current runtime resources, which are usually (but not limited to) Kubernetes pods or CRDs. Function applies for runs of type ['dask', 'job', 'spark', 'remote-spark', 'mpijob'], and will return per runtime kind a list of the runtime resources (which may have already completed their execution).
- Parameters:
project -- Get only runtime resources of a specific project, by default None, which will return only the projects you're authorized to see.
label_selector -- A label filter that will be passed to Kubernetes for filtering the results according to their labels.
kind -- The kind of runtime to query. May be one of ['dask', 'job', 'spark', 'remote-spark', 'mpijob']
object_id -- The identifier of the mlrun object whose runtime resources should be queried. For most function runtimes, runtime resources are per run, for which the identifier is the run's UID. For the dask runtime, the runtime resources are per function, for which the identifier is the function's name.
group_by -- Object to group results by. Allowed values are job and project.
- list_schedules(project: str, name: str | None = None, kind: ScheduleKinds | None = None, include_last_run: bool = False) SchedulesOutput [source]#
Retrieve list of schedules of specific name or kind.
- Parameters:
project -- Project name.
name -- Name of schedule to retrieve. Can be omitted to list all schedules.
kind -- Kind of schedule objects to retrieve. Can be either job or pipeline.
include_last_run -- Whether to also return, for each schedule, the results of its last run.
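Example (an illustrative sketch; the field access on the returned SchedulesOutput is an assumption):
schedules = db.list_schedules("my-project", kind="job", include_last_run=True)
for schedule in schedules.schedules:
    print(schedule.name)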
- load_project(name: str, url: str, secrets: dict | None = None, save_secrets: bool = True) str [source]#
Load a project remotely from the given source.
- Parameters:
name -- project name
url -- A git, tar.gz, or .zip source archive path, e.g. git://github.com/mlrun/demo-xgb-project.git or http://mysite/archived-project.zip. The git project should include the project yaml file.
secrets -- Secrets to store in the project in order to load it from the provided url. For more information see the mlrun.load_project() function.
save_secrets -- Whether to store secrets in the loaded project. Setting this to False will cause the call to wait for the process to complete.
- Returns:
The terminal state of load project process.
- paginated_api_call(method, path, error=None, params=None, body=None, json=None, headers=None, timeout=45, version=None) Generator[Response, None, None] [source]#
Calls the API with pagination, yielding each page of the response.
- patch_feature_set(name, feature_set_update: dict, project='', tag=None, uid=None, patch_mode: str | PatchMode = PatchMode.replace)[source]#
Modify (patch) an existing FeatureSet object. The object is identified by its name (and the project it belongs to), as well as optionally a tag or its uid (for versioned objects). If both tag and uid are omitted, then the object with tag latest is modified.
- Parameters:
name -- Name of the object to patch.
feature_set_update --
The modifications needed in the object. This parameter only has the changes in it, not a full object. Example:
feature_set_update = {"status": {"processed": True}}
Will apply the field status.processed to the existing object.
project -- Project which contains the modified object.
tag -- The tag of the object to modify.
uid -- uid of the object to modify.
patch_mode -- The strategy for merging the changes with the existing object. Can be either replace or additive.
- patch_feature_vector(name, feature_vector_update: dict, project='', tag=None, uid=None, patch_mode: str | PatchMode = PatchMode.replace)[source]#
Modify (patch) an existing FeatureVector object. The object is identified by its name (and the project it belongs to), as well as optionally a tag or its uid (for versioned objects). If both tag and uid are omitted, then the object with tag latest is modified.
- Parameters:
name -- Name of the object to patch.
feature_vector_update -- The modifications needed in the object. This parameter only has the changes in it, not a full object.
project -- Project which contains the modified object.
tag -- The tag of the object to modify.
uid -- uid of the object to modify.
patch_mode -- The strategy for merging the changes with the existing object. Can be either replace or additive.
- patch_model_endpoint(project: str, endpoint_id: str, attributes: dict)[source]#
Updates model endpoint with provided attributes.
- Parameters:
project -- The name of the project.
endpoint_id -- The id of the endpoint.
attributes -- Dictionary of attributes that will be used to update the model endpoint. The keys of this dictionary should exist in the target table. Note that the values should be of type string or a valid numerical type such as int or float. More details about the available model endpoint attributes can be found under ModelEndpoint.
Example:
import json

# Generate current stats for two features
current_stats = {
    "tvd_sum": 2.2,
    "tvd_mean": 0.5,
    "hellinger_sum": 3.6,
    "hellinger_mean": 0.9,
    "kld_sum": 24.2,
    "kld_mean": 6.0,
    "f1": {"tvd": 0.5, "hellinger": 1.0, "kld": 6.4},
    "f2": {"tvd": 0.5, "hellinger": 1.0, "kld": 6.5},
}

# Create the attributes dictionary according to the required format
attributes = {
    "current_stats": json.dumps(current_stats),
    "drift_status": "DRIFT_DETECTED",
}
- patch_project(name: str, project: dict, patch_mode: str | PatchMode = PatchMode.replace) MlrunProject [source]#
Patch an existing project object.
- Parameters:
name -- Name of project to patch.
project -- The actual changes to the project object.
patch_mode -- The strategy for merging the changes with the existing object. Can be either replace or additive.
- static process_paginated_responses(responses: Generator[Response, None, None], key: str = 'data') list[Any] [source]#
Processes the paginated responses and returns the combined data
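A minimal sketch combining this helper with paginated_api_call() -- the "runs" path, the "project" parameter, and the "runs" response key are illustrative assumptions about the endpoint's layout:
import mlrun

db = mlrun.get_run_db()
# Iterate over all pages of a listing endpoint and merge their payloads
pages = db.paginated_api_call("GET", "runs", params={"project": "my-proj"})
all_runs = db.process_paginated_responses(pages, key="runs")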
- read_artifact(key, tag=None, iter=None, project='', tree=None, uid=None, format_: ArtifactFormat = 'full')[source]#
Read an artifact, identified by its key, tag, tree and iteration.
- Parameters:
key -- Identifying key of the artifact.
tag -- Tag of the artifact.
iter -- The iteration which generated this artifact (where iter=0 means the root iteration).
project -- Project that the artifact belongs to.
tree -- The tree which generated this artifact.
uid -- A unique ID for this specific version of the artifact (the uid that was generated in the backend).
format_ -- The format in which to return the artifact. Default is 'full'.
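Example -- a minimal sketch; the artifact key, tag, and project name are illustrative:
import mlrun

db = mlrun.get_run_db()
# Fetch the latest tagged version of an artifact, in the default 'full' format
artifact = db.read_artifact("model", tag="latest", project="my-proj")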
- read_run(uid, project='', iter=0, format_: RunFormat = RunFormat.full)[source]#
Read the details of a stored run from the DB.
- Parameters:
uid -- The run's unique ID.
project -- Project name.
iter -- Iteration within a specific execution.
format_ -- The format in which to return the run details.
- remote_builder(func: BaseRuntime, with_mlrun: bool, mlrun_version_specifier: str | None = None, skip_deployed: bool = False, builder_env: dict | None = None, force_build: bool = False)[source]#
Build the pod image for a function, for execution on a remote cluster. This is executed by the MLRun API server, and creates a Docker image from the provided function and any specific build instructions it contains. This is a prerequisite for remotely executing a function, unless a pre-deployed image is used.
- Parameters:
func -- Function to build.
with_mlrun -- Whether to add MLRun package to the built package. This is not required if using a base image that already has MLRun in it.
mlrun_version_specifier -- Version of MLRun to include in the built image.
skip_deployed -- Skip the build if we already have an image for the function.
builder_env -- Kaniko builder pod env vars dict (for config/credentials)
force_build -- Force building the image, even when no changes were made
- reset_alert_config(alert_name: str, project='')[source]#
Reset an alert.
- Parameters:
alert_name -- The name of the alert to reset.
project -- The project that the alert belongs to.
- set_model_monitoring_credentials(project: str, credentials: dict[str, str], replace_creds: bool) None [source]#
Set the credentials for the model monitoring application.
- Parameters:
project -- Project name.
credentials -- Credentials to set.
replace_creds -- If True, will override the existing credentials.
- set_run_notifications(project: str, run_uid: str, notifications: list[mlrun.model.Notification] | None = None)[source]#
Set notifications on a run. This will override any existing notifications on the run.
- Parameters:
project -- Project containing the run.
run_uid -- UID of the run.
notifications -- List of notifications to set on the run. Default is an empty list.
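Example -- a minimal sketch; the notification fields, run uid, and project name are illustrative assumptions about the mlrun.model.Notification model:
import mlrun
from mlrun.model import Notification

db = mlrun.get_run_db()
# Replace whatever notifications are currently set on the run
slack_note = Notification(
    kind="slack",
    name="run-done",
    message="run finished",
    severity="info",
    when=["completed", "error"],
)
db.set_run_notifications("my-proj", run_uid="abc123", notifications=[slack_note])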
- set_schedule_notifications(project: str, schedule_name: str, notifications: list[mlrun.model.Notification] | None = None)[source]#
Set notifications on a schedule. This will override any existing notifications on the schedule.
- Parameters:
project -- Project containing the schedule.
schedule_name -- Name of the schedule.
notifications -- List of notifications to set on the schedule. Default is an empty list.
- start_function(func_url: str | None = None, function: BaseRuntime | None = None) BackgroundTask [source]#
Execute a function remotely. Used for dask functions.
- Parameters:
func_url -- URL to the function to be executed.
function -- The function object to start, not needed here.
- Returns:
A BackgroundTask object, with details on execution process and its status.
- store_alert_config(alert_name: str, alert_data: dict | AlertConfig, project='') AlertConfig [source]#
Create/modify an alert.
- Parameters:
alert_name -- The name of the alert.
alert_data -- The data of the alert.
project -- The project that the alert belongs to.
- Returns:
The created/modified alert.
- store_alert_notifications(session, notification_objects: list[mlrun.model.Notification], alert_id: str, project: str, mask_params: bool = True)[source]#
- store_api_gateway(api_gateway: APIGateway | APIGateway, project: str | None = None) APIGateway [source]#
Stores an API Gateway.
- Parameters:
api_gateway -- The API Gateway entity to store, either an APIGateway object or its mlrun.common.schemas.APIGateway representation.
project -- Project name. Mandatory if api_gateway is mlrun.common.schemas.APIGateway.
- Returns:
The stored APIGateway.
- store_artifact(key, artifact, uid=None, iter=None, tag=None, project='', tree=None)[source]#
Store an artifact in the DB.
- Parameters:
key -- Identifying key of the artifact.
artifact -- The Artifact to store.
uid -- A unique ID for this specific version of the artifact (deprecated; the artifact uid is generated in the backend, use tree instead).
iter -- The task iteration which generated this artifact. If iter is not None, the iteration will be added to the provided key to generate a unique key for the artifact of that specific iteration.
tag -- Tag of the artifact.
project -- Project that the artifact belongs to.
tree -- The tree (producer id) which generated this artifact.
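Example -- a minimal sketch; the key, body, tag, and project name are illustrative:
import mlrun
from mlrun.artifacts import Artifact

db = mlrun.get_run_db()
# Store an inline artifact under an identifying key and tag
artifact = Artifact(key="my-data", body="hello world")
db.store_artifact("my-data", artifact, project="my-proj", tag="v1")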
- store_datastore_profile(profile: DatastoreProfile, project: str)[source]#
Create or replace a datastore profile.
- Returns:
None
- store_feature_set(feature_set: dict | FeatureSet | FeatureSet, name=None, project='', tag=None, uid=None, versioned=True) dict [source]#
Save a FeatureSet object in the mlrun DB. The feature-set can be either a new object or a modification to an existing object, referenced by the params of the function.
- Parameters:
feature_set -- The FeatureSet to store.
name -- Name of the feature set.
project -- Name of the project this feature-set belongs to.
tag -- The tag of the object to replace in the DB, for example latest.
uid -- The uid of the object to replace in the DB. If using this parameter, the modified object must have the same uid as the previously-existing object. This cannot be used for non-versioned objects.
versioned -- Whether to maintain versions for this feature-set. All versions of a versioned object will be kept in the DB and can be retrieved until explicitly deleted.
- Returns:
The FeatureSet object (as dict).
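Example -- a minimal sketch using the feature-store API; the feature-set name, entity, and project name are illustrative:
import mlrun
import mlrun.feature_store as fstore

db = mlrun.get_run_db()
# Persist a new feature set under the latest tag
stocks_set = fstore.FeatureSet("stocks", entities=[fstore.Entity("ticker")])
db.store_feature_set(stocks_set, project="my-proj", tag="latest")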
- store_feature_vector(feature_vector: dict | FeatureVector | FeatureVector, name=None, project='', tag=None, uid=None, versioned=True) dict [source]#
Store a FeatureVector object in the mlrun DB. The feature-vector can be either a new object or a modification to an existing object, referenced by the params of the function.
- Parameters:
feature_vector -- The FeatureVector to store.
name -- Name of the feature vector.
project -- Name of the project this feature-vector belongs to.
tag -- The tag of the object to replace in the DB, for example latest.
uid -- The uid of the object to replace in the DB. If using this parameter, the modified object must have the same uid as the previously-existing object. This cannot be used for non-versioned objects.
versioned -- Whether to maintain versions for this feature-vector. All versions of a versioned object will be kept in the DB and can be retrieved until explicitly deleted.
- Returns:
The FeatureVector object (as dict).
- store_function(function: BaseRuntime | dict, name, project='', tag=None, versioned=False)[source]#
Store a function object. Function is identified by its name and tag, and can be versioned.
- store_hub_source(source_name: str, source: dict | IndexedHubSource)[source]#
Create or replace a hub source. For an example of the source format and an explanation of the source-order logic, please see create_hub_source(). This method can be used to modify the source itself or its order in the list of sources.
- Parameters:
source_name -- Name of the source object to modify/create. It must match the source.metadata.name parameter in the source itself.
source -- Source object to store in the database.
- Returns:
The source object as stored in the DB.
- store_log(uid, project='', body=None, append=False)[source]#
Save a log persistently.
- Parameters:
uid -- Log unique ID
project -- Project name for which this log belongs
body -- The actual log to store
append -- Whether to append the log provided in body to an existing log with the same uid, or to create a new log. If set to False, an existing log with the same uid will be overwritten.
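Example -- a minimal sketch; the run uid and project name are illustrative, and the log body is assumed to be bytes:
import mlrun

db = mlrun.get_run_db()
# Append a chunk of text to the persisted log of a run
db.store_log("abc123", project="my-proj", body=b"step 1 done\n", append=True)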
- store_project(name: str, project: dict | MlrunProject | Project) MlrunProject [source]#
Store a project in the DB. This operation will overwrite an existing project of the same name, if one exists.
- store_run(struct, uid, project='', iter=0)[source]#
Store run details in the DB. This method is usually called from within other mlrun flows and not called directly by the user.
- store_run_notifications(notification_objects: list[mlrun.model.Notification], run_uid: str, project: str | None = None, mask_params: bool = True)[source]#
For internal use. The notification mechanism may run "locally" for certain runtimes. However, the updates occur in the API, so there is nothing to do here.
- submit_job(runspec, schedule: str | ScheduleCronTrigger | None = None)[source]#
Submit a job for remote execution.
- Parameters:
runspec -- The runtime object spec (Task) to execute.
schedule -- An optional Cron trigger for scheduling this job. If not specified, the job will be submitted immediately.
- submit_pipeline(project, pipeline, arguments=None, experiment=None, run=None, namespace=None, artifact_path=None, ops=None, cleanup_ttl=None, timeout=60)[source]#
Submit a KFP pipeline for execution.
- Parameters:
project -- The project of the pipeline
pipeline -- Pipeline function or path to .yaml/.zip pipeline file.
arguments -- A dictionary of arguments to pass to the pipeline.
experiment -- A name to assign for the specific experiment.
run -- A name for this specific run.
namespace -- Kubernetes namespace to execute the pipeline in.
artifact_path -- A path to artifacts used by this pipeline.
ops -- Transformers to apply on all ops in the pipeline.
cleanup_ttl -- Pipeline cleanup ttl in secs (time to wait after workflow completion, at which point the workflow and all its resources are deleted)
timeout -- Timeout for the API call.
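Example -- a minimal sketch; the project name, pipeline file path, and arguments are illustrative:
import mlrun

db = mlrun.get_run_db()
# Submit a compiled pipeline definition file for execution
db.submit_pipeline(
    "my-proj",
    "pipeline.yaml",
    arguments={"model_name": "classifier"},
    experiment="exp1",
    run="run1",
)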
- submit_workflow(project: str, name: str, workflow_spec: WorkflowSpec | WorkflowSpec | dict, arguments: dict | None = None, artifact_path: str | None = None, source: str | None = None, run_name: str | None = None, namespace: str | None = None, notifications: list[mlrun.model.Notification] | None = None) WorkflowResponse [source]#
Submit a workflow for remote execution.
- Parameters:
project -- project name
name -- workflow name
workflow_spec -- the workflow spec to execute
arguments -- arguments for the workflow
artifact_path -- artifact target path of the workflow
source -- source url of the project
run_name -- run name to override the default: 'workflow-runner-<workflow name>'
namespace -- kubernetes namespace if other than default
notifications -- list of notifications to send when workflow execution is completed
- Returns:
WorkflowResponse.
- tag_artifacts(artifacts: list[mlrun.artifacts.base.Artifact] | list[dict] | Artifact | dict, project: str, tag_name: str, replace: bool = False)[source]#
Tag a list of artifacts.
- Parameters:
artifacts -- The artifacts to tag. Can be a list of Artifact objects or dictionaries, or a single object.
project -- Project which contains the artifacts.
tag_name -- The tag to set on the artifacts.
replace -- If True, replace existing tags, otherwise append to existing tags.
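Example -- a minimal sketch; the artifact key, tag name, and project name are illustrative:
import mlrun

db = mlrun.get_run_db()
# Read a stored artifact and add a tag to it without dropping existing tags
artifact = db.read_artifact("model", project="my-proj")
db.tag_artifacts(artifact, project="my-proj", tag_name="prod", replace=False)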
- tag_objects(project: str, tag_name: str, objects: TagObjects | dict, replace: bool = False)[source]#
Tag a list of objects.
- Parameters:
project -- Project which contains the objects.
tag_name -- The tag to set on the objects.
objects -- The objects to tag.
replace -- Whether to replace the existing tags of the objects or to add the new tag to them.
- trigger_migrations() BackgroundTask | None [source]#
Trigger migrations (will do nothing if no migrations are needed) and wait for them to finish if actually triggered.
- Returns:
BackgroundTask, or None if no migrations were triggered.
- update_model_monitoring_controller(project: str, base_period: int = 10, image: str = 'mlrun/mlrun') None [source]#
Redeploy model monitoring application controller function.
- Parameters:
project -- Project name.
base_period -- The time period in minutes in which the model monitoring controller function triggers. By default, the base period is 10 minutes.
image -- The image of the model monitoring controller function. By default, the image is mlrun/mlrun.
- update_run(updates: dict, uid, project='', iter=0, timeout=45)[source]#
Update the details of a stored run in the DB.
- update_schedule(project: str, name: str, schedule: ScheduleUpdate)[source]#
Update an existing schedule, replacing it with the details contained in the schedule object.
- verify_authorization(authorization_verification_input: AuthorizationVerificationInput)[source]#
Verifies authorization for the provided action on the provided resource.
- Parameters:
authorization_verification_input -- Instance of AuthorizationVerificationInput that includes all the needed parameters for the auth verification.
- watch_log(uid, project='', watch=True, offset=0)[source]#
Retrieve logs of a running process in chunks of 1MB, and watch the progress of the execution until it completes. This method prints the logs and continues to periodically poll for, and print, new logs as long as the state of the runtime which generates this log is either pending or running.
- Parameters:
uid -- The uid of the log object to watch.
project -- Project that the log belongs to.
watch -- If set to True, will continue tracking the log as described above. Otherwise this function is practically equivalent to the get_log() function.
offset -- Minimal offset in the log to watch.
- Returns:
The final state of the log being watched and the final offset.
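Example -- a minimal sketch; the run uid and project name are illustrative:
import mlrun

db = mlrun.get_run_db()
# Print the run's logs and keep polling while the runtime is pending or running
state, offset = db.watch_log("abc123", project="my-proj", watch=True, offset=0)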