mlrun.config#

Configuration system.

Configuration can come either from a configuration file, specified by the MLRUN_CONFIG_FILE environment variable, or from environment variables.

Environment variables use the format "MLRUN_HTTPDB__PORT=8080", which is mapped to config.httpdb.port. Values should be in JSON format.
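
For example, the port override below surfaces as config.httpdb.port. A minimal sketch, assuming reload() re-reads the environment into the module-level config object mlrun.config.config:

import os

# Values are parsed as JSON, so "8080" becomes the integer 8080.
os.environ["MLRUN_HTTPDB__PORT"] = "8080"

import mlrun.config

# Assumption: reload() re-reads the config file and environment into the global config object.
mlrun.config.config.reload()
print(mlrun.config.config.httpdb.port)  # -> 8080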

class mlrun.config.Config(cfg=None)[source]#

Bases: object

property dbpath#
static decode_base64_config_and_load_to_object(attribute_path: str, expected_type=<class 'dict'>)[source]#

Decodes and loads the config attribute to the expected type.

Parameters:
  • attribute_path -- The path in the default_config, e.g. preemptible_nodes.node_selector

  • expected_type -- The expected object type; valid values are dict, list, etc.

Returns:

An instance of the expected type.
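
A hedged usage sketch, assuming nested attributes of the global config object can be assigned directly and that the method reads a base64-encoded JSON string from the given path (the node-selector value is illustrative):

import base64
import json

import mlrun.config

# Store a node selector as a base64-encoded JSON string in the config (illustrative value).
encoded = base64.b64encode(json.dumps({"kubernetes.io/arch": "amd64"}).encode()).decode()
mlrun.config.config.preemptible_nodes.node_selector = encoded

# Decode it back to the expected type (here, dict).
node_selector = mlrun.config.Config.decode_base64_config_and_load_to_object(
    "preemptible_nodes.node_selector", expected_type=dict
)
print(node_selector)  # {'kubernetes.io/arch': 'amd64'}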

dump_yaml(stream=None)[source]#
force_api_gateway_ssl_redirect()[source]#

Get the default value for the ssl_redirect configuration. On Iguazio we always want to redirect to HTTPS; in other cases we don't.

Returns:

True if we should redirect to HTTPS, False otherwise.

classmethod from_dict(dict_)[source]#
static get_build_args()[source]#
get_default_function_node_selector() → dict[source]#
static get_default_function_pod_requirement_resources(requirement: str, with_gpu: bool = True)[source]#
Parameters:
Returns:

A dict containing the default resources (cpu, memory, nvidia.com/gpu).
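
A hedged sketch; the "requests" value for requirement is an assumption based on the Kubernetes requests/limits convention and is not confirmed by this page:

import mlrun.config

# Assumption: "requirement" names a Kubernetes resource requirement such as "requests" or "limits".
defaults = mlrun.config.Config.get_default_function_pod_requirement_resources(
    "requests", with_gpu=True
)
print(defaults)  # e.g. {"cpu": ..., "memory": ..., "nvidia.com/gpu": ...}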

get_default_function_pod_resources(with_gpu_requests=False, with_gpu_limits=False)[source]#
get_default_function_security_context() → dict[source]#
static get_default_hub_source() → str[source]#
get_model_monitoring_file_target_path(project: str = '', kind: str = '', target: str = 'online', artifact_path: str | None = None, function_name: str | None = None, **kwargs) → str[source]#

Get the full path from the configuration based on the provided project and kind.

Parameters:
  • project -- Project name.

  • kind -- Kind of target path (e.g. events, log_stream, endpoints, etc.)

  • target -- Can be either online or offline. If the target is online, we try to get a specific path for the provided kind; if it doesn't exist, the default path is used. If the target is offline and the offline path is already a full path in the configuration, that path is returned as-is. If the offline path is relative, the result is based on the project artifact path and the offline relative path. If the project artifact path wasn't provided, the MLRun artifact path is used instead.

  • artifact_path -- Optional artifact path that will be used as a relative path. If not provided, the relative artifact path will be taken from the global MLRun artifact path.

  • function_name -- Application name, None for model_monitoring_stream.

Returns:

Full configured path for the provided kind.
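
For example, resolving the online events path for a project might look like this (a sketch; the project name is illustrative):

import mlrun.config

# Resolve the online model monitoring events path for a given project.
path = mlrun.config.config.get_model_monitoring_file_target_path(
    project="my-project",
    kind="events",
    target="online",
)
print(path)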

static get_parsed_igz_version() → Version | None[source]#
get_preemptible_node_selector() → dict[source]#
get_preemptible_tolerations() → list[source]#
get_s3_storage_options() → dict[str, Any][source]#

Generate a storage options dictionary as required for handling an S3 path in fsspec. The model monitoring stream graph uses this method to generate the storage options for the S3 parquet target path.

Returns:

A storage options dictionary in which each key-value pair represents a particular configuration, such as endpoint_url or an AWS access key.
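
A minimal sketch of passing the result to an fsspec-aware reader such as pandas (the bucket path is illustrative):

import pandas as pd

import mlrun.config

# The returned dict (endpoint URL, credentials, etc.) is passed straight through to fsspec.
storage_options = mlrun.config.config.get_s3_storage_options()
df = pd.read_parquet("s3://my-bucket/path/to/data.parquet", storage_options=storage_options)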

static get_security_context_enrichment_group_id(user_unix_id: int) → int[source]#
static get_storage_auto_mount_params()[source]#
get_v3io_access_key() → str | None[source]#
static get_valid_function_priority_class_names()[source]#
static internal_labels()[source]#
is_api_running_on_k8s()[source]#
is_ce_mode() → bool[source]#
is_explicit_ack_enabled() → bool[source]#
is_nuclio_detected()[source]#
static is_pip_ca_configured()[source]#
is_preemption_nodes_configured()[source]#
static is_running_on_iguazio() → bool[source]#
static reload()[source]#
resolve_chief_api_url() → str[source]#
resolve_runs_monitoring_missing_runtime_resources_debouncing_interval()[source]#
static resolve_ui_url()[source]#
to_dict()[source]#
update(cfg, skip_errors=False)[source]#
use_nuclio_mock(force_mock=None)[source]#
verify_security_context_enrichment_mode_is_allowed()[source]#
property version#
mlrun.config.is_running_as_api()[source]#
mlrun.config.read_env(env=None, prefix='MLRUN_')[source]#

Read configuration from the environment.
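
A brief sketch, assuming read_env returns a nested dict of parsed overrides that can be merged with Config.update():

from mlrun.config import config, read_env

# Parse MLRUN_-prefixed variables from the current environment (values are JSON-decoded),
# then merge them into the global config object.
overrides = read_env(prefix="MLRUN_")
config.update(overrides)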