mlrun.frameworks.tf_keras#

mlrun.frameworks.tf_keras.apply_mlrun(model: Optional[tensorflow.keras.Model] = None, model_name: Optional[str] = None, tag: str = '', model_path: Optional[str] = None, model_format: str = 'SavedModel', save_traces: bool = False, modules_map: Optional[Union[Dict[str, Union[None, str, List[str]]], str]] = None, custom_objects_map: Optional[Union[Dict[str, Union[str, List[str]]], str]] = None, custom_objects_directory: Optional[str] = None, context: Optional[mlrun.execution.MLClientCtx] = None, auto_log: bool = True, tensorboard_directory: Optional[str] = None, mlrun_callback_kwargs: Optional[Dict[str, Any]] = None, tensorboard_callback_kwargs: Optional[Dict[str, Any]] = None, use_horovod: Optional[bool] = None, **kwargs) → mlrun.frameworks.tf_keras.model_handler.TFKerasModelHandler#

Wrap the given model with MLRun’s interface, providing it with MLRun’s additional features.
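
For example, a minimal training sketch (the architecture, data, and model name below are illustrative placeholders):

    import numpy as np
    from tensorflow import keras

    from mlrun.frameworks.tf_keras import apply_mlrun

    # A regular compiled Keras model (placeholder architecture).
    model = keras.Sequential(
        [
            keras.layers.Dense(32, activation="relu", input_shape=(10,)),
            keras.layers.Dense(1),
        ]
    )
    model.compile(optimizer="adam", loss="mse")

    # Wrap the model. With auto_log=True (the default), training metrics
    # and the model artifact are logged to the MLRun context.
    apply_mlrun(model=model, model_name="my_model")

    # Train as usual - logging happens through the attached callbacks.
    x, y = np.random.rand(100, 10), np.random.rand(100, 1)
    model.fit(x, y, epochs=2)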

Parameters
  • model – The model to wrap. If not given, the model can be loaded from the provided model path.

  • model_name – The model name to use for storing the model artifact. If not given, the model’s tf.keras.Model.name will be used.

  • tag – The model’s tag to log with.

  • model_path – The model’s store object path. Mandatory for evaluation (to know which model to update). If the model is not provided, it will be loaded from this path (see the evaluation sketch after the Returns section).

  • model_format – The format to use for saving and loading the model. Should be passed as a member of the class ‘ModelFormats’. Default: ‘ModelFormats.SAVED_MODEL’.

  • save_traces – Whether or not to save the model with its function traces (only available for the ‘SavedModel’ format), allowing the model to be loaded later without the custom objects dictionary. Supported only for TensorFlow version >= 2.4.0. Using this setting will increase the saved model’s size.

  • modules_map

    A dictionary of all the modules required for loading the model. Each key is a path to a module and its value is the object name to import from it. All the modules will be imported globally. If multiple objects need to be imported from the same module, a list can be given. The map can be passed as a path to a JSON file as well (see the sketch after the Returns section). For example:

    {
        "module1": None,  # import module1
        "module2": ["func1", "func2"],  # from module2 import func1, func2
        "module3.sub_module": "func3",  # from module3.sub_module import func3
    }
    

    If the given model path is that of a store object, the modules map will be read from the model’s logged modules map artifact.

  • custom_objects_map

    A dictionary of all the custom objects required for loading the model. Each key is a path to a python file and its value is the custom object name to import from it. If multiple objects need to be imported from the same py file, a list can be given. The map can be passed as a path to a JSON file as well. For example:

    {
        "/.../custom_optimizer.py": "optimizer",
        "/.../custom_layers.py": ["layer1", "layer2"]
    }
    

    All the paths will be accessed from the given ‘custom_objects_directory’, meaning each py file will be read from ‘custom_objects_directory/<MAP KEY>’. If the given model path is that of a store object, the custom objects map will be read from the model’s logged custom object map artifact. Notice: the custom objects will be imported in the order they appear in this dictionary (or JSON file). If a custom object depends on another, make sure to place it after the one it relies on. A sketch of these parameters in use is given after the Returns section.

  • custom_objects_directory – Path to the directory holding all the python files required for the custom objects. Can be passed as a zip file as well (it will be extracted during the run before loading the model). If the given model path is that of a store object, the custom objects files will be read from the model’s logged custom objects artifact.

  • context – MLRun context to work with. If no context is given, it will be retrieved via ‘mlrun.get_or_create_ctx(None)’.

  • auto_log – Whether or not to apply MLRun’s auto logging on the model. Default: True.

  • tensorboard_directory – If a context is not given, or if you wish to set the directory explicitly even when a context is available, this will be the output directory for TensorBoard’s event logs. If not given, the ‘tensorboard_dir’ parameter will be taken from the provided context. If it is not found in the context either, the default TensorBoard output directory will be /User/.tensorboard/<PROJECT_NAME>, or, when working locally, the set artifacts path.

  • mlrun_callback_kwargs – Keyword arguments for the MLRun callback. For further information, see the documentation of the class ‘MLRunLoggingCallback’. Note that the ‘context’ and ‘auto_log’ parameters are already passed here.

  • tensorboard_callback_kwargs – Keyword arguments for the TensorBoard callback. For further information, see the documentation of the class ‘TensorboardLoggingCallback’. Note that the ‘context’ and ‘auto_log’ parameters are already passed here.

  • use_horovod – Whether or not to use Horovod, a distributed training framework. Default: None, meaning the value will be read from the context if available, and otherwise will default to False.

Returns

The handler of the wrapped model (a TFKerasModelHandler), exposing the model with MLRun’s interface.
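
As an illustration of the custom objects parameters, here is a sketch assuming a hypothetical file custom_layers.py, defining a custom layer class MyLayer, was placed under ./custom_objects:

    from tensorflow import keras

    # Hypothetical import: ./custom_objects/custom_layers.py defines MyLayer.
    from custom_objects.custom_layers import MyLayer

    from mlrun.frameworks.tf_keras import apply_mlrun

    model = keras.Sequential([MyLayer(), keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

    # The map's keys are the py file names (read from within
    # 'custom_objects_directory') and its values are the objects to import.
    apply_mlrun(
        model=model,
        model_name="my_model",
        custom_objects_map={"custom_layers.py": "MyLayer"},
        custom_objects_directory="./custom_objects",
    )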
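
The modules map (and likewise the custom objects map) can also be written to a JSON file and passed by path. A small sketch using the illustrative module names from the example above (note that JSON uses null where Python uses None):

    import json

    from tensorflow import keras

    from mlrun.frameworks.tf_keras import apply_mlrun

    # Dump the mapping to a JSON file and pass its path instead of the dict.
    modules_map = {
        "module1": None,
        "module2": ["func1", "func2"],
        "module3.sub_module": "func3",
    }
    with open("modules_map.json", "w") as json_file:
        json.dump(modules_map, json_file)

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

    apply_mlrun(model=model, model_name="my_model", modules_map="modules_map.json")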
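
And a sketch of the evaluation flow, assuming a model was previously logged to the project (the store object path and the stand-in data below are placeholders):

    import numpy as np

    import mlrun
    from mlrun.frameworks.tf_keras import apply_mlrun

    context = mlrun.get_or_create_ctx("evaluation")

    # With no model given, the model is loaded from the store object path;
    # the returned handler exposes it through its 'model' attribute.
    handler = apply_mlrun(
        model_path="store://models/my-project/my_model:latest",  # placeholder
        context=context,
    )
    x_test, y_test = np.random.rand(20, 10), np.random.rand(20, 1)  # stand-in data
    handler.model.evaluate(x_test, y_test)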