mlrun.frameworks.pytorch

mlrun.frameworks.pytorch.evaluate(model_path: str, dataset: torch.utils.data.DataLoader, model: torch.nn.Module | None = None, loss_function: torch.nn.Module | None = None, metric_functions: list[Union[Callable[[torch.Tensor, torch.Tensor], Union[int, float, numpy.ndarray, torch.Tensor]], torch.nn.Module]] | None = None, iterations: int | None = None, callbacks_list: list[mlrun.frameworks.pytorch.callbacks.callback.Callback] | None = None, use_cuda: bool = True, use_horovod: bool = False, auto_log: bool = True, model_name: str | None = None, modules_map: dict[str, Union[NoneType, str, list[str]]] | str | None = None, custom_objects_map: dict[str, Union[str, list[str]]] | str | None = None, custom_objects_directory: str | None = None, mlrun_callback_kwargs: dict[str, Any] | None = None, context: MLClientCtx | None = None) → tuple[mlrun.frameworks.pytorch.model_handler.PyTorchModelHandler, list[Union[int, float, numpy.ndarray, torch.Tensor]]]

Use MLRun's PyTorch interface to evaluate the model with the given parameters. For more information and further options regarding the auto-logging, see the 'PyTorchMLRunInterface' documentation. Note for auto-logging: in order to log the model to MLRun, its class (a torch.nn.Module) must be in the custom objects map or the modules map. A usage sketch follows this entry.

Parameters:
  • model_path -- The model's store object path. Mandatory for evaluation (to know which model to update).

  • dataset -- A data loader for the validation process.

  • model -- The model to evaluate. If None, the model will be loaded from the given store model path.

  • loss_function -- The loss function to use during the evaluation.

  • metric_functions -- The metrics to use during the evaluation.

  • iterations -- Number of iterations (batches) to perform on the dataset. If 'None', the entire dataset will be used.

  • callbacks_list -- The callbacks to use on this run.

  • use_cuda -- Whether to use CUDA. Only relevant if CUDA is available. Default: True.

  • use_horovod -- Whether to use Horovod, a distributed training framework. Default: False.

  • auto_log -- Whether to apply auto-logging to MLRun. Default: True.

  • model_name -- The model name to use for storing the model artifact. If not given, the model's class name will be used.

  • modules_map --

    A dictionary of all the modules required for loading the model. Each key is a path to a module and its value is the object name to import from it. All the modules will be imported globally. If multiple objects need to be imported from the same module, a list can be given. The map can also be passed as a path to a JSON file. For example:

    {
        "module1": None,  # import module1
        "module2": ["func1", "func2"],  # from module2 import func1, func2
        "module3.sub_module": "func3",  # from module3.sub_module import func3
    }
    

    If the given model path is of a store object, the modules map will be read from the logged modules map artifact of the model.

  • custom_objects_map --

    A dictionary of all the custom objects required for loading the model. Each key is a path to a python file and its value is the custom object name to import from it. If multiple objects need to be imported from the same py file, a list can be given. The map can also be passed as a path to a JSON file. For example:

    {
        "/.../custom_optimizer.py": "optimizer",
        "/.../custom_layers.py": ["layer1", "layer2"],
    }
    

    All the paths will be accessed from the given 'custom_objects_directory', meaning each py file will be read from 'custom_objects_directory/<MAP KEY>'. If the given model path is of a store object, the custom objects map will be read from the logged custom object map artifact of the model. Note: the custom objects will be imported in the order they appear in this dictionary (or JSON file). If a custom object depends on another, make sure to place it below the one it relies on.

  • custom_objects_directory -- Path to the directory containing all the Python files required for the custom objects. Can also be passed as a zip file (it will be extracted during the run before loading the model). If the given model path is of a store object, the custom objects files will be read from the logged custom object artifact of the model.

  • mlrun_callback_kwargs -- Keyword arguments for the MLRun callback. For further information see the documentation of the class 'MLRunLoggingCallback'. Note that the 'context', 'custom_objects' and 'auto_log' parameters are already passed here.

  • context -- The context to use for the logs.

Returns:

A tuple of: [0] = the initialized model handler with the evaluated model. [1] = the list of evaluation metric results.
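
Example (a minimal evaluation sketch; the store object path and the toy dataset below are hypothetical placeholders, any 'torch.utils.data.DataLoader' and logged model store path will do):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    import mlrun
    from mlrun.frameworks.pytorch import evaluate

    # Toy evaluation data - replace with your real validation set:
    x = torch.randn(100, 10)
    y = torch.randn(100, 1)
    eval_loader = DataLoader(TensorDataset(x, y), batch_size=16)

    context = mlrun.get_or_create_ctx("evaluation")

    # 'model_path' points at a previously logged model (hypothetical path):
    model_handler, results = evaluate(
        model_path="store://models/my-project/my-model",
        dataset=eval_loader,
        loss_function=torch.nn.MSELoss(),
        metric_functions=[torch.nn.L1Loss()],
        context=context,
    )

The first returned element is the initialized model handler and the second is the list of evaluation metric results, as described above.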

mlrun.frameworks.pytorch.train(model: torch.nn.Module, training_set: torch.utils.data.DataLoader, loss_function: torch.nn.Module, optimizer: torch.optim.Optimizer, validation_set: torch.utils.data.DataLoader | None = None, metric_functions: list[Union[Callable[[torch.Tensor, torch.Tensor], Union[int, float, numpy.ndarray, torch.Tensor]], torch.nn.Module]] | None = None, scheduler=None, scheduler_step_frequency: int | float | str = 'epoch', epochs: int = 1, training_iterations: int | None = None, validation_iterations: int | None = None, callbacks_list: list[mlrun.frameworks.pytorch.callbacks.callback.Callback] | None = None, use_cuda: bool = True, use_horovod: bool | None = None, auto_log: bool = True, model_name: str | None = None, modules_map: dict[str, Union[NoneType, str, list[str]]] | str | None = None, custom_objects_map: dict[str, Union[str, list[str]]] | str | None = None, custom_objects_directory: str | None = None, tensorboard_directory: str | None = None, mlrun_callback_kwargs: dict[str, Any] | None = None, tensorboard_callback_kwargs: dict[str, Any] | None = None, context: MLClientCtx | None = None) → PyTorchModelHandler

Use MLRun's PyTorch interface to train the model with the given parameters. For more information and further options regarding the auto-logging, see the 'PyTorchMLRunInterface' documentation. Note for auto-logging: in order to log the model to MLRun, its class (a torch.nn.Module) must be in the custom objects map or the modules map. A usage sketch follows at the end of this entry.

Parameters:
  • model -- The model to train.

  • training_set -- A data loader for the training process.

  • loss_function -- The loss function to use during training.

  • optimizer -- The optimizer to use during the training.

  • validation_set -- A data loader for the validation process.

  • metric_functions -- The metrics to use during training and validation.

  • scheduler -- Scheduler to use on the optimizer at the end of each epoch. The scheduler must have a 'step' method that takes no arguments.

  • scheduler_step_frequency -- The frequency at which to step the given scheduler. Can be one of the strings 'epoch' (step at the end of every epoch) or 'batch' (step at the end of every batch), an integer specifying the number of iterations between steps, or a float percentage (0.0 < x < 1.0) to step once per that fraction of the epoch's iterations. Default: 'epoch'.

  • epochs -- Number of epochs to perform. Default: a single epoch.

  • training_iterations -- Number of iterations (batches) to perform in each epoch's training. If 'None', the entire training set will be used.

  • validation_iterations -- Number of iterations (batches) to perform in each epoch's validation. If 'None', the entire validation set will be used.

  • callbacks_list -- The callbacks to use on this run.

  • use_cuda -- Whether to use CUDA. Only relevant if CUDA is available. Default: True.

  • use_horovod -- Whether to use Horovod, a distributed training framework. Default: False.

  • auto_log -- Whether to apply auto-logging (to both MLRun and TensorBoard). Default: True. If True, the custom objects parameters are not optional.

  • model_name -- The model name to use for storing the model artifact. If not given, the model's class name will be used.

  • modules_map --

    A dictionary of all the modules required for loading the model. Each key is a path to a module and its value is the object name to import from it. All the modules will be imported globally. If multiple objects need to be imported from the same module, a list can be given. The map can also be passed as a path to a JSON file. For example:

    {
        "module1": None,  # import module1
        "module2": ["func1", "func2"],  # from module2 import func1, func2
        "module3.sub_module": "func3",  # from module3.sub_module import func3
    }
    

    If the given model path is of a store object, the modules map will be read from the logged modules map artifact of the model.

  • custom_objects_map --

    A dictionary of all the custom objects required for loading the model. Each key is a path to a python file and its value is the custom object name to import from it. If multiple objects need to be imported from the same py file, a list can be given. The map can also be passed as a path to a JSON file. For example:

    {
        "/.../custom_optimizer.py": "optimizer",
        "/.../custom_layers.py": ["layer1", "layer2"],
    }
    

    All the paths will be accessed from the given 'custom_objects_directory', meaning each py file will be read from 'custom_objects_directory/<MAP KEY>'. If the given model path is of a store object, the custom objects map will be read from the logged custom object map artifact of the model. Note: the custom objects will be imported in the order they appear in this dictionary (or JSON file). If a custom object depends on another, make sure to place it below the one it relies on.

  • custom_objects_directory -- Path to the directory containing all the Python files required for the custom objects. Can also be passed as a zip file (it will be extracted during the run before loading the model). If the given model path is of a store object, the custom objects files will be read from the logged custom object artifact of the model.

  • tensorboard_directory -- If a context is not given, or if you wish to set the directory even when a context is given, this will be the output directory for the TensorBoard event logs. If not given, the 'tensorboard_dir' parameter will be taken from the provided context. If it is not found in the context either, the default TensorBoard output directory will be /User/.tensorboard/<PROJECT_NAME>, or the set artifacts path when working locally.

  • mlrun_callback_kwargs -- Keyword arguments for the MLRun callback. For further information see the documentation of the class 'MLRunLoggingCallback'. Note that the 'context', 'custom_objects' and 'auto_log' parameters are already passed here.

  • tensorboard_callback_kwargs -- Keyword arguments for the TensorBoard callback. For further information see the documentation of the class 'TensorboardLoggingCallback'. Note that both the 'context' and 'auto_log' parameters are already passed here.

  • context -- The context to use for the logs.

Returns:

A model handler with the provided model and parameters.

Raises:

ValueError -- If 'auto_log' is set to True and any of the custom objects or modules parameters given is None.
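
Example (a minimal training sketch; 'Net', 'net.py' and './custom_objects' are hypothetical names used to satisfy the auto-logging requirement above):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    import mlrun
    from mlrun.frameworks.pytorch import train

    class Net(torch.nn.Module):
        # A toy regression model. For auto-logging, a copy of this class is
        # assumed to live in './custom_objects/net.py' (hypothetical), so the
        # custom objects map below can re-import it when the model is loaded.
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(10, 1)

        def forward(self, x):
            return self.fc(x)

    # Toy training data - replace with your real training set:
    x = torch.randn(200, 10)
    y = torch.randn(200, 1)
    train_loader = DataLoader(TensorDataset(x, y), batch_size=16)

    model = Net()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # StepLR's 'step()' takes no arguments, as required by the scheduler parameter:
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)

    context = mlrun.get_or_create_ctx("training")

    handler = train(
        model=model,
        training_set=train_loader,
        loss_function=torch.nn.MSELoss(),
        optimizer=optimizer,
        scheduler=scheduler,
        scheduler_step_frequency="epoch",  # step the scheduler once per epoch
        epochs=3,
        # auto_log defaults to True, so the custom objects parameters are required:
        custom_objects_map={"net.py": "Net"},
        custom_objects_directory="./custom_objects",
        context=context,
    )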