Run, track, and compare jobs

In this section

  • Running simple jobs
    • Run with CLI locally: CSV output
  • Run with CLI locally: Python output
  • Run with CLI locally: Parquet output
    • Run with SDK
    • Viewing the output
  • Hyperparam and iterative jobs
    • Basic code
    • Review the results
    • Examples
    • Parallel execution over containers
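To make the hyperparameter pages above concrete: a hyperparameter job expands lists of parameter values into multiple iterations of the same handler and selects the best one by a result metric (e.g. a selector like `max.accuracy`). The sketch below mimics MLRun's grid strategy in plain Python for illustration only — it is not the MLRun implementation, and the `train` handler and its parameters are made up for the example; in MLRun you would pass `hyperparams={...}` and `selector="max.accuracy"` to the run.

```python
# Conceptual sketch of a hyperparameter grid run -- plain Python,
# not MLRun itself. The train() handler below is a hypothetical stand-in.
from itertools import product

def train(p1, p2):
    # Stand-in training handler; returns a result dict like a run would.
    return {"accuracy": p1 * 0.1 + p2 * 0.01}

hyperparams = {"p1": [2, 4, 8], "p2": [10, 20]}

# Grid strategy: one iteration per combination of the parameter lists.
keys = list(hyperparams)
iterations = [
    dict(zip(keys, combo))
    for combo in product(*(hyperparams[k] for k in keys))
]
results = [{"params": params, **train(**params)} for params in iterations]

# Selector "max.accuracy": keep the iteration with the highest accuracy.
best = max(results, key=lambda r: r["accuracy"])
print(best["params"])  # → {'p1': 8, 'p2': 20}
```

The grid strategy runs every combination (here 3 × 2 = 6 iterations); MLRun also supports other iteration strategies and can fan the iterations out over parallel containers, as described in "Parallel execution over containers".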

See also:

  • Automated Logging and MLOps with apply_mlrun()


By Iguazio
© Copyright 2022, Iguazio.