MLRun basics

  • What is MLRun?
  • MLOps development flow
  • Quick-Start Guide
  • Tutorials
    • Getting-Started Tutorial
      • Part 1: MLRun Basics
      • Part 2: Training an ML Model
      • Part 3: Serving an ML Model
      • Part 4: Projects and Automated ML Pipeline
    • Converting Research Notebook to Operational Pipeline With MLRun
      • Original NYC Taxi ML Notebook
      • Refactored As Operational Pipeline (with MLRun)
      • Model Serving Function
  • Installation and setup guide
    • Installing MLRun on a Kubernetes Cluster
    • Install MLRun on a local Docker registry
    • Setting a remote environment

Concepts

  • Projects
    • Create and load projects
    • Using projects
    • Project workflows and automation
    • Working with secrets
  • MLRun serverless functions
    • Overview
    • Distributed Functions
      • Dask Distributed Runtime
        • Running Dask on the cluster with mlrun
        • Pipelines Using Dask, Kubeflow and MLRun
      • MPIJob and Horovod Runtime
      • Spark Operator Runtime
    • Nuclio real-time functions
    • Node affinity for MLRun jobs
  • Data stores and feature store
    • Data stores
    • Data items
    • Feature store
  • Runs, functions, and workflows
    • MLRun execution context
    • Submitting tasks/jobs to functions
    • Multi-stage workflows
    • Automated Logging and MLOps with apply_mlrun()
  • Artifacts and models
  • Deployment and monitoring

Working with data

  • Feature Store: Data ingestion
    • Feature sets
    • Feature set transformations
    • Using the Spark execution engine
  • Feature Store: Data retrieval
    • Creating and using feature vectors
    • Retrieve offline data and use it for training
    • Online access and serving
  • Feature Store tutorials
    • Feature store example (stocks)
    • Feature store end-to-end demo
      • Part 1: Data Ingestion
      • Part 2: Training
      • Part 3: Serving
      • Part 4: Automated ML pipeline

Develop functions and models

  • Creating and using functions
    • Configuring Functions
    • Converting Notebook Code to a Function
    • Using code from archives or file shares
    • Attach storage to functions
    • MLRun Functions Marketplace
    • Images and their usage in MLRun
  • Run, track, and compare jobs
    • Running simple jobs
    • Hyperparam and iterative jobs
    • Build and use function images (Kubernetes)

Deploy ML applications

  • Real-time serving pipelines (graphs)
    • Getting started
    • Use cases
    • Graph concepts and state machine
    • Writing custom steps
    • Built-in steps
    • Demos and Tutorials
      • Distributed (Multi-function) Pipeline Example
      • Advanced Model Serving Graph - Notebook Example
    • Serving graph high availability configuration
  • Model serving pipelines
    • Getting started with model serving
    • Creating a custom model serving class
    • Model Server API
  • Model monitoring overview (beta)
    • Model monitoring overview (beta)
    • Enable model monitoring (beta)
  • CI/CD, rolling upgrades, git
    • Github/Gitlab and CI/CD integration

References

  • API Index
  • API By Module
    • mlrun.frameworks
      • AutoMLRun
      • TensorFlow.Keras
      • PyTorch
      • SciKit-Learn
      • XGBoost
      • LightGBM
    • mlrun
    • mlrun.artifacts
    • mlrun.config
    • mlrun.datastore
    • mlrun.db
    • mlrun.execution
    • mlrun.feature_store
    • mlrun.model
    • mlrun.platforms
    • mlrun.projects
    • mlrun.run
    • mlrun.runtimes
    • mlrun.serving
    • storey.transformations - Graph transformations
  • Command-Line Interface (Tech preview)
  • Examples
By Iguazio

mlrun.frameworks.pytorch
© Copyright 2022, Iguazio.