Batch runs and workflows
In this section
MLRun execution context
Decorators and auto-logging
Running a task (job)
Running a multi-stage workflow
Scheduled jobs and workflows
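
The pages listed above share one pattern: code is wrapped in an MLRun function, executed as a tracked run, and optionally chained into a workflow or placed on a schedule. The following minimal sketch touches the first three topics; it assumes a local MLRun installation (pip install mlrun), and names such as train, trainer, and p1 are illustrative examples, not part of the MLRun API.

```python
import mlrun

# Decorators and auto-logging: the mlrun.handler decorator logs the
# handler's return value as a run result under the given name.
@mlrun.handler(outputs=["accuracy"])
def train(p1: int = 1):
    return p1 * 0.9


# Running a task (job): wrap the code in an MLRun function and execute it.
# local=True runs in-process while still tracking parameters and results.
fn = mlrun.new_function(name="trainer", kind="job", image="mlrun/mlrun")
run = fn.run(handler=train, params={"p1": 5}, local=True)
print(run.outputs)  # {'accuracy': 4.5}

# Scheduled jobs: against a remote MLRun service, the same task can be
# scheduled with a cron expression instead of running immediately:
# fn.run(handler="train", params={"p1": 5}, schedule="0 0 * * *")
```

Multi-stage workflows go through the project API rather than a single function run. As a sketch, assuming a project directory containing a hypothetical workflow.py that defines the pipeline:

```python
import mlrun

# Load (or create) a project and register a workflow defined in a file
# (workflow.py is a placeholder for your own pipeline definition).
project = mlrun.get_or_create_project("batch-demo", context="./")
project.set_workflow("main", "workflow.py")

# Run the multi-stage workflow; passing schedule="0 0 * * *" instead
# would turn the same call into a scheduled workflow.
project.run("main", arguments={"p1": 5}, watch=True)
```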