Gen AI liveOps#

The liveOps stage ensures that deployed models continue to perform well and adapt to new data.

Model Monitoring#

MLRun includes tools for monitoring the performance of deployed models in real time. These help identify issues in model performance and operational performance, as well as concept and data drift.
On top of the out-of-the-box analyses, you can easily create model-monitoring applications of your own, tailored to meet your needs.
Based on the monitoring data, MLRun can trigger automated retraining of models to ensure they remain accurate and effective over time.
See full details in Model monitoring.
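
For example, a custom monitoring application is typically written as a class that MLRun invokes on each monitoring window. The sketch below follows MLRun's model-monitoring application API (`ModelMonitoringApplicationBase`, `do_tracking`); treat the specific attribute and enum names, as well as the latency column and threshold, as illustrative assumptions to verify against your MLRun version.

```python
# A minimal sketch of a custom model-monitoring application.
# The base class, result class, and enums follow MLRun's model-monitoring API;
# the "latency" column and 0.5s threshold are illustrative assumptions.
from mlrun.common.schemas.model_monitoring.constants import (
    ResultKindApp,
    ResultStatusApp,
)
from mlrun.model_monitoring.applications import (
    ModelMonitoringApplicationBase,
    ModelMonitoringApplicationResult,
)


class LatencyMonitoringApp(ModelMonitoringApplicationBase):
    """Flag monitoring windows whose average prediction latency is too high."""

    def do_tracking(self, monitoring_context) -> ModelMonitoringApplicationResult:
        # monitoring_context exposes the sampled inference data for the window
        sample_df = monitoring_context.sample_df
        avg_latency = float(sample_df["latency"].mean())

        detected = avg_latency > 0.5  # seconds; assumed threshold
        return ModelMonitoringApplicationResult(
            name="avg_latency",
            value=avg_latency,
            kind=ResultKindApp.system_performance,
            status=ResultStatusApp.detected if detected else ResultStatusApp.no_detection,
        )
```

The application is then registered on the project so that MLRun runs it on each monitoring window alongside the built-in analyses, and its results can feed the automated retraining described above.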

Alerts#

Alerts inform you about potential or actual problem situations. Alerts can evaluate the same metrics as model monitoring: model performance, operational performance, and concept/data drift, as well as metrics that you define. Alerts send notifications through Git, Slack, and webhooks. See full details in Alerts and Notifications.
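
As a hedged sketch, an alert can be defined in code and stored on the project. The class and enum names below follow MLRun's alert objects (`AlertConfig`, `EventKind`, and so on), but the project name, endpoint ID, and webhook URL are placeholders, and exact signatures vary by version; check the Alerts and Notifications documentation for the definitive API.

```python
# Illustrative sketch: alert on detected data drift for a model endpoint,
# with a Slack notification. The project name, endpoint result ID, and
# webhook secret are placeholders.
import mlrun
import mlrun.common.schemas.alert as alert_objects
from mlrun.alerts.alert import AlertConfig

project = mlrun.get_or_create_project("my-project", context="./")

slack_notification = mlrun.model.Notification(
    kind="slack",
    name="drift-slack",
    secret_params={"webhook": "https://hooks.slack.com/services/<placeholder>"},
)

drift_alert = AlertConfig(
    project=project.name,
    name="drift-alert",
    summary="Data drift was detected on a model endpoint",
    severity=alert_objects.AlertSeverity.HIGH,
    entities=alert_objects.EventEntities(
        kind=alert_objects.EventEntityKind.MODEL_ENDPOINT_RESULT,
        project=project.name,
        ids=["<model-endpoint-result-id>"],
    ),
    trigger=alert_objects.AlertTrigger(
        events=[alert_objects.EventKind.DATA_DRIFT_DETECTED]
    ),
    notifications=[
        alert_objects.AlertNotification(notification=slack_notification)
    ],
)

# Register the alert on the project (verify the exact signature in the docs)
project.store_alert_config(drift_alert)
```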

Guardrails#

Guardrails are measures, guidelines, and frameworks designed to ensure the safe, reliable, and ethical use of AI-generated content. Typical goals are:
- Regulatory compliance: aligning LLM functionality with legal and regulatory standards.
- Fairness: ensuring outputs are unbiased and do not perpetuate stereotypes or discriminatory practices.
- Toxicity prevention: filtering out and preventing the generation of harmful or offensive content.
- Hallucination prevention: minimizing the risk of LLMs generating factually incorrect or misleading information.
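
Guardrails are usually applied as checks around the model's inputs and outputs in the serving flow. The snippet below is an illustration only: the function name, denylist, and fallback message are hypothetical (not an MLRun API), and real deployments typically combine policy models, toxicity classifiers, and grounding checks rather than simple term matching.

```python
# Illustration only: a minimal post-generation guardrail that withholds
# responses containing denylisted terms. The denylist, function name, and
# fallback text are hypothetical examples, not an MLRun API.
BLOCKED_TERMS = {"social security number", "credit card number"}


def apply_output_guardrail(generated_text: str) -> str:
    """Return the LLM output, or a safe fallback if it violates the denylist."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "This response was withheld because it may contain restricted content."
    return generated_text


# Example usage, e.g. as a postprocessing step in a serving flow:
print(apply_output_guardrail("The customer's credit card number is ..."))
```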

See