A Machine Learning Operations Platform for Streamlined Model Serving in Industry 5.0

Colombi, Lorenzo; Gilli, Alessandro; Dahdal, Simon; Boleac, Ion; Tortonesi, Mauro; Stefanelli, Cesare
2024

Abstract

Machine Learning (ML) plays an increasingly important role in many Big Data applications in Industry 5.0: predictive maintenance, zero defect manufacturing, process and/or supply chain optimization, etc. However, the dynamic and high-stakes nature of the manufacturing environment requires ML models to be maintained through continuous monitoring, periodic reevaluation, and possible retraining to ensure they remain accurate and relevant to the current operational context. In addition, to match the desired performance (as well as security and safety) requirements, ML models need to be executed in different locations along the edge-to-Cloud continuum (and possibly migrated when needed), on dedicated serving runtimes that suit the specific needs of the use case. To address these issues, we developed an MLOps platform that manages ML models through their entire lifecycle and enables their deployment in different ML serving runtimes. More specifically, the initial experimental evaluation presented in the paper focuses on the Bento Yatai and TorchServe serving runtimes. It demonstrates that our platform effectively runs ML models on both runtimes, and it provides a comparative evaluation at both the quantitative and qualitative levels.
Big Data
Industry 5.0
Machine Learning
Machine Learning Operations (MLOps)
Service Management
Files in this record:
No files are associated with this record.

Documents in SFERA are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2574902
Warning

The data displayed have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science: 1