
A Machine Learning Operations Platform for Streamlined Model Serving in Industry 5.0

Colombi, Lorenzo; Gilli, Alessandro; Dahdal, Simon; Boleac, Ion; Tortonesi, Mauro; Stefanelli, Cesare
2024

Abstract

Machine Learning (ML) plays an increasingly important role in many Big Data applications in Industry 5.0: predictive maintenance, zero-defect manufacturing, process and/or supply chain optimization, etc. However, the dynamic and high-stakes nature of the manufacturing environment requires ML models to be maintained through continuous monitoring, periodic reevaluation, and possible retraining to ensure they remain accurate and relevant to the actual context. In addition, to meet the desired performance (as well as security and safety) requirements, ML models need to be executed in different locations along the edge-to-Cloud continuum (and possibly migrated when needed), on dedicated serving runtimes that suit the specific needs of the use case. To address these issues, we developed an MLOps platform capable of managing ML models through their entire lifecycle and of deploying them on different ML serving runtimes. More specifically, the initial experimental evaluation presented in the paper focuses on the BentoML Yatai and TorchServe serving runtimes. It demonstrates that our platform can effectively run ML models on both runtimes, and it provides a comparative evaluation at both the quantitative and qualitative levels.
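The serving runtimes the abstract compares (BentoML Yatai, TorchServe) both expose trained models behind an HTTP inference endpoint. The sketch below is not the paper's platform nor either runtime's actual API; it is a stdlib-only illustration, under assumed names (`predict`, `InferenceHandler`, a POST endpoint on port 8080, a made-up defect threshold), of the request/response contract such a runtime offers to clients.

```python
# Illustrative sketch of an HTTP inference contract like the one ML serving
# runtimes (e.g. TorchServe, BentoML Yatai) expose. All names and the 0.8
# threshold are hypothetical; a real runtime would load a trained model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in model: flags a part as defective if any normalized sensor
    reading exceeds a fixed threshold."""
    return {"defective": any(x > 0.8 for x in features)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Serving runtimes typically accept a JSON payload on a POST
        # endpoint and return the model output as JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks the process):
# HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

A client would then POST e.g. `{"features": [0.1, 0.9]}` and receive `{"defective": true}`; migrating the model along the edge-to-Cloud continuum, as the abstract describes, amounts to re-deploying this endpoint on a different host while keeping the contract stable.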
ISBN: 9798350327939
Keywords: Big Data; Industry 5.0; Machine Learning; Machine Learning Operations (MLOps); Service Management
Files in this record:
A_Machine_Learning_Operations_Platform_for_Streamlined_Model_Serving_in_Industry_5.0.pdf
Type: Full text (publisher's version)
License: NOT PUBLIC - Private/restricted access (archive administrators only; a copy can be requested)
Size: 2.01 MB
Format: Adobe PDF

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2574902
Citations
  • Scopus: 9
  • Web of Science: 8