
Consensus based optimization with memory effects: random selection and applications

Pareschi, Lorenzo (last author)
2023

Abstract

In this work we extend the class of Consensus-Based Optimization (CBO) metaheuristic methods by considering memory effects and a random selection strategy. The proposed algorithm iteratively updates a population of particles according to a consensus dynamics inspired by social interactions among individuals. The consensus point is computed taking into account the past positions of all particles. While sharing features with the popular Particle Swarm Optimization (PSO) method, the exploratory behavior is fundamentally different and allows better control over the convergence of the particle system. We discuss implementation aspects that increase efficiency while preserving the success rate of the optimization process. In particular, we show how employing a random selection strategy to discard particles during the computation improves the overall performance. Several benchmark problems and applications to image segmentation and neural network training are used to validate and test the proposed method. A theoretical analysis allows us to recover convergence guarantees under mild assumptions on the objective function. This is done by first approximating the evolution of the particles with a continuous-in-time dynamics, and then by taking the mean-field limit of that dynamics. Convergence to a global minimizer is then proved at the mean-field level.
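The abstract describes the method's three ingredients: a weighted consensus point, memory of past particle positions, and random discarding of particles. The following is a minimal illustrative sketch of such a dynamics, not the paper's exact scheme: the hyperparameter values, the best-position form of the memory, and the periodic discard schedule are all assumptions made for the example.

```python
import numpy as np

def cbo_memory(f, dim, n_particles=100, n_iter=500, dt=0.1,
               lam=1.0, sigma=0.7, alpha=50.0,
               discard_every=50, discard_frac=0.1, seed=0):
    """Illustrative CBO-style dynamics with memory and random selection.

    Hyperparameters and the discard schedule are assumptions for this
    sketch, not the tuned values from the paper.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0, size=(n_particles, dim))  # current positions
    y = x.copy()                                         # memories: best position seen per particle
    fy = np.apply_along_axis(f, 1, y)

    for t in range(n_iter):
        # Consensus point: Gibbs-weighted average of the memories y_i.
        w = np.exp(-alpha * (fy - fy.min()))             # shift by min for numerical stability
        x_cons = (w[:, None] * y).sum(axis=0) / w.sum()

        # Drift toward consensus plus multiplicative (anisotropic) noise
        # that vanishes as particles reach consensus.
        diff = x - x_cons
        noise = rng.standard_normal(x.shape)
        x = x - lam * dt * diff + sigma * np.sqrt(dt) * diff * noise

        # Update each memory where the new position improves the objective.
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fy
        y[better] = x[better]
        fy[better] = fx[better]

        # Random selection: periodically discard a random subset of particles.
        if discard_every and (t + 1) % discard_every == 0 and len(x) > 10:
            keep = rng.random(len(x)) > discard_frac
            x, y, fy = x[keep], y[keep], fy[keep]

    best = fy.argmin()
    return y[best], fy[best]
```

On a simple convex objective such as a shifted sphere function, the population contracts around the consensus point while the memories keep only improving, so the returned best memory lands near the global minimizer.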
Borghi, Giacomo; Grassi, Sara; Pareschi, Lorenzo
Files in this record:

File: 1-s2.0-S0960077923007609-main.pdf (archive administrators only)
Type: Full text (publisher's version)
License: NOT PUBLIC - Private/restricted access
Size: 2.45 MB
Format: Adobe PDF

File: 2301.13242.pdf (open access)
Type: Pre-print
License: PUBLIC - Public with copyright
Size: 2.97 MB
Format: Adobe PDF

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2530423
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0