Runge–Kutta-like scaling techniques for first-order methods in convex optimization
Porta, Federica; Ruggiero, Valeria
2017
Abstract
It is well known that there is a strong connection between time integration and convex optimization. In this work, inspired by the equivalence between the forward Euler scheme and the gradient descent method, we broaden our analysis to the family of Runge–Kutta methods and show that they enjoy a natural interpretation as first-order optimization algorithms. The strategies intrinsically suggested by Runge–Kutta methods are exploited to devise novel proposals for either scaling or preconditioning gradient-like approaches, whose convergence is ensured by the stability condition for Runge–Kutta schemes. The theoretical analysis is supported by numerical experiments carried out on test problems arising from applications where the proposed techniques can be efficiently employed.
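
To illustrate the equivalence the abstract refers to, the following is a minimal sketch, not the authors' scaling or preconditioning scheme: the quadratic test problem, the step size and the choice of Heun's method as the Runge–Kutta example are assumptions made here for illustration. Applying forward Euler to the gradient flow x'(t) = -grad f(x(t)) reproduces gradient descent, while an explicit two-stage Runge–Kutta (Heun) step combines two gradient evaluations into a gradient-like update; convergence of both iterations hinges on the step size lying in the stability region of the scheme.

    import numpy as np

    # Illustrative test problem (assumption): strongly convex quadratic
    # f(x) = 0.5 * x'Ax - b'x, whose gradient flow is x'(t) = -(Ax - b).
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])

    def grad_f(x):
        return A @ x - b

    def forward_euler_step(x, h):
        # One forward Euler step on the gradient flow coincides with one
        # gradient descent step with step size h.
        return x - h * grad_f(x)

    def heun_step(x, h):
        # One explicit two-stage Runge-Kutta (Heun) step on the gradient flow:
        # it averages two gradient evaluations and can be read as a
        # gradient-like iteration with a modified search direction.
        k1 = -grad_f(x)
        k2 = -grad_f(x + h * k1)
        return x + 0.5 * h * (k1 + k2)

    x_euler = np.zeros(2)
    x_heun = np.zeros(2)
    h = 0.3  # step size chosen inside the stability region of both schemes
    for _ in range(50):
        x_euler = forward_euler_step(x_euler, h)
        x_heun = heun_step(x_heun, h)

    x_star = np.linalg.solve(A, b)  # minimizer of f
    print("forward Euler / gradient descent error:", np.linalg.norm(x_euler - x_star))
    print("Heun (RK2) error:                      ", np.linalg.norm(x_heun - x_star))

Both iterations converge to the minimizer here because h times the largest eigenvalue of A stays within the stability interval of the respective scheme; choosing h too large makes the time integrator, and hence the optimization iteration, diverge.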