A Novel Distributed ADMM Method in Large-Scale Applications

Authors

  • Heric Tsang

DOI:

https://doi.org/10.62306/ANDAMILSA

Keywords:

distributed optimization, ADMM, variance reduction, Hessian approximation, flexibility

Abstract

The stochastic alternating direction method of multipliers (ADMM) has shown great promise in distributed environments, and improvements in algorithmic flexibility are expected to yield significant advantages. In this paper, we propose Flex-SADMM, a novel stochastic optimization method based on distributed ADMM. To solve the ADMM subproblems, we integrate variance-reduced first-order information with approximate second-order information, which stabilizes convergence and sharpens the search direction. Moreover, in contrast to other ADMM-based approaches that require updates from every computational node in each iteration, our framework only requires each node to update at least once within a predetermined iteration interval, greatly increasing flexibility. Numerical experiments validate the effectiveness and improved flexibility of the proposed approach.
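To make the two ideas in the abstract concrete, the following minimal Python sketch illustrates a consensus-form distributed ADMM loop in which (i) each node solves its local subproblem inexactly with SVRG-style variance-reduced gradient steps, and (ii) a node may skip a round as long as it updates at least once within a window of tau iterations. This is an illustrative assumption-based sketch, not the authors' Flex-SADMM implementation; all function and parameter names are hypothetical, and the approximate second-order (Hessian) correction mentioned in the abstract is omitted for brevity.

    import numpy as np

    def flex_sadmm_sketch(A_list, b_list, rho=1.0, lr=0.1, tau=3,
                          iters=100, inner_steps=5, seed=0):
        """Toy consensus problem: min_z sum_i (1/(2 m_i)) * ||A_i z - b_i||^2."""
        rng = np.random.default_rng(seed)
        n_nodes, d = len(A_list), A_list[0].shape[1]
        x = np.zeros((n_nodes, d))           # local primal variables
        z = np.zeros(d)                      # global consensus variable
        u = np.zeros((n_nodes, d))           # scaled dual variables
        last = np.zeros(n_nodes, dtype=int)  # iteration of each node's last update

        for k in range(iters):
            for i in range(n_nodes):
                # Flexibility: a node may sit out this round, but must
                # contribute at least once within any window of tau iterations.
                if rng.random() < 0.5 and k - last[i] < tau - 1:
                    continue
                last[i] = k
                A, b, m = A_list[i], b_list[i], A_list[i].shape[0]
                # SVRG-style reference point and full (averaged) local gradient.
                snap = x[i].copy()
                full_g = A.T @ (A @ snap - b) / m
                for _ in range(inner_steps):
                    j = rng.integers(m)
                    a, bj = A[j], b[j]
                    # Variance-reduced gradient estimate of the local loss at x[i].
                    g = a * (a @ x[i] - bj) - a * (a @ snap - bj) + full_g
                    # Inexact gradient step on the ADMM subproblem
                    #   argmin_x f_i(x) + (rho/2) * ||x - z + u_i||^2
                    x[i] -= lr * (g + rho * (x[i] - z + u[i]))
            z = (x + u).mean(axis=0)         # consensus (z) update
            u += x - z                       # dual ascent on the scaled multipliers
        return z

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x_true = rng.normal(size=5)
        A_list = [rng.normal(size=(20, 5)) for _ in range(4)]
        b_list = [A @ x_true for A in A_list]
        z = flex_sadmm_sketch(A_list, b_list)
        print("recovery error:", np.linalg.norm(z - x_true))

In this toy least-squares setting the consensus variable z should approach x_true; the point of the sketch is only to show where variance-reduced local steps and interval-bounded node participation slot into a standard consensus-ADMM loop.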

Published

2024-06-17

Issue

Section

Articles