Abstract
In this paper we study multi-agent saddle-point problems in which multiple agents collectively optimize a sum of local convex–concave functions, each available only to one specific agent in the network. We propose a distributed primal–dual subgradient method under the constraint that agents can communicate only quantized information. The method can be implemented over a time-varying network satisfying standard connectivity conditions. We provide convergence results and convergence rate estimates for the proposed method. In particular, we provide an error bound on its convergence rate that highlights the dependence on the quantization resolution. We also provide a numerical example demonstrating the effectiveness of the proposed method.
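To make the setting concrete, the following is a minimal sketch (not the paper's exact algorithm) of a distributed primal–dual subgradient iteration with uniformly quantized communication. The local functions f_i(x, y) = ½(x − c_i)² − ½(y − d_i)², the quantization resolution `delta`, the step size `alpha`, and the two alternating doubly stochastic mixing matrices modeling a time-varying network are all illustrative assumptions, not choices from the paper.

```python
import numpy as np

def quantize(v, delta=0.01):
    # Uniform quantizer with resolution delta (assumed scheme; the
    # paper's quantizer may differ).
    return delta * np.round(v / delta)

# Illustrative local convex-concave functions (not from the paper):
# f_i(x, y) = 0.5*(x - c_i)**2 - 0.5*(y - d_i)**2, so the saddle point
# of the sum is (mean(c), mean(d)).
c = np.array([1.0, 2.0, 3.0])
d = np.array([0.0, 1.0, 2.0])
n = 3

# Two doubly stochastic mixing matrices applied in alternation, mimicking
# a time-varying network that is jointly connected over two steps.
W1 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])
W2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])

x = np.zeros(n)  # primal iterate of each agent
y = np.zeros(n)  # dual iterate of each agent
alpha = 0.1      # constant step size (assumed)

for k in range(2000):
    W = W1 if k % 2 == 0 else W2
    # Agents exchange quantized iterates and mix them...
    x_mix = W @ quantize(x)
    y_mix = W @ quantize(y)
    # ...then take a local subgradient step: descent in x, ascent in y.
    x = x_mix - alpha * (x_mix - c)   # grad_x f_i = x - c_i
    y = y_mix - alpha * (y_mix - d)   # grad_y f_i = -(y - d_i)
```

With a constant step size, the iterates settle in a neighborhood of the saddle point (2.0, 1.0) whose size grows with the quantization resolution `delta`, which is the kind of dependence the paper's error bound makes precise.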
