Distributed optimization of nonconvex multiagent systems: Theory and applications
The focus of this dissertation is to provide a unified and efficient solution method for the general nonconvex optimization problems arising from multiagent systems. A new class of distributed and parallel algorithms with provable convergence to locally optimal solutions of the nonconvex problem is proposed. More specifically, the study of the nonconvex optimization problem is gradually generalized, starting from a nonconvex objective function with convex private constraints and ending with a general formulation that includes nonconvex coupling constraints.

First, we propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions, which arise naturally in the design of wireless multi-user interfering systems. The first class of (inexact) Jacobi best-response algorithms with provable convergence is derived, in which all users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem. The framework can be interpreted as a general dynamic pricing mechanism that provides a unified view of existing pricing schemes based on heuristics. It can easily be particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad-hoc methods proposed for very specific problems.

The problem formulation is then generalized to allow convex coupling constraints in the feasible set. By a proper choice of the convex approximations, a new class of distributed Successive Convex Approximation (SCA)-based algorithms is proposed, hinging on primal and dual decomposition techniques.

Finally, a general algorithmic framework for the minimization of a nonconvex smooth objective function subject to nonconvex smooth constraints is proposed.
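For intuition, the (inexact) Jacobi best-response idea can be sketched as follows. Every detail here is an illustrative assumption, not taken from the dissertation: each agent owns one coordinate, the private constraint is a box, and the convex surrogate is a simple linearization of the objective plus a proximal term, which makes the per-agent best response available in closed form so that all agents can update in parallel.

```python
import numpy as np

# Toy nonconvex sum-utility example (our own, purely illustrative):
#   minimize  f(x) = sum_i (x_i^2 - cos(3 x_i))   s.t.  x_i in [lo, hi].
# Each agent i holds block x_i; the box constraint is private and convex.

def sca_jacobi(grad_f, x0, lo, hi, tau=10.0, gamma=0.9, iters=300):
    """Jacobi SCA sketch: every agent simultaneously minimizes the
    strongly convex surrogate
        grad_i f(x^k) * (x_i - x_i^k) + (tau/2) (x_i - x_i^k)^2
    over [lo, hi], then takes a damped step toward that best response."""
    x = x0.copy()
    for _ in range(iters):
        # Closed-form surrogate minimizer: a projected proximal point.
        xhat = np.clip(x - grad_f(x) / tau, lo, hi)
        x = x + gamma * (xhat - x)  # damped (inexact) Jacobi update
    return x

f = lambda x: np.sum(x**2 - np.cos(3.0 * x))
grad_f = lambda x: 2.0 * x + 3.0 * np.sin(3.0 * x)

x_star = sca_jacobi(grad_f, x0=np.array([1.5, -1.2, 0.8]), lo=-2.0, hi=2.0)
```

With this separable surrogate the scheme reduces to a damped projected-gradient step, which is the simplest member of the SCA family; richer surrogates (e.g., keeping each agent's own utility intact and linearizing only the coupling terms) yield the pricing interpretation mentioned above.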
This final framework solves a sequence of (separable) strongly convex subproblems and maintains feasibility at every iteration. Convergence to a stationary solution of the original nonconvex optimization problem is established. Our framework is very general and flexible: it unifies several existing SCA-based algorithms, such as (proximal) gradient and Newton-type methods, block coordinate (parallel) descent schemes, and D.C.-function methods, and improves on their convergence properties. More importantly, and differently from current SCA approaches, it naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems.
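The feasibility-preserving mechanism can be illustrated on a tiny D.C.-constrained problem of our own choosing (the problem, data, and solver below are assumptions for illustration, not the dissertation's): writing a nonconvex constraint g = g1 - g2 with g1, g2 convex and linearizing g2 at the current iterate gives a convex *upper* bound on g, so the surrogate feasible set is contained in the true one and every iterate stays feasible.

```python
import numpy as np

# Toy problem:  minimize ||x - p||^2  s.t.  g(x) = 1 - ||x||^2 <= 0,
# i.e. stay outside the unit disk (a nonconvex feasible set).
# D.C. split: g1(x) = 1 (convex), g2(x) = ||x||^2 (convex).
# Linearizing g2 at x^k yields the convex inner approximation
#   1 - ||x^k||^2 - 2 x^k . (x - x^k) <= 0
# i.e. the half-space {x : 2 x^k . x >= 1 + ||x^k||^2}, which lies
# inside the true feasible set, so feasibility is never lost.

def feasible_sca(p, x0, iters=60):
    x = x0.astype(float).copy()
    for _ in range(iters):
        a = 2.0 * x          # gradient of g2 at the current iterate
        b = 1.0 + x @ x      # right-hand side of the half-space
        # Strongly convex subproblem: project p onto {z : a.z >= b}.
        # Closed form here because the objective is a squared distance.
        lam = max(0.0, (b - a @ p) / (a @ a))
        x = p + lam * a
    return x

p = np.array([0.2, 0.0])                      # target inside the excluded disk
x = feasible_sca(p, x0=np.array([1.5, 0.5]))  # feasible starting point
```

In this example the iterates converge to (1, 0), the closest feasible point to p on the unit circle; in the general framework the subproblem is not a projection but any separable strongly convex program, which is what enables the distributed implementations.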