Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

Stephen Boyd, Neal Parikh, Eric Chu, and Borja Peleato
Friday, November 12, 2010, 11:30 AM to 12:30 PM
Thomas 206

Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. Decentralized collection or storage of these datasets, as well as accompanying distributed solution methods, is either necessary or at least highly desirable. We argue that the alternating direction method of multipliers is well-suited for these problems. The method was developed in the 1970s, with roots in the 1950s, and is closely related to other algorithms such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, and proximal methods. After briefly surveying the algorithm's theory and history, we focus on applications to distributed model fitting problems.
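To give a flavor of the method mentioned in the abstract, here is a minimal sketch of ADMM applied to the lasso, a standard model fitting example. The splitting, penalty parameter rho, and iteration count below are illustrative choices, not details from the talk; the problem is minimize (1/2)||Ax - b||^2 + lam*||z||_1 subject to x = z.

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=500):
    """Illustrative ADMM iteration for the lasso, using the consensus
    split x = z: x carries the quadratic loss, z carries the l1 penalty."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Factor (A^T A + rho I) once; it is reused every iteration
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: prox of the l1 term, i.e. soft-thresholding
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the primal residual x - z
        u = u + x - z
    return z
```

The key practical point, echoed in the abstract's emphasis on distributed solution methods, is that each update touches only one term of the objective, so the x- and z-steps can be split across machines or data blocks.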