Author: John MacLaren Walsh
Total pages: 306
This dissertation discusses the performance and convergence of a family of algorithms for distributed iterative approximate statistical inference called expectation propagation algorithms [1, 2]. Notable examples of expectation propagation include the turbo decoder [3, 4, 5, 6, 7], Gallager's algorithm for the soft decoding of LDPC codes [8, 9, 10], the Kalman filter, the forward-backward algorithm, and belief propagation [13, 14]. These algorithms were proposed heuristically, and neither their convergence behavior nor the mechanism behind their good performance is well understood. After covering special cases in which the algorithms can be shown to converge to the optimal values, we provide a novel performance framework which shows that the stationary points of the expectation propagation algorithms solve a constrained maximum likelihood optimization problem. We also show that the stationary points of expectation propagation are critical points of a constrained, statistics-based Bethe free energy. We then discuss duality and the relationship between these two generic optimality frameworks. Continuing on, we study the mechanism of convergence behind expectation propagation and discover that it may be interpreted both as a nonlinear block Gauss-Seidel method [15, 16] applied to the gradient of the Lagrangian and as a variant of Dykstra's algorithm with iterated Bregman projections. Some limited convergence results are provided via the nonlinear block Gauss-Seidel interpretation. Throughout the dissertation, we take care to apply the abstract theory to particular well-known members of the expectation propagation family, most notably the belief propagation decoder and the turbo decoder. The dissertation concludes with a discussion of several avenues that the work therein has opened up for further research.
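The Dykstra connection mentioned above can be illustrated with a small self-contained example. The sketch below is only illustrative: it uses ordinary Euclidean projections onto two convex sets (a halfplane and the unit disk, both chosen arbitrarily here), whereas the dissertation's interpretation involves Bregman projections onto moment-matching constraint sets. The structure of the iteration, however, is the standard Dykstra scheme: alternate projections while carrying a correction (residual) term for each set, which is what distinguishes Dykstra's algorithm from plain alternating projections and makes it converge to the true projection onto the intersection.

```python
import math

def proj_halfplane(x, c=0.5):
    # Euclidean projection onto the halfplane {x : x[0] <= c}
    return (min(x[0], c), x[1])

def proj_disk(x):
    # Euclidean projection onto the unit disk {x : ||x|| <= 1}
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def dykstra(x0, iters=500):
    """Project x0 onto the intersection of the halfplane and the disk
    using Dykstra's algorithm: alternate the two projections, each time
    adding back a correction term (p or q) remembered from the previous
    pass through that set."""
    x = x0
    p = (0.0, 0.0)  # correction term for the halfplane
    q = (0.0, 0.0)  # correction term for the disk
    for _ in range(iters):
        y = proj_halfplane((x[0] + p[0], x[1] + p[1]))
        p = (x[0] + p[0] - y[0], x[1] + p[1] - y[1])
        x = proj_disk((y[0] + q[0], y[1] + q[1]))
        q = (y[0] + q[0] - x[0], y[1] + q[1] - x[1])
    return x

# Project the point (2, 1), which lies outside both sets; the limit is
# the nearest point satisfying both constraints, (0.5, sqrt(3)/2).
x = dykstra((2.0, 1.0))
```

Plain alternating projections would converge to *some* point of the intersection, but not in general to the nearest one; the correction terms are what recover the exact projection, which is the property exploited in the dissertation's analysis.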