In recent years, the amount of available information has become so vast in certain fields of application that it is infeasible or undesirable to carry out the computations on a single server. This has motivated the design and study of distributed statistical and learning methods. In distributed methods, the data is split amongst different administrative units and computations are carried out locally, in parallel. The outcomes of the local computations are then aggregated into a final result on a central machine.
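As a minimal illustration of this split-compute-aggregate scheme (our own toy example, not taken from the work described here), consider estimating a mean from data divided across several machines, with simple averaging as the aggregation step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n observations of a signal corrupted by Gaussian noise,
# to be processed by m local machines. All names and values are illustrative.
true_mean = 2.0
n, m = 10_000, 10
data = true_mean + rng.standard_normal(n)

# Split the data amongst the m local machines.
shards = np.array_split(data, m)

# Each machine computes its local estimate (in practice, in parallel).
local_estimates = [shard.mean() for shard in shards]

# The central machine aggregates the local results; here, simple averaging.
global_estimate = float(np.mean(local_estimates))
```

With equal-sized shards, averaging the local means recovers the global sample mean exactly; the statistical subtleties studied in this line of work arise for nonparametric models and under communication constraints, where naive aggregation can lose information.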
First, we will compare the theoretical properties of various (Bayesian) distributed methods proposed in the literature in the benchmark signal-in-Gaussian-white-noise model. Then we consider the limitations and guarantees of distributed methods in general under communication constraints in the same benchmark nonparametric model.
This is ongoing joint work with Harry van Zanten.