Abstract
We study the problem of cooperative inference, in which a group of agents interacts over a network and seeks to estimate a joint parameter that best explains a set of network-wide observations using only local information. Agents do not know the network topology or the observations of other agents. We explore a variational interpretation of the Bayesian posterior and its relation to the stochastic mirror descent algorithm to prove that, under appropriate assumptions, the beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast. In Part I of this two-part paper series, we focus on providing a variational approach to distributed Bayesian filtering, and we develop computationally efficient algorithms for observation models in exponential families. We also provide a novel non-asymptotic belief concentration analysis for distributed non-Bayesian learning on finite hypothesis sets; this analysis is the basis for Part II, where we provide the first non-asymptotic belief concentration rates for distributed non-Bayesian learning over networks on compact hypothesis sets.
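The belief dynamics described above can be illustrated with the standard log-linear (geometric-averaging) update common in the non-Bayesian social learning literature: each agent mixes the logarithms of its neighbors' beliefs through a stochastic weight matrix and then applies a local Bayesian likelihood update. The sketch below is a minimal illustration under assumed details, not the paper's exact algorithm: the mixing matrix `A`, the Gaussian observation model, and all variable names are choices made for the example.

```python
import numpy as np

def log_linear_update(beliefs, A, log_likelihoods):
    """One round of non-Bayesian social learning on a finite hypothesis set.

    beliefs:         (n_agents, n_hypotheses) row-stochastic belief matrix
    A:               (n_agents, n_agents) stochastic mixing (weight) matrix
    log_likelihoods: (n_agents, n_hypotheses) local log-likelihoods of the
                     latest observation under each hypothesis
    """
    # Geometric averaging of neighbors' beliefs = mixing in log space,
    # followed by the local Bayesian (likelihood) update.
    log_beliefs = A @ np.log(beliefs) + log_likelihoods
    # Normalize each agent's belief back to a probability vector
    # (subtracting the row max first for numerical stability).
    log_beliefs -= log_beliefs.max(axis=1, keepdims=True)
    new_beliefs = np.exp(log_beliefs)
    return new_beliefs / new_beliefs.sum(axis=1, keepdims=True)

# Toy run (illustrative): 3 agents, 2 hypotheses, Gaussian observations.
rng = np.random.default_rng(0)
n_agents, n_hyp = 3, 2
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])       # doubly stochastic mixing matrix
means = np.array([0.0, 1.0])             # hypothesis-dependent observation means
true_theta = 1                           # ground-truth hypothesis index
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

for t in range(200):
    x = rng.normal(means[true_theta], 1.0, size=n_agents)  # local observations
    # Gaussian log-likelihoods log N(x_i; mean_k, 1), up to a constant.
    ll = -0.5 * (x[:, None] - means[None, :]) ** 2
    beliefs = log_linear_update(beliefs, A, ll)

print(beliefs.round(4))  # each row should concentrate on hypothesis 1
```

In this toy run every agent's belief concentrates on the true hypothesis, consistent with the exponential concentration the abstract describes; the rate analysis itself is the subject of the paper.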
| Original language | English (US) |
| --- | --- |
| Journal | IEEE Transactions on Control of Network Systems |
| DOIs | |
| State | Published - Sep 1 2022 |
Keywords
- Bayes methods
- Computational modeling
- Distributed inference
- Maximum likelihood estimation
- Mirrors
- Optimization
- Random variables
- Stochastic processes
- estimation over networks
- non-Bayesian social learning
- non-asymptotic rates
ASJC Scopus subject areas
- Control and Optimization
- Signal Processing
- Control and Systems Engineering
- Computer Networks and Communications