Finite-time Analysis of the Distributed Detection Problem
Abstract
This paper addresses the problem of distributed detection in fixed and switching networks. A network of agents observes partially informative signals about the unknown state of the world and hence collaborates to identify the true state. We propose an update rule building on distributed, stochastic optimization methods. Our main focus is the finite-time analysis of the problem. For fixed networks, we bring forward the notion of Kullback-Leibler cost to measure the efficiency of the algorithm versus its centralized analog. We bound the cost in terms of the network size, the spectral gap and the relative entropy of agents' signal structures. We further consider the problem in random networks where the structure is realized according to a stationary distribution. We then prove that the convergence is exponentially fast (with high probability), and the non-asymptotic rate scales inversely in the spectral gap of the expected network.
I Introduction
Distributed detection, estimation, prediction and optimization have been subjects of study in science and engineering for many years [1, 2, 3, 4, 5, 6]. Decentralized algorithms are ubiquitous in numerous scenarios, ranging from sensor and robot networks to social and economic networks [7, 8, 9, 10]. In these applications, a network of agents aims to accomplish a team task of which each agent has only partial knowledge. The agents must therefore communicate with each other to benefit from local observations; indeed, the global spread of information in the network is sufficient for agents to achieve the network goal. In most of these schemes, consensus protocols are employed to drive agents to a common value [11, 12].
This work focuses on the distributed detection problem. The problem was first considered in the setting where each agent sends its private observations to a fusion center [1, 2, 7]. The fusion center then faces a classical hypothesis testing problem in deciding the value of the parameter. While data collection is decentralized in these schemes, the decision making is done in a centralized fashion.
Distributed detection has also been considered in works where no fusion center is necessary [13, 14, 15]. These works mostly focus on asymptotic analysis of the problem. Cattivelli et al. [13] develop a fully distributed algorithm based on the connection between Neyman-Pearson detection and minimum-variance estimation. Jakovetić et al. [14] propose a consensus+innovations algorithm for detection in the case of Gaussian observations; their method retains an asymptotic exponential error rate even under noisy inter-agent communication. In [15], the consensus+innovations method is extended to generic (non-Gaussian) observations over random networks.
Another model inspiring several works in the literature is the social learning model proposed by Jadbabaie et al. [16]. In this model, a network of agents aims to recover a fixed true state, or hypothesis, which might be the decision or opinion of an individual, the correct price of a product, or any quantity of interest. The state is assumed to belong to a finite collection of states. Agents receive a stream of private signals about the true state; however, the signals do not provide enough information for any single agent to detect the underlying state of the world. Hence, agents engage in local interactions to compensate for their imperfect knowledge about the states. Numerous works build on this approach to study social learning and distributed detection [17, 18, 19]. The focus of these works is the asymptotic behavior of the model. Though appealing in certain cases, asymptotic analysis might not unveil all the parameters important for learning. Finite-time analysis of the problem is therefore an interesting complementary direction of study.
Let us first provide more details on the asymptotic analysis of the problem. The authors in [16] propose a non-Bayesian update rule for social learning applications: each individual updates her Bayesian prior and averages the result with the opinions of her neighbors. Under mild technical assumptions, the authors prove that agents' beliefs converge to a delta distribution on the true state; the convergence occurs exponentially fast and in the almost sure sense. Shahrampour and Jadbabaie [17] consider an optimization-based approach to the problem, inspired by the work of Duchi et al. [20] on distributed dual averaging. Their proposed update rule is essentially a distributed stochastic optimization, and they establish that the sequence of beliefs is weakly consistent (convergent in probability) when agents employ a gossip communication protocol. A communication-efficient variant of the problem is considered in [19], in which agents switch between Bayesian and non-Bayesian regimes to detect the true state. Furthermore, Lalitha et al. [18] develop a strategy where agents perform a Bayesian update and geometrically average the posteriors in their local neighborhoods; the authors then provide convergence and rate analysis for their method. In [16, 17, 19, 18], the convergence occurs exponentially fast, and the asymptotic rate is characterized via the relative entropy of individuals' signal structures and their eigenvector centralities (in directed networks).
The asymptotic analyses in the works above only characterize the dominant factors in the long run. In real-world applications, however, the decision has to be made within a finite time horizon. It is therefore important to study the finite-time version of the problem to understand the role of network parameters in detection quality. Serving this goal, the works [21, 22, 23] study the non-asymptotic problem. While the network structure in [21] is assumed to be fixed, the works [22, 23] address switching protocols that are deterministic.
In this paper, we build on the results of [21] and extend the setup to random networks. For fixed networks, we define the notion of Kullback-Leibler (KL) cost to compare the performance of the distributed setting to its centralized counterpart. We provide an upper bound on the cost in terms of the spectral gap of the network; the bound is independent of time with high probability. We further consider the stochastic communication setting in which the structure is realized randomly at each iteration. We prove that in this case the rate scales inversely in the spectral gap of the expected network. Our result also guarantees almost sure learning in random networks.
The rest of the paper is organized as follows: we formalize the notation, the problem and the distributed detection scheme in Section II. Section III is devoted to the finite-time analysis of the problem; we consider both fixed and switching network topologies and provide non-asymptotic results. Section IV offers concluding remarks, and proofs are included in the Appendix.
II The Problem Description and Algorithm
II-A Notation
We use the following notation throughout the paper:
[n]   The set {1, 2, …, n} for any integer n
x^⊤   Transpose of the vector x
[x]_k   The k-th element of vector x
e_k   Delta distribution on the k-th component
𝟙   Vector of all ones
d_TV(P, Q)   Total variation distance between P and Q
D_KL(P‖Q)   KL-divergence between distributions P and Q
σ_i(W)   The i-th largest singular value of matrix W
Furthermore, all the vectors are assumed to be in column format.
II-B Observation Model
We consider a setting in which a finite set denotes the possible states of the world. A network of n agents seeks the unique, true state of the world (the unknown of the problem). At each time t, the belief of each agent is represented by a probability distribution over the set of states. For instance, the prior belief of each agent about the states of the world is assumed to be uniform.¹

¹The assumption of a uniform prior only avoids notational clutter; the analysis in the paper holds for any prior with full support.
The detection model is defined by a conditional likelihood function parametrized by the state of the world; each agent observes signals through its own marginal of this likelihood, and we use a vector representation to stack all states. At each time, the signal is generated based on the true state; as a result, each agent's signal is a sample drawn from its marginal likelihood over the signal space.
The signals are i.i.d. over the time horizon, and the marginals are independent across agents. For simplicity, we work throughout with the log-marginals of the observed signals.
A1. We assume that all log-marginals are uniformly bounded in magnitude, for every agent, every state and every signal.
Under assumption A1, every private signal has bounded information content [24]. Such a bound holds, for instance, when the signal space is discrete and every marginal has full support. Let us define, for each agent, the set of states that are observationally equivalent to the true state, i.e., the states whose marginals coincide with the true marginal almost surely over the signal space. As is evident from the definition, no state in this set can be distinguished from the true state by observing samples from that agent's marginal. Let the network-wide set of observationally equivalent states be the intersection of these sets over all agents.
A2. We assume that no state of the world is observationally equivalent to the true state from the standpoint of the network, i.e., the true state is globally identifiable and the intersection above contains only the true state.
Assumption A2 guarantees that agents can benefit from collaborating with each other: for any false state, there must exist at least one agent who is able to distinguish it from the true state.
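As a concrete illustration of A2 (the two-agent, three-state signal structure below is hypothetical, not from the paper), the following sketch computes each agent's set of observationally equivalent states and their intersection, which reduces to the true state exactly when global identifiability holds:

```python
import numpy as np

# Hypothetical example: 3 states, 2 agents, binary signal space {0, 1}.
# likelihoods[i][theta] is agent i's marginal over signals given state theta.
likelihoods = [
    np.array([[0.5, 0.5], [0.5, 0.5], [0.2, 0.8]]),  # agent 0: states 0, 1 look alike
    np.array([[0.5, 0.5], [0.9, 0.1], [0.5, 0.5]]),  # agent 1: states 0, 2 look alike
]
true_state = 0

def equivalent_states(lik, true_state):
    """States whose marginal matches the true state's marginal."""
    return {th for th in range(lik.shape[0])
            if np.allclose(lik[th], lik[true_state])}

per_agent = [equivalent_states(lik, true_state) for lik in likelihoods]
network_wide = set.intersection(*per_agent)
# Agent 0 cannot tell state 1 from 0; agent 1 cannot tell state 2 from 0.
# Jointly, only the true state survives, so A2 holds for this structure.
```

Neither agent alone can identify the true state, yet the network as a whole can, which is exactly the situation where collaboration pays off.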
Finally, throughout the paper, the operator 𝔼[·] denotes expectation with respect to the signal space.
II-C Network Model
Private signals are not informative enough for individual agents, so agents interact with each other to learn the true state of the world. At each time t, a graph captures the network structure: its node set corresponds to the n agents, and its edge set specifies the communication links for that particular round. An agent receives information from another agent only if the corresponding pair is an edge, and the neighborhood of an agent at any time is the set of agents it is linked to. Each agent places a self-reliance weight on its own information and a weight on the information received from each neighbor in that round. These weights are collected in a matrix W_t whose (i, j) entry is the weight agent i assigns to agent j. By construction, W_t has non-negative entries, and an entry is nonzero only if the corresponding edge is present. We further assume that W_t is doubly stochastic and symmetric; hence, W_t𝟙 = 𝟙 and W_t = W_t^⊤, where 𝟙 is the vector of all ones.
To form the network structure, W_t is drawn independently over time from a stationary distribution, i.e., the elements of the sequence {W_t} are i.i.d. samples. Note, importantly, that this source of randomness is independent of the signal space; we distinguish expectations with respect to the network randomness by writing 𝔼_W[·].
A3. The network is connected in the expectation sense: in the graph induced by the expected matrix 𝔼_W[W_t], there exists a bidirectional path from any agent to any other agent.
The assumption guarantees that, in the expectation sense, information can propagate properly throughout the network. It further implies that the unit singular value of 𝔼_W[W_t] is unique, and the other singular values are strictly less than one in magnitude [25]. Since the matrix is symmetric, each agent in the network is equally central, and the uniform distribution 𝟙/n is the stationary distribution of the associated Markov chain. Assumption A3 entails that this Markov chain is irreducible and aperiodic [25].
A well-known example of the communication protocol detailed above is the gossip algorithm. The protocol works based on a Poisson clock: once the clock ticks, an agent is picked uniformly at random. That agent then selects a neighboring agent (with respect to a fixed, predefined structure) at random, and the two average their information. For a thorough review of gossip algorithms, we refer the interested reader to [26].
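A minimal sketch of the gossip protocol just described (the fixed ring of 8 agents and the number of clock ticks are assumptions of this illustration): pairwise averaging preserves the network average and drives all agents to consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)    # values initially held by the agents
target = x.mean()         # pairwise averaging preserves the network average

for _ in range(2000):
    i = rng.integers(n)                 # Poisson clock tick: wake agent i
    j = (i + rng.choice([-1, 1])) % n   # pick a random ring neighbor of i
    x[i] = x[j] = 0.5 * (x[i] + x[j])   # the pair averages its information
# After enough ticks, every agent holds (numerically) the initial network average.
```

Each tick corresponds to multiplying by a symmetric, doubly stochastic matrix drawn i.i.d. over time, so the gossip protocol is a special case of the random-network model above.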
II-D Centralized vs. Decentralized Detection
Decentralized detection is constructed based on its centralized analog. In the centralized scenario there is only one agent (no network exists), and this agent has global information with which to learn the true state; in other words, it has full access to the signal sequences of all agents. At any round, the agent accumulates an empirical average of the log-marginals and forms a belief over the states, i.e., a point in the probability simplex. Defining
(1) 
the following updates capture centralized detection
(2) 
It can be seen from the above that the centralized detector aggregates a geometric average of the marginals. The step-size parameter is called the learning rate.
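To make the centralized detector concrete, here is a hedged sketch consistent with the description above (the discrete signal model, the marginals, and the learning rate eta = 1 are assumptions of this illustration; the paper's precise update is given by (1)-(2)): the detector accumulates the network-average log-marginals and forms an exponentially weighted belief.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, T, eta = 3, 2, 400, 1.0
# likelihoods[i][theta]: agent i's marginal over a binary signal given state theta
likelihoods = np.array([
    [[0.5, 0.5], [0.3, 0.7], [0.2, 0.8]],
    [[0.6, 0.4], [0.8, 0.2], [0.4, 0.6]],
])
true_state = 0

z = np.zeros(m)  # running network-average of log-marginals per state
for _ in range(T):
    for i in range(n):
        s = rng.choice(2, p=likelihoods[i, true_state])  # signal of agent i
        z += np.log(likelihoods[i, :, s]) / n            # centralized: sees all signals
w = np.exp(eta * (z - z.max()))   # exponential weights, numerically stabilized
belief = w / w.sum()              # the belief concentrates on the true state
```

This is an instance of the Exponential Weights algorithm mentioned below: the belief is proportional to the exponentiated cumulative (averaged) log-likelihood of each state.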
One can prove (see e.g. [21]) that, for any false state, the network-wide relative entropy between the true marginals and those of the false state is strictly positive, due to the uniqueness of the true state (assumption A2). In what follows, without loss of generality, we assume that the false states are indexed in descending order of this quantity, i.e.,
(3)
This assumption only helps simplify the derivation of the technical results.
We now describe the distributed setting, which involves a network of agents. In this scenario, each agent receives only its own stream of private signals, generated from its marginal of the parametrized likelihood; it does not directly observe the signals of any other agent. Instead, the agent collects local information by calculating a weighted average of the log-likelihoods in its neighborhood, and then forms its belief at each round as follows:
(4) 
As depicted above, each agent updates its belief using purely local diffusion. Let us distinguish the centralized and decentralized detectors more explicitly: the centralized detector collects all log-marginals, whereas each decentralized agent receives only private log-marginals and collects a weighted average of local information. Apart from the information collection step, both algorithms are special cases of the well-known Exponential Weights algorithm.
It can be verified (see e.g. [17]) that the decentralized update (4) admits a closed-form solution for each agent's belief. One can also combine this closed form with (1) and (2), using the fact that a product of doubly stochastic matrices remains doubly stochastic, to obtain an identity that draws the connection between the centralized and decentralized updates.
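A hedged sketch of the decentralized detector (one plausible instantiation consistent with the description of (4); the two-agent network, marginals, learning rate and horizon are assumptions of this illustration): each agent mixes its neighbors' accumulated statistics through the doubly stochastic matrix W, adds its own fresh log-marginal, and exponentiates.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T, eta = 3, 2, 600, 1.0
likelihoods = np.array([
    [[0.5, 0.5], [0.3, 0.7], [0.2, 0.8]],   # agent 0 (hypothetical marginals)
    [[0.6, 0.4], [0.8, 0.2], [0.4, 0.6]],   # agent 1
])
true_state = 0
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # symmetric, doubly stochastic mixing matrix

Z = np.zeros((n, m))         # per-agent accumulated statistics, one row per agent
for _ in range(T):
    s = [rng.choice(2, p=likelihoods[i, true_state]) for i in range(n)]
    innov = np.array([np.log(likelihoods[i, :, s[i]]) for i in range(n)])
    Z = W @ Z + innov        # local diffusion of neighbors' statistics + own signal
beliefs = np.exp(eta * (Z - Z.max(axis=1, keepdims=True)))
beliefs /= beliefs.sum(axis=1, keepdims=True)
# Every agent's belief concentrates on the true state.
```

Unrolling the recursion Z = W Z + innov produces exactly the products of doubly stochastic matrices mentioned above, which is what ties the decentralized statistics back to the centralized ones.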
III Finite-time Analysis
In this section, we provide our technical results. We first specialize the problem to fixed network structures, then consider switching topologies, and prove non-asymptotic results for the convergence of beliefs.
III-A Fixed Network Topology
One could directly investigate the convergence of beliefs in fixed networks; however, we present a more general result in this section, measuring the efficiency of the distributed algorithm versus its centralized counterpart via the notion of decentralization cost. Throughout this section we assume that the network is fixed, i.e., W_t = W with probability one for all t.
At any round, we quantify the cost that an agent needs to pay to hold the same opinion as the centralized agent; accumulating these per-round costs, the agent suffers a total decentralization cost of
(5) 
after a given number of rounds. In general, the KL-divergence measures the dissimilarity of two probability distributions; it is therefore a reasonable metric for capturing the difference between the two algorithms, as both output a probability distribution over the state space.
More formally, the cost quantifies the gap between a decentralized agent that observes only private signals and a centralized agent that has every agent's signals available; in other words, it shows how well the decentralized algorithm copes with partial information. It is important to note that the cost is still a random quantity, as it depends on the signals. Our goal is to bound the cost in the high-probability sense.
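The bookkeeping behind the cost in (5) is simple: sum, over rounds, the KL-divergence between the two belief trajectories. The sketch below illustrates it on made-up placeholder trajectories (the divergence direction shown is one convention; Eq. (5) fixes it precisely):

```python
import numpy as np

def kl(p, q):
    """KL-divergence D(p || q) for distributions with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical trajectories over two rounds: the decentralized belief lags
# slightly behind the centralized one on the way to the true state (index 0).
centralized   = [np.array([0.8, 0.10, 0.10]), np.array([0.9, 0.05, 0.05])]
decentralized = [np.array([0.6, 0.20, 0.20]), np.array([0.8, 0.10, 0.10])]

total_cost = sum(kl(c, d) for c, d in zip(centralized, decentralized))
# The per-round terms shrink as the decentralized belief catches up.
```

The cost is zero exactly when the two algorithms agree at every round, and the theorem below bounds how large it can get with high probability.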
The connectivity of the network plays an important role in learning, since W^t converges to the rank-one averaging matrix (1/n)𝟙𝟙^⊤ as t grows. We shall see that the cost bound is governed by the mixing behavior of the Markov chain associated with W. We now present the main result of this section in the following theorem; the proof can be found in [21].
Theorem 1
Let the sequence of beliefs for each agent be generated by the distributed detection algorithm with the prescribed choice of learning rate. Given bounded log-marginals (assumption A1), global identifiability of the true state (assumption A2), and connectivity of the network (assumption A3 with W_t = W a.e.), we have
with probability at least .
We remark that the special choice of learning rate only simplifies the bound. We do not tune the learning rate for optimization purposes; otherwise, the comparison between the two algorithms would not be fair. One can also work with a generic learning rate for both algorithms and derive a bound that looks slightly more complicated than that of Theorem 1.
The inverse dependence of the bound on the relative entropy between the true state and the second likeliest state is quite natural, since that quantity is the asymptotic convergence rate of beliefs (see e.g. [17, 19, 18]). It simply means that when observations under the second likeliest state are almost as likely as observations under the true state, the cost of the algorithm increases. Intuitively, when signals hardly reveal the difference between the two best candidates for the true state, agents must spend more effort to discern them, and the slower learning results in a larger cost.
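The ordering in (3) and the role of the second likeliest state can be illustrated numerically (the signal structure below is a hypothetical one): the smallest nonzero network-average relative entropy identifies the hardest false state, and its inverse is what enters the bound.

```python
import numpy as np

def avg_kl_to_truth(likelihoods, true_state):
    """Network-average KL divergence of the true marginals from each state's."""
    n, m, _ = likelihoods.shape
    rates = np.zeros(m)
    for th in range(m):
        for i in range(n):
            p, q = likelihoods[i, true_state], likelihoods[i, th]
            rates[th] += np.sum(p * np.log(p / q)) / n
    return rates

# Hypothetical marginals: state 1 is nearly indistinguishable from the truth.
likelihoods = np.array([
    [[0.5, 0.5], [0.45, 0.55], [0.2, 0.8]],
    [[0.6, 0.4], [0.55, 0.45], [0.3, 0.7]],
])
rates = avg_kl_to_truth(likelihoods, true_state=0)
# rates[0] == 0; the smallest nonzero entry flags the second likeliest state.
bottleneck = np.argsort(rates)[1]
```

Here state 1 yields a tiny divergence and state 2 a much larger one, so state 1 is the bottleneck that slows learning and inflates the cost.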
The cost scales logarithmically with the network size and the number of states. It further scales inversely in the spectral gap of the network, 1 − σ₂(W), where σ₂(W) denotes the second largest singular value of W. Interestingly, in a fixed network, the detection cost is time-independent (with high probability), which is the best possible behavior with respect to time; the average expected cost (per-iteration cost) therefore tends to zero asymptotically. We finally note that the dependence of the bound on the spectral gap is important from a network design perspective [21].
III-B Switching Network Topology
In this section, we investigate the convergence of agents' beliefs in time-varying networks. As described in Section II-C, at every time the network structure is realized through a random matrix W_t, and agents then communicate with each other following W_t. We assume that the matrices are generated from a stationary distribution, so 𝔼_W[W_t] is the same for all t. Assumption A3 guarantees that the network is connected in the expectation sense. We show that, even in time-varying networks, the mixing behavior of the expected network governs the convergence of beliefs with high probability.
We now establish that agents form arbitrarily close opinions in a network that is connected in the expectation sense. The following proposition proves that the convergence rate is governed by the relative entropy of the signals and the network characteristics.
Proposition 2
Let the sequence of beliefs for each agent be generated by the distributed detection algorithm (4) with the prescribed learning rate. Given bounded log-marginals (assumption A1), global identifiability of the true state (assumption A2), and connectivity of the expected network (assumption A3), for each individual agent it holds that
with probability at least , where for
The proposition provides an anytime bound on the log-distance of beliefs in the high-probability sense. It also verifies that the belief of each agent is strongly consistent, i.e., it converges almost surely to a delta distribution on the true state. The dominant term (the asymptotic rate) depends on the relative entropy of the signals, which is consistent with previous asymptotic results found in [17, 19, 18] for similar updates; Proposition 2 complements those results by providing a non-asymptotic convergence rate. Finally, the inverse scaling with the spectral gap, now of the expected network, remains in effect just as in the case of fixed topologies.
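A hedged simulation in the spirit of Proposition 2 (the gossip-style random matrices, the signal structure and the horizon are all assumptions of this sketch, not the paper's exact setting): at each round an i.i.d. pairwise-averaging matrix is drawn, the expected network is connected, and even the agents with uninformative signals learn the true state.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, T, eta = 2, 3, 800, 1.0
likelihoods = np.array([
    [[0.8, 0.2], [0.2, 0.8]],   # agent 0 is informative
    [[0.5, 0.5], [0.5, 0.5]],   # agents 1 and 2 see pure noise
    [[0.5, 0.5], [0.5, 0.5]],
])
true_state = 0
edges = [(0, 1), (1, 2), (0, 2)]

Z = np.zeros((n, m))
for _ in range(T):
    i, j = edges[rng.integers(3)]   # i.i.d. pairwise-averaging matrix W_t
    Wt = np.eye(n)
    Wt[i, i] = Wt[j, j] = Wt[i, j] = Wt[j, i] = 0.5
    logs = np.array([np.log(likelihoods[k, :, rng.choice(2, p=likelihoods[k, true_state])])
                     for k in range(n)])
    Z = Wt @ Z + logs               # diffusion over the realized network + innovation
beliefs = np.exp(eta * (Z - Z.max(axis=1, keepdims=True)))
beliefs /= beliefs.sum(axis=1, keepdims=True)
# All agents, including the uninformative ones, concentrate on the true state.
```

Each realized matrix here is symmetric and doubly stochastic, and the expected matrix connects all three agents, so assumption A3 holds even though no single realization is connected.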
IV Conclusion
We considered the distributed detection problem in fixed and switching network topologies. A network of agents observes a stream of private signals that do not provide enough information about the true state of the world; therefore, agents must communicate with each other to augment their imperfect knowledge with local observations. Iteratively forming a belief about the states, each agent updates itself using purely local diffusion. We analyzed the detection problem in the finite-time domain. We first specialized to the case of fixed networks and studied the efficiency of our algorithm versus its centralized analog: we introduced a KL cost to measure the dissimilarity of the two algorithms and bounded the cost in terms of the relative entropy of signals, the network size and the spectral gap. We further extended our results to switching network topologies, investigating the convergence of beliefs and providing an anytime bound on the detection error in the high-probability sense; in this case, the spectral gap of the expected network proves crucial to the convergence rate. As a future direction, we would like to consider the scenario where the signal distribution is not stationary; studying drifting distributions would allow us to examine the robustness of detection in a dynamic framework.
Appendix: Omitted Proofs
Proof of Proposition 2. Starting from the closed-form solution of the decentralized update (4), we write
(6) 
using the simple inequality that for any . Since we know
we can combine above with (6) to get
(7) 
For any , define
and note that, given the network realization, this quantity is a function of the signal random variables. To apply McDiarmid's inequality (Lemma 3), fix all of the samples except one, and draw two different values for the remaining sample. The fixed samples simply cancel in the subtraction, and we derive
where we applied assumption A1. Since a product of doubly stochastic matrices remains doubly stochastic, summing over agents and time, we obtain
Let us define the event as
where is expectation over all sources of randomness (signals and network structures). We then have
where the expectation is taken with respect to network randomness. We now apply McDiarmid’s inequality in Lemma 3 to obtain
for each fixed state. Setting the probability above to the desired confidence level and taking a union bound over the states, the following event holds (conditioned on the network realization)
(8) 
simultaneously for all , with probability at least , which implies
On the other hand, in view of assumption A1 and the fact that for all , we have
where we applied Lemma 2 in [21] to derive the last step. Using (3), we simplify the above to get
(9) 
for any . Substituting (9) into (8) and combining with (7), we have
with probability at least , and the proof is completed.
Lemma 3
(McDiarmid's Inequality) Let X₁, …, X_N be independent random variables, and consider a mapping f(X₁, …, X_N). If, for each k ∈ [N] and every pair of samples x_k, x_k′, the function satisfies the bounded-difference condition
|f(x₁, …, x_k, …, x_N) − f(x₁, …, x_k′, …, x_N)| ≤ c_k,
then, for all ε > 0,
P(|f(X₁, …, X_N) − 𝔼[f(X₁, …, X_N)]| ≥ ε) ≤ 2 exp(−2ε² / Σ_{k=1}^N c_k²).
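As a quick numerical sanity check of the lemma (an illustration, not part of the paper): for the sample mean of N independent [0,1]-valued variables, changing one coordinate moves the mean by at most c_k = 1/N, so the lemma bounds the deviation probability by 2·exp(−2ε²N), and a Monte Carlo estimate respects that bound.

```python
import numpy as np

rng = np.random.default_rng(4)
N, eps, trials = 100, 0.15, 20000
# f(X_1, ..., X_N) = sample mean of [0,1]-valued variables: each bounded
# difference is c_k = 1/N, so McDiarmid gives
# P(|f - E f| >= eps) <= 2 exp(-2 eps^2 / sum c_k^2) = 2 exp(-2 eps^2 N).
X = rng.random((trials, N))                 # uniform on [0,1], so E f = 0.5
deviations = np.abs(X.mean(axis=1) - 0.5)
empirical = np.mean(deviations >= eps)      # Monte Carlo deviation frequency
bound = 2 * np.exp(-2 * eps**2 * N)         # McDiarmid bound, about 0.022
```

The empirical frequency sits far below the bound here, as expected: McDiarmid is distribution-free and therefore conservative for any particular distribution.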
References
 [1] R. R. Tenney and N. R. Sandell Jr, “Detection with distributed sensors,” IEEE Transactions on Aerospace Electronic Systems, vol. 17, pp. 501–510, 1981.
 [2] J. N. Tsitsiklis et al., “Decentralized detection,” Advances in Statistical Signal Processing, vol. 2, no. 2, pp. 297–344, 1993.
 [3] V. Borkar and P. P. Varaiya, “Asymptotic agreement in distributed estimation,” IEEE Transactions on Automatic Control, vol. 27, no. 3, pp. 650–655, 1982.
 [4] A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
 [5] S. Kar, J. M. Moura, and K. Ramanan, “Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication,” IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3575–3605, 2012.
 [6] C. Eksin and A. Ribeiro, “Distributed network optimization with heuristic rational agents,” IEEE Transactions on Signal Processing, vol. 60, no. 10, pp. 5396–5411, 2012.
 [7] J.F. Chamberland and V. V. Veeravalli, “Decentralized detection in sensor networks,” IEEE Transactions on Signal Processing, vol. 51, no. 2, pp. 407–416, 2003.
 [8] F. Bullo, J. Cortés, and S. Martinez, Distributed control of robotic networks: a mathematical approach to motion coordination algorithms. Princeton University Press, 2009.
 [9] N. A. Atanasov, J. Le Ny, and G. J. Pappas, “Distributed algorithms for stochastic source seeking with mobile robot networks,” Journal of Dynamic Systems, Measurement, and Control, 2014.
 [10] S. Shahrampour, A. Rakhlin, and A. Jadbabaie, “Online learning of dynamic parameters in social networks,” in Advances in Neural Information Processing Systems, 2013.
 [11] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
 [12] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
 [13] F. S. Cattivelli and A. H. Sayed, “Distributed detection over adaptive networks using diffusion adaptation,” IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 1917–1932, 2011.
 [14] D. Jakovetic, J. M. Moura, and J. Xavier, “Distributed detection over noisy networks: Large deviations analysis,” IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4306–4320, 2012.
 [15] D. Bajovic, D. Jakovetic, J. M. Moura, J. Xavier, and B. Sinopoli, “Large deviations performance of consensus+innovations distributed detection with non-Gaussian observations,” IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 5987–6002, 2012.
 [16] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, “Non-Bayesian social learning,” Games and Economic Behavior, vol. 76, no. 1, pp. 210–225, 2012.
 [17] S. Shahrampour and A. Jadbabaie, “Exponentially fast parameter estimation in networks using distributed dual averaging,” in IEEE Conference on Decision and Control (CDC), 2013, pp. 6196–6201.
 [18] A. Lalitha, A. Sarwate, and T. Javidi, “Social learning and distributed hypothesis testing,” in International Symposium on Information Theory (ISIT), 2014, pp. 551–555.
 [19] S. Shahrampour, M. A. Rahimian, and A. Jadbabaie, “Switching to learn,” in American Control Conference (ACC), July 2015, pp. 2918–2923.
 [20] J. C. Duchi, A. Agarwal, and M. J. Wainwright, “Dual averaging for distributed optimization: convergence analysis and network scaling,” IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 592–606, 2012.
 [21] S. Shahrampour, A. Rakhlin, and A. Jadbabaie, “Distributed detection: Finitetime analysis and impact of network topology,” arXiv preprint arXiv:1409.8606, 2014.
 [22] A. Nedic, A. Olshevsky, and C. Uribe, “Non-asymptotic convergence rates for cooperative learning over time-varying directed graphs,” in American Control Conference (ACC), July 2015, pp. 5884–5889.
 [23] A. Nedić, A. Olshevsky, and C. A. Uribe, “Fast convergence rates for distributed non-Bayesian learning,” arXiv preprint arXiv:1508.05161, 2015.
 [24] K. Drakopoulos, A. Ozdaglar, and J. N. Tsitsiklis, “On learning with finite memory,” IEEE Transactions on Information Theory, vol. 59, no. 10, pp. 6859–6872, 2013.
 [25] J. S. Rosenthal, “Convergence rates for Markov chains,” SIAM Review, vol. 37, no. 3, pp. 387–405, 1995.
 [26] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2508–2530, 2006.