Abstract
In distributed sensing systems with constrained communication capabilities, sensors' noisy measurements must be quantized locally before being transmitted to the fusion centre. When the same parameter is observed by several sensors, the local quantization rules must be jointly designed to optimize a global objective function. In this work we jointly design the local quantizers using mutual information maximization as the optimization criterion, so that the quantized measurements carry the most information about the unknown parameter. A low-complexity iterative approach is suggested for finding the local quantization rules. With mutual information as the design criterion, the effect of the communication channels is easily integrated into the design, yielding channel-aware quantization rules. We observe that the optimal design depends on both the measurement and channel noise. Moreover, our algorithm can be used to design quantizers that can be deployed in different applications. We demonstrate the success of our technique by simulating estimation and detection applications, where our method achieves estimation and detection errors as low as those of designs tailored to those specific purposes.
1. Introduction
A random source with continuous amplitude requires an infinite number of bits to be described exactly. However, due to practical constraints in communication systems, for example, limited storage or channel capacity, only a finite number of bits can be accommodated. To compress a continuous-amplitude random source into a limited amount of information, its amplitude X should be quantized to
Quantization theory has been studied for a long time [1, 2]. Rate-distortion theory [3] describes the relation between the distortion caused by quantization and the rate at which the quantized source can be represented. The theoretical limits described by the rate-distortion function can only be achieved asymptotically by optimal source encoding. For designing optimal quantizers, a practical method was investigated by Lloyd and Max [1, 4]. They propose an iterative algorithm for finding the quantization rule that achieves the lowest distortion for a random source, using the mean squared error (MSE) as the distortion measure. Using the Lloyd-Max algorithm, the optimal (it must be mentioned that all iterative algorithms for quantization design find a locally optimal solution which depends on the initial quantization rules)
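As a concrete illustration, the Lloyd-Max iteration can be sketched in a few lines. The following is a minimal sample-based sketch (not the implementation of [1, 4]), assuming a standard Gaussian source and a 4-level quantizer; `lloyd_max` alternates between updating the decision thresholds and the reconstruction points:

```python
import numpy as np

def lloyd_max(samples, levels, iters=100):
    """Sample-based Lloyd-Max: alternate between (a) thresholds as
    midpoints of reconstruction points and (b) reconstruction points
    as conditional means of each quantization cell."""
    # Initialize reconstruction points at evenly spaced sample quantiles.
    points = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        thresholds = (points[:-1] + points[1:]) / 2
        cells = np.digitize(samples, thresholds)
        for k in range(levels):
            members = samples[cells == k]
            if members.size:
                points[k] = members.mean()
    return (points[:-1] + points[1:]) / 2, points

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)            # standard Gaussian source
t, p = lloyd_max(x, levels=4)
mse = np.mean((x - p[np.digitize(x, t)]) ** 2)
```

For a standard Gaussian source this sample-based iteration drives the 4-level MSE close to the known optimum of roughly 0.117.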
A more interesting quantization scenario is when the continuous-amplitude source is not observable and only a noisy version of it can be measured as
A relatively recent and more challenging problem appears in distributed sensing systems, for example, sensor networks. The problem can be described as distributed noisy source quantization, also referred to as the multiterminal source coding or CEO problem [10, 11]. In a distributed sensing system, the same unknown source is observed by different measurement devices, each having a noisy observation as
Design of the optimal distributed quantizer for the above scenario has been considered by the authors of [13–18], who suggest cyclic algorithms based on alternating minimization [19] to find the optimal N quantization rules. The algorithm starts with initial guesses for the N quantizers, that is,
Compared with the previous design algorithms for distributed quantization, that is, [13–16], in this work we use the mutual information (MI) as the optimization criterion for the distributed quantization design. We jointly design quantizers that maximize the MI between the quantized data and the unknown parameter. Our motivation for using the MI is that it is a fundamental measure of how much information one variable contains about another. We design a set of quantizers for the noisy measurements such that the quantized variables contain the most information about the unknown parameter.
A theory of designing quantizers by optimizing mutual information measures has been discussed as the information bottleneck method in [17]. The information bottleneck method has mostly been used in clustering and classification applications. In this work we take the channel noise into account and design distributed quantizers that are optimal in the presence of imperfect communication channels.
The MI measure has the following benefits. It allows designing the quantizers independently of the choice of decoder or estimator at the FC. Also, as we will discuss later, when using the MI measure the global optimization criterion can be broken down into smaller criteria. Finally, it allows incorporating the effect of communication channels in the design of optimal quantizers; hence, we can obtain optimal distributed channel-aware quantizers. By maximizing the MI between the received data at the FC and the unknown parameter, we observe that, depending on the channel noise, the optimal quantizers can differ from the channel-unaware quantizers.
Performance evaluation through simulation of different scenarios shows that our MI-based distributed quantizer design performs strongly. This is evaluated for two applications, namely, estimation and detection. We will show that the quantization rules obtained by maximizing the MI achieve the same, and in some cases better, performance compared with quantizers specifically designed for estimation or detection, that is, using the MSE or Ali-Silvey distances as the optimization criteria.
The paper is organized around two cases. First, we assume perfect communication channels between the sensors and the FC and develop our method; then we apply the method to the general case where the channel is not perfect. In Section 2, we first justify the choice of MI as the design criterion. In Section 3, the problem is defined and formulated based on MI. Consequently, a design algorithm is devised in Section 4 assuming ideal communication channels. In Section 5, the algorithm is modified to include the channel effect. Finally, the numerical results are presented and discussed in Section 6.
2. Mutual Information as the Optimization Criterion
Most of the literature on optimal quantizer design uses distortion measures, such as the MSE, to design the optimal quantizer [1, 4, 7–9, 13]. However, other measures have also been used as design criteria, among them Ali-Silvey distances [16, 20, 21], the Cramer-Rao lower bound, and Fisher information [15, 22, 23]. The motivation for using these measures is that they perform better in some applications. For example, Ali-Silvey distance measures are shown to yield better quantizers for detection applications [16, 20].
A fundamental measure of how much information about the unknown is conveyed by the quantized data is the MI between the unknown and the quantized data. Therefore, in this work, we base the design of distributed quantizers on maximizing the MI and will show, in Section 6, that the MI criterion results in quantizers with the same, and even higher, performance than other measures, including distortion measures and Fisher information. Also, the choice of MI as the optimization measure has computational benefits in the joint optimization of quantizers, as it allows breaking the global problem into smaller ones, as explained in Section 3. Keeping the number of quantization levels per sensor constant, we achieve the highest information rate
A benefit of using the MI is that it makes the quantizer design independent of the estimation method or decoder. In design solutions based on distortion measures, such as the squared error or Hamming error [3], the estimation method is fixed, for example, minimum mean squared error (MMSE) or maximum likelihood, and the quantizers are optimized for that estimator type. Using the MI measure, however, an estimation or detection method can be developed at the FC after the quantizers are designed, based on each specific application. This enables us to design a quantizer that is useful for estimation, detection, classification, or feature extraction. Specifically for estimation purposes, the optimal quantizers designed by minimizing the MSE (the Lloyd-Max algorithm) are also those with high MI [1, 24]. This makes sense: when the quantized data carry more information about the unknown parameter, the FC has a better representation of the unknown and hence can estimate it more accurately. The performance of our MI-based algorithm in estimation and detection applications is discussed in Sections 6.1 and 6.2, respectively.
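The connection between high MI and low estimation error is easy to check numerically. The following sketch is only an illustration under an assumed binary parameter (not the paper's general setup): measurements are quantized with a one-bit threshold, the FC forms the empirical conditional-mean (MMSE) estimate, and the symmetric threshold t = 0, which also maximizes the MI in this symmetric setting, yields the lower MSE.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=200_000)   # assumed binary parameter
y = x + rng.normal(size=x.size)             # noisy measurement, unit-variance noise

def mse_of_mmse_estimator(t):
    """Quantize with Q = 1{y > t}, estimate X by the empirical
    conditional mean E[X | Q], and return the resulting MSE."""
    q = (y > t).astype(int)
    xhat = np.array([x[q == b].mean() for b in (0, 1)])
    return float(np.mean((x - xhat[q]) ** 2))

mse_centered = mse_of_mmse_estimator(0.0)   # MI-maximizing threshold
mse_offset = mse_of_mmse_estimator(1.0)     # deliberately shifted threshold
```

The shifted threshold conveys less information about X, and the MMSE estimate built on it is correspondingly worse.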
Using the MI measure in the distributed quantization design enables breaking an N-sensor quantization problem into smaller problems. In fact, since the MI can be recursively decomposed using the chain rule of mutual information, a simpler suboptimal solution can be derived by maximizing each component. The related formulations are discussed in the following section.
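For a single component of such a decomposition, the design reduces to a one-sensor MI maximization. The sketch below is a hedged illustration under an assumed binary parameter X ∈ {−1, +1} with unit-variance Gaussian measurement noise and a one-bit quantizer (not the paper's general formulation): it evaluates I(X; Q) in closed form over a grid of thresholds and picks the best one.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def mi_bits(joint):
    """Mutual information in bits from a 2-D joint pmf."""
    joint = np.asarray(joint)
    px = joint.sum(axis=1, keepdims=True)
    pq = joint.sum(axis=0, keepdims=True)
    m = joint > 0
    return float((joint[m] * np.log2(joint[m] / (px @ pq)[m])).sum())

def mi_threshold(t, sigma=1.0):
    """I(X; Q) for X uniform on {-1, +1}, Y = X + N(0, sigma^2),
    and a one-bit quantizer Q = 1{Y > t}."""
    a, b = Phi((t + 1) / sigma), Phi((t - 1) / sigma)
    return mi_bits([[0.5 * a, 0.5 * (1 - a)],
                    [0.5 * b, 0.5 * (1 - b)]])

ts = np.linspace(-2, 2, 401)
best = ts[np.argmax([mi_threshold(t) for t in ts])]
```

By symmetry the grid search returns a threshold at (or numerically next to) zero, the intuitive choice for a symmetric binary source.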
3. Problem Formulation Based on Mutual Information
The distributed quantization problem addressed in this work is defined as follows.
Suppose X (for the brevity of notations, we use the same symbol to address a random variable and its value) is a random scalar which takes values in
Due to communication constraints, the continuous-amplitude measurements have to be quantized before transmission. Therefore,

The complete model of the problem.
Let
From this point to Section 5, the communication channels are assumed to be ideal. The goal is to derive
The MI in (2) can be recursively written based on the chain rule of mutual information [3]:
Finding the nth quantization rule, for
4. Design Algorithm
To find
To solve the optimization problem in (9), motivated by [25] we use the double-maxima approach, converting (9) into a larger maximization problem. The maximization in (9) can be achieved by the following three steps, as proven in the Appendix: express the maximum of the objective function in (9) as a double maximization; then, for a fixed p, maximize over f; and, for a fixed f, maximize over p.
It is shown in the Appendix that these procedures find
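The cyclic structure of the design can be illustrated on a toy two-sensor instance. The sketch below is an assumption-laden stand-in, not the algorithm proven in the Appendix: X is taken binary for tractability, each sensor uses a one-bit threshold quantizer, and the thresholds are updated one at a time, each step maximizing the joint MI I(X; Q1, Q2) with the other threshold held fixed.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def joint_mi(t1, t2, sigma=1.0):
    """I(X; Q1, Q2) in bits for X uniform on {-1, +1}, independent
    measurement noises, and one-bit quantizers Qn = 1{Yn > tn}."""
    pmf = np.zeros((2, 2, 2))            # axes: x, q1, q2
    for i, xv in enumerate((-1.0, 1.0)):
        p1 = 1 - Phi((t1 - xv) / sigma)  # P(Q1 = 1 | x)
        p2 = 1 - Phi((t2 - xv) / sigma)  # P(Q2 = 1 | x)
        for q1 in (0, 1):
            for q2 in (0, 1):
                pmf[i, q1, q2] = 0.5 * (p1 if q1 else 1 - p1) \
                                     * (p2 if q2 else 1 - p2)
    px = pmf.sum(axis=(1, 2), keepdims=True)
    pq = pmf.sum(axis=0, keepdims=True)
    m = pmf > 0
    return float((pmf[m] * np.log2(pmf[m] / (px * pq)[m])).sum())

# Cyclic (alternating) maximization: update one quantizer at a time.
grid = np.linspace(-2, 2, 201)
t1, t2 = 1.5, -1.5                       # deliberately poor initial guesses
for _ in range(5):
    t1 = grid[np.argmax([joint_mi(t, t2) for t in grid])]
    t2 = grid[np.argmax([joint_mi(t1, t) for t in grid])]
```

Each sweep can only increase the objective, so the iteration converges, to a local maximum that depends on the initialization, as noted earlier for all iterative quantizer designs.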
5. Channel-Aware Optimal Quantizers
The discussions up to this point have assumed ideal communication channels between the sensors and the FC. In real distributed sensing systems, due to nonideal communication channels the quantized data generated by the sensors might not be received correctly at the FC, which affects the overall performance of the system. Hence, considering the channel effect in designing the quantizers is crucial [26]. For centralized quantization, the authors of [27–29] revise the MSE to include the channel effect and then jointly optimize the source encoders and the reconstruction levels at the receiver by minimizing this new MSE. For distributed quantization, channel-optimized quantizer design has been developed for hypothesis testing by minimizing the Bayesian cost [30, 31]. Recently, distributed channel-aware quantizer design for multiple correlated sources has been addressed in [32, 33], where M source encoders are designed to quantize M correlated sources in the presence of noisy communication channels. Reference [34] discusses the problem under a total power constraint and designs the quantizers by minimizing the signal distortion at the receiver. For multimedia applications in distributed networks, multiple description coding has been used to combat channel loss [33]. References [35–37] address the effect of imperfect transmission channels on multiple description coding algorithms for distributed video transmission. In this section, we incorporate the channel into our quantizer design in order to recover the signal more accurately at the destination.
In this section, we design optimal channel-aware quantizers for the distributed quantization of a noisy source using the MI measure. We assume that the communication channels carrying each sensor's quantized data to the FC are independent. In the presence of these noisy channels, we optimize the quantizer design by maximizing the MI between the unknown parameter and the channel outputs. We use the Markov chain property in (1), and to solve the optimization problem we follow an approach similar to that of Section 4.
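To see how the channel enters the objective, the following hedged sketch (again assuming, purely for illustration, a binary parameter and a one-bit quantizer) evaluates I(X; Z) when the quantizer output Q passes through a binary symmetric channel with crossover probability eps before reaching the FC.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def mi_after_channel(t, eps, sigma=1.0):
    """I(X; Z) in bits: X uniform on {-1, +1}, Q = 1{X + N(0, sigma^2) > t},
    and Z is Q flipped with probability eps (binary symmetric channel)."""
    pmf = np.zeros((2, 2))               # axes: x, z
    for i, xv in enumerate((-1.0, 1.0)):
        p1 = 1 - Phi((t - xv) / sigma)          # P(Q = 1 | x)
        pz1 = p1 * (1 - eps) + (1 - p1) * eps   # P(Z = 1 | x)
        pmf[i] = 0.5 * np.array([1 - pz1, pz1])
    px = pmf.sum(axis=1, keepdims=True)
    pz = pmf.sum(axis=0, keepdims=True)
    m = pmf > 0
    return float((pmf[m] * np.log2(pmf[m] / (px @ pz)[m])).sum())
```

The channel monotonically erodes the information: I(X; Z) shrinks as eps grows and vanishes at eps = 0.5. Maximizing this channel-inclusive MI, rather than I(X; Q), is what makes the design channel-aware.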
Due to channel errors, the received symbol at the FC,
6. Simulation Results
In this section the performance of our proposed algorithm is demonstrated using computer simulations and compared with other methods. In particular, we examine the performance of our MI-based quantization design for estimation and detection applications in Sections 6.1 and 6.2, respectively. The effect of nonideal channels on the optimal quantization rules is investigated in Section 6.3.
6.1. Estimation Application
For a distributed sensing system with estimation purposes, the quantized values are used in the FC to estimate the unknown. To compare with [15], where the quantization rules are obtained by minimizing the MSE, we use a similar simulation scenario. Therefore, the unknown parameter X is distributed according to
Our algorithm finds the optimal quantizers

Designing the quantization rules by maximizing the MI.
At each iteration of the algorithm, the current quantization rules are used to quantize the measurements

MSE change at each iteration.
The optimal quantization rules at the end of iterations are represented by the set of breakpoints as
MSE for estimation.
6.2. Detection Application
In a distributed sensing system with detection purposes, the FC uses the quantized data to perform a hypothesis test. We use our method of maximizing the MI to find the optimal quantization rules for the detection scenario and compare the performance with that of Poor's algorithm [20], where Ali-Silvey distances [38] are used as the optimization criterion.
To simulate the detection scenario we assume that the unknown X is a Bernoulli random variable which represents the absence (
To compare with Poor [20], we assume equally likely
Probability of error for signal detection.
6.3. Channel Effect
The presence of a nonideal communication channel between each sensor and the FC affects the design of optimal local quantizers for each sensor. Using the design algorithm developed in Section 5, we find the channel-aware local quantizers. The simulation results confirm that the optimal quantizers assuming ideal channels are different from the optimal quantizers in the presence of nonideal channels.
To compare the channel-aware and channel-unaware quantization schemes we consider an estimation application. We assume that sensor n's quantized data
Figures 4 and 5 show the maximization of the MI and the minimization of the MSE, respectively, over the iterations and for different values of ϵ. The final quantizers are given in Table 3, which shows that the optimal quantization solution changes with the channel error probability. Consequently, if, for instance, one deploys the quantizers designed for
Optimal quantizers in the presence of a nonideal channel.

Maximizing the MI in presence of nonideal communication channels.

MSE of estimation at each iteration of the algorithm.
For the detection application in the presence of nonideal communication channels, we have compared our results with those of [31] for different problem setups. Reference [31] develops an iterative algorithm that minimizes the error probability at the fusion centre, finding at each iteration the optimal fusion rule for the quantizers of that iteration. For all setups, we choose
Detection error in the presence of a nonideal channel.
7. Conclusion
In this paper, we proposed an algorithm based on maximizing the MI for jointly designing optimal channel-aware local quantization rules for a distributed sensing system. The MI allows us to design general-purpose quantizers that can later be deployed in different applications, for example, estimation or detection. We have shown that the performance of the MI-optimal quantizers is essentially the same as that of quantizers from other methods that specifically target the estimation or detection application. We also observed that the optimal local quantizers in the presence of nonideal channels differ from the local quantizers optimized without considering the channel effect.
Appendix
Competing Interests
The authors declare that they have no competing interests.
