A commonly encountered problem in wireless sensor network (WSN) applications is reconstructing the state of nature, that is, distributed estimation of a parameter of interest from the WSN's observations. However, distributed estimation in autonomous clustered WSNs faces a vital problem: sensors' selfishness. Each sensor autonomously decides whether or not to transmit its observations to the fusion center (FC) and is no longer controlled by the FC. Thus, to encourage cooperation among selfish sensors, infinitely and finitely repeated games are first modeled to describe sensors' behaviors. Then, the existence of Nash equilibria for the infinitely and finitely repeated games is discussed. Finally, simulation results show that the proposed Nash equilibrium strategies are effective.
1. Introduction
Wireless sensor networks (WSNs) have increasingly attracted attention due to their wide range of applications, such as industrial control and monitoring, home automation, military surveillance, environment monitoring, and health care. WSNs usually comprise a large number of small, energy-limited sensor nodes [1–7]. Different from traditional WSNs with fully cooperative nodes [8], some WSNs consist of selfish and autonomous nodes. In such WSNs, the selfish nature of nodes, each pursuing its own aims, is considered to be common. In other words, the nodes are not willing to cooperate to accomplish the network task. However, such noncooperation can deteriorate network performance.
Specifically, in the traditional distributed estimation problem [8, 9], nodes are required to cooperate fully and estimate a scalar parameter under inherent limitations, such as limited energy and limited network bandwidth. In a practical WSN, these limitations constrain the design of estimation methods. Generally, the main goal is to save total energy while achieving a given estimation performance under these limitations. For example, in recent literature, the distributed estimation problem in the presence of attacks is discussed, and joint estimation schemes for the statistical description of the attacks and the parameter to be estimated are proposed to deal with attacked observations [10]. Additionally, a distributed estimation method based on observation prediction has been proposed, in which the innovations of sensors' observations are locally predicted and transmitted to the fusion center (FC) [11]. These recent advances usually assume that all sensors are selfless and can be controlled by the FC arbitrarily.
However, in autonomous WSNs with selfish nodes, the nodes may not be willing to cooperatively estimate a parameter at the cost of their own limited battery resources. Each node therefore autonomously decides whether or not to transmit its observations to the FC and is no longer controlled by the FC. Consequently, it may not be in a node's best interest to transmit its observations to the FC. This degrades the network's estimation accuracy for the parameter of interest, and the selfish refusal to transmit eventually impairs the nodes' own interests. Hence, to encourage cooperation among selfish nodes and improve the final estimation accuracy, it is necessary to design rules and punishment mechanisms that self-enforce nodes' behaviors.
It is noted that such rules and punishment mechanisms usually are modeled as repeated games, in which the selfish nodes know when and how to cooperate in order to obtain potential interests over multiple periods. For example, the repeated game model has been adopted for packet forwarding problems in ad hoc networks. In [12, 13], the interactions of nodes' forwarding and rejection are modeled as repeated games. In [12], as a punishment strategy, a generous tit-for-tat (TFT) is proposed to enforce the nodes to cooperate. Meanwhile, in [13], three learning algorithms for different information structures are proposed to achieve the desired efficient cooperation equilibrium. Additionally, the repeated game model has been applied to address selfish behavior in the media access control (MAC) problem of sensor networks. For example, in [14], a contention window select game (CWSG) is defined, and a penalizing mechanism based on repeated games is proposed to prevent nodes' noncooperation.
We propose two simple repeated games, instead of the extensive game [15], to meet the given estimation performance requirement. Different from the decentralized methods [16–18], our game-theoretic approach is distributed and each node is selfish. To counteract the selfishness of nodes, a grim trigger strategy and the tit-for-tat strategy for the infinitely repeated estimation game are introduced, under which each sensor is voluntarily cooperative. Meanwhile, multiple subgame-perfect Nash equilibria for the finitely repeated estimation game are discussed to describe the cooperation behaviors.
Our main contributions are as follows: (1) two kinds of repeated game models for distributed estimation in WSNs are formulated: the infinitely repeated estimation game and the finitely repeated estimation game; (2) their Nash equilibria and subgame-perfect Nash equilibria are characterized; and (3) several equilibrium strategies are verified to be effective in simulations.
2. System Model
2.1. Distributed Estimation Problem
Let us consider a distributed WSN with an FC as shown in Figure 1. This sensor network consists of K selfish nodes observing a physical phenomenon θ (a scalar parameter of interest), such as temperature or soil moisture. The nodes are selfish in the sense that the FC does not dictate any scheduling policies to the local nodes. Instead, all the local nodes choose their transmission policies by themselves to selfishly maximize their own interest. In addition, the network channel is assumed to be error-free and can be implemented by orthogonal time/frequency/code division multiple access (TDMA/FDMA/CDMA).
Sensor networks with selfish nodes.
As shown in Figure 1, two virtual cluster heads (CHs) and their cluster nodes (CNs) are grouped into two clusters using a distributed clustering algorithm. It is assumed that each virtual cluster is regarded as a community of interest and that CNs are inclined to be scheduled by their virtual CHs to maximize their community's interests. In other words, there are two different communities of interest. Each CH has two jobs: (1) negotiating with the FC and (2) scheduling the actions of its CNs, including itself.
It is assumed that the observation of node k at time t is described as

x_k(t) = θ + n_k(t),

where n_k(t) is zero-mean additive white Gaussian noise (AWGN) with variance σ². Additionally, the noise samples are independent and identically distributed (i.i.d.) across time and across nodes. Due to the channels' bandwidth constraint, the same one-bit quantizer with threshold τ is adopted at each node.
Here, we review a key result from [16] concerning the distributed estimation problem in cooperative WSNs. It is assumed that a set of indicator variables (binary observations) is spontaneously transmitted by the local nodes and that the classical maximum likelihood estimator (MLE) [16] is adopted at the FC. According to the proposition in [16], the Cramer-Rao lower bound (CRLB) varies inversely with the parameter K. As the benchmark for estimation variances, the smaller the CRLB, the better the estimation performance. To meet a given estimation performance, a certain minimum number of participants (nodes transmitting their observations voluntarily) is required.
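As a numerical sketch of this result, the CRLB of the MLE built from one-bit observations b_k = 1{x_k > τ} in Gaussian noise can be evaluated directly. The function names and any parameter values below are our own illustration, not taken from [16]:

```python
import math

def gauss_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def gauss_tail(z):
    """Q-function: P(Z > z) for a standard normal Z."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def crlb_one_bit(K, theta, tau, sigma):
    """CRLB for estimating theta from K one-bit observations 1{x_k > tau},
    where x_k = theta + AWGN(0, sigma^2); the bound varies inversely with K."""
    z = (tau - theta) / sigma
    q = gauss_tail(z)                                     # P(b_k = 1)
    fisher = (gauss_pdf(z) / sigma) ** 2 / (q * (1 - q))  # per-sensor Fisher information
    return 1.0 / (K * fisher)

def min_participants(target_var, theta, tau, sigma):
    """Smallest K whose CRLB meets a target estimation variance."""
    return math.ceil(crlb_one_bit(1, theta, tau, sigma) / target_var)
```

For τ = θ the bound reduces to πσ²/(2K), the best case for a single-threshold quantizer, which is why the required number of participants shrinks as more sensors cooperate.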
The problem in distributed estimation arises because these selfish nodes have the authority to decide whether to transmit their binary information at each estimation stage. The FC cannot make unilateral decisions and dictate nodes' behaviors. For example, the decentralized power optimization schemes of [17, 18], which schedule the observation flow by solving Karush-Kuhn-Tucker (KKT) systems, are no longer suitable for autonomous WSNs. It is naturally assumed that all the nodes selfishly optimize their own interest, such as maximizing their energy efficiency.
It is worth underlining that interactions among nodes happen not just once but repeatedly. Different from the extensive form game in [19], the special class of extensive form games called repeated games can explain why ongoing estimation tasks produce behavior very different from that observed in the one-time interaction in [19]. Additionally, the extensive form game in [19] assumes that all the nodes cooperate fully; in other words, the refined Nash equilibrium in [19] is not suitable for depicting nodes' selfishness and autonomy. Therefore, following the punishment mechanisms in [12, 13], the estimation problem in autonomous WSNs is reformulated as a repeated game to capture this autonomy and then improve the local nodes' energy efficiency.
2.2. Repeated Game
Repeated game theory is a formal framework for modeling a multiplayer sequential decision-making process. The model of repeated games has two versions: the horizon may be finite or infinite. It is noted that the results in these two cases are different. Thus, in order to apply the model of repeated games to distributed estimation problems, an appropriate horizon (finite or infinite) must be determined. In the following, some concepts of repeated games are first introduced. Then, we formulate the distributed estimation system as an appropriate repeated game.
The stage game G is the basic component of a repeated game and can be represented by three elements: the set of players, a finite action space for each player, and a payoff function u_i for each player i. Additionally, the repeated game consists of playing the same stage game for T periods. If T approaches infinity, the game is called an infinitely repeated game. The infinitely repeated game is formally defined following [20], where a^t denotes the action profile in period t and δ^t is the discount factor δ raised to the power t. The same δ is assumed to be adopted for all players.
Definition 1.
The infinitely repeated game of G for the discount factor δ is the extensive game with perfect information and simultaneous moves in which
the set of players is N,
the set of terminal histories is the set of infinite sequences of action profiles,
the player function assigns the set of all players to every proper subhistory of every terminal history,
the set of actions available to player i after any history is its stage-game action set,
each player i evaluates each terminal history (a¹, a², …) according to its discounted average (1 − δ)∑_{t≥1} δ^(t−1) u_i(a^t).
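The discounted-average criterion in the last item can be computed for any finite prefix of a payoff stream. This small helper (our own illustration) also shows why a constant stream of payoff c has discounted average c:

```python
def discounted_average(payoffs, delta):
    """(1 - delta) * sum over t of delta^t * payoffs[t], for a finite prefix of the stream."""
    return (1.0 - delta) * sum(delta ** t * u for t, u in enumerate(payoffs))
```

For a long enough constant stream, the result approaches the constant itself, which is what makes discounted averages directly comparable to stage-game payoffs.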
The formal description of finitely repeated games is very similar to the definition of infinitely repeated games and can be given as follows.
Definition 2.
For any positive integer T, the T-period finitely repeated game of G is the extensive game with perfect information and simultaneous moves that satisfies all the conditions of Definition 1 when the symbol ∞ is replaced by T. Meanwhile, each player's preference is assumed to be represented by the mean payoff (1/T)∑_{t=1}^{T} u_i(a^t).
3. Repeated Estimation Game
It is noted that the CRLB varies inversely with the parameter K and also depends on the other parameters of the distributed estimation problem, such as θ, τ, and σ [16]. In other words, the estimation performance of the MLE depends on K, θ, τ, σ, and so forth. The energy of the selfish nodes is supplied by batteries; once exhausted, they cannot be recharged. Therefore, the number of available nodes K varies with the number of estimation tasks performed. Additionally, it is usually assumed that the physical phenomenon is stable and that the same MLE is adopted at each stage of the multiple estimation tasks.
To improve and maintain the performance of the MLE for as long as possible, the K selfish nodes should live as long as possible. However, the cooperation problem among selfish nodes in sequential estimation tasks is not addressed by traditional estimation methods. Meanwhile, repeated games can deal with the problem of nodes' survival, since selfish nodes then know when and how to cooperate in order to keep themselves evenly alive over many periods [20]. Thus, the following repeated estimation game is introduced to explore the impact of nodes' selfishness on the estimation performance.
3.1. Stage Game
To be concrete, in the case of the estimation problem, we need to review several notions, namely, the stage game, the game history, and the strategy of a player. The stage game usually consists of a set of players, a set of actions, and a payoff function for each player. The set of players for the stage game is the two virtual clusters shown in Section 2.1. Additionally, the actions available to each cluster are assumed to be {C, D}, where strategy D denotes that no nodes in the cluster transmit their observations at the current stage, and strategy C denotes that a number of nodes in the cluster transmit their observations. The numbers of transmitting nodes in the two clusters can be expressed as in (2).
Assume all these nodes are rational and aim at maximizing their cluster's interest. Thus, the set of players is the two clusters, and each cluster's strategy space can be defined as {C, D}, where C is "Cooperation" and D is "Defection." As shown in (2), if both clusters choose the "C" strategy, a number of nodes transmit their observations in each cluster. If both clusters choose the "D" strategy, no nodes transmit observations in the network. If one cluster chooses "D" and the other chooses "C," then no nodes transmit observations in the cluster with "D," while nodes transmit observations in the cluster with "C."
According to the results in Section 2.1, a minimum number of nodes transmitting their observations voluntarily is required. Thus, if one of the clusters chooses strategy "C," enough nodes in total transmit their observations to the FC and the given estimation performance is satisfied. Then, the cluster with strategy "C" gains an improved interest instead of nothing.
Players' payoff function can be given as
It is noted that the payoff function represents a player's preference. For example, if strategy profile (C, C) is adopted, each player's estimation performance requirement is satisfied, while under (D, D) it is not. Thus, α > ρ for every player, since players prefer (C, C) to (D, D). Similarly, if profile (C, D) is adopted, the cooperating player's performance requirement is still satisfied, but at the cost of more of its residual energy (the full required number of sensors transmit in one cluster instead of being split between the two). Thus, the payoff β of the cooperating player becomes smaller, because it meets the performance requirement while consuming extra residual energy, whereas the payoff γ of the defecting player becomes the greatest, because its requirement is met without consuming any of its own residual energy. In other words, γ > α > β > ρ.
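A small sketch of the stage game, with hypothetical payoff values of our own choosing that respect the preferences described above, makes the pure best responses easy to check:

```python
# Hypothetical payoffs (our choice) consistent with the ordering gamma > alpha > beta > rho.
gamma, alpha, beta, rho = 5.0, 3.0, 1.0, 0.0

PAYOFF = {  # (action of P1, action of P2) -> (u_1, u_2)
    ("C", "C"): (alpha, alpha),
    ("C", "D"): (beta, gamma),
    ("D", "C"): (gamma, beta),
    ("D", "D"): (rho, rho),
}

def best_response(player, other_action):
    """Stage-game action maximizing the given player's payoff against a fixed opponent action."""
    def utility(a):
        profile = (a, other_action) if player == 0 else (other_action, a)
        return PAYOFF[profile][player]
    return max(("C", "D"), key=utility)
```

With these values, each player's best response to "C" is "D" and to "D" is "C," so the profiles (C, D) and (D, C) are pure Nash equilibria of the one-shot stage game.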
3.2. Infinitely Repeated Game
It is well known that a strategy of a player in an infinitely repeated game must specify an action of the player for every sequence of outcomes. For the estimation problem, a grim trigger strategy is defined as follows: player k chooses C at the start of the game and after any history in which every previous action of player j was C; after any other history, player k chooses D. The grim trigger strategy (labeled Grim) is illustrated in Figure 2.
A grim trigger strategy for a repeated estimation game.
Another strategy, the tit-for-tat strategy (labeled TFT), is shown in Figure 3. The strategy can be described very compactly: start by cooperating, and then do whatever the other player did in the previous round.
The tit-for-tat strategy for a repeated estimation game.
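The two strategies can be stated operationally. The sketch below (our own illustration) plays any two such history-based strategies against each other:

```python
def grim(opponent_history):
    """Cooperate until the opponent has ever defected, then defect forever."""
    return "C" if all(a == "C" for a in opponent_history) else "D"

def tit_for_tat(opponent_history):
    """Start with C, then repeat the opponent's previous action."""
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_1, strategy_2, rounds):
    """Run the repeated game; each strategy only observes the opponent's past actions."""
    h1, h2 = [], []
    for _ in range(rounds):
        a1, a2 = strategy_1(h2), strategy_2(h1)
        h1.append(a1)
        h2.append(a2)
    return h1, h2
```

Two TFT (or Grim) players cooperate in every round, while a constant defector triggers Grim's permanent punishment from the second round on.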
Now, suppose each player has selected a strategy for playing the infinitely repeated estimation game. The pair of strategies determines exactly how the game proceeds, which allows the existence of a Nash equilibrium to be discussed.
Proposition 3.
For the infinitely repeated estimation game, strategy profile (Grim, Grim) is a Nash equilibrium if and only if δ ≥ (γ − α)/(γ − ρ); strategy profile (TFT, TFT) is a Nash equilibrium if and only if δ ≥ (γ − α)/(γ − ρ) and δ ≥ (γ − α)/(α − β).
Proof.
Suppose that one player adheres to the strategy TFT. If the other player deviates by choosing "D" in the first estimation period, then the TFT player chooses "D" in the second period and continues to choose "D" until the deviator reverts to "C." As shown in Figure 3, the deviating player then has two choices: reverting to "C" or adhering to "D." For reverting to "C," the corresponding payoff stream is (γ, β, α, α, …), with a discounted average of

(1 − δ)(γ + δβ) + δ²α,

while for adhering to "D," the corresponding payoff stream is (γ, ρ, ρ, …), with a discounted average of

(1 − δ)γ + δρ.
If the player also adheres to the tit-for-tat strategy, the corresponding payoff stream is (α, α, …), with a discounted average of α. According to formulas (5) and (6), the tit-for-tat strategy of each player is the best response to the TFT strategy of the other player if and only if α ≥ (1 − δ)(γ + δβ) + δ²α and α ≥ (1 − δ)γ + δρ.
This proves that strategy profile (TFT, TFT) is a Nash equilibrium. Strategy profile (Grim, Grim) can be proven to be a Nash equilibrium similarly, which completes the proof.
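The comparison in the proof can be checked numerically. With hypothetical payoff values of our own choosing satisfying γ > α > β > ρ and a sufficiently large discount factor, adhering to TFT beats both deviations:

```python
def disc_avg(prefix, tail, delta, horizon=4000):
    """Discounted average of a payoff stream: a finite prefix followed by a constant tail."""
    stream = list(prefix) + [tail] * (horizon - len(prefix))
    return (1.0 - delta) * sum(delta ** t * u for t, u in enumerate(stream))

# Hypothetical values (our choice), consistent with gamma > alpha > beta > rho.
gamma, alpha, beta, rho = 4.0, 3.0, 1.0, 0.0
delta = 0.9

cooperate = disc_avg([], alpha, delta)             # adhere to TFT: (alpha, alpha, ...)
revert    = disc_avg([gamma, beta], alpha, delta)  # deviate once, then revert to C
adhere_d  = disc_avg([gamma], rho, delta)          # deviate and keep playing D
```

Here cooperate ≈ 3.0 exceeds both revert ≈ 2.92 and adhere_d = 0.4, so at δ = 0.9 neither one-shot deviation is profitable against a TFT opponent for these values.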
3.3. Finitely Repeated Game
The strategy space for repeated games is difficult to illustrate even if the game is repeated only a few times. To determine how to play a finitely repeated estimation game, the equilibria of the one-shot version of the game are investigated here. For example, consider the simplest situation, in which the two players play the estimation game twice. The players are then repeatedly involved in an interaction with payoffs as shown in Table 1.
Payoff of the repeated estimation game.

            P2: C      P2: D
P1: C      α, α       β, γ
P1: D      γ, β       ρ, ρ
The repeated estimation game with T = 2 can be expressed in extensive form. As shown in Figure 4, there are four histories at the second period: (C, C), (C, D), (D, C), and (D, D). It is easily derived that the reduced game for any history starting at the second period is expressed as Table 2.
Payoff of the reduced two-stage game: sum of the two stage-game payoffs, where (c₁, c₂) denotes the pair of first-round payoffs.

            P2: C                P2: D
P1: C      c₁ + α, c₂ + α       c₁ + β, c₂ + γ
P1: D      c₁ + γ, c₂ + β       c₁ + ρ, c₂ + ρ
For example, after (C, C) in the initial round, each player's payoffs are increased by α; after (C, D) in the initial round, the first player's payoffs are increased by β and the second player's by γ; after (D, C) in the initial round, the first player's payoffs are increased by γ and the second player's by β; after (D, D) in the initial round, each player's payoffs are increased by ρ.
Since a player's preferences in the game of the initial round do not change when a constant is added to his payoffs, the set of Nash equilibria of the reduced estimation game is the same as that of the stage game (namely, the game of the initial round). A general result on finitely repeated game equilibria follows [21]; its proof is omitted here.
Lemma 4.
For the T-period finitely repeated game, it is assumed that the stage game has a unique Nash equilibrium. Then, the finitely repeated game has a unique subgame-perfect Nash equilibrium (SPNE), in which the stage-game equilibrium is played at each round independently of the history of the previous rounds.
As shown in Table 1, the two players' sets of actions are the same and their preferences have the following characteristics:
u₁(a, b) = u₂(b, a) for every action pair (a, b). This two-player strategic game at any stage is therefore a symmetric game, and it has a unique mixed-strategy Nash equilibrium, in which each player assigns probability p* to C and probability 1 − p* to D, where p* makes the opponent indifferent between C and D.
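The indifference probability can be computed in closed form. This helper (our own illustration, valid for the anti-coordination ordering γ > α and β > ρ) solves p·α + (1 − p)·β = p·γ + (1 − p)·ρ:

```python
def mixed_equilibrium_p(alpha, beta, gamma, rho):
    """Probability of playing C that leaves the opponent indifferent between C and D."""
    return (beta - rho) / ((beta - rho) + (gamma - alpha))

def expected_payoff(action, p, alpha, beta, gamma, rho):
    """Expected stage payoff of an action when the opponent plays C with probability p."""
    if action == "C":
        return p * alpha + (1.0 - p) * beta
    return p * gamma + (1.0 - p) * rho
```

For example, with the hypothetical values (α, β, γ, ρ) = (3, 1, 5, 0), the equilibrium probability is p* = 1/3 and both actions yield the same expected payoff against it.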
In other words, there are multiple Nash equilibria in the one-shot stage game of the finitely repeated estimation game: (C, D), (D, C), and the mixed strategy in which each player plays C with the probability that makes the opponent indifferent. The uniqueness condition of the SPNE in Lemma 4 therefore does not hold. Actually, there are multiple SPNEs in the finitely repeated estimation game, some of which are given as follows:
(T even rounds).
(T even rounds).
(T even rounds).
(T even rounds).
Here, the first strategy denotes that the first player's first move is to play C and its later moves are to play C after every possible history, while the second player's first move is to play D and its later moves are to play D after every possible history. The average payoffs of the four strategies follow directly from Table 1. These strategies are SPNEs because each player's prescribed action is a best response to the other's strategy at each subgame.
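One natural SPNE family alternates the two pure stage equilibria (C, D) and (D, C); since a stage Nash equilibrium is played at every round, no one-shot deviation pays. Its average payoff over T even rounds can be verified by direct simulation (our own sketch, using the same payoff symbols):

```python
def alternating_average(T, beta, gamma):
    """Average payoffs when the profile alternates (C, D), (D, C), ... over T even rounds."""
    assert T % 2 == 0, "T must be even"
    u1 = sum(beta if t % 2 == 0 else gamma for t in range(T)) / T  # P1: C, D, C, D, ...
    u2 = sum(gamma if t % 2 == 0 else beta for t in range(T)) / T  # P2: D, C, D, C, ...
    return u1, u2
```

Both players average (β + γ)/2, regardless of the even horizon T, so alternation splits the cooperation burden evenly.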
According to Proposition 3 and Lemma 4, the Nash equilibria of the proposed repeated estimation game deal with the problem of nodes' selfishness and balance nodes' actions evenly. It is noted that the MLE [16] can be extended to nonideal channels [22]. Meanwhile, nonideal channels have no effect on the proposed game, since no additional information is exchanged among nodes. Thus, the results can be applied to nonideal channels as well.
4. Simulation Results
In this section, simulation results are obtained with Matlab. The K sensor nodes are randomly deployed in a given square area (a 200 m × 200 m region). It is assumed that the minimum number of required participants is fixed. The MLE is adopted by the FC. The nodes are randomly divided into two virtual clusters with the same number of members. As shown in Figure 5, one cluster is the first player and the other cluster is the second player. To be more efficient and fair, in the two clusters, nodes with more residual energy are selected in turn to play the repeated estimation game. The discount factor δ and the payoffs γ, α, β, and ρ are set to fixed values.
The players' actions for the infinitely repeated estimation game: cooperation.
A simple energy dissipation model is adopted for the nodes' radio hardware [9], in which the energy cost consists of an electronics energy consumption per bit and distance-dependent amplifier energy factors. The energy consumption of sensor i in a stage game is expressed as a function of the packet length in bits and the distance from sensor i to the FC. The initial energy of the nodes is set to a fixed value in joules. Because each sensor quantizes its local estimate using a one-bit quantizer, the packet length l is the single observation bit plus header bits for simplicity.
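A minimal sketch of such a first-order radio energy model follows; the constants below are typical values we assume for illustration, not taken from [9]:

```python
E_ELEC = 50e-9      # J/bit, electronics energy per bit (assumed value)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy factor (assumed value)

def tx_energy(l_bits, d):
    """Energy to transmit l_bits over distance d: electronics plus distance-dependent amplifier cost."""
    return l_bits * E_ELEC + l_bits * EPS_AMP * d ** 2
```

The cost grows linearly in packet length and quadratically in distance, so far-away sensors drain faster, which motivates rotating the active players across stages.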
As shown in Figure 5, strategy profile (Grim, Grim) is adopted and the two clusters choose the "C" strategy. It is noted that (Grim, Grim) is a Nash equilibrium under the chosen parameters (γ, α, β, and ρ), which coincides with Proposition 3. Additionally, according to the definition of the stage game in Section 3.1, a number of sensors in each cluster transmit their observations. The sensors nearer the FC cooperate more times than the other sensors: considering the requirements of energy efficiency and fairness, sensors farther from the FC consume more energy per transmitted observation and are therefore selected to cooperate fewer times. Accordingly, different subsets of sensors are selected to be the actual players at successive stages of the infinitely repeated estimation game, and at each stage the sensor nearest to the FC in each cluster is included. Additionally, as shown in Figure 6, sensors' residual energy varies with the player (cluster). The members of one cluster are relatively closer to the FC than the members of the other; thus, the energy cost of the closer cluster is less than that of the farther cluster when playing the same strategy.
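The rotation rule described above (select the cluster members with the most residual energy at each stage) can be sketched as follows; the functions are our own illustration:

```python
def select_players(residual, n_active):
    """Indices of the n_active sensors with the most residual energy (ties broken by index)."""
    order = sorted(range(len(residual)), key=lambda i: (-residual[i], i))
    return sorted(order[:n_active])

def run_stage(residual, n_active, cost):
    """Charge each selected sensor a transmission cost and return the selected set."""
    chosen = select_players(residual, n_active)
    for i in chosen:
        residual[i] -= cost
    return chosen
```

Repeating run_stage spreads the transmissions across the cluster, keeping the sensors alive longer than always scheduling the same subset.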
Sensors' residual energy for the infinitely repeated estimation game: cooperation.
To show the effectiveness of the SPNEs for the finitely repeated estimation game, one of the SPNE strategy profiles is adopted by the two players. As shown in Figures 7 and 8, this strategy yields distributions of cooperation times and residual energy similar to those of the infinitely repeated estimation game, with different subsets of sensors selected to be the actual players at successive stages.
The players' actions for the finitely repeated estimation game: (SPNE, ).
Sensors' residual energy for the finitely repeated estimation game: , (SPNE, ).
Moreover, sensors' numbers of transmissions (cooperation) are depicted in Figure 9. Whether for the infinitely or the finitely repeated estimation game, the sensors closer to the FC transmit more times. Meanwhile, it is assumed that the payoffs of a player are divided among the cluster's members as follows: (1) if the player adopts strategy "D," its sensors obtain the same payoff; (2) if the player adopts strategy "C," its active sensors divide the payoff evenly. For comparison's sake, sensors' payoffs for the infinitely and finitely repeated estimation games are defined as their respective algebraic sums without considering the discount factor, as shown in Figure 10. For the infinitely repeated estimation game, the sensors closer to the FC obtain larger payoffs; more work earns more pay. However, for the finitely repeated estimation game, payoffs are allocated evenly among the cluster's members whenever the cluster adopts strategy "D." Hence, the payoffs of the sensors are almost the same for the finitely repeated estimation game in Figure 10.
The players' times of transmissions or cooperation: infinitely and finitely repeated estimation games.
The players' payoffs: infinitely and finitely repeated estimation games.
5. Conclusions
In this paper, we focus on repeated games for distributed estimation in WSNs. Two kinds of repeated estimation games (infinitely and finitely repeated) are investigated, and the existence of their Nash equilibria is proven. In particular, the profiles (Grim, Grim) and (TFT, TFT) for the infinitely repeated estimation game and some SPNEs for the finitely repeated estimation game are discussed in detail. Finally, simulation results show that the Nash equilibria of the proposed infinitely and finitely repeated games are effective.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by National Natural Science Foundation, China (61403089, 61162008, and 61573153), Program for Guangzhou Municipal Colleges and Universities (1201431034), Guangdong Science & Technology Project (2013B0104) and Guangzhou Education Bureau Science and Technology Project (2012A082), and Guangzhou Science and Technology Foundation (nos. 2014J4100142, 2014J410023).
References
1. Q. Yang, S. He, J. Li, J. Chen, and Y. Sun, "Energy-efficient probabilistic area coverage in wireless sensor networks," IEEE Transactions on Vehicular Technology, vol. 64, no. 1, pp. 367–377, 2015. doi:10.1109/TVT.2014.2300181
2. S. He, J. Chen, X. Li, X. S. Shen, and Y. Sun, "Mobility and intruder prior information improving the barrier coverage of sparse sensor networks," IEEE Transactions on Mobile Computing, vol. 13, no. 6, pp. 1268–1282, 2014. doi:10.1109/TMC.2013.129
3. S. He, J. Chen, D. K. Y. Yau, and Y. Sun, "Cross-layer optimization of correlated data gathering in wireless sensor networks," IEEE Transactions on Mobile Computing, vol. 11, no. 11, pp. 1678–1691, 2012. doi:10.1109/TMC.2011.210
4. S. He, J. Chen, P. Cheng, Y. Gu, T. He, and Y. Sun, "Maintaining quality of sensing with actors in wireless sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 9, pp. 1657–1667, 2012. doi:10.1109/TPDS.2012.100
5. J. Chen, S. Li, and Y. Sun, "Novel deployment schemes for mobile sensor networks," Sensors, vol. 7, no. 11, pp. 2907–2919, 2007. doi:10.3390/s7112907
6. Y. Liu, A. Liu, and S. He, "A novel joint logging and migrating traceback scheme for achieving low storage requirement and long lifetime in WSNs," AEU—International Journal of Electronics and Communications, vol. 69, no. 10, pp. 1464–1482, 2015. doi:10.1016/j.aeue.2015.06.016
7. Y. Liu, A. Liu, and Z. Chen, "Analysis and improvement of send-and-wait automatic repeat-request protocols for wireless sensor networks," Wireless Personal Communications, vol. 81, no. 3, pp. 923–959, 2015. doi:10.1007/s11277-014-2164-6
8. G. Liu, H. Liu, H. Chen, C. Zhou, and L. Shu, "Position-based adaptive quantization for target location estimation in wireless sensor networks using one-bit data," Wireless Communications and Mobile Computing, 2015. doi:10.1002/wcm.2576
9. G. Liu, B. Xu, H. Chen, C. Zhang, J. Xiang, and C. Zhou, "Adaptive quantization for distributed estimation in cluster-based wireless sensor networks," AEU—International Journal of Electronics and Communications, vol. 68, no. 6, pp. 484–488, 2014. doi:10.1016/j.aeue.2013.12.004
10. J. Zhang, R. S. Blum, X. Lu, and D. Conus, "Asymptotically optimum distributed estimation in the presence of attacks," IEEE Transactions on Signal Processing, vol. 63, no. 5, pp. 1086–1101, 2015. doi:10.1109/TSP.2014.2386281
11. T. Bouchoucha, M. F. Ahmed, T. Y. Al-Naffouri, and M. Alouini, "Distributed estimation based on observations prediction in wireless sensor networks," IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1530–1533, 2015. doi:10.1109/LSP.2015.2411852
12. V. Srinivasan, P. Nuggehalli, C. F. Chiasserini, and R. R. Rao, "Cooperation in wireless ad hoc networks," in Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '03), San Francisco, Calif, USA, March–April 2003, vol. 2, pp. 808–817. doi:10.1109/INFCOM.2003.1208918
13. C. Pandana, Z. Han, and K. J. R. Liu, "Cooperation enforcement and learning for optimizing packet forwarding in autonomous wireless networks," IEEE Transactions on Wireless Communications, vol. 7, no. 8, pp. 3150–3163, 2008. doi:10.1109/TWC.2008.070213
14. M. Yan, L. Xiao, L. Du, and L. Huang, "On selfish behavior in wireless sensor networks: a game theoretic case study," in Proceedings of the 3rd International Conference on Measuring Technology and Mechatronics Automation (ICMTMA '11), Shanghai, China, January 2011, pp. 752–756. doi:10.1109/ICMTMA.2011.472
15. H. Liu, G. Liu, Y. Liu, L. Mo, and H. Chen, "Adaptive quantization for distributed estimation in energy-harvesting wireless sensor networks: a game-theoretic approach," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 217918, 2014. doi:10.1155/2014/217918
16. A. Ribeiro and G. B. Giannakis, "Bandwidth-constrained distributed estimation for wireless sensor networks—part I: Gaussian case," IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1131–1143, 2006. doi:10.1109/TSP.2005.863009
17. J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Power scheduling of universal decentralized estimation in sensor networks," IEEE Transactions on Signal Processing, vol. 54, no. 2, pp. 413–422, 2006. doi:10.1109/TSP.2005.861898
18. G. Liu and B. Xu, "Energy-efficient scheduling of distributed estimation with convolutional coding and rate-compatible punctured convolutional coding," IET Communications, vol. 5, no. 12, pp. 1650–1660, 2011. doi:10.1049/iet-com.2010.0560
19. G. Liu, X. Zhang, and Y. Liu, "Distributed estimation based on game theory in energy harvesting wireless sensor networks," in Proceedings of the 33rd Chinese Control Conference (CCC '14), Nanjing, China, July 2014, pp. 401–404. doi:10.1109/ChiCC.2014.6896656
20. M. J. Osborne, An Introduction to Game Theory, Oxford University Press, Oxford, UK, 2004.
21. M. J. Osborne and A. Rubinstein, A Course in Game Theory, The MIT Press, Cambridge, Mass, USA, 1994.
22. G. Liu, B. Xu, and H. Chen, "Decentralized estimation over noisy channels in cluster-based wireless sensor networks," International Journal of Communication Systems, vol. 25, no. 10, pp. 1313–1329, 2012. doi:10.1002/dac.1308