A major application of a distributed wireless sensor network (WSN) is to monitor a specific area in order to detect events such as disasters or intrusions. To achieve this, each sensor in the network collects local observations, which are typically corrupted by noise, makes a local decision about the presence or absence of an event, and then sends this local decision to a fusion center. The fusion center makes the final decision based on these local decisions and a decision fusion rule, so an efficient fusion rule is critical. Because the decision-making capability of each node differs, owing to dissimilar signal-to-noise ratios and other factors, a sensor's contribution to the global decision should be weighted by its decision-making capability. Based on this idea, we establish a novel linear decision fusion model for WSNs. Moreover, the constrained particle swarm optimization (constrained PSO) algorithm is employed to tune the parameters of this model, and a typical penalty function is applied to solve the constrained PSO problem. The simulation results indicate that our design achieves very high accuracy.
1. Introduction
Owing to their low cost and ease of operation, wireless sensor networks are ideal for a wide variety of applications such as environmental monitoring, smart factory instrumentation, intelligent transportation, and remote surveillance [1–3]. When a WSN is used to monitor an area, each sensor node collects local observations and sends a summary (compressed or partially processed data) to the fusion center. The fusion center then uses these summaries and a specific decision fusion rule to make the final global decision, as shown in Figure 1.
Figure 1: A typical structure of a decentralized decision fusion system.
In practice, WSN issues such as node deployment, localization, energy-aware clustering, and data aggregation are often formulated as optimization problems. An optimization method that requires moderate memory and computational resources yet produces good results is desirable, especially for implementation on an individual sensor node. Bio-inspired optimization methods are computationally efficient alternatives to analytical methods. Particle swarm optimization (PSO) [4] is a popular multidimensional optimization technique whose advantages include ease of implementation, high solution quality, computational efficiency, and fast convergence. In [5–9], PSO is used to find optimal WSN deployments. Static deployment is a one-time process in which solution quality matters more than fast convergence, so PSO suits centralized deployment well; dynamic deployment, in contrast, requires fast PSO variants, which limits network scalability. The works [10–14] apply PSO to WSN localization. Owing to energy constraints, distributed localization is desirable; although PSO is appropriate for it, the choice is influenced by the memory available on the nodes. PSO has also been used for energy-aware clustering [15–17], where clustering is a centralized optimization carried out at a resource-rich base station and optimal clustering strongly influences WSN performance. Recently, researchers have begun to use PSO for WSN data aggregation. As surveyed in [18], PSO has provided optimization in several aspects of data aggregation, as follows.
In [19], Wimalajeewa and Jayaweera address the problem of optimal power allocation through constrained PSO. Their algorithm, PSO-Opt-Alloc, uses PSO to determine the optimal power allocation for both independent and correlated observations. The objective is to minimize the energy expenditure while keeping the fusion error probability under a required threshold.
In [20], Veeramachaneni and Osadciw present a hybrid of ant-based control and PSO (ABC-PSO) for hierarchy and threshold management for decentralized serial sensor networks. PSO is used to determine the optimal thresholds and fusion rules for the sensors while the ant colony optimization algorithm determines the hierarchy of sensor decision communication.
In [21], Veeramachaneni and Osadciw present a binary multiobjective PSO (BMPSO) for the optimal configuration of multiple sensor networks. PSO is modified to optimize two objectives: accuracy and time. The outputs of the algorithm are the choice of sensors, the individual sensor thresholds, and the optimal decision fusion rule.
As mentioned above, data fusion is a distributed, repetitive process that is well suited to PSO. Effective data fusion influences overall WSN performance and demands quickly converging optimization techniques that still deliver high-quality solutions. It is therefore reasonable to choose PSO to tune the parameters of our fusion rule.
In this paper, we design a linear decision fusion rule. In this rule, a linear equation computes the integrated contribution of all the local decisions; the equation is composed of the local decisions and their weights. A local decision weight reflects a sensor's decision-making capability, and the value of the equation is the sum of all the local sensors' contributions. By comparing this value with a fusion threshold T, the final decision is made at the fusion center. To obtain the smallest error probability, constrained PSO is employed to find the optimal local decision weights and the threshold T.
The rest of the paper is organized as follows. Section 2 discusses the constrained-PSO algorithm briefly. Section 3 describes our linear decision fusion rule in detail. The performance of our fusion rule is evaluated in Section 4, and Section 5 concludes this paper.
2. Brief Introduction of Constrained PSO Algorithm
The particle swarm optimization (PSO) algorithm is a population-based evolutionary computation technique originally developed by Kennedy and Eberhart [4]. It performs well when searching a highly multimodal space for an optimized solution to an n-dimensional problem. PSO, originally introduced in terms of social and cognitive behavior, is widely used as a bottom-up problem-solving method in engineering and computer science.
In the basic PSO algorithm, a swarm consists of m particles flying around an n-dimensional search space. The position of the ith particle at the tth iteration is used to evaluate the particle and represents a candidate solution to the optimization problem. It can be written as $x_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t)$, where $x_{ij}^t$ is the position value of the ith particle in the jth dimension $(j = 1, 2, \ldots, n)$. During the search, the position of a particle is affected by two factors: the best position it has visited itself, denoted $pbest_i$, and the best position found so far by the whole swarm, denoted $gbest$. The new velocity $v_{ij}^{t+1}$ and the position of particle i at the next iteration are calculated as

$$v_{ij}^{t+1} = w\, v_{ij}^{t} + c_1 r_1 \left(pbest_{ij} - x_{ij}^{t}\right) + c_2 r_2 \left(gbest_j - x_{ij}^{t}\right), \qquad x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \tag{1}$$

where w is the inertia parameter, $c_1$ and $c_2$ are, respectively, the cognitive and social learning parameters, and $r_1$, $r_2$ are random numbers drawn from (0, 1). Relying on expressions (1), the particles fly through the search space toward $pbest$ and $gbest$ in a guided way while still exploring new areas through the stochastic terms, which helps them escape local optima. The process by which a particle updates its location is shown in Figure 2.
Figure 2: The process of updating location for each iteration in PSO.
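As an illustration, the basic PSO update loop described above can be sketched in Python; the function name, parameter defaults, and search bounds below are illustrative choices, not values taken from the paper:

```python
import random

def pso_minimize(f, dim, n_particles=25, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using the basic PSO velocity/position update."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # best position seen by each particle
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position of the whole swarm

    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                x[i][j] = min(hi, max(lo, x[i][j] + v[i][j]))
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

For example, minimizing the sphere function `sum(t*t for t in x)` in two dimensions drives the best fitness close to zero within a few hundred iterations.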
Further, a constrained PSO problem can be described as follows:

$$\min_{x} f(x) \quad \text{subject to } g(x) \le 0,\ h(x) \le 0, \tag{2}$$

where f(x) is the objective function, g(x) is the nonlinear constraint, and h(x) is the linear constraint. In this paper, we only discuss the situation with a linear constraint. The most common approach for solving constrained PSO problems is to use a penalty function: the constrained problem is transformed into an unconstrained one by penalizing constraint violations within a single objective function, which is then minimized with an unconstrained optimization algorithm [22–24].
Generally, penalty functions fall into two main classes: stationary and nonstationary. Stationary penalty functions use fixed penalty values throughout the minimization, whereas nonstationary penalty functions modify the penalty values dynamically. In this paper, we use a stationary penalty function, generally defined as [25]

$$F(x) = f(x) + M\, h(x), \tag{3}$$

where f(x) is the original objective function of the constrained PSO problem, M is a fixed penalty value, and h(x) is a penalty factor measuring the degree of constraint violation (h(x) = 0 when all constraints are satisfied). In this way, the constrained problem becomes unconstrained, and the basic PSO can then be used to find the smallest F(x).
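A minimal sketch of this stationary penalty transform, assuming constraints expressed as $g_i(x) \le 0$ (the function names and the toy example are illustrative, not from the paper):

```python
def penalized(f, constraints, M=1e6):
    """Build the stationary penalty objective F(x) = f(x) + M * H(x),
    where H(x) sums the violations of constraints g_i(x) <= 0."""
    def F(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + M * violation
    return F

# hypothetical example: minimize (x - 2)^2 subject to x <= 1
F = penalized(lambda x: (x[0] - 2.0) ** 2, [lambda x: x[0] - 1.0])
```

Feasible points are evaluated at their true objective value, while infeasible points incur a cost proportional to M times the violation, so any unconstrained minimizer (such as the basic PSO) is steered back into the feasible region.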
With this background on constrained PSO, we employ it to find the optimal parameters of our decision fusion rule. The next section discusses this issue in detail.
3. The Linear Decision Fusion Rule
In this section, we discuss this topic in three parts. In Section 3.1, we formulate the decision fusion problem for a binary hypothesis situation and derive expressions for some crucial parameters, including the local false-alarm and miss probabilities $P_{f,i}$ and $P_{m,i}$ and the local threshold $\lambda_i$. In Section 3.2, our linear decision fusion rule is introduced and, with the expressions of Section 3.1, the global false-alarm and miss probabilities $P_F$ and $P_M$ and the total error probability R are derived. Since R depends mainly on the weights $w_i$ and the fusion threshold T, Section 3.3 discusses how to use PSO to optimize these parameters of our fusion rule.
3.1. Decision Fusion Problem Formulation
We consider a binary hypothesis testing problem in an n-node distributed wireless network. The ith sensor observation under the two hypotheses is given by

$$H_0: x_i = n_i, \qquad H_1: x_i = s + n_i, \qquad i = 1, \ldots, n, \tag{4}$$

where $n_i$ is zero-mean Gaussian observation noise with variance $\sigma_i^2$, s is the constant signal to be detected, and $u_i \in \{0, 1\}$ is the local decision to be made. As a brief review, the problem of radar detection can be formulated as a hypothesis testing problem where the two hypotheses are

$$H_0: \text{no target present (noise only)}, \qquad H_1: \text{target present in noise}. \tag{5}$$
The conditional probability density functions are $p(x_i \mid H_0)$ and $p(x_i \mid H_1)$, so the decision can be made based on the following likelihood ratio test:

$$\frac{p(x_i \mid H_1)}{p(x_i \mid H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \lambda, \tag{6}$$

where $\lambda$ is a threshold computed according to the Bayesian framework as follows:

$$\lambda = \frac{q\,(C_{10} - C_{00})}{p\,(C_{01} - C_{11})}. \tag{7}$$
$C_{ij}$ is the cost of deciding $H_i$ when $H_j$ is true ($C_{01}$: we decide there is no enemy but the enemy actually exists; $C_{10}$: we decide the enemy exists but there is actually no enemy; $C_{00}$ and $C_{11}$ are the costs of the two correct decisions). The prior probabilities of hypotheses $H_1$ and $H_0$ are denoted by p and q, respectively. In real applications, the values of $C_{01}$ and $C_{10}$ depend on which kind of error causes the more serious effect. Taking the radar detection mentioned above as an example, in the real world the error of missing an enemy is much more serious than the error of mistaking something else for an enemy, so in that example $C_{01} > C_{10}$. In this paper, we set $C_{00} = C_{11} = 0$ and $C_{10} = C_{01} = 1$, so expression (7) becomes

$$\lambda = \frac{q}{p}. \tag{8}$$
The values of the prior probabilities p and q depend on prior experience, so we assume they are known before applying this decision fusion rule. Consider the detection of the constant signal s, where the noise mentioned above is zero-mean Gaussian with variance $\sigma_i^2$; then, for all i, the ith sensor observation is Gaussian distributed as

$$x_i \sim \mathcal{N}(0, \sigma_i^2) \text{ under } H_0, \qquad x_i \sim \mathcal{N}(s, \sigma_i^2) \text{ under } H_1. \tag{9}$$

Consequently, the conditional probability density functions can be calculated as

$$p(x_i \mid H_0) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{x_i^2}{2\sigma_i^2}\right), \qquad p(x_i \mid H_1) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(x_i - s)^2}{2\sigma_i^2}\right), \tag{10}$$
and then, with (6) and (10), the decision rule of the ith sensor can be formulated as

$$x_i \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma_i^2}{s} \ln\lambda + \frac{s}{2}. \tag{11}$$

According to (11), the threshold of the ith sensor is

$$\lambda_i = \frac{\sigma_i^2}{s} \ln\frac{q}{p} + \frac{s}{2}. \tag{12}$$
From (12), we can see that the threshold of each sensor differs because of the different $\sigma_i$, which characterizes that sensor's noise. Accordingly, using (10) and (12), the local error probabilities $P_{f,i}$ (false alarm) and $P_{m,i}$ (miss) can be obtained as follows:

$$P_{f,i} = P(u_i = 1 \mid H_0) = 1 - \Phi\!\left(\frac{\lambda_i}{\sigma_i}\right), \qquad P_{m,i} = P(u_i = 0 \mid H_1) = \Phi\!\left(\frac{\lambda_i - s}{\sigma_i}\right), \tag{13}$$

where $\Phi(\cdot)$ is the standard Gaussian cumulative distribution function.
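The local threshold and error probabilities above can be computed numerically as in the following sketch, which assumes the unit-cost setting used in this paper (the function names are illustrative):

```python
import math

def phi(z):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def local_stats(s, sigma, p, q):
    """Per-sensor LRT threshold and error probabilities for
    H0: x = n  vs  H1: x = s + n,  n ~ N(0, sigma^2), unit error costs assumed."""
    lam = (sigma ** 2 / s) * math.log(q / p) + s / 2.0  # local threshold, expr. (12)
    p_f = 1.0 - phi(lam / sigma)                        # false alarm: P(decide H1 | H0)
    p_m = phi((lam - s) / sigma)                        # miss: P(decide H0 | H1)
    return lam, p_f, p_m
```

As a sanity check, with equal priors (p = q) the threshold reduces to s/2 and, by symmetry, the false-alarm and miss probabilities coincide.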
3.2. Our Linear Decision Fusion Rule
In this paper, we design a novel decision fusion model. In this model, a linear equation is used to denote the integrated contribution of all the local decisions:

$$\Lambda(u) = \sum_{i=1}^{n} w_i u_i, \tag{14}$$

where $u_i$ is the local decision of the ith sensor and $w_i$ is the corresponding weight, which indicates the ith sensor's decision-making capability. To understand this model better, consider a group of workers checking an event together. First, all the workers try their best to draw a conclusion about the event and send their conclusions to their leader. The leader makes a final decision based on all the conclusions, paying more attention to the conclusions of the more capable workers. In this example, the workers correspond to the sensors in the network, the leader corresponds to the fusion center, and a worker's ability corresponds to the weight $w_i$. The better a sensor's decision-making capability, the greater its weight should be. In other words, if a sensor can make decisions more accurately, its contribution to $\Lambda(u)$ is considered larger, so it is reasonable to use $\Lambda(u)$ to denote the integrated contribution of the local decisions at the fusion center. $\Lambda(u)$ is then compared with a fusion threshold T, and the final decision $u_0$ is made according to the following expression:

$$u_0 = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} w_i u_i \ge T, \\ 0 & \text{otherwise.} \end{cases} \tag{15}$$
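A minimal sketch of this weighted-sum fusion decision (the function name and the example weights are illustrative assumptions):

```python
def fuse(weights, decisions, T):
    """Weighted-sum fusion: return 1 (decide H1) when the integrated
    contribution sum_i w_i * u_i reaches the fusion threshold T, else 0."""
    score = sum(w * u for w, u in zip(weights, decisions))
    return 1 if score >= T else 0

# e.g., a highly capable sensor (weight 0.7) can outvote two weaker ones (0.2 each)
assert fuse([0.7, 0.2, 0.2], [1, 0, 0], 0.5) == 1
assert fuse([0.7, 0.2, 0.2], [0, 1, 1], 0.5) == 0
```

The example mirrors the worker analogy: the "leader" follows the single capable "worker" even when the two weaker ones disagree.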
Suppose all the local decisions are independent of each other; then $P(u_1, \ldots, u_n \mid H_0) = \prod_{i} P(u_i \mid H_0)$ and $P(u_1, \ldots, u_n \mid H_1) = \prod_{i} P(u_i \mid H_1)$. Besides, the global error probabilities $P_F$ and $P_M$ take all $2^n$ combinations of the local decisions into account. Therefore, the two kinds of error that the fusion center may make can be formulated as follows:

$$P_F = \sum_{u:\, \sum_i w_i u_i \ge T}\ \prod_{i=1}^{n} P(u_i \mid H_0), \qquad P_M = \sum_{u:\, \sum_i w_i u_i < T}\ \prod_{i=1}^{n} P(u_i \mid H_1), \tag{16}$$

where

$$P(u_i = 1 \mid H_0) = P_{f,i}, \quad P(u_i = 0 \mid H_0) = 1 - P_{f,i}, \quad P(u_i = 0 \mid H_1) = P_{m,i}, \quad P(u_i = 1 \mid H_1) = 1 - P_{m,i}. \tag{17}$$

According to (13), (16), and (17), the total error probability can be calculated as

$$R = q\,P_F + p\,P_M. \tag{18}$$
To understand this model better, take the case n = 2 as an example. Suppose there are only two sensors in the network with local decisions $u_1$ and $u_2$; then, according to the linear decision fusion model, the integrated contribution of the two local decisions is $\Lambda(u) = w_1 u_1 + w_2 u_2$. Obviously, there are four combinations of $u_1$, $u_2$, and thus of $\Lambda(u)$, as shown in Table 1.
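The enumeration over all decision combinations in (16)–(18) can be sketched directly (the function name is illustrative; local decisions are assumed independent, as in the derivation above):

```python
from itertools import product

def total_error(weights, T, p_f, p_m, p, q):
    """Total error probability R = q*P_F + p*P_M (unit costs), computed by
    enumerating all 2^n independent local-decision vectors."""
    P_F = P_M = 0.0
    n = len(weights)
    for u in product((0, 1), repeat=n):
        decide_h1 = sum(w * ui for w, ui in zip(weights, u)) >= T
        pr_h0 = pr_h1 = 1.0  # probability of this decision vector under each hypothesis
        for i, ui in enumerate(u):
            pr_h0 *= p_f[i] if ui else 1.0 - p_f[i]
            pr_h1 *= 1.0 - p_m[i] if ui else p_m[i]
        if decide_h1:
            P_F += pr_h0   # global false alarm mass
        else:
            P_M += pr_h1   # global miss mass
    return q * P_F + p * P_M
```

For a single sensor with weight 1 and threshold 0.5, the rule simply follows the local decision, so R reduces to the averaged local error, q·P_f + p·P_m.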
Here, it can be seen that the total error probability R depends on the weights $w_1, \ldots, w_n$ and the fusion threshold T, so if the optimal weights and threshold can be found, the smallest error probability R can be achieved. In the following, we discuss how to obtain the optimal weights and threshold with the help of constrained PSO.
3.3. Using Constrained PSO to Control the Linear Decision Fusion Rule
Obviously, if all the local decisions are 1 (0), the final decision must be 1 (0), so we can require $0 < T \le \sum_{i=1}^{n} w_i$. The task then becomes a constrained PSO problem, which can be denoted as follows:

$$\min_{w_1, \ldots, w_n, T} R(w_1, \ldots, w_n, T) \quad \text{subject to } 0 < T \le \sum_{i=1}^{n} w_i. \tag{19}$$

Here, R is the initial objective function and $0 < T \le \sum_i w_i$ is the linear constraint. The penalty function approach is used to solve this problem. According to formulation (3), the penalty function can be established as

$$F(w_1, \ldots, w_n, T) = R(w_1, \ldots, w_n, T) + M\, h(w_1, \ldots, w_n, T), \tag{20}$$

where M is the fixed penalty value and h is the penalty factor, which is zero when the constraint is satisfied and grows with the degree of violation. We then find the smallest F by taking advantage of the basic PSO.
Based on the analysis above, we can map the basic PSO onto the linear decision fusion rule, as shown in Table 2.
Table 2: PSO terminology and corresponding parameters of the linear decision fusion rule.
Location: a candidate parameter vector $(w_1, \ldots, w_n, T)$ of the fusion rule
Fitness: the penalized objective value F
$pbest$: the location in parameter space of the smallest F returned for a specific particle
$gbest$: the location in parameter space of the smallest F returned in the entire swarm
$v_{max}$: the maximum allowed velocity in a given direction
$x_{max}$: the maximum allowed position range in a given direction
After that, the PSO is carried out as follows.
Firstly, the locations of all the particles are initialized, each particle's current location is taken as its $pbest$, and the best of these is taken as $gbest$.
Secondly, the velocity and location of each particle are updated in accordance with expression (1), and each particle's fitness is evaluated with the penalty function (3).
Thirdly, the new fitness values are compared with the stored ones, and the new $pbest$ and $gbest$ are generated.
Finally, if the number of iterations reaches the maximum, stop; otherwise, return to the second step. In this way, the optimal $gbest$ is discovered, and $F(gbest)$ is the smallest F. Since the constraint is satisfied at the optimum, the penalty term vanishes, and the smallest total error probability is $R = F(gbest)$.
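The four steps above can be sketched end to end in Python. The sensor noise levels, penalty value M, inertia schedule, and learning parameters below are illustrative assumptions for the sketch, not the paper's actual configuration:

```python
import math
import random
from itertools import product

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def run(n=3, s=1.0, p=0.5, q=0.5, sigmas=None, iters=300, m_particles=25, M=1e3):
    """Local error probabilities -> penalized objective F = R + M * violation
    of (0 < T <= sum w) -> basic PSO over the (n+1)-vector (w_1..w_n, T)."""
    random.seed(1)
    sigmas = sigmas or [0.5 + 0.2 * i for i in range(n)]
    pf, pm = [], []
    for sig in sigmas:
        lam = (sig ** 2 / s) * math.log(q / p) + s / 2.0
        pf.append(1.0 - phi(lam / sig))
        pm.append(phi((lam - s) / sig))

    def R(w, T):  # total error probability by enumerating all 2^n decision vectors
        PF = PM = 0.0
        for u in product((0, 1), repeat=n):
            h1 = sum(wi * ui for wi, ui in zip(w, u)) >= T
            pr0 = pr1 = 1.0
            for i, ui in enumerate(u):
                pr0 *= pf[i] if ui else 1.0 - pf[i]
                pr1 *= 1.0 - pm[i] if ui else pm[i]
            if h1: PF += pr0
            else:  PM += pr1
        return q * PF + p * PM

    def F(x):  # penalized fitness over x = (w_1..w_n, T)
        w, T = x[:n], x[-1]
        viol = max(0.0, -T) + max(0.0, T - sum(w))
        return R(w, T) + M * viol

    dim = n + 1
    xs = [[random.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(m_particles)]
    vs = [[0.0] * dim for _ in range(m_particles)]
    pb = [x[:] for x in xs]; pbv = [F(x) for x in xs]
    gi = min(range(m_particles), key=lambda i: pbv[i])
    gb, gbv = pb[gi][:], pbv[gi]
    for t in range(iters):
        w_inertia = 0.9 - 0.5 * t / iters  # time-variable (linearly decreasing) inertia
        for i in range(m_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][j] = (w_inertia * vs[i][j]
                            + 2.0 * r1 * (pb[i][j] - xs[i][j])
                            + 2.0 * r2 * (gb[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
            val = F(xs[i])
            if val < pbv[i]:
                pb[i], pbv[i] = xs[i][:], val
                if val < gbv:
                    gb, gbv = xs[i][:], val
    return gb, gbv
```

At the optimum the penalty term is zero, so the returned best fitness is directly the achievable total error probability for the fused decision.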
4. Performance Evaluation
In this section, we demonstrate the performance of our decision fusion rule. The basic PSO algorithm is implemented in MATLAB, with the penalized function F as the objective. To obtain the smallest F, a swarm of 25 particles flies around an (n + 1)-dimensional search space, where n is the number of network sensors (n weights plus the threshold T). At the end of the search, $F(gbest)$ is the smallest F, from which the smallest error probability R of our fusion rule is obtained; the corresponding weights and threshold are the optimal parameters of the fusion rule.
Firstly, Section 4.1 demonstrates whether the PSO parameters affect the performance of our decision fusion rule. Then, Section 4.2 shows what happens to the experimental results as the number of sensors increases.
4.1. Choosing PSO Parameters w, $c_1$, and $c_2$
It can be seen from expression (1) that the performance of PSO depends on the values of the parameters w, $c_1$, and $c_2$. The inertia parameter w controls the momentum of each particle: a larger w increases the global exploration ability, while a smaller w tends to improve the local exploration ability of the particles. The cognitive learning parameter $c_1$ scales the contribution of a particle's own best position $pbest$, while the social learning parameter $c_2$ scales the contribution of the swarm's best position $gbest$.
First, let us compare the performance of our fusion rule when the inertia parameter w is fixed with its performance when w changes over time. In the first case, w is held at a fixed value; in the second, it decreases linearly as $w = w_{max} - (w_{max} - w_{min})\, t / t_{max}$, where t and $t_{max}$ are, respectively, the current and maximum number of iterations and $w_{max}$ and $w_{min}$ are the upper and lower bounds of w. The inputs and outputs of this simulation are shown in Table 3.
Table 3: Simulation results with different inertia parameter w when the sensor number is 10.
Inertia parameter w (input) — Total error probability R — Time consumed for convergence (iterations)
Fixed w — 0.00181079 — 447
Time-variable w — 0.00120768 — 123
From Table 3 and Figure 3, we can see that when the inertia parameter w changes over time, the total error probability R is lower and convergence is faster than when w is fixed. The reason is that with a time-variable inertia parameter, the particles have good global exploration ability in the early phase of the search and good local exploration ability as the iteration count grows. We therefore choose the time-variable inertia parameter w.
Figure 3: The simulation results with time-variable and fixed inertia parameter w.
Next, let us discuss the learning parameters $c_1$ and $c_2$. Setting $c_1 = 0$ or $c_2 = 0$ means that the cognitive or the social learning process, respectively, is not taken into account. The simulation results for these situations are shown in Table 4.
Table 4: Simulation results with different learning parameters $c_1$ and $c_2$ when the sensor number is 10.
Learning parameters (input) — Total error probability R — Number of iterations
$c_1 > 0$, $c_2 > 0$ — 0.00120768 — 123
$c_1 = 0$ — 0.00274277 — 150
$c_2 = 0$ — 0.00121506 — cannot converge
From Table 4 and Figure 4, it can be seen that when the cognitive learning parameter $c_1$ is not taken into account, PSO still converges, but the total error probability R becomes larger and convergence takes longer compared with the situation when $c_1 > 0$. When the social learning parameter $c_2$ is ignored, PSO does not converge at all: each particle only has knowledge about itself, so the swarm can never evolve in a common direction.
Figure 4: The experimental results with different learning parameters.
In sum, from the analysis above, the time-variable inertia parameter w outperforms the fixed one, and taking both learning parameters $c_1$ and $c_2$ into account outperforms setting $c_1 = 0$ or $c_2 = 0$. Therefore, in the following simulations, we use the time-variable w together with nonzero $c_1$ and $c_2$.
4.2. The Performance of Linear Fusion Rule with Different Sensor Numbers
Here, the number of sensors in the network is varied over several values of n. The experimental results show that our decision fusion rule can indeed achieve very high accuracy. The inputs and outputs of the simulation are shown in Table 5 and Figure 5.
The experimental results under different sensor numbers n.
For the smallest n considered, the optimized weights and threshold define the decision fusion rule, and with these parameters the total error probability is already very small. The PSO converges after roughly 50 iterations, so the time consumed is very short.
Similarly, from Table 5 and Figure 5(b), for the middle value of n the PSO converges after about 120 iterations with an even smaller total error probability, and for the largest n the total error probability is close to zero after about 150 iterations.
As shown in Figure 6, the total error probability becomes smaller and smaller as n increases. This is easy to understand: the decision result is more precise when more sensors work in the network. However, as n increases, the time PSO needs to find the optimized parameters grows, so when this fusion rule is used, a balance between accuracy and time should be struck.
Figure 6: The total error probability R and the time consumed for convergence as the sensor number changes.
In brief, this simulation indicates that our linear fusion rule achieves a very small total error probability under the control of the constrained PSO, and that PSO performs very well for the parameter optimization of the linear decision fusion rule.
5. Conclusion
In this paper, we present a linear decision fusion model and propose a way to control its parameters by taking advantage of constrained PSO. In the model, the integrated contribution of the local decisions is computed with a linear equation composed of the local decisions and their weights; this contribution is then compared with a threshold at the fusion center, and the final decision is made according to the comparison result. Furthermore, constrained PSO is employed to discover the weights and the threshold. The simulation results show that our linear decision rule and the parameter optimization method achieve very high accuracy.
References
1. J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Joint estimation in sensor networks under energy constraints," in Proceedings of the 1st Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (IEEE SECON '04), Santa Clara, Calif, USA, October 2004, pp. 264–271.
2. L. Snidaro, R. Niu, P. K. Varshney, and G. L. Foresti, "Sensor fusion for video surveillance," in Proceedings of the 7th International Conference on Information Fusion (FUSION '04), Stockholm, Sweden, June 2004, pp. 564–571.
3. A. Tiwari, F. L. Lewis, and S. S. Ge, "Wireless sensor network for machine condition based maintenance," in Proceedings of the 8th International Conference on Control, Automation, Robotics and Vision (ICARCV '04), vol. 1, Bangkok, Thailand, December 2004, pp. 461–467.
4. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, December 1995, pp. 1942–1948.
5. N. A. B. A. Aziz, A. W. Mohemmed, and B. S. Daya Sagar, "Particle swarm optimization and Voronoi diagram for wireless sensor networks coverage optimization," in Proceedings of the International Conference on Intelligent and Advanced Systems (ICIAS '07), November 2007, pp. 961–965. doi:10.1109/ICIAS.2007.4658528
6. J. Hu, J. Song, M. Zhang, and X. Kang, "Topology optimization for urban traffic sensor network," Tsinghua Science and Technology, vol. 13, no. 2, pp. 229–236, 2008. doi:10.1016/S1007-0214(08)70037-3
7. J. Li, K. Li, and W. Zhu, "Improving sensing coverage of wireless sensor networks by employing mobile robots," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), Sanya, China, December 2007, pp. 899–903. doi:10.1109/ROBIO.2007.4522282
8. X. Wang, S. Wang, and J. J. Ma, "An improved co-evolutionary particle swarm optimization for wireless sensor networks with dynamic deployment," Sensors, vol. 7, no. 3, pp. 354–370, 2007.
9. T. P. Hong and G. N. Shiu, "Allocating multiple base stations under general power consumption by the particle swarm optimization," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07), April 2007, pp. 23–28. doi:10.1109/SIS.2007.368022
10. A. Gopakumar and L. Jacob, "Localization in wireless sensor networks using particle swarm optimization," in Proceedings of the IET Conference on Wireless, Mobile and Multimedia Networks, January 2008, pp. 227–230. doi:10.1049/cp:20080185
11. R. V. Kulkarni, G. K. Venayagamoorthy, and M. X. Cheng, "Bio-inspired node localization in wireless sensor networks," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09), San Antonio, Tex, USA, October 2009, pp. 205–210. doi:10.1109/ICSMC.2009.5346107
12. K. S. Low, H. A. Nguyen, and H. Guo, "A particle swarm optimization approach for the localization of a wireless sensor network," in Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '08), July 2008, pp. 1820–1825. doi:10.1109/ISIE.2008.4677205
13. H. Guo, K.-S. Low, and H.-A. Nguyen, "Optimizing the localization of a wireless sensor network in real time based on a low-cost microcontroller," IEEE Transactions on Industrial Electronics, vol. 58, no. 3, pp. 741–749, 2011. doi:10.1109/TIE.2009.2022073
14. K. S. Low, H. A. Nguyen, and H. Guo, "Optimization of sensor node locations in a wireless sensor network," in Proceedings of the 4th International Conference on Natural Computation (ICNC '08), October 2008, pp. 286–290. doi:10.1109/ICNC.2008.670
15. S. M. Guru, S. K. Halgamuge, and S. Fernando, "Particle swarm optimisers for cluster formation in wireless sensor networks," in Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference, December 2005, pp. 319–324.
16. N. M. A. Latiff, C. C. Tsimenidis, and B. S. Sharif, "Energy-aware clustering for wireless sensor networks using particle swarm optimization," in Proceedings of the 18th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '07), September 2007, pp. 1–5. doi:10.1109/PIMRC.2007.4394521
17. X. Cao, H. Zhang, J. Shi, and G. Cui, "Cluster heads election analysis for multi-hop wireless sensor networks based on weighted graph and particle swarm optimization," in Proceedings of the 4th International Conference on Natural Computation (ICNC '08), vol. 7, October 2008, pp. 599–603. doi:10.1109/ICNC.2008.693
18. R. V. Kulkarni and G. K. Venayagamoorthy, "Particle swarm optimization in wireless-sensor networks: a brief survey," IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews, vol. 41, no. 2, pp. 262–267, 2011. doi:10.1109/TSMCC.2010.2054080
19. T. Wimalajeewa and S. K. Jayaweera, "Optimal power scheduling for correlated data fusion in wireless sensor networks via constrained PSO," IEEE Transactions on Wireless Communications, vol. 7, no. 9, pp. 3608–3618, 2008. doi:10.1109/TWC.2008.070386
20. K. Veeramachaneni and L. Osadciw, "Swarm intelligence based optimization and control of decentralized serial sensor networks," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '08), September 2008, pp. 1–8. doi:10.1109/SIS.2008.4668332
21. K. Veeramachaneni and L. A. Osadciw, "Dynamic sensor management using multi objective particle swarm optimizer," in Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, vol. 5434 of Proceedings of SPIE, Orlando, Fla, USA, April 2004, pp. 1–12. doi:10.1117/12.542927
22. C. A. Floudas and P. M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, vol. 455 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 1987.
23. D. M. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, New York, NY, USA, 1972.
24. S. S. Rao, Optimization: Theory and Applications, Wiley Eastern Limited, 1977.
25. J.-M. Yang, Y.-P. Chen, J.-T. Horng, and C.-Y. Kao, "Applying family competition to evolution strategies for constrained optimization," in Lecture Notes in Computer Science, vol. 1213, Springer, Berlin, Germany, 1997, pp. 201–211.