Accurate and low-cost localization of multiple targets or nodes is one of the fundamental and challenging technical issues in wireless sensor networks (WSNs). Compressive sensing allows a sparse signal to be reconstructed from only a few measurements, and choosing a suitable sensing matrix is an important part of this framework. For this purpose, random sensing matrices have been studied extensively, while deterministic sensing matrices have received comparatively little attention. In this paper, we use compressive sensing for multiple target localization in WSNs. We formulate the multiple target locations as a sparse matrix in the discrete time domain. Then, we exploit received signal strength information to recover noisy measurements, while utilizing deterministic sensing matrices and a greedy algorithm to locate each target. The proposed approach reduces the number of measurements in the localization process, has low cost, and maintains accuracy comparable to the conventional noncompressive-sensing approach. Simulation further shows that the proposed approach is practical to use, while being favorably comparable to existing random sensing matrices in reconstruction performance. This combination of compressive sensing using deterministic sensing matrices and multiple target localization provides a new point of view on WSN localization.
1. Introduction
Recently, many techniques for localization or positioning in wireless sensor networks (WSNs) have been considered for a variety of applications such as robotic land-mine detection, target tracking, and environmental and traffic monitoring, to name just a few. Since targets or nodes in WSNs are randomly deployed, we need to determine the actual locations or positions of these nodes. Even though the global positioning system (GPS) provides highly accurate global location information, it is difficult to embed a GPS receiver in a WSN node because of cost and power-consumption issues. Moreover, GPS usually does not work well in non-line-of-sight (NLoS) or indoor environments. Hence, target localization in WSNs remains an active topic for researchers [1, 2].
Generally, a localization process consists of two steps. First, we obtain measurement parameters between nodes using features of the received signals such as time of arrival (ToA), time difference of arrival (TDoA), angle of arrival (AoA), and received signal strength (RSS). Then, we find the location of the node by using parameter estimation methods which depend on the signal features used in the first step. These estimation methods include triangulation, trilateration, multilateration, bounding box, fingerprinting, and some probabilistic approaches. RSS-based localization has been studied for indoor systems over recent years, since RSS can be easily obtained by WiFi-integrated mobile terminals without any extra hardware [3–11].
Compressive sensing, also known as compressed sampling or sparse sampling, provides a novel framework for recovering a sparse or compressible signal from far fewer measurements than required by the Nyquist sampling theorem, under certain assumptions on noise and power range [12, 13]. Furthermore, the compressive sensing theory has been applied to localization; in particular, it is suitable for multiple target localization problems, which typically involve a large number of unknown nodes distributed over a large area. To count and localize targets, the target locations can be considered as sparse signals. By representing the location of each target over a grid, the locations can be expressed as a sparse vector. Then, compressive sensing techniques are used to reconstruct those sparse locations [14–17]. The compressive sensing theory involves recovering a sparse signal x from linear measurements of the form y = Φx, where Φ is the sensing matrix. In [12, 13, 18], it is shown that a k-sparse signal x is the solution of the ℓ0-minimization problem if every submatrix of Φ formed by choosing 2k arbitrary columns of Φ has full rank. In [13], it is shown that, for a small k, the ℓ0 problem is equivalent to the ℓ1 problem if the matrix Φ satisfies the restricted isometry property (RIP). Most early works on compressive sensing construct random matrices as sensing matrices, while some recent works have designed specific deterministic sensing matrices under given conditions. In [19], the authors designed deterministic Toeplitz matrices having guaranteed RIP properties. Using Weil's exponential sum, a deterministic sampling of multivariate trigonometric polynomials was obtained in [20]. Constructions based on weaker RIPs, utilizing the fast Fourier transform (FFT) or algebraic curves, have also been proposed recently [21–23]. There are also many related works on this topic aiming to improve recovery accuracy and to reduce system complexity [24–27].
We consider a scenario in which the nodes are randomly deployed in a large area and measured by taking RSS information from their neighbors, as Figure 1 illustrates. Only a few of the nodes, called anchor nodes, have known positions. The remaining nodes can obtain their locations through a localization method. Our proposed method is compressive sensing using deterministic sensing matrices for multiple target localization in WSNs, based on sparse and inconsistent signal measurements. The problem is formulated as a sparse approximation of a sparse matrix whose elements are RSS measurements over the grid. Then we employ the Orthogonal Matching Pursuit (OMP) algorithm to recover the locations of all the targets.
The scenario of multiple target localization in WSNs.
The rest of the paper is organized as follows. In Section 2, we provide a brief summary of sparse signal recovery using compressive sensing. The node localization problem is formulated in Section 3. In Section 4, we describe our approach for node localization in WSNs using deterministic sensing matrices. Then, numerical evaluation results are presented in Section 5, followed by concluding remarks in Section 6.
Notations. (·)^T, (·)^{-1}, and (·)^* denote the matrix transpose, inverse, and complex-conjugate operations, respectively. ‖·‖_0, ‖·‖_1, and ‖·‖_2 denote the ℓ0-norm, ℓ1-norm, and ℓ2-norm, respectively. Con(P) is the conorm of P in the extension, gcd(m, r) is the greatest common divisor of m and r, ℓ(A) is the dimension of the Riemann-Roch space, and F̄ is the closure of F.
2. Background
2.1. Fundamentals of Compressive Sensing
The goal of compressive sensing is to minimize the number of measurements needed to sample a signal. Let the signal x be a discrete-time column vector in ℝ^N and let Ψ = [ψ_1, ψ_2, …, ψ_N] be an orthonormal basis of ℝ^N. In practice, this signal vector x can be represented by sparse coefficients over the basis Ψ having only k nonzero entries. Then we can write
x = Ψθ = Σ_{i=1}^{N} θ_i ψ_i,
where θ_i is the ith sample of the sparse coefficient vector θ, satisfying ‖θ‖_0 = k, which means
‖θ‖_0 = #{i : θ_i ≠ 0} = k,
and ψ_i is the ith column of the sparse basis. Compressive sensing aims to recover x from knowledge of
y = Φx,
or to recover θ from knowledge of
y = ΦΨθ = Aθ,
where A = ΦΨ, and Φ is the measurement matrix or the sensing matrix of size M × N with M ≪ N. The solution can be recovered by solving the following common convex optimization problem:
min ‖θ‖_1 subject to y = Aθ,
where A = ΦΨ is called the holographic dictionary. In [21], the authors showed that the number of measurements M required for sparse reconstruction via the ℓ1-minimization method is on the order of k log(N/k). If the measurement y is affected by unknown noise n, then the observation can be expressed as
y = Aθ + n,
and the minimization problem changes to
min ‖θ‖_1 subject to ‖y − Aθ‖_2 ≤ ϵ
for a chosen ϵ > 0; the other, unconstrained version is given by
min (1/2)‖y − Aθ‖_2^2 + λ‖θ‖_1
for a parameter λ > 0.
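As a concrete illustration of the measurement model above, the following Python sketch forms noisy compressive measurements y = Aθ + n of a k-sparse vector and evaluates the unconstrained (Lagrangian) objective; all sizes, the seed, and the noise level are our own illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, k = 256, 64, 5                            # signal length, measurements, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)    # stand-in i.i.d. Gaussian sensing matrix

theta = np.zeros(N)                             # k-sparse coefficient vector
support = rng.choice(N, size=k, replace=False)
theta[support] = rng.standard_normal(k)

n = 0.01 * rng.standard_normal(M)               # unknown measurement noise
y = A @ theta + n                               # y = A theta + n

# Value of the unconstrained (Lagrangian) objective at the true theta
lam = 0.1
objective = 0.5 * np.linalg.norm(y - A @ theta) ** 2 + lam * np.linalg.norm(theta, 1)
print(y.shape, objective)
```

Minimizing this objective over θ trades off measurement fidelity against sparsity through the parameter λ.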
Some of the most important concerns of compressive sensing are how to find an appropriate sensing matrix that guarantees recovering the signal with high probability, what kind of reconstruction algorithm to use, and its efficient computation. In [12], the authors show that the sensing matrices must satisfy the k-RIP to ensure a unique and stable solution in the convex reconstruction problem. A matrix Φ obeys the RIP condition with parameter δ_k ∈ (0, 1) for sparsity k if
(1 − δ_k)‖u‖_2^2 ≤ ‖Φu‖_2^2 ≤ (1 + δ_k)‖u‖_2^2
holds for all k-sparse vectors u. An upper bound of the sparsity is
k ≤ c · M / log(N/M),
where c is a positive constant. Some notable sensing matrices satisfying the k-RIP with high probability are independent identically distributed (i.i.d.) Gaussian, Bernoulli, and random Fourier ensembles.
For a given sensing matrix Φ and the representation basis Ψ, where φ_k and ψ_j are their corresponding column vectors, the incoherence between them is defined as the largest correlation as follows:
μ(Φ, Ψ) = max_{k, j} |⟨φ_k, ψ_j⟩| / (‖φ_k‖_2 ‖ψ_j‖_2).
Donoho in [12] showed that k < (1 + μ^{-1})/2 ensures unique k-sparse signal recovery by Basis Pursuit (BP), and Tropp and Gilbert in [28] then extended the sufficient condition to OMP. Gribonval and Vandergheynst in [29] showed that Matching Pursuit (MP) derives a unique solution with exponential convergence.
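The coherence and the resulting sparsity condition are easy to evaluate numerically. The sketch below (our own illustrative code) computes the largest normalized correlation between distinct columns of a dictionary and the corresponding sufficient sparsity level k < (1 + 1/μ)/2:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)   # unit-norm columns
    G = np.abs(Dn.T @ Dn)                # Gram matrix of correlations
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 128))       # stand-in 32 x 128 dictionary
mu = mutual_coherence(D)

# Sufficient condition for unique k-sparse recovery by BP/OMP: k < (1 + 1/mu) / 2
k_max = int(np.floor((1 + 1 / mu) / 2))
print(mu, k_max)
```

For random Gaussian dictionaries this coherence-based bound is pessimistic compared to RIP-based guarantees, but it is cheap to verify for any given matrix.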
2.2. Orthogonal Matching Pursuit (OMP) Algorithm
The OMP algorithm aims to approximate the solution of the following problem:
min ‖θ‖_0 subject to y = Aθ,
or the error-constrained sparse coding problem given by
min ‖θ‖_0 subject to ‖y − Aθ‖_2 ≤ ϵ.
Denote the OMP operation as θ̂ = OMP(A, y, ϵ), where ϵ is an upper bound on the norm of the residual error term r = y − Aθ̂. The algorithm can be described by Algorithm 1.
Algorithm 1: OMP algorithm.
(1) procedure OMP(A, y, ϵ)
(2) t ← 0
(3) r_0 ← y
(4) Λ_0 ← ∅
(5) θ̂ ← 0
(6) while ‖r_t‖_2 > ϵ do
(7) Compute g_j ← |⟨r_t, a_j⟩| for all j ∉ Λ_t
(8) Choose λ_{t+1} ← arg max_j g_j
(9) Update Λ_{t+1} ← Λ_t ∪ {λ_{t+1}}
(10) Compute θ̂_{t+1} ← arg min_θ ‖y − A_{Λ_{t+1}} θ‖_2
(11) Compute r_{t+1} ← y − A_{Λ_{t+1}} θ̂_{t+1}
(12) Set t ← t + 1.
(13) end while
(14) return θ̂_t ⊳ Approximated solution
(15) end procedure
2.3. Related Works on Deterministic Sensing Matrices
Consider the usual sensing matrix Φ. It can be chosen as a random Gaussian matrix whose entries obey an i.i.d. Gaussian distribution, and it must also satisfy the RIP condition to guarantee a unique and exact reconstructed solution. Meanwhile, random matrices have critical drawbacks: cost and complexity in reconstruction, error in signal approximation, significant storage requirements, and no efficient algorithm to verify whether a sampled sensing matrix satisfies the RIP. In [21], some weaker RIP conditions have been considered. With a uniform distribution of the vector α among all k-sparse vectors in ℝ^N of a fixed norm:
(i) StRIP matrix: a sensing matrix Φ is said to be a (k, δ, ϵ)-StRIP matrix if, for a k-sparse signal α, (1 − δ)‖α‖_2^2 ≤ ‖Φα‖_2^2 ≤ (1 + δ)‖α‖_2^2 holds with probability exceeding 1 − ϵ;
(ii) UStRIP matrix: a sensing matrix Φ is said to be a (k, δ, ϵ)-UStRIP matrix if Φ is a (k, δ, ϵ)-StRIP matrix and {β : Φβ = Φα} = {α} holds with probability exceeding 1 − ϵ.
In [22], a sensing matrix is designed with chirp sequences forming the columns, using the FFT. The authors show empirically that this type of matrix is valid for compressed sensing measurements, by comparison with random projections and modified RIP conditions. These conditions still guarantee recovery, but with a smaller probability. In [23], a new method to construct binary sensing matrices is introduced using algebraic curves over a finite field. The authors show that, by choosing appropriate curves, one can obtain matrices better than DeVore's. Some highlighted results, such as deterministic constructions of sensing matrices via algebraic curves over finite fields in terms of coherence, and chirp sensing matrices, second-order Reed-Muller codes, binary Bose-Chaudhuri-Hocquenghem (BCH) codes, and sensing matrices with statistical RIP in terms of the RIP, have been surveyed in [24].
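For intuition, DeVore's polynomial construction mentioned above can be sketched in a few lines; it is a simpler relative of the algebraic-curve matrices of [23]. The code below is only an illustration, and the parameters p = 5, r = 2 are our own choice. Columns are indexed by polynomials of degree ≤ r over the prime field F_p; two distinct such polynomials agree on at most r points, which bounds the coherence by r/p.

```python
import numpy as np
from itertools import product

def devore_matrix(p, r):
    """Binary p^2 x p^(r+1) sensing matrix built from degree-<=r polynomials
    over the prime field F_p (DeVore-style construction); coherence <= r/p."""
    cols = []
    for coeffs in product(range(p), repeat=r + 1):      # one polynomial per column
        col = np.zeros((p, p))
        for x in range(p):
            y = sum(c * x**i for i, c in enumerate(coeffs)) % p   # Q(x) in F_p
            col[x, y] = 1.0                              # mark the graph point (x, Q(x))
        cols.append(col.ravel())
    return np.array(cols).T / np.sqrt(p)                 # unit-norm columns

Phi = devore_matrix(5, 2)
print(Phi.shape)  # (25, 125)

# Coherence check: distinct polynomials of degree <= 2 agree on at most 2 points,
# so normalized inner products are at most 2/5.
G = np.abs(Phi.T @ Phi)
np.fill_diagonal(G, 0.0)
print(G.max())
```

Unlike a sampled Gaussian matrix, this coherence bound is provable and checkable, and the matrix is binary and cheap to store.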
2.4. Motivation
In localization problems, there are two main techniques for selecting a sensing matrix Φ: access point selection with orthogonalization, and signal recovery using ℓ1-minimization. Since random matrices have the critical drawbacks given in Section 2.3, we have to construct a matrix that is valid for compressive sensing measurement and guarantees recovery. Thus, exploiting specific structures of deterministic sensing matrices is required to overcome these problems of random sensing matrices. In this paper, using the results in [23], we obtain a binary matrix Φ which achieves better recovery performance by choosing an appropriate curve. This sensing matrix Φ is obtained by taking projections from the chosen curve.
Denote as the finite field of order p where p is a prime power. is an algebraic curve over . Let be the set of rational points of
Note that, given , the set has elements. Choose a divisor A of such that and . Then, by using the Riemann-Roch theorem in the Appendix, we have . Order the elements of lexicographically as . For any , define a binary column vector which has the form
where
for and . The column vector forms a matrix . Then we have the following theorem.
Theorem 1.
Suppose that ; then Φ is a sensing matrix satisfying the RIP with coherence .
To obtain an optimal solution, the matrix Φ must satisfy the RIP of order at least 2k. A Gaussian matrix is one of this type; however, it is a random matrix. Thus, by using Theorem 1, we try to construct deterministic sensing matrices for a smaller range of k.
3. Problem Formulation
Consider the two-dimensional localization problem: in a sensor network with N grid points, K targets are located in an isotropic area at unknown positions. Wireless devices measure RSS from all the targets, and the number of available measurements is much smaller than the grid size.
Since at each time instant a target is located at a specific grid point, the localization problem can be modeled as a sparse problem. Assume that each target is located at one of the Reference Points (RPs), and the location of a target over the grid is represented by a vector as follows:
Each such vector has all elements equal to zero, except for a single entry equal to one at index n, where n is the index of the grid point at which the ith target is located. Then, with K targets, we obtain an N × K matrix
Denote as the measured signal recorded for the ith target. Each signal is then assigned a distinct deterministic measurement matrix. The measurement can be written as
Collecting all these equations, we can write the following form:
where n is a measurement noise. In detail, each term in (20) can be expressed as
where , , , and .
Our goal is to determine the locations of all targets accurately only using a small number of measurements with a fast and low-complexity algorithm.
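The grid formulation above can be sketched as follows; the grid size, target cells, and variable names are our own illustrative choices:

```python
import numpy as np

# Illustrative setup: a G x G grid flattened to N = G*G points; each target
# occupies one grid cell, giving a 1-sparse indicator vector per target.
G = 10
N = G * G
targets = [(2, 7), (5, 5), (9, 1)]           # (row, col) cells, K = 3 targets

K = len(targets)
Theta = np.zeros((N, K))                     # N x K matrix of indicator columns
for i, (row, col) in enumerate(targets):
    Theta[row * G + col, i] = 1.0            # ith column is 1-sparse

x = Theta.sum(axis=1)                        # combined K-sparse location vector
print(np.count_nonzero(x))  # 3
```

Recovering the support of x (or the columns of Theta) from compressive RSS measurements is exactly the sparse recovery problem stated above.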
4. Main Results
With the aforementioned notions, we can write an arbitrary sparse signal as
where P is the RSS sparsity basis matrix. It is clear that x is a K-sparse vector, since only K of its entries are nonzero.
4.1. RSS Information Matrix P
The sparsity basis matrix P can be generated using the radio propagation channel model as
Each is an submatrix whose element has the form
P_r(d) = P_0 − 10 n_p log_10(d/d_0),
where P_r(d) is the received signal power in dBm, n_p is the path loss exponent, d is the real distance between transmitter and receiver, and d_0 is the reference distance for the far field with received power P_0.
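A sketch of generating RSS entries with the log-distance path loss model above; the sensor positions, grid size, and parameter values (reference power −30 dBm, d_0 = 1 m, path loss exponent 3) are our own illustrative assumptions, not the paper's:

```python
import numpy as np

def rss_dbm(d, p0_dbm=-30.0, d0=1.0, n_p=3.0):
    """Log-distance path loss: received power in dBm at distance d (meters)."""
    d = np.maximum(d, d0)                  # clamp distances inside the reference range
    return p0_dbm - 10.0 * n_p * np.log10(d / d0)

# RSS sparsity basis: P[j, n] = power sensor j would observe from grid point n
G = 10                                     # 10 x 10 grid over a 30 m square
xs, ys = np.meshgrid(np.linspace(0, 30, G), np.linspace(0, 30, G))
grid = np.column_stack([xs.ravel(), ys.ravel()])          # N = 100 grid points

sensors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
dist = np.linalg.norm(sensors[:, None, :] - grid[None, :, :], axis=2)
P = rss_dbm(dist)                          # 4 x 100 RSS map in dBm
print(P.shape)  # (4, 100)
```

Each row of P plays the role of one submatrix row above: the RSS fingerprint a sensor would record from every candidate grid point.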
4.2. Sensing Matrix Φ
Reference [20] gives a construction of sensing matrices using polynomials over finite fields, which can also be extended from a finite field to its extension . We exploit this idea in our problem. Note that algebraic curves over may have good properties such as large dimension and minimum distance, whereas the restricted algebraic curves may be very poor, since the dimension of may be less than the dimension of . Hence, by using the following theorem, we can replace with to construct a matrix of large size. In [30], the author shows the following.
Theorem 2.
Denote F as algebraic function field of genus g whose constant field is the finite field , as an algebraic function field, as a finite separable extension, and as the constant field of . Thus, is a finite separable extension as well. For each , there exists exactly one extension of degree r with and one sets . Therefore, one has the following.
is Galois with a cyclic Galois group of order r ( is a cyclic extension of degree r). The Galois group Gal is generated by Frobenius automorphism σ which acts on by .
is the full constant field of .
Let be a place of degree m. Then with , pairwise distinct places and .
If T is a perfect field, denote as the number of places of degree one in the constant field extension . With special case , we have
That means each of the three places of of degree one has a unique extension of degree one in and so on. Also, several explicit examples of function fields are considered, namely, elliptic and hyperelliptic function fields and Kummer and Artin-Schreier extensions of the rational function field such as the following.
Consider ; there is only one elliptic function field with five places of of genus :
Let with
genus , and the number of places of degree one is .
For instance, in (26) the number of places of degree one of the function field is given by [30]
where are the reciprocals of the roots of
with , . Hence
Then, for a given integer s satisfying , we have an sensing matrix with
Thus,
and the matrix satisfies the RIP of order . Therefore, we set
According to Section 2, our goal is achieved by calling the OMP operation described in Algorithm 1.
5. Numerical Evaluation
We performed computer simulations and provide in this section numerical evaluation results for the proposed method using RSS measurements in indoor environments. Targets were assumed to be randomly placed in a 30 m × 30 m square, which we divided into a grid. In our work, we used the OMP algorithm to compute the sparse solution. Without loss of generality, we assumed that the number of measurements for each target was the same; that is, . We chose the elliptic curve in (26) with given r and s.
Figure 2 shows the recovery ratio of the deterministic matrix formed by the elliptic curve and of the i.i.d. Gaussian matrix. With and , we obtained a sensing matrix. For each sparsity level k, input signals were used to compute the ratio. The results show that the performance of the proposed deterministic matrix is comparable to that of the i.i.d. Gaussian matrix.
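A recovery-ratio experiment of this kind can be sketched as follows. The matrix here is a random Gaussian stand-in (our own 25 × 125 size, not necessarily the paper's), and all seeds, sizes, and trial counts are our own illustrative choices:

```python
import numpy as np

def omp(A, y, k):
    """Minimal OMP that stops after k atoms (a sketch, not the paper's code)."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def recovery_ratio(A, k, trials=50, rng=None):
    """Fraction of random k-sparse signals exactly recovered by OMP."""
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        x = np.zeros(A.shape[1])
        idx = rng.choice(A.shape[1], size=k, replace=False)
        x[idx] = rng.standard_normal(k)
        hits += np.allclose(omp(A, A @ x, k), x, atol=1e-6)
    return hits / trials

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 125)) / 5.0   # stand-in sensing matrix
ratio = recovery_ratio(A, k=3)
print(ratio)
```

Sweeping k and plotting the resulting ratios for the deterministic and random matrices reproduces the kind of comparison shown in Figure 2.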
Recovery ratio of a random Gaussian matrix and the proposed deterministic matrix formed by elliptic curve with the same size .
Figure 3 illustrates an example of actual multiple target localization, with 20 targets randomly deployed in the area. In our simulations, we employed the RSS indoor model defined in [7]. We have and , and the signal-to-noise ratio (SNR) was equal to 25 dB. Each location was observed over 100 simulations. The result shows the effectiveness of location recovery from compressive measurements using both the Gaussian random matrix and the proposed deterministic matrix with the elliptic curve. The number of RSS measurements required was at least in the deterministic sensing matrix case and in the i.i.d. Gaussian matrix case. The simulation result shows that both approaches achieve a comparably high level of accuracy with a similar number of measurements. However, note that only the deterministic construction is feasible in practical use.
Performance of position recovery for 20 targets.
Figure 4 shows localization error (LE) for different SNR levels. Here, the LE is defined as
In this simulation, K was fixed to 5 and the values of were the same as in the previous simulations. As shown in Figure 4(a), when the RSS measurements were heavily affected by noise at small SNR (0 dB or 5 dB), the LEs of the two approaches are almost the same. However, when the SNR becomes larger (10 dB or 25 dB), the noise affects the LE performance less. At large SNR, the LE of localization using the deterministic matrix decreases drastically as the number of measurements becomes large, as clearly observed in Figure 4(b).
Localization errors of two approaches with various SNR levels.
The simulation results indicate that multiple target localization using the proposed deterministic sensing matrix can achieve LE performance comparable to that of a random sensing matrix. This can also be seen in the summary statistics in Table 1.
Table 1: The minimum values of location error in meters with respect to various SNR levels using the two approaches.

SNR level (dB)    Random    Deterministic
0                 0.01      0.025
5                 0.025     0.02
10                0.019     0.03
25                0.03      0.024
6. Conclusion
This paper investigated the problem of multiple target localization in WSNs based on compressive sensing using deterministic sensing matrices. We showed that the sensing matrix obtained by taking projections from a chosen curve is valid for our problem formulation. Experimental results show that OMP using the proposed deterministic sensing matrix compares favorably with existing random sensing matrices in reconstruction performance, while achieving high accuracy. This approach offers an efficient localization process, and it can also be applied in other applications such as error correction, imaging, and secure communication.
Appendix
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was partly supported by the MSIP, Korea, under the Convergence-ITRC Support Program (IITP-2015-H8601-15-1001) supervised by the IITP, and by the Human Resources Development program (no. 20134010200570) of the KETEP grant funded by the MOTIE, Korea.
References
1. Y. Shang, W. Ruml, Y. Zhang, and M. Fromherz, "Localization from connectivity in sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 11, pp. 961–974, 2004.
2. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor networks: a survey," Computer Networks, vol. 38, no. 4, pp. 393–422, 2002.
3. J. Liu, Y. Zhang, and F. Zhao, "Robust distributed node localization with error management," in Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC '06), pp. 250–261, Florence, Italy, May 2006.
4. X. Ji and H. Zha, "Sensor positioning in wireless ad-hoc sensor networks using multidimensional scaling," in Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '04), vol. 4, pp. 2652–2661, Hong Kong, March 2004.
5. N. Patwari and A. O. Hero III, "Manifold learning algorithms for localization in wireless sensor networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), pp. III-857–III-860, Montreal, Canada, May 2004.
6. Q. Shi, C. He, H. Chen, and L. Jiang, "Distributed wireless sensor network localization via sequential greedy optimization algorithm," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3328–3340, 2010.
7. G. Mao, B. Fidan, and B. D. O. Anderson, "Wireless sensor network localization techniques," Computer Networks, vol. 51, no. 10, pp. 2529–2553, 2007.
8. N. Patwari, A. O. Hero III, M. Perkins, N. S. Correal, and R. J. O'Dea, "Relative location estimation in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 51, no. 8, pp. 2137–2148, 2003.
9. Y. Shang, W. Ruml, Y. Zhang, and M. Fromherz, "Localization from connectivity in sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 11, pp. 961–974, 2004.
10. L. Doherty, L. El Ghaoui, and K. S. J. Pister, "Convex position estimation in wireless sensor networks," in Proceedings of the 20th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '01), vol. 3, pp. 1655–1663, Anchorage, Alaska, USA, April 2001.
11. P. Biswas, T.-C. Liang, K.-C. Toh, Y. Ye, and T.-C. Wang, "Semidefinite programming approaches for sensor network localization with noisy distance measurements," IEEE Transactions on Automation Science and Engineering, vol. 3, no. 4, pp. 360–371, 2006.
12. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
13. E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, pp. 1433–1452, Madrid, Spain, August 2006.
14. C. Feng, S. Valaee, and Z. Tan, "Multiple target localization using compressive sensing," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '09), pp. 1–6, Honolulu, Hawaii, USA, 2009.
15. R. Zetik, S. Jovanoska, and R. Thomä, "Simple method for localisation of multiple tag-free targets using UWB sensor network," in Proceedings of the IEEE International Conference on Ultra-Wideband (ICUWB '11), pp. 268–272, Bologna, Italy, September 2011.
16. C. Feng, S. Valaee, and Z. Tan, "Localization of wireless sensors using compressive sensing for manifold learning," in Proceedings of the IEEE 20th Personal, Indoor and Mobile Radio Communications Symposium (PIMRC '09), pp. 2715–2719, Tokyo, Japan, September 2009.
17. B. Zhang, X. Cheng, N. Zhang, Y. Cui, Y. Li, and Q. Liang, "Sparse target counting and localization in sensor networks based on compressive sensing," in Proceedings of the Conference on Computer Communications (INFOCOM '11), Shanghai, China, April 2011.
18. E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
19. V. Saligrama, "Deterministic designs with deterministic guarantees: Toeplitz compressed sensing matrices, sequence design and system ID," IEEE Transactions on Information Theory, doi: 10.1109/TIT.2012.2206951.
20. Z. Xu, "Deterministic sampling of sparse trigonometric polynomials," Journal of Complexity, vol. 27, no. 2, pp. 133–140, 2011.
21. R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 358–374, 2010.
22. L. Applebaum, S. D. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: deterministic compressed sensing measurements for fast recovery," Applied and Computational Harmonic Analysis, vol. 26, no. 2, pp. 283–290, 2009.
23. S. Li, F. Gao, G. Ge, and S. Zhang, "Deterministic construction of compressed sensing matrices via algebraic curves," IEEE Transactions on Information Theory, vol. 58, no. 8, pp. 5035–5041, 2012.
24. T. L. N. Nguyen and Y. Shin, "Deterministic sensing matrices in compressive sensing: a survey," The Scientific World Journal, vol. 2013, Article ID 192795, 2013.
25. S. Gurevich, R. Hadani, and N. Sochen, "On some deterministic dictionaries supporting sparsity," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 859–876, 2008.
26. M. A. Iwen, "A deterministic sub-linear time sparse Fourier algorithm via non-adaptive compressed sensing methods," in Proceedings of the 19th Symposium on Discrete Algorithms (SODA '08), pp. 20–29, San Francisco, Calif, USA, January 2008.
27. S. D. Howard, A. R. Calderbank, and S. J. Searle, "A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes," in Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS '08), pp. 11–15, Princeton, NJ, USA, March 2008.
28. J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
29. R. Gribonval and P. Vandergheynst, "On the exponential convergence of matching pursuits in quasi-incoherent dictionaries," IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 255–261, 2006.
30. H. Stichtenoth, Algebraic Function Fields and Codes, Springer, 2nd edition, 2009.