

www.ietdl.org
Published in IET Communications
Received on 8th June 2009
Revised on 2nd June 2010
doi: 10.1049/iet-com.2009.0746

ISSN 1751-8628

Fault tolerant spatio-temporal fusion for moving vehicle classification in wireless sensor networks
C.T. Liu1, H. Huo1, T. Fang1, D.R. Li2
1 Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, People's Republic of China
E-mail: liuchunting@gmail.com

Abstract: Wireless sensor networks (WSNs) can implement complicated tasks through collaboration among multiple sensor nodes. The low-cost sensors in WSNs often generate noisy and even faulty measurements, which degrade network performance. Developing collaborative signal processing (CSP) algorithms with high fault tolerance is therefore necessary for the increasingly deployed WSNs. In this study, the authors propose a novel fault tolerant fusion scheme that implements reliable vehicle classification by integrating fault detection and correction with a spatio-temporal fusion structure. Sensor faults are detected at the fusion centre, and the fault detection results are fed back to the local sensors to update subsequent classification results. A Dempster-Shafer theory based fault correction strategy is devised to utilise the fusion centre feedback. Simulation results demonstrate that the proposed scheme keeps more than 95% of the classification results correct when no more than 30% of the sensors are faulty, and that it achieves a better fault detection rate and false alarm rate than the optimum threshold Bayesian fault detection scheme.

1 Introduction

Wireless sensor networks (WSNs) composed of a number of low-cost sensors are increasingly deployed for mission-critical applications such as target detection, tracking and classification [1]. These applications often impose stringent quality-of-service requirements, including a high detection/classification rate as well as good tolerance to sensor faults. However, we face the following challenges in realising reliable sensor network applications. First, owing to the complicated dynamics of the environment and the low-cost sensor hardware, each individual sensor often suffers from heavy stochastic noise. Second, because of limited communication capability and power supply, sensors often yield faulty readings. As a result, it is desirable to develop reliable collaborative signal processing (CSP) algorithms which take advantage of collaboration among multiple sensors to meet the performance requirements imposed by mission-critical applications. Data fusion, as a typical CSP technique, can improve the performance of WSN surveillance by jointly processing the unreliable measurements from multiple sensors [1-3]. In many applications, combining time-series measurements at local sensors before sending them for fusion can reduce communication traffic and improve the reliability of individual sensors. For instance, in a moving target surveillance application, the periodic measurements taken by spatially distributed sensors form a spatio-temporal observation of the moving object, which can be utilised to
434 © The Institution of Engineering and Technology 2011

improve the surveillance performance of the network [1, 2]. Various spatial and spatio-temporal data fusion schemes have been proposed in previous literature [2, 4-8] to improve the sensing performance of WSNs. In particular, various fusion algorithms are devised to achieve a certain level of fault tolerance, such as selectively fusing a portion of the data, or giving sensors different confidence weights. However, the correctness of the fused data cannot be guaranteed when the sensors' fault probability is high. Therefore an explicit fault detection process is in demand for efficiently fusing heavily faulty sensor measurements. Fault detection algorithms have been investigated in respect of detection precision, energy efficiency, and online and distributed data processing in WSNs [9-14]. Many existing fault detection algorithms detect faulty sensors and exclude them from the data processing procedure. However, in many cases, faulty sensors also provide useful information for sensor network CSP. Different from previous fault tolerance schemes that directly exclude faulty measurements, in this paper we aim to improve system performance and reliability by making use of the error-prone sensor data together with the fault detection results during data fusion. To address the aforementioned challenges, we propose a fault tolerant spatio-temporal data fusion scheme that exploits the correlated nature of sensor measurements for moving vehicle classification. In our scheme, faulty sensors are detected at the fusion centre after spatial fusion, and the fault detection results are utilised to conduct subsequent
IET Commun., 2011, Vol. 5, Iss. 4, pp. 434-442 doi: 10.1049/iet-com.2009.0746

temporal fusion at local sensors. When a moving vehicle is present, the sensors close to the vehicle start to collect measurements and make decisions continuously. Our fault tolerant fusion scheme is carried out iteratively to combine the spatio-temporal sensor measurements. Data fusion of the time-series decisions improves the reliability of local sensors. Afterwards, the final decision on the vehicle type is yielded by spatial fusion based on the local decisions from multiple sensors. In particular, the fusion centre performs fault detection by assuming most of the sensors are correct. Then the fault detection results are fed back to the local sensors to adjust the next temporal fusion result. A Dempster-Shafer (DS) theory [15] based fault correction strategy is devised to utilise the fusion centre feedback, which can effectively deal with inconsistent local decisions. The computation and communication overheads of the proposed scheme are affordable for low-power sensors. First, only hard decision labels are transmitted to the fusion centre, and the fusion centre returns a 1-bit status to indicate the correctness of each local decision. Second, the computation overhead of our fusion scheme is also low because of simple and easily implementable fusion models. In our simulation experiments, classification and fault detection performance are evaluated in terms of classification rate and fault detection/false alarm rate against different sensor fault probabilities. Simulation results show that the proposed fault tolerance mechanism can significantly improve the accuracy of classifying moving targets.

The rest of this paper is organised as follows. Section 2 reviews related work. Section 3 introduces the background of the moving vehicle classification problem. Section 4 presents the proposed spatio-temporal fusion and fault detection procedure. Section 5 analyses and evaluates the fault tolerance of this method. Section 6 presents simulation results and Section 7 concludes this paper.

2 Related work

Data fusion in WSNs has attracted considerable research attention in recent years. Various fusion schemes have been devised from diverse aspects of WSNs, such as the spatial distribution of sensors [5], lossy communication channels [16, 17], and constrained system resources [18, 19]. There is also a vast amount of literature focusing on data fusion for moving target classification [2-5, 20, 21]. The network performance is improved by the collaboration of multiple sensors. However, the low-cost sensor nodes in WSNs are likely to exhibit unreliable behaviour, and faulty sensor measurements degrade the reliability of fusion results. Therefore developing fault tolerant fusion algorithms is important for reliable sensor network applications. Fault tolerant fusion rules are studied in [7, 22]. In [7], if faulty sensors exist, a number of extreme data points are dropped before computing the average of neighbouring sensors' measurements. However, determining the number of dropped data points is a challenging issue for the algorithm. In [22], fault tolerant fusion for distributed multi-class classification is implemented via error correcting codes, but the code matrix needs to be trained before deployment, which is not practical in a dynamic network environment. In contrast, we propose an online strategy that deals with faulty sensors by detecting and correcting faults explicitly during a spatio-temporal data fusion procedure.

The problem of detecting faults from multiple sensor nodes has received extensive investigation [9-14]. Many fault tolerant strategies detect faulty sensors and exclude them from the data processing procedure. However, in many cases, faulty sensors also provide useful information for CSP in sensor networks. Krishnamachari and Iyengar [23] first address the fault detection and correction problem for event region detection. Sensor readings are usually corrected by collaborating with neighbouring sensors [23, 24]. Ding et al. [25] explore aggregation in both the temporal and spatial dimensions to correct sensor faults, in which the two dimensions are considered separately. In this paper, we detect and correct sensor faults iteratively based on the consecutive sensor data of distributed sensors, which achieves progressive improvement of the system performance.

In a moving target surveillance scenario, sensor observations are usually spatio-temporally correlated [26]. The signal processing algorithms in such scenarios can make use of both the temporal and spatial correlation characteristics of the sensor measurements [13, 25-27]. In previous literature, spatio-temporal fusion has been widely employed in multi-camera systems to extract trajectory or semantic information [28-30]. There is also research on spatio-temporal fusion in distributed sensor networks. In [4], both temporal fusion and spatial fusion are performed to improve the classification accuracy of sensor networks. Hong and Lynch [31] develop a recursive spatio-temporal fusion algorithm utilising DS theory for target identification. In [8], three strategies of spatio-temporal decision fusion are compared in terms of their implementation costs and system performance; the analysis shows that the time-space fusion scheme is preferable in communication-constrained WSNs. Although spatio-temporal fusion algorithms have received extensive attention, integrating a fault tolerant mechanism into the fusion process based on consecutive sensor measurements has seldom been addressed. We adopt a time-space fusion scheme similar to that depicted in [8] to implement our fault tolerant data fusion scheme.

3 Problem formulation

A variety of signal modalities, such as acoustic, seismic and infrared [3], emitted by different moving vehicles carry information unique to the vehicle type, which can be used to extract distinct features for target classification. We consider the scenario of a vehicle moving through the sensor field. The sensor nodes are grouped into clusters and the cluster head acts as the fusion centre. Assume a cluster of n sensor nodes {S_i, i = 1, ..., n} participates in the classification process. Distributed sensor nodes take measurements of a signal periodically; the measurement of S_i at time t is modelled as [2]

x_i(t) = s_i(t) + n_i(t)    (1)

where s_i(t) is the emitted signal of a moving vehicle measured at sensor S_i and n_i(t) is additive Gaussian noise. The processing unit of a sensor node can extract features in the time or frequency domain and use these features to distinguish the vehicle type. Let X_i denote an l-dimensional feature vector extracted by S_i from a period of raw samples, and let C_j represent the jth type of vehicle in an M-ary classification problem. The a priori probability of C_j is denoted as P(C_j). Assuming the distribution function of X_i for each class is known, the conditional probability of X_i
committed to C_j can be represented by P(X_i | C_j). Each sensor makes a classification decision locally through maximum a posteriori (MAP) estimation [32]

u_i = arg max_{j=1,...,M} P(C_j | X_i)    (2)

and we obtain the decision label D_i as

D_i = g(u_i)    (3)

where P(C_j | X_i) is the posterior probability of C_j derived from Bayes' theorem, and the function g(u_i) outputs an M-dimensional vector in which only the u_i-th element is 1 and the others are 0. If the sensor nodes are reliable, observations of the same object should yield consistent classification results. However, faults are inevitable in WSNs. A taxonomy for the classification of faults in sensor networks has been presented in [11]. In this study, we assume the sensor software is fault tolerant and the communication links between sensor nodes and the fusion centre are reliable. We only consider sensors that generate arbitrary faulty decisions with probability d.
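As a concrete illustration of the local decision rule in (2)-(3), the sketch below assumes Gaussian class-conditional feature densities with known means and unit covariance; the paper only assumes the class distributions are known, so the density model, function and variable names here are our illustrative choices.

```python
import math

def map_decision(x, class_means, priors):
    """Local MAP classification, eqs. (2)-(3): pick the class maximising
    P(C_j | X_i) ∝ P(X_i | C_j) P(C_j), then one-hot encode the decision.
    Illustrative assumption: Gaussian features with unit covariance, so
    log P(X | C_j) = -||X - mu_j||^2 / 2 up to a shared constant."""
    log_post = []
    for mu, p in zip(class_means, priors):
        sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
        log_post.append(-0.5 * sq + math.log(p))     # unnormalised log posterior
    u = max(range(len(priors)), key=lambda j: log_post[j])  # eq. (2)
    D = [0] * len(priors)
    D[u] = 1                                          # eq. (3): D_i = g(u_i)
    return u, D
```

For a feature vector near the second class mean, the decision label comes out as the second one-hot vector, matching the g(u_i) encoding used throughout the paper.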

temporal updating are performed at the local sensor node. D^tf_{i,k} is the temporal fusion result of the kth block of time-series classification decisions. D^tu_{i,k} is the temporal updating output derived from D^tf_{i,k}, D^tu_{i,k-1} and λ_i. After temporal updating, D^tu_{i,k} is sent to the fusion centre and saved locally for the next temporal updating step. Note that the temporal updating step begins from k = 2: the first temporal updating result D^tu_{i,2} is generated from the first two temporal fusion results D^tf_{i,1} and D^tf_{i,2}, assuming λ_i = 1. Fig. 1b shows the spatial fusion procedure. The fusion centre not only generates the spatial fusion output D_k, but also evaluates the correctness of each received local decision, and then feeds the correctness status λ_i back to the sensor node. The three steps are detailed as follows.

4.1 Temporal fusion

In the temporal fusion step, a block of time-series decisions is combined at the local sensor node. The decision label of sensor S_i at time t is denoted as D_{i,t}, which is derived from (2) and (3). The fusion rule is expressed as

D^tf_{i,k} = g'(j), if Σ_{t=(k-1)T+1}^{kT} D_{i,t}(j) > T/2, j = 1, 2, ..., M
D^tf_{i,k} = g'(M+1), otherwise    (4)

where T is the length of the fused time-series decisions, the kth block includes the D_{i,t}'s for t ∈ [(k-1)T + 1, kT], and g'(j) outputs an (M+1)-dimensional vector in which the jth element is 1 and the others are 0. When more than half of the D_{i,t}'s belong to the same class, a classification decision is made; otherwise the (M+1)th element of D^tf_{i,k}, which indicates an uncertain decision, is set to 1 and the others to 0.

4.2 Temporal updating
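A minimal sketch of the majority-vote rule in (4), with decisions represented as one-hot Python lists; the data layout and function name are ours, not the authors' code.

```python
def temporal_fusion(decisions, M):
    """Majority vote over one block of T one-hot local decisions, eq. (4).
    Returns an (M+1)-dim label g'(j): class j wins only if it received
    strictly more than T/2 votes; otherwise the (M+1)-th 'uncertain'
    element is set to 1."""
    T = len(decisions)
    votes = [sum(d[j] for d in decisions) for j in range(M)]
    out = [0] * (M + 1)
    for j, v in enumerate(votes):
        if v > T / 2:
            out[j] = 1
            return out
    out[M] = 1  # no class has a strict majority: uncertain decision
    return out
```

With T = 3 and two decisions for class 1 the block yields a definite label, while a 1-1 split over T = 2 yields the uncertain label, exactly as (4) prescribes.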

The temporal updating step generates the kth temporal updating result D^tu_{i,k} based on D^tf_{i,k}, D^tu_{i,k-1} and λ_i. The correctness status λ_i, which indicates the consistency of D^tu_{i,k-1} with the spatial fusion result, is a binary value. Intuitively, if λ_i = 1 (D^tu_{i,k-1} is consistent with the spatial fusion result), we consider D^tu_{i,k-1} to be correct; otherwise, it is faulty. We use DS theory [15] to implement temporal updating. DS theory is an appropriate tool for handling and representing uncertain information. The frame of discernment is defined as C_Θ = {C_1, ..., C_M}. Denote m_{i,k-1} as the basic probability assignment (BPA) function of D^tu_{i,k-1}. Let C_j represent the class type derived from D^tu_{i,k-1}, that is, for 1 ≤ j ≤ M there is D^tu_{i,k-1}(j) = 1. The BPAs of D^tu_{i,k-1} are defined as

m_{i,k-1}(C_j) = D^tu_{i,k-1}(j) if λ_i = 1, and 0 if λ_i = 0
m_{i,k-1}(C_j^C) = 0 if λ_i = 1, and D^tu_{i,k-1}(j) if λ_i = 0
m_{i,k-1}(C_m) = D^tu_{i,k-1}(m), m = 1, ..., M, m ≠ j
m_{i,k-1}(C_Θ) = D^tu_{i,k-1}(M + 1)    (5)

where C_j^C = C_Θ \ C_j is the complement of C_j.

Fig. 1 Illustration of the fault tolerant spatio-temporal fusion scheme
a Temporal fusion and temporal updating at the local sensor
b Framework of spatial fusion

Define m_{i,k} as
the BPA function of D^tf_{i,k}:

m_{i,k}(C_m) = D^tf_{i,k}(m), m = 1, ..., M
m_{i,k}(C_Θ) = D^tf_{i,k}(M + 1)    (6)

To avoid evidence conflict between m_{i,k-1} and m_{i,k}, we use the combination rule proposed by Yager [33], which allocates the conflicting evidence to the frame of discernment C_Θ:

m_i(C_j) = Σ_{A∩B=C_j} m_{i,k-1}(A) m_{i,k}(B)
m_i(C_Θ) = m_{i,k-1}(C_Θ) m_{i,k}(C_Θ) + Σ_{A∩B=∅} m_{i,k-1}(A) m_{i,k}(B)    (7)

where ∅ denotes the empty set and A, B ⊆ C_Θ. The temporal updating decision of block k is made by

u^tu = arg max_{j=1,...,M,Θ} m_i(C_j)    (8)

The decision label is derived as

D^tu_{i,k} = g'(u^tu)    (9)

After decision making, we send D^tu_{i,k} to the fusion centre based on the following principle:
1. If only C_j, j = 1, ..., M corresponds to the maximum element in m_i, sensor S_i sends D^tu_{i,k} to the fusion centre to perform spatial fusion.
2. If m_i(C_Θ) > m_i(C_j) for all j = 1, ..., M, or more than one class attains the maximum decision value, an uncertain decision is made. S_i does not send D^tu_{i,k} to the fusion centre, and we set D^tu_{i,k} = g'(M + 1) and λ_i = 1. In such a case, S_i will combine the next block of temporal fusion results to make a definite decision.

4.3 Spatial fusion and fault detection

The fusion centre receives the temporal updating result D^tu_{i,k} from sensor i if D^tu_{i,k} is a definite decision. Assume the fusion centre receives n′ local decisions; the spatial fusion result is made based on

D_k = g′(j), if Σ_{i=1}^{n′} D^tu_{i,k}(j) > n′/2, j = 1, 2, ..., M
D_k = g′(M + 1), otherwise    (10)

After spatial fusion, the fusion centre uses D_k to evaluate the correctness of each D^tu_{i,k} and sends the status λ_i back to sensor i to indicate the correctness of D^tu_{i,k}. If D^tu_{i,k} is consistent with D_k, λ_i = 1 is sent back to sensor S_i; otherwise, λ_i = 0 is returned.

5 Algorithm analysis

All three steps of this fault tolerant scheme can improve classification performance. This section analyses how the three steps interact with each other and impact the classification performance.

In temporal fusion, when more than half of the time-series decisions are correct, the fusion output is correct. Let d be the sensor fault probability; the probability of generating a correct temporal fusion result is

p_tfC = Σ_{i=⌊T/2⌋+1}^{T} C(T, i) (1 − d)^i d^{T−i}    (11)

where C(T, i) denotes the binomial coefficient. In a similar way, if more than half of the time-series decisions belong to the same wrong class, the temporal fusion result is faulty. To derive the expression for p_tfF, we consider the cases M = 2 and M > 2 separately. When M = 2, the probability of generating a faulty temporal fusion result is

p_tfF = Σ_{i=⌊T/2⌋+1}^{T} C(T, i) d^i (1 − d)^{T−i}    (12)

If M > 2, we assume that the probabilities of D_{i,t} belonging to each of the incorrect classes are all equal to d/(M − 1). Given that, among the T time-series local decisions, j decisions are correct, i decisions belong to the same incorrect class and the remaining T − i − j decisions belong to the other M − 2 classes, there are M − 1 vehicle-type options for the i decisions and M − 2 for each of the remaining T − i − j decisions. Therefore the probability of obtaining a faulty temporal fusion result is

p_tfF = (M − 1) Σ_{i=⌊T/2⌋+1}^{T} C(T, i) (d/(M − 1))^i Σ_{j=0}^{T−i} C(T − i, j) (1 − d)^j ((M − 2)d/(M − 1))^{T−i−j}    (13)

If the number of decisions committed to each class is no greater than T/2, an uncertain decision is made. The probability of obtaining an uncertain temporal fusion result is

p_tfU = 1 − p_tfC − p_tfF    (14)

The temporal updating and spatial fusion steps interact with each other recursively. In the spatial fusion phase, the final decision D_k is determined by the D^tu_{i,k}'s from the local sensors. On the other hand, the spatial fusion result impacts temporal updating performance through the correctness status λ_i. We first discuss the temporal updating step. The temporal updating result D^tu_{i,k} is jointly determined by m_{i,k-1}, m_{i,k} and λ_i. Table 1 lists all the possible D^tu_{i,k} and their corresponding probabilities p_k^(h), where C means the temporal updating result is correct, and F and U represent faulty and uncertain decisions, respectively. When k = 2, local sensors initially update the temporal fusion results D^tf_{i,2} by using D^tf_{i,1} and assuming λ_i = 1. Note that in situations 8 and 11, two faulty evidences may generate an uncertain decision when M > 2. For λ_i = 1, D^tu_{i,k} will be faulty if m_{i,k-1} and m_{i,k} correspond to the same wrong class; otherwise, D^tu_{i,k} will be uncertain. The probabilities of obtaining faulty and uncertain temporal updating results are 1/(M − 1) p_k^(8) and (M − 2)/(M − 1) p_k^(8), respectively. Similarly, the probabilities of the temporal updating result being faulty and uncertain when λ_i = 0 are (M − 2)/(M − 1) p_k^(11) and 1/(M − 1) p_k^(11), respectively.
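To make the updating rules (5)-(9) concrete, the sketch below performs one temporal updating step: focal elements are represented as sets of class indices, the BPAs follow (5)-(6), Yager's rule (7) moves conflict mass to the frame, and the decision follows (8)-(9). The set-based representation and helper names are our assumptions, not the authors' implementation.

```python
def yager_update(D_prev, D_tf, lam, M):
    """One temporal updating step, eqs. (5)-(9). D_prev is the previous
    (M+1)-dim one-hot updated decision, D_tf the new temporal fusion
    result, lam the 1-bit feedback from the fusion centre."""
    theta = frozenset(range(M))          # frame of discernment C_Theta

    def bpa_prev():
        j = D_prev.index(1)
        if j < M:
            # eq. (5): trust class j if lam = 1, else shift its mass
            # to the complement C_j^C = C_Theta \ C_j
            return {frozenset([j]) if lam == 1 else theta - frozenset([j]): 1.0}
        return {theta: 1.0}              # previous decision was uncertain

    def bpa_tf():
        j = D_tf.index(1)
        return {frozenset([j]) if j < M else theta: 1.0}   # eq. (6)

    m1, m2 = bpa_prev(), bpa_tf()
    comb, conflict = {}, 0.0
    for A, ma in m1.items():             # eq. (7): Yager combination
        for B, mb in m2.items():
            inter = A & B
            if inter:
                comb[inter] = comb.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb      # conflicting mass goes to Theta
    comb[theta] = comb.get(theta, 0.0) + conflict

    # eqs. (8)-(9): best singleton wins, otherwise declare uncertain
    best_j, best_m = None, 0.0
    for j in range(M):
        mj = comb.get(frozenset([j]), 0.0)
        if mj > best_m:
            best_j, best_m = j, mj
    out = [0] * (M + 1)
    if best_j is not None and best_m > comb.get(theta, 0.0):
        out[best_j] = 1
    else:
        out[M] = 1
    return out
```

Notice how the three Table 1 behaviours emerge: agreeing evidence confirms the class, conflicting evidence with λ_i = 1 produces an uncertain label, and a λ_i = 0 flag on the previous decision lets a correct new block overturn it.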

Table 1  Temporal updating results and probabilities, C: correct, F: faulty, U: uncertain

(h)  m_{i,k-1}  m_{i,k}  λ_i  Result (M = 2 / M > 2)  p_k^(h) (k = 2)  p_k^(h) (k > 2)
(1)  C  C  1  C         p_tfC^2      p_tuC,k-1 p_tfC p_sfC,k-1
(2)  C  F  1  U         p_tfC p_tfF  p_tuC,k-1 p_tfF p_sfC,k-1
(3)  C  U  1  C         p_tfC p_tfU  p_tuC,k-1 p_tfU p_sfC,k-1
(4)  C  C  0  U         0            p_tuC,k-1 p_tfC (p_sfF,k-1 + p_sfU,k-1)
(5)  C  F  0  F         0            p_tuC,k-1 p_tfF (p_sfF,k-1 + p_sfU,k-1)
(6)  C  U  0  U         0            p_tuC,k-1 p_tfU (p_sfF,k-1 + p_sfU,k-1)
(7)  F  C  1  U         p_tfF p_tfC  p_tuF,k-1 p_tfC p_sfF,k-1
(8)  F  F  1  F / F, U  p_tfF^2      p_tuF,k-1 p_tfF p_sfF,k-1
(9)  F  U  1  F         p_tfF p_tfU  p_tuF,k-1 p_tfU p_sfF,k-1
(10) F  C  0  C         0            p_tuF,k-1 p_tfC (p_sfC,k-1 + p_sfU,k-1)
(11) F  F  0  U / F, U  0            p_tuF,k-1 p_tfF (p_sfC,k-1 + p_sfU,k-1)
(12) F  U  0  U         0            p_tuF,k-1 p_tfU (p_sfC,k-1 + p_sfU,k-1)
(13) U  C  1  C         p_tfU p_tfC  p_tuU,k-1 p_tfC
(14) U  F  1  F         p_tfU p_tfF  p_tuU,k-1 p_tfF
(15) U  U  1  U         p_tfU^2      p_tuU,k-1 p_tfU
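The block-level probabilities p_tfC, p_tfF and p_tfU that feed Table 1 follow directly from (11)-(14); a sketch (the function name is ours):

```python
from math import comb, floor

def temporal_fusion_probs(d, T, M):
    """Probabilities that one block of temporal fusion is correct, faulty
    or uncertain, eqs. (11)-(14): each of the T decisions is wrong
    independently with probability d, spread evenly over the M-1 wrong
    classes."""
    lo = floor(T / 2) + 1
    # eq. (11): strict majority of correct decisions
    p_c = sum(comb(T, i) * (1 - d) ** i * d ** (T - i) for i in range(lo, T + 1))
    if M == 2:
        # eq. (12): strict majority for the single wrong class
        p_f = sum(comb(T, i) * d ** i * (1 - d) ** (T - i) for i in range(lo, T + 1))
    else:
        # eq. (13): majority for one of the M-1 wrong classes, the rest
        # split between the correct class and the other M-2 wrong ones
        p_f = (M - 1) * sum(
            comb(T, i) * (d / (M - 1)) ** i
            * sum(comb(T - i, j) * (1 - d) ** j
                  * ((M - 2) / (M - 1) * d) ** (T - i - j)
                  for j in range(T - i + 1))
            for i in range(lo, T + 1))
    return p_c, p_f, 1.0 - p_c - p_f    # eq. (14)
```

For T = 3, d = 0.4 and M = 2 this gives p_tfC ≈ 0.648 and p_tfF ≈ 0.352, matching the 65% and 35% figures quoted in the discussion of Figs. 2 and 3.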

According to Table 1, the corresponding probabilities of obtaining correct, faulty and uncertain temporal updating results at block k are

p_tuC,k = p_k^(1) + p_k^(3) + p_k^(10) + p_k^(13)
p_tuF,k = p_k^(5) + p_k^(9) + p_k^(14) + (1/(M − 1)) p_k^(8) + ((M − 2)/(M − 1)) p_k^(11)
p_tuU,k = p_k^(2) + p_k^(4) + p_k^(6) + p_k^(7) + p_k^(12) + p_k^(15) + ((M − 2)/(M − 1)) p_k^(8) + (1/(M − 1)) p_k^(11)    (15)

Assume there are b sensors generating uncertain local decisions. Based on the spatial fusion rule, if more than half of the other n − b local decisions are correct, the fusion result is correct. Therefore the probability of obtaining a correct spatial fusion result at block k is

p_sfC,k = Σ_{b=0}^{n−1} C(n, b) p_tuU,k^b Σ_{j=⌊(n−b)/2⌋+1}^{n−b} C(n − b, j) p_tuC,k^j p_tuF,k^{n−b−j}    (16)

The respective probabilities of obtaining a faulty spatial fusion result when M = 2 and M > 2 are

p_sfF,k = Σ_{b=0}^{n−1} C(n, b) p_tuU,k^b Σ_{j=⌊(n−b)/2⌋+1}^{n−b} C(n − b, j) p_tuF,k^j p_tuC,k^{n−b−j}    (17)

and

p_sfF,k = (M − 1) Σ_{b=0}^{n−1} C(n, b) p_tuU,k^b Σ_{j=⌊(n−b)/2⌋+1}^{n−b} C(n − b, j) (p_tuF,k/(M − 1))^j Σ_{l=0}^{n−b−j} C(n − b − j, l) p_tuC,k^l ((M − 2) p_tuF,k/(M − 1))^{n−b−j−l}    (18)

The probability of the spatial fusion result being uncertain is

p_sfU,k = 1 − p_sfC,k − p_sfF,k    (19)

Fig. 2 Probability of correct temporal updating with respect to k for d = 0.4

From Table 1 we can see that conflict between m_{i,k-1} and m_{i,k} causes an uncertain temporal updating result, but the uncertainty can be corrected when it is updated by a correct temporal fusion result. So a sensor will always make a definite decision within a finite number of iteration steps as long as the fault or uncertainty of the sensor is temporary. Since temporal updating and spatial fusion are performed iteratively, we investigate the tendency of these two steps with respect to the number of iterations k. Fig. 2 shows the correct temporal updating probability p_tuC against k with different T and n for d = 0.4, M = 2. We can see that p_tuC stabilises after several blocks of temporal updating. The number of iterations needed for convergence is usually no greater than 10. Although p_tuC is rather low when T and n are small, the classification reliability is still improved, because a portion of the faults are converted to uncertain decisions. The number of faulty local decisions sent to the

fusion centre is actually reduced; for example, for T = 3 and n = 3, according to (11), (12) and (15), p_tfC and p_tfF are 65 and 35%, respectively, whereas p_tuC and p_tuF are 53 and 17%, respectively; the reduction of faulty local decisions is more distinct than that of correct ones. Fig. 3 shows p_sfC with respect to k. p_sfC is also improved and stabilises along with k. Therefore after a number of iterations, the performance of temporal updating and spatial fusion will be stabilised. Specifically, the probability of correct spatial fusion is over 99% for d = 0.4 even with moderate n and T (n = 15, T = 9).

Fig. 3 Correct probability of spatial fusion with respect to k for d = 0.4

When a sensor node makes a definite decision, the sensor sends an M-dimensional hard decision label to the fusion centre and the fusion centre feeds back a 1-bit correctness status. On the other hand, if the decision is uncertain, there is no information exchange between the sensor and the fusion centre. Accordingly, to make a final decision, the maximum number of messages that a sensor node sends to the fusion centre is M and the maximum number of messages for the fusion centre to transmit is n. The temporal fusion step combines T time-series decisions at each time, and the decisions are M-dimensional vectors, so the computation complexity of the temporal fusion step is O(TM). In the temporal updating step, two (M + 1)-dimensional BPAs are combined according to DS theory, so the computation complexity is O(M^2). The spatial fusion step combines n′ local decisions and compares them with the spatial fusion result; in consequence, the corresponding computation complexity is O(nM).

6 Performance evaluation

6.1 Performance metrics

We evaluate the proposed algorithm in terms of classification accuracy and fault detection ability through the classification rate, fault detection rate and false alarm rate, defined as follows. Since the final classification decision is derived from spatial fusion, we define the classification rate as

p_C = n_C / (n_C + n_F)    (20)

where n_C is the number of correct decisions derived from the spatial fusion results, n_F is the number of faulty decisions, and p_C indicates the percentage of correct decisions among all definite decisions. If a temporal updating result is faulty and the correctness status value is 0, a fault is detected; otherwise, if a correct temporal updating result is evaluated as faulty, it is a false alarm. The fault detection rate p_FD is defined as the ratio of the number of faults that are detected to the total number of faulty temporal updating results. The false alarm rate p_FA is defined as the ratio of the number of correct temporal updating results that are evaluated as faulty to the total number of correct temporal updating results. According to these definitions, fault tolerant algorithms with higher p_C and p_FD and lower p_FA are preferable.

6.2 Simulation results

We simulate the fusion procedure for vehicle classification using MATLAB. With reference to the real-world measurements in the SensIT project [3], the sensing ranges of acoustic sensors are from 26.01 to 119.07 m with an average of 66.05 m; the sensing range variation is due to various reasons, including different moving conditions and the complex environment. Therefore in this simulation we assume that n sensor nodes are randomly deployed in a 50 × 50 m^2 region and all the sensors within this region can make local decisions when a vehicle is moving through the area. Each sensor has an independent probability d of reporting a faulty decision. The number of vehicle types M is set to 2. We vary the number of sensor nodes n to simulate different network densities. All the simulation results in this section are averages over 200 runs. At each run, 120 consecutive decisions are generated for each sensor.

Fig. 4 compares the classification rate of our fault tolerant fusion method with the time-space decision fusion scheme depicted in [8] with respect to the fault probability d. The time-space fusion scheme is similar to the strategy proposed in this paper except that fault detection and fault correction are not performed. We can see that our fault tolerant fusion scheme performs better than the time-space fusion scheme for different n and T, and as d increases, the improvement becomes more significant. When d is no less than 30%, the classification rate improvement is more than 10% for n = 3 and T = 3, which shows that the fault detection
Fig. 4 Classification rate comparison of fusion with and without fault detection

Fig. 5 p_FD and p_FA with respect to d for T = 1 and n = 5

Fig. 6 Classification rate with respect to d given different T and n

Fig. 7 Fault detection and false alarm rate with respect to d for different n
a Fault detection rate
b False alarm rate

Fig. 8 Fault detection and false alarm rate with respect to d for different T
a Fault detection rate
b False alarm rate

and correction operations are effective in eliminating the impact of faulty sensor measurements.

In Fig. 5, we compare the proposed strategy with the optimum threshold Bayesian fault detection scheme [23, 34] to evaluate fault detection performance. It can be seen that our scheme has a higher fault detection rate and a lower false alarm rate than the Bayesian fault detection scheme. The reason is that, besides collaborating with neighbouring sensors, the spatio-temporal fault tolerant fusion scheme also exploits temporal-dimension measurements to correct faults. The maximum increment of p_FD is about 23% and the maximum decrement of p_FA is about 8%.

To study the impact of the block length T and the number of sensor nodes n on classification performance, Fig. 6 presents the classification rate p_C as a function of the sensor fault probability d, given different T and n. We can see that the correct classification rate p_C is over 95% when the sensor fault probability is no greater than 30%, and that the classification performance improves as T and n increase. Note that a larger T requires more time to make local decisions and a larger n causes higher network communication traffic. Therefore, given a classification performance demand and network density, the values of T and n can be selected to rationally allocate the network resources.

Fig. 7 shows the fault detection rate p_FD and false alarm rate p_FA for different n when T = 3. It is shown that p_FD increases with n while the false alarm rate decreases. When n is 25, a reasonably high p_FD and low p_FA are attained. Fig. 8 shows that increasing T can also improve p_FD and p_FA. This is because increasing T improves the temporal updating performance of local sensors and accordingly increases the spatial fusion and fault detection reliability.
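The behaviour of the time-space baseline discussed above can be reproduced qualitatively with a toy Monte-Carlo that majority-votes per (4) at each sensor and per (10) at the centre, without DS updating or feedback. All parameter values and names here are our illustrative choices, not the paper's MATLAB testbed.

```python
import random

def simulate_time_space_fusion(n=15, T=9, d=0.3, blocks=200, M=2, seed=1):
    """Toy Monte-Carlo of the time-space fusion baseline: each sensor
    majority-votes T noisy decisions (eq. (4)), then the fusion centre
    majority-votes the definite local labels (eq. (10)). Returns the
    fraction of definite final decisions that name the true class."""
    rng = random.Random(seed)
    true_class = 0
    n_correct = n_definite = 0
    for _ in range(blocks):
        local = []
        for _ in range(n):
            votes = [0] * M
            for _ in range(T):
                if rng.random() < d:           # faulty reading: a wrong class
                    votes[rng.randrange(1, M)] += 1
                else:
                    votes[true_class] += 1
            winners = [j for j, v in enumerate(votes) if v > T / 2]
            if winners:                        # only definite local decisions
                local.append(winners[0])
        counts = [sum(1 for u in local if u == j) for j in range(M)]
        final = [j for j, c in enumerate(counts) if c > len(local) / 2]
        if final:
            n_definite += 1
            n_correct += final[0] == true_class
    return n_correct / n_definite if n_definite else 0.0
```

With n = 15, T = 9 and d = 0.3, the baseline alone already classifies nearly every block correctly, which is consistent with the paper's observation that the fault tolerant refinements matter most for small n, small T and high d.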

7    Conclusions
This paper proposes a fault tolerant fusion scheme in which fault detection and correction are carried out iteratively during a spatio-temporal fusion procedure. By using the fault detection results to update the subsequent local decisions, the proposed scheme delivers robust classification performance when sensor faults exist. Compared with the spatio-temporal fusion scheme without fault detection, our fault tolerant fusion scheme only requires the fusion centre to transmit an extra 1-bit message to each local sensor, which adds very little communication traffic while improving the fault tolerance significantly. The computational complexity is also low because of the use of hard decision labels and simple fusion models. The fault detection performance benefits from the improved spatial fusion result: the scheme achieves at most a 23% increase in fault detection rate and an 8% decrease in false alarm rate over the Bayesian fault detection scheme when n = 5, T = 1. Simulation results also demonstrate that the fault detection ability improves as both the number of sensor nodes n and the block length T increase, which allows flexible parameter selection for different application requirements.
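The fault correction strategy summarised above is built on Dempster–Shafer evidence theory [15]. For reference, Dempster's rule of combination, which such a strategy applies when merging evidence from two sources, can be sketched as follows; the example mass assignments in the test are hypothetical and not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments
    over the same frame of discernment.  Masses are dicts mapping
    frozenset hypotheses to belief mass; mass assigned to conflicting
    (disjoint) pairs is discarded and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {h: m / k for h, m in combined.items()}
```

For instance, combining m1 = {{car}: 0.7, {car, truck}: 0.3} with m2 = {{car}: 0.6, {truck}: 0.2, {car, truck}: 0.2} discards the conflicting mass 0.14 and renormalises, leaving most of the belief on {car}.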

8    Acknowledgment
This work was partially supported by the Science & Technology Department of Shanghai, China, under grant no. 05dz15004.

9    References

1 Li, D., Wong, K.D., Hu, Y.H., Sayeed, A.M.: 'Detection, classification and tracking of targets', IEEE Signal Process. Mag., 2002, 19, (2), pp. 17–29
2 D'Costa, A., Ramachandran, V., Sayeed, A.M.: 'Distributed classification of Gaussian space-time sources in wireless sensor networks', IEEE J. Sel. Areas Commun., 2004, 22, (6), pp. 1026–1036
3 Duarte, M.F., Hu, Y.H.: 'Vehicle classification in distributed sensor networks', J. Parallel Distrib. Comput., 2004, 64, (7), pp. 826–838
4 Wang, X.L., Qi, H.R., Iyengar, S.S.: 'Collaborative multi-modality target classification in distributed sensor networks'. Proc. Fifth Int. Conf. Information Fusion, Annapolis, USA, July 2002, pp. 285–290
5 Duarte, M.F., Hu, Y.H.: 'Distance-based decision fusion in a distributed wireless sensor network', Telecommun. Syst., 2004, 26, (2–4), pp. 339–350
6 Niu, R.X., Varshney, P.K., Moore, M., Klamer, D.: 'Decision fusion in a wireless sensor network with a large number of sensors'. Proc. Seventh Int. Conf. Information Fusion, Stockholm, Sweden, pp. 21–27
7 Clouqueur, T., Saluja, K.K., Ramanathan, P.: 'Fault tolerance in collaborative sensor networks for target detection', IEEE Trans. Comput., 2004, 53, (3), pp. 320–333
8 Wu, H.W., Mendel, J.M.: 'Quantitative analysis of spatio-temporal decision fusion based on the majority voting technique'. Proc. SPIE, Orlando, FL, USA, April 2004, vol. 5434, pp. 13–24
9 Lee, M.H., Choi, Y.H.: 'Fault detection of wireless sensor networks', Comput. Commun., 2008, 31, (14), pp. 3469–3475
10 Rajagopal, R., Nguyen, X.L., Ergen, S.C., et al.: 'Distributed online simultaneous fault detection for multiple sensors'. Proc. Int. Conf. Information Processing in Sensor Networks (IPSN), St. Louis, USA, April 2008, pp. 133–144
11 Koushanfar, F., Potkonjak, M., Sangiovanni-Vincentelli, A.: 'On-line fault detection of sensor measurements', Proc. IEEE Sensors, 2003, 2, pp. 974–979
12 Chen, J.R., Kher, S., Somani, A.: 'Distributed fault detection of wireless sensor networks'. Proc. DIWANS, Los Angeles, USA, 2006, pp. 65–72
13 Banerjee, T., Xie, B., Agrawal, D.P.: 'Fault tolerant multiple event detection in a wireless sensor network', J. Parallel Distrib. Comput., 2008, 68, pp. 1222–1234
14 Venkataraman, G., Emmanuel, S., Thambipillai, S.: 'Energy-efficient cluster-based scheme for failure management in sensor networks', IET Commun., 2008, 2, (4), pp. 528–537
15 Shafer, G.: 'A mathematical theory of evidence' (Princeton University Press, 1976)
16 Chen, B., Jiang, R.X., Kasetkasem, T., Varshney, P.K.: 'Channel aware decision fusion in wireless sensor networks', IEEE Trans. Signal Process., 2004, 52, (12), pp. 3454–3458
17 Niu, R.X., Chen, B., Varshney, P.K.: 'Fusion of decisions transmitted over Rayleigh fading channels in wireless sensor networks', IEEE Trans. Signal Process., 2006, 54, (3), pp. 1018–1027
18 Aldosari, S.A., Moura, J.M.F.: 'Fusion in sensor networks with communication constraints'. Proc. Int. Conf. Information Processing in Sensor Networks (IPSN), Berkeley, USA, April 2004, pp. 108–115
19 Chamberland, J., Veeravalli, V.V.: 'Decentralized detection in sensor networks', IEEE Trans. Signal Process., 2003, 51, (2), pp. 407–416
20 Pan, Q., Wei, J.M., Cao, H.B., Li, N., Liu, H.T.: 'Improved DS acoustic-seismic modality fusion for ground-moving target classification in wireless sensor networks', Pattern Recognit. Lett., 2007, 28, (16), pp. 2419–2426
21 Kotecha, J.H., Ramachandran, V., Sayeed, A.M.: 'Distributed multitarget classification in wireless sensor networks', IEEE J. Sel. Areas Commun., 2005, 23, (4), pp. 703–713
22 Wang, T.Y., Han, Y.S., Varshney, P.K., Chen, P.N.: 'Distributed fault-tolerant classification in wireless sensor networks', IEEE J. Sel. Areas Commun., 2005, 23, (4), pp. 724–734
23 Krishnamachari, B., Iyengar, S.: 'Distributed Bayesian algorithms for fault-tolerant event region detection in wireless sensor networks', IEEE Trans. Comput., 2004, 53, (3), pp. 241–250
24 Luo, X.W., Dong, M., Huang, Y.L.: 'On distributed fault-tolerant detection in wireless sensor networks', IEEE Trans. Comput., 2006, 55, (1), pp. 58–70
25 Ding, M., Chen, D.C., Thaeler, A., Cheng, X.Z.: 'Fault-tolerant target detection in sensor networks'. Proc. IEEE Wireless Communications and Networking Conf. (WCNC), New Orleans, USA, March 2005, pp. 2362–2368
26 Vuran, M.C., Akan, O.B., Akyildiz, I.F.: 'Spatio-temporal correlation: theory and applications for wireless sensor networks', Comput. Netw., 2004, 45, pp. 245–259
27 Xiao, L., Boyd, S., Lall, S.: 'A space–time diffusion scheme for peer-to-peer least-squares estimation'. Proc. Int. Conf. Information Processing in Sensor Networks (IPSN), Nashville, USA, April 2006, pp. 168–176
28 Wu, G., Wu, Y., Jiao, L., Wang, Y.F., Chang, E.Y.: 'Multi-camera spatio-temporal fusion and biased sequence-data learning for security surveillance'. Proc. 11th ACM Int. Conf. Multimedia, Berkeley, USA, November 2003, pp. 528–538
29 Long, F.H., Feng, D.G., Peng, H.C., Siu, W.C.: 'Extracting semantic video objects', IEEE Comput. Graph. Appl., 2001, 21, (1), pp. 48–55
30 Trivedi, M.M., Mikic, I., Bhonsle, S.K.: 'Active camera networks and semantic event databases for intelligent environments', IETE J. Res., 2002, 48, (3–4), pp. 295–302
31 Hong, L., Lynch, A.: 'Recursive temporal-spatial information fusion with applications to target identification', IEEE Trans. Aerosp. Electron. Syst., 1993, 29, (2), pp. 435–445
32 Poor, H.V.: 'An introduction to signal detection and estimation' (Springer, 1988)
33 Yager, R.R.: 'On the Dempster–Shafer framework and new combination rules', Inf. Sci., 1987, 41, pp. 93–138
34 Chen, Q.C., Lam, K.Y., Fan, P.Z.: 'Comments on "Distributed Bayesian algorithms for fault-tolerant event region detection in wireless sensor networks"', IEEE Trans. Comput., 2005, 54, (9), pp. 1182–1183
