
Microelectronics Reliability 49 (2009) 199–208


Evolutionary derivation of optimal test sets for neural network based analog and mixed signal circuits fault diagnosis approach

S.J. Seyyed Mahdavi *, K. Mohammadi

Electrical Engineering Department, Iran University of Science and Technology, Tehran, Iran

Abstract

Testing issues are becoming more and more important with the rapid development of both the digital and analog circuit industries. In this paper, we study the use of evolutionary algorithms to derive optimal input vectors for a neural network based fault diagnosis approach for analog and mixed signal circuits, and compare the results with the normal method. We introduce a new procedure that uses the n-detection test set concept and selects the input samples so that, for each case of fault injection, there are at least n samples that activate the fault. This procedure performs the optimization in two ways. The first, called the speed method, generates samples so that acceptable decision strength and a shorter training phase are achieved. The second, called the stamina method, generates samples so that the best decision strength is achieved at the cost of a longer training phase. Experimental results demonstrate that the obtained input voltages yield fault diagnosis with increased fault coverage and high decision strength. © 2008 Elsevier Ltd. All rights reserved.

Article history: Received 29 April 2008 Received in revised form 2 December 2008 Available online 9 January 2009

1. Introduction

Fault diagnosis and fault location in analog and mixed signal circuits are of fundamental importance for design validation and prototype characterization, in order to improve yield through design modification. However, at present, while well-consolidated, fully automated fault diagnosis techniques are commonly used for digital circuits, the development level for analog circuits is less advanced [1]. The analog test determination problem is formulated as selecting an optimal subset from an initial large set of tests, with optimality criteria defined in terms of fault coverage and fault separation on a given fault set [2]. The test set size, however, is directly proportional to the cost and time of test application, which may involve thousands of chips. Furthermore, a compact test set also results in a smaller demand for buffers in ATE, as well as less hardware overhead in an IC with in-chip test storage [3]. Test data compression provides two benefits. First, it reduces the amount of data stored on the tester, which can extend the life of older testers that have limited memory. Second (and this is the more important benefit, which applies even for testers with plenty of memory), it can reduce the test time for a given test data bandwidth. Test data compression must compress the test vectors losslessly to preserve fault coverage [4].

Minimal test sets have the property that each input test vector simultaneously tests several faults in a circuit [5]. A minimal test set can be defined as follows [6]: for a given set S of detectable faults, a set of test vectors is said to be a minimal test set if it contains the least number of test vectors required to cover all the faults in S. Although the ideal goal of test compaction is to obtain the minimal test set, its computation is unfortunately not feasible, because calculating the minimum number of tests required to detect all single faults in a circuit has already been proven to be NP-hard [3]. One of the approaches for deriving test sets that achieve high defect coverage is based on the generation of n-detection test sets [7]. An n-detection test set is one where each modeled fault is detected either by n different tests, or by the maximum number of different tests that can detect the fault if this number is smaller than n [7–12]. In several experiments involving fabricated chips reported in [8], n-detection test sets were shown to be useful in achieving high coverage of unmodeled faults. In this work, we report on test pattern generators that derive test sets detecting each fault a given number of times (each fault is detected a given number of times by different tests in the test set). Additionally, we study the effects of test set compaction on the defect coverage of such test sets. Compact test sets lead to shorter test application times, hence their practical importance [9,11,12]. In our previous work, we proposed a fast method for fault diagnosis of mixed signal circuits based on neural network training. But

* Corresponding author. E-mail addresses: javad_mahdavi@ee.iust.ac.ir (S.J. Seyyed Mahdavi), mohammadi@iust.ac.ir (K. Mohammadi). 0026-2714/$ - see front matter © 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.microrel.2008.12.002



we have a problem in selecting the input samples used to train the neural networks [13]. Test generation procedures based on genetic optimization [1] were shown to be effective in achieving high fault coverage for digital benchmark circuits [14]. In this work, we introduce a test generation procedure based on genetic optimization to derive an optimal test set for analog and mixed signal circuits. Genetic algorithms (GAs) differ from other optimization methods in the following ways [19]:

1. GAs search from a population of points, not a single point, which equips them with an inherent parallel computation ability. This population can climb over hills and move across valleys, so GAs can discover a globally optimal point.

2. GAs use payoff (fitness or objective function) information directly to guide the search, not derivatives or other auxiliary knowledge. GAs can therefore deal with the nonsmooth, noncontinuous and nondifferentiable functions that arise in real-life optimization problems. This property also equips GAs to find more accurate solutions for highly nonlinear real-life optimization problems, in contrast with traditional optimization methods, which make many approximations (and hence incorporate error) in the model representation.

3. GAs use probabilistic transition rules to select generations, not deterministic rules, so they can explore complicated and uncertain regions to find the global optimum. GAs are more flexible and robust than conventional methods.

These features mean that GAs are robust, parallel search algorithms that adaptively search for the global optimal point. The existence of multiple decision-makers and goals, spatial and non-linear optimization objectives, and the combinatorial nature of optimization problems are reasons that support the use of heuristic algorithms instead of the more traditional methods.
A heuristic is a search algorithm that does not necessarily find the global optimum, but can produce relatively good solutions within reasonable time. The performance of different heuristics may vary depending on the complexity of the optimization problem. It has been argued that the GA is the best heuristic optimization algorithm for the most difficult problems; the results suggest that it is beneficial if the method performs more complicated moves than selecting one of the neighboring solutions, and the GA is therefore an appropriate choice [20].

This paper is organized as follows. Section 2 reviews the neural network based fault diagnosis approach for analog and mixed signal circuits. This section also illustrates the test set selection problem, motivating the need for a procedure for deriving an optimal test set for analog and mixed signal circuits. In Section 3, our evolutionary technique for obtaining an optimal test set is introduced, and an algorithm is developed and implemented using a genetic algorithm. In Section 4, we present some preliminary experimental results to show how this approach can be used within the neural network based fault diagnosis of analog and mixed signal circuits. The fault diagnosis of an analog benchmark circuit is analyzed and compared with the normal method in this section. Finally, conclusions and directions for future research are presented.

2. Neural network based fault diagnosis approach of mixed signal circuits

Neural network based fault diagnosis is a fast technique for determining the location of faults in analog and mixed signal circuits. This method was proposed in [13] to diagnose an ADC (analog to digital converter) circuit. Here, we apply the method to an analog benchmark circuit and compare the results with our evolutionary approach. The original technique consists of the following steps.

2.1. Choosing the type of fault models to be injected into the circuit

Fig. 1 shows the schematic of a single stage amplifier that is proposed in [15] as an analog benchmark circuit. Fault models for analog and mixed signal circuits can be classified into two categories: catastrophic faults (sometimes called hard faults) and parametric faults (sometimes called soft faults) [16]. A catastrophic fault model is analogous to the stuck-at fault model in the digital domain, in that the terminals of a component can be stuck-open or stuck-short. Accordingly, a 100 MΩ resistor is used to model a stuck-open resistor, and a 1 Ω resistor to model a stuck-short resistor. Parametric faults, on the other hand, are deviations of component parameters that push performance out of acceptable limits. Parametric faults can be simulated as a variation of a component parameter that is out of the specified tolerance limits. We establish acceptable component parameter variations using a normal distribution and specify the 1-sigma value (expecting the acceptable variation to be up to 3-sigma). We therefore place parametric faults at the ±6-sigma values for the high and low parametric fault values, respectively. Table 1 lists the values of all elements at ±6-sigma for the circuit of Fig. 1.

2.2. Fault data bank generation

In this step, fault models are injected into the circuit and the results of each case of fault injection are saved in an individual Excel file. This stage consists of the following steps:

1. Initially, we determine all of the possible locations of fault injection and assign a number to each fault. Table 2 shows the numbers assigned to each case of fault injection.

2. Choosing the inputs and outputs of the neural network. In this method, the voltages of all the nodes of the circuit (either inputs or outputs of the circuit) are considered as the inputs of the neural network.
Since each neural network output represents the occurrence of one fault model in the circuit, for the circuit of Fig. 1 the neural network has four outputs.

3. Injecting the fault into the circuit. In this step, the transient analysis of Multisim is used. For each case of fault injection, the input voltage of the circuit is swept from 0 V to 5 V. An important

Fig. 1. Single stage amplifier benchmark circuit.

Table 1
Values of elements at ±6-sigma for the circuit of Fig. 1 (soft fault table).

Element | Nominal value | Variation at 1-sigma | −6-sigma | +6-sigma
b       | 80            | 12                   | 8        | 152
R1      | 17.6 kΩ       | 0.9 kΩ               | 12.2 kΩ  | 23.0 kΩ
R2      | 61.0 kΩ       | 3.1 kΩ               | 42.4 kΩ  | 79.6 kΩ
R3      | 1.2 kΩ        | 60 Ω                 | 0.84 kΩ  | 1.56 kΩ
R4      | 300.0 Ω       | 15 Ω                 | 210.0 Ω  | 390.0 Ω
Rb      | 100 Ω         | 5 Ω                  | 70 Ω     | 130 Ω
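The ±6-sigma fault values in Table 1 follow directly from each element's nominal value and its 1-sigma variation (nominal ± 6σ). A minimal sketch, with a few values transcribed from Table 1:

```python
# Sketch: deriving the ±6-sigma parametric-fault values of Table 1
# from each element's nominal value and 1-sigma variation.
elements = {          # name: (nominal, 1-sigma); resistor values in kOhm
    "b":  (80.0, 12.0),
    "R1": (17.6, 0.9),
    "R2": (61.0, 3.1),
}

def six_sigma(nominal, sigma):
    # parametric faults are placed at nominal -/+ 6 sigma
    return nominal - 6 * sigma, nominal + 6 * sigma

for name, (nom, sig) in elements.items():
    low, high = six_sigma(nom, sig)
    print(f"{name}: -6sigma = {low:g}, +6sigma = {high:g}")
```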


models. In this approach, instead of having k single outputs, the neural network uses k vectors, each with m elements. Each element of a vector refers to one case of fault injection. For example, if a sample refers to both stuck-short cases 2 and 5, then in the stuck-short vector, indexes 2 and 5 are set to 1 and the others remain zero. Since the vector indexes indicate the occurrence of a fault at a specific node, we call this method the "indexing approach". This method suffers from the following problems:

- A very high training time.
- Not reaching the desired performance.

2.4. Index-separation approach

Table 2
Numbers assigned to each case of fault injection.

Fault number | Stuck-open   | Stuck-short          | ±6-sigma
1            | R1           | R1                   | R1
2            | R2           | R2                   | R2
3            | R3           | R3                   | R3
4            | R4           | R4                   | R4
5            | R5           | R5                   | R5
6            | Q1 base      | Q1 base–collector    | b
7            | Q1 collector | Q1 emitter–collector |
8            | Q1 emitter   | Q1 base–emitter      |

Suppose s is the number of all samples. In the index-separation method, each s × m output matrix is converted into m vectors of length s. Each of these vectors refers to an index, and each index refers to a case of fault occurrence. Therefore, for each fault model, we will have m neural networks, each trained with its own output vector.

2.5. Testing neural networks
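The index-separation conversion described in Section 2.4 can be sketched as follows (an illustrative sketch, not the paper's code):

```python
# Sketch of the index-separation conversion: an s-by-m output matrix
# (s samples, m fault-injection cases) becomes m length-s target vectors,
# one per single-output neural network.
def index_separation(output_matrix):
    s = len(output_matrix)
    m = len(output_matrix[0])
    # column j of the matrix becomes the target vector of network j
    return [[output_matrix[i][j] for i in range(s)] for j in range(m)]
```

Each returned vector then trains one small network, which is why the index-separation approach trades one large network for m simpler ones.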

point is the value of the voltage step. The value of the step determines the number of samples obtained for each case of fault injection. Since the number of samples will be reduced in the next step, the voltage step can be very small.

4. Reducing the number of samples in each sweep. Clearly, if the size of the circuit increases, the number of nodes, and therefore the number of cases in which a fault can be injected into the circuit, increases. As a result, the total number of samples would be so large that we could not apply them to the neural network, because the training time and the number of neurons would be impractical. The main idea of this method is that for each sweep we adopt only a few samples (N), and the neural network is trained on these samples. Therefore, from the samples generated for each sweep in the previous step, only N samples are selected. Note that these N samples have the same N input voltages for all sweeps. In other words, N input voltages are selected first, and N samples are adopted from the samples of each sweep according to these N input voltages. These N input voltages are selected by a uniform distribution in the interval 0 V to 5 V. We will call this method of selecting N input voltages the "normal" method.

5. Deleting samples equal to fault-free samples. When a fault is injected at a node, only some of the samples differ from the fault-free samples; the others are completely equal. Notice that fault-free samples have zero outputs, while other samples have non-zero outputs. Therefore, neural networks cannot be trained on such samples, and the neural network training usually stops at an undesirable performance.

6. Merging equal faulty samples. Although samples equal to fault-free samples are deleted in the previous step, there are still samples whose inputs are equal.
Consider two samples whose inputs are equal but whose outputs differ, because they refer to different cases of fault injection. In such a case, the second sample should be deleted and its output inserted into the output of the first sample. To do this, we must change the neural network's outputs to support multiple faults per sample, which leads to the "indexing approach".
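The merging step can be sketched as follows (an illustrative sketch; the sample representation is an assumption, not the paper's code). Inputs are the swept voltages, outputs are multi-hot fault-index vectors, and merging ORs the outputs of samples with identical inputs:

```python
# Sketch: merging faulty samples that share identical input voltages.
# Each sample is (inputs, outputs): inputs is a tuple of node voltages,
# outputs a multi-hot index vector; merging ORs the output vectors.
def merge_equal_samples(samples):
    merged, order = {}, []
    for inputs, outputs in samples:
        if inputs in merged:
            merged[inputs] = [a | b for a, b in zip(merged[inputs], outputs)]
        else:
            merged[inputs] = list(outputs)
            order.append(inputs)
    return [(k, merged[k]) for k in order]
```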

Since the neural network has been trained for all single fault occurrence cases, it trivially produces zero error for single fault test vectors. Thus, to test the neural networks, we have to generate test vectors that refer to two concurrent faults occurring in the circuit. Table 3 compares the decision strength of the indexing and index-separation approaches for the normal method.

3. Evolutionary derivation of optimal test sets

The algorithm cited above, from now on called the "normal method", may produce proper results in special cases, but in general the way the N input voltages are selected has a significant effect on the results. The normal method uses a uniform distribution for selecting the N input voltages, while in many cases the samples that activate most of the faults may have an asymmetric distribution. Therefore, some faults will not be activated, and most faults will be activated by only a limited number of samples. Consequently, in addition to poor fault coverage, the occurrence of additional faults in the circuit may mask all of the samples that activate one of the faults, so that the procedure cannot detect that fault. Another problem is related to N. If N is decreased, Table 3 shows that the decision strength of the neural network becomes poor. For example, Fig. 2 shows the two fault diagnosis of the normal method for the indexing approach. It is observed that when N is too small, the neural network has very low decision strength. Therefore, if the selection of the N input voltages could be done in a way that satisfies the following conditions, better fault coverage is

Table 3
Decision strength of the indexing and index-separation approaches for the normal method.

N (No. of input voltages) | Training approach | Fault detection (%) | 1-fault diagnosis (%) | 2-fault diagnosis (%)
2                         | Indexing          | 87.5                | 67.5                  | 2.5
2                         | Index-sep.        | 87.5                | 70                    | 12.5
4                         | Indexing          | 95                  | 90                    | 15
4                         | Index-sep.        | 95                  | 90                    | 10
8                         | Indexing          | 95                  | 90                    | 45
8                         | Index-sep.        | 95                  | 95                    | 47.5
16                        | Indexing          | 95                  | 95                    | 57.5
16                        | Index-sep.        | 95                  | 92.5                  | 47.5

2.3. Indexing approach

Suppose k is the number of fault models injected into the circuit and m is the number of fault occurrences in the circuit for each of the fault



should adopt samples whose input voltages conform to the input voltages of the fault-free circuit samples. Since s is fixed, we can concatenate all of the Ap matrices and obtain the total activation matrix (A). The concatenation takes place using the following relation:

A = [A1 A2 A3 A4]                                                                 (1)
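A sketch of how the partial activation matrices Ap and the concatenation in relation (1) could be computed (illustrative; the tolerance and function names are our assumptions): a sample activates a fault case when its faulty sweep value differs from the fault-free value.

```python
# Sketch: building a partial activation matrix Ap (s rows = samples,
# m columns = fault cases of one fault model) and concatenating the
# per-model matrices column-wise, A = [A1 A2 A3 A4] as in relation (1).
def partial_activation(fault_free, faulty_sweeps, tol=1e-6):
    s = len(fault_free)
    return [[1 if abs(sweep[i] - fault_free[i]) > tol else 0
             for sweep in faulty_sweeps]
            for i in range(s)]

def total_activation(partials):
    # concatenate the rows of every partial matrix side by side
    return [sum((p[i] for p in partials), []) for i in range(len(partials[0]))]
```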

Now we have a matrix in which each row is related to a sample and each column identifies a case of fault injection, so each row represents the faults that are activated by that sample.

3.2. Evolutionary derivation of N input voltages

In this step, a genetic algorithm is used for the optimization. First, we review the basics of genetic algorithms; then the use of a genetic algorithm for deriving the optimal input voltages is described in detail.

3.2.1. Genetic algorithms in test set compaction

Genetic algorithms (GAs) draw inspiration from the biological and evolutionary processes of selection, crossover and mutation. They use directed, probabilistic search techniques to find globally optimal solutions in large, complex search spaces. GAs contain populations of k candidate solutions called chromosomes. In a binary encoded GA, chromosomes consist of bit strings, each bit being referred to as a gene. Associated with each chromosome is a fitness value (dependent on the application) which rates its competence as a solution. After an initial population has been randomly generated, subsequent generations (each containing k chromosomes) evolve by mating fit individuals from the current population. The objective of the GA is to evolve test sets that are successively more minimal from an initial, randomly generated population (population size k, ranging from 50 to 200 chromosomes depending on the circuit). The fitness of each test set is based on its competence as a minimal test set, i.e. as one which covers a given number of faults with the least possible number of test vectors. The fitness of each chromosome is determined by the fitness function (discussed in more detail below), which is essentially a mathematical formulation of the aforementioned criteria. The fittest chromosomes are chosen as mates to produce the next generation, since they contain good genetic building blocks from which the subsequent generations evolve.
It is at this point that we have our minimal or near-minimal test set.

3.2.2. The chromosome structure

The first stage in applying a GA to any problem is translating the problem into the GA framework. This is achieved by defining a coding scheme from which we can produce the chromosomes. In the present context it is appropriate to let each chromosome represent the selected input voltages; each chromosome is then of length s, where s is the number of samples produced by each sweep. A gene value (allele) of 1 in the chromosome represents the presence of the input voltage corresponding to the index of that gene, and a 0 its absence, with the first gene representing input voltage 1, the second input voltage 2, and so on. A typical chromosome for our example circuit is (0 0 1 0 ... 0 1 1 0 1).

3.2.3. The fitness function

At the heart of every GA is the fitness function, and its appropriate formulation is a key factor in the successful application of the algorithm. In our problem the fitness function must reflect the definition of a minimal test set. The definition given earlier is for the general case, in which no prior knowledge of the maximal attainable fault coverage or of the size of a minimal test set is assumed.

Fig. 2. Two fault diagnosis of normal method using indexing approach.

achieved, and the masking probability of all activating samples will decrease significantly:

- The selected samples activate all faults.
- Each fault is activated by n samples.

In this work, a procedure for deriving an optimal set of N input voltages, based on the concept of n-detection test sets and using an evolutionary algorithm, is proposed. In our method, this set is obtained in two ways:

- A set of N input voltages that improves the decision strength of the neural network, but increases its training time.
- A set of N input voltages that needs less neural network training time, but whose decision strength is still almost proper.

Since the first approach generates better fault diagnosis results, it is named the "stamina approach"; the second works faster and is called the "speed approach". The procedure proposed in this work specifically improves step B-4 of the original algorithm cited in the previous section. In other words, the derivation of the N input samples is done in such a way that for every case of fault injection into the circuit there are at least n samples that activate the fault. The proposed procedure consists of the following steps.

3.1. Determining the activation matrix

In this step, we want to generate an s × m matrix for every fault model, where s is the number of samples for each case of fault occurrence in the circuit (since for each case of fault injection a sweep of the input voltage is done, we will use the term "a sweep" instead of "each case of fault injection"). In other words, for a sweep of voltage for each case of fault injection, s samples are obtained. As mentioned before, m represents the number of cases for each fault model. The entry ij of this matrix determines whether, for the jth case of fault injection, the ith sample activates the fault. If it activates the fault, the entry becomes 1; otherwise it remains zero. Since we have four fault models (stuck-open, stuck-short, ±6-sigma)1, an s × m matrix is generated for each of them and is denoted Ap, where p ∈ {1, 2, 3, 4}.
This matrix is called the partial activation matrix. Notice that s should be fixed for all sweeps; that is, all of the cases of fault injection generate s samples. Otherwise, we

1 For mixed signal circuits, stuck-at fault models are used for the digital portion of the circuit.



In order to calculate the fitness, the chromosome (X) is considered as a vector. This vector is multiplied by each column of the total activation matrix A; this product is the number of samples that the chromosome activates for that case of fault injection. Since we have four fault models and each fault model has mp cases of fault injection (see Table 2), where p ∈ {1, 2, 3, 4}, the activation matrix A has Σmp columns. Therefore, if the chromosome is multiplied by A, the product is a vector (T) of size Σmp. Each entry of this vector (T) is equal to the number of samples that the chromosome activates for a case of fault injection. We name this vector (T) the "chromosome activation vector". It is trivial that the best chromosome must select the least number of input voltages. Since each entry of the chromosome selects an input voltage, the fitness function is formulated so that Σi X(i) is minimized, where i ∈ {1, 2, ...}. As mentioned before, we can define the fitness function by two different approaches: the stamina approach and the speed approach. For the stamina approach, it is desired to achieve better decision strength in spite of a longer neural network training time. Each entry of the chromosome activation vector represents the number of samples that the chromosome activates for a case of fault injection. Therefore, if we maximize the sum of the chromosome activation vector's entries, all of the fault cases will have the largest number of samples that activate them. As a result, the probability that a fault masks other faults decreases. On the other hand, since the total number of activated samples increases, the neural network needs to be trained on more inputs, which causes a longer training time. Since our genetic algorithm tries to minimize the fitness function, the following fitness function is used for the stamina approach:

F = −(Σj T(j)) / (Σi X(i)),   i ∈ {1, 2, ..., s},   j ∈ {1, 2, ..., Σ mp}         (2)

This fitness function minimizes Σi X(i) and maximizes Σj T(j). For the speed approach, it is desired to achieve a faster algorithm, at the cost of a lower decision strength of the neural network. Here, we minimize the sum of the chromosome activation vector's entries. Since the total number of activated samples decreases, the training phase of the neural network will be faster, but the probability of masking rises, because each fault case is activated by the minimum number of samples. The following fitness function is used for the speed approach:

F = −1 / Σj T(j),   j ∈ {1, 2, ..., Σ mp}                                         (3)

Here, since we are minimizing Σj T(j), there is no need to minimize Σi X(i): minimizing Σj T(j) results in the minimization of Σi X(i). As mentioned before, we want to use the concept of n-detection test sets for selecting the N input voltages. Therefore, the N input voltages should be selected so that each fault case:

(a) is activated by n samples, or
(b) is activated by the maximum number of different samples that can detect the fault, if this number is smaller than n.

To implement this concept, two new variables are defined:

- limit: this variable equals n in the n-detection concept.
- minT: this variable equals the minimum number of activated samples over all fault cases and is always less than limit. The initial value of minT is 1, which means a chromosome is selected only if, for every fault case, there is at least one activating sample.

Using these variables, the fitness of a chromosome is determined as follows: if the following conditions hold, the fitness of the chromosome is calculated using relations (2) and (3); otherwise the fitness of the chromosome is zero:

- The summation of the chromosome entries should be greater than or equal to limit (Σi X(i) ≥ limit).
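Putting relations (2) and (3) together with the validity conditions gives the following sketch (the function and variable names are our own, not the paper's): X is the binary chromosome, A the total activation matrix, and T = X·A the chromosome activation vector.

```python
# Sketch of the fitness evaluation: X is the binary chromosome (length s),
# A the total activation matrix (s x sum(mp)), T = X*A the chromosome
# activation vector. Fitness is zero unless sum(X) >= limit and
# min(T) >= minT, implementing the n-detection conditions.
def activation_vector(X, A):
    s, M = len(A), len(A[0])
    return [sum(X[i] * A[i][j] for i in range(s)) for j in range(M)]

def fitness(X, A, limit, minT, mode="stamina"):
    T = activation_vector(X, A)
    if sum(X) < limit or min(T) < minT:
        return 0.0
    if mode == "stamina":
        return -sum(T) / sum(X)   # relation (2): F = -sum(T)/sum(X)
    return -1.0 / sum(T)          # relation (3): F = -1/sum(T)
```

Minimizing the stamina fitness drives Σ T(j) up and Σ X(i) down, while minimizing the speed fitness drives Σ T(j) down, as described in the text.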

Table 4
Comparison of fitness-related parameters for the stamina, speed and normal methods.

N (No. of input voltages) | Choosing samples | Total activated samples | Minimum activated samples (n) | Undetectable faults
2                         | Normal           | 41                      | 0                             | 4
2                         | Speed            | 43                      | 1                             | 0
2                         | Stamina          | 44                      | 1                             | 0
4                         | Normal           | 81                      | 0                             | 2
4                         | Speed            | 86                      | 2                             | 0
4                         | Stamina          | 88                      | 2                             | 0
8                         | Normal           | 165                     | 0                             | 1
8                         | Speed            | 172                     | 4                             | 0
8                         | Stamina          | 176                     | 4                             | 0
16                        | Normal           | 329                     | 0                             | 1
16                        | Speed            | 344                     | 8                             | 0
16                        | Stamina          | 352                     | 8                             | 0
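The "minimum activated samples (n)" column of Table 4 is exactly the min(T) quantity that the n-detection conditions constrain. A small illustrative checker (our sketch, not the paper's code) that tests whether a selection of samples satisfies n-detection over an activation matrix:

```python
# Sketch: check the n-detection property of a selection of samples.
# A[i][j] = 1 when sample i activates fault case j; "selected" lists
# the indices of the chosen samples (the 1-genes of a chromosome).
def satisfies_n_detection(selected, A, n):
    num_cases = len(A[0])
    for j in range(num_cases):
        possible = sum(A[i][j] for i in range(len(A)))   # all activating samples
        achieved = sum(A[i][j] for i in selected)        # selected ones
        # each case needs n activating samples, or all of them if fewer exist
        if achieved < min(n, possible):
            return False
    return True
```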

[Fig. 3 shows the flow of the proposed algorithm: the data bank for each case of fault occurrence feeds the generation of a partial activation matrix for each fault type; the genetic algorithm (population and chromosome activation vector; crossover, mutation and selection against an appropriate threshold) applies either the stamina method, which maximizes, or the speed method, which minimizes, the sum of the activation vector entries; once the best result is reached, the input voltage samples corresponding to the best chromosome are determined and the neural network based fault diagnosis algorithm continues.]

Fig. 3. Diagram of the proposed algorithm.



- The minimum of the chromosome activation vector should be greater than or equal to minT (min(T) ≥ minT).

These conditions implement the n-detection concept. The first condition corresponds to the first part of the n-detection definition, and the second condition to the second part. Notice that, during the algorithm, if a chromosome is found for which min(T) > minT and limit ≥ min(T), then the value of min(T) is assigned to minT. In this way, the value of minT increases during the algorithm, to make sure that the n-detection concept is satisfied. Table 4 compares the stamina, speed and normal methods for different numbers of input voltages (N) with respect to fitness-related parameters such as the total activated samples (Σj T(j)), the number of undetected faults and the minimum activated samples (n). It is observed that when N decreases, the number of undetectable faults increases for the normal method, while for both the stamina and speed methods no undetectable fault exists. Also, the stamina method has more activated samples than the speed method. Notice that the normal method has fewer activated samples than the speed method, because there are fault cases for which the normal method has no activating sample, whereas the speed method has at least n samples activating every fault case.

3.2.4. Population, selection, crossover and mutation

Initially, many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundreds or

Fig. 4. Training phase duration of the speed and stamina methods for indexing and index-separation approaches.

Fig. 5. Decision strength of the normal, speed and stamina methods using indexing approach.



thousands of possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). The greater the population size, the greater the time needed to reach a proper result. In this work, the population size is 150. The standard biased roulette-wheel selection method [17] was used for selecting the parents: the greater the fitness of an individual, the greater the probability of it being selected. The GA community has devised several forms of crossover, each with its merits [18]. The method employed in the present case is two-point crossover. Once selected, parent pairs do not always exchange genetic information, but do so with a given probability known as the crossover probability, c, which was set to 80%. Mutation describes the random alteration of genes within chromosomes. The mutation method employed in the present case is Gaussian mutation. Gaussian mutation adds a random number to each vector entry of an individual; this random number is taken from a Gaussian distribution centered on zero, whose variance can be controlled.

3.2.5. Deriving the N input voltages

After the genetic algorithm has finished the optimization, we have the best chromosome, which activates at least n samples for each case of fault injection. We can now find the samples that correspond to the indexes of the entries equal to 1 in the best chromosome. In other words, these entries correspond to the optimal N input voltages. Fig. 3 shows a diagram of the proposed algorithm. We can now continue the original algorithm from step B-5. In the next section,
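A simplified sketch of the selection and crossover operators described above (illustrative only; the paper applies Gaussian mutation to vector entries, whereas for the binary chromosomes here a bit-flip mutation is shown as a stand-in, and roulette selection assumes fitness values already mapped to positive weights):

```python
import random

# Sketch: biased roulette-wheel selection, two-point crossover with
# crossover probability c = 0.8, and a bit-flip stand-in for mutation.
def roulette_select(population, weights):
    # weights must be positive selection weights derived from fitness
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for chrom, w in zip(population, weights):
        acc += w
        if acc >= pick:
            return chrom
    return population[-1]

def two_point_crossover(p1, p2, c=0.8):
    if random.random() >= c:          # parents pass through unchanged
        return list(p1), list(p2)
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def mutate(chrom, rate=0.01):
    return [g ^ 1 if random.random() < rate else g for g in chrom]
```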

[Fig. 8 (schematic): a differential amplifier supplied by VCC = 12 V and Vee = −12 V, with collector resistors Rc1 = Rc2 = 10 kΩ, bias resistor Rbias = 20 kΩ, source resistors Rs1 = Rs2 = 1 kΩ, input source V2 and transistors Q1–Q4.]

Fig. 8. Differential amplifier benchmark circuit.

results of these three methods (normal, speed and stamina) are compared with each other.

Fig. 6. Decision strength of the normal, speed and stamina methods using index-separation approach.

Fig. 7. Two fault diagnosis capability for index-separation and indexing approaches using speed and stamina methods.



4. Results

After the neural network training is done for both the indexing and index-separation approaches for these three methods, 40 test cases of two faults occurring in the circuit are applied to the networks and the results are compared with each other. Three-layer MLP neural networks are used in these simulations. For the indexing approach, each hidden layer has 100 neurons, and for the index-separation approach, each hidden layer has 15 neurons.
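The two network configurations differ only in hidden-layer width, which is what drives the training-time gap discussed below. A minimal sketch of such a network follows; the input size (6 voltages) and output size (28 fault cases) are hypothetical placeholders, and a single hidden layer is assumed for the "three-layer" MLP.

```python
import numpy as np

def make_mlp(n_in, n_hidden, n_out, rng):
    """Randomly initialised MLP with one hidden layer of n_hidden neurons."""
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
        "b2": np.zeros(n_out),
    }

def forward(net, x):
    """Forward pass: tanh hidden activation, sigmoid output per fault index."""
    h = np.tanh(x @ net["W1"] + net["b1"])
    return 1.0 / (1.0 + np.exp(-(h @ net["W2"] + net["b2"])))

rng = np.random.default_rng(0)
# Hypothetical sizes: 6 input voltages (N), 28 fault cases (single-stage amplifier).
indexing_net = make_mlp(6, 100, 28, rng)    # indexing approach: 100 hidden neurons
index_sep_net = make_mlp(6, 15, 28, rng)    # index-separation: 15 hidden neurons
```

With roughly 6× fewer hidden units, the index-separation network has far fewer weights to train per output matrix, at the cost of training several separated networks.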

Fig. 4 compares the speed and stamina methods by training phase duration. Fig. 4a shows the duration for the indexing approach, while Fig. 4b relates to the index-separation approach. Fig. 4c and d compare the effect of the indexing and index-separation approaches on the training phase duration for the speed and stamina methods, respectively. As expected, for both the indexing and index-separation approaches, the speed method has a lower training phase duration than the stamina method. An interesting result is that

Fig. 9. Comparison of speed and stamina methods for the differential amplifier benchmark circuit: (a) two-fault diagnosis capability and (b) training phase duration.

Fig. 10. (a) Leapfrog filter benchmark circuit and (b) operational amplifier.


for large values of N (N > 6), the indexing approach has a larger training time than the index-separation approach, which agrees with the results of [13]. But for small values of N (N < 6), the indexing approach has a lower training time, which means that there is no need to separate the output matrix of the neural network to achieve a better training time. Therefore, since a relatively small value of N can be reached by the evolutionary approach, the indexing approach becomes more significant. Figs. 5 and 6 compare the normal, speed and stamina methods by the decision strength for fault detection and fault diagnosis using the indexing and index-separation approaches. As expected, the stamina method has a relatively higher decision strength than the other methods, especially for two-fault diagnosis. It is also observed that for a very small value of N, such as 2, the normal method is too weak in fault detection and location, while both the stamina and speed methods generate acceptable results. In order to compare the effect of the indexing and index-separation approaches on the decision strength, Fig. 7 compares the two-fault diagnosis capability of these approaches using the speed and stamina methods. For both methods, the index-separation approach is more successful at locating both faults correctly than the indexing approach, and therefore results in better decision strength. The proposed method was applied to two other benchmark circuits. The first one, shown in Fig. 8, is a differential amplifier consisting of 5 resistors and 4 transistors. Therefore, the total number of fault cases for this circuit is 52, while the previous circuit has only 28 cases of fault occurrence.

Fig. 9 shows the observed results for this circuit using the index-separation approach. Fig. 9a compares the speed and stamina methods by the decision strength for two-fault detection, and Fig. 9b compares them by the training phase duration. These results are in agreement with the single-stage amplifier results. The third circuit is shown in Fig. 10. This leapfrog filter benchmark circuit consists of 13 resistors, four capacitors and six operational amplifiers. Each operational amplifier (Fig. 10b) consists of nine MOSFET transistors and one capacitor. Therefore, the total number of fault cases for this circuit is 200, while the previous circuits have only 28 and 52 cases of fault occurrence, respectively. In order to diagnose the location of a fault in the capacitors, the input sweep voltage should be fast enough that the capacitors do not behave like open circuits, and thus influence the output response. Fig. 11 shows the observed results for the leapfrog filter using the index-separation approach. Fig. 11a compares the speed and stamina methods by the decision strength for two-fault detection, and Fig. 11b compares them by the training phase duration. These results agree with the previous results. The training phase durations of the neural networks for the three circuits are compared in Fig. 12. Since the leapfrog filter and the differential amplifier have more fault occurrence cases (about 7 and 2 times more, respectively), it would be expected that their training phase durations should be much greater. But it is observed that the growth in training time is less than the growth in fault occurrence cases (about 6 and 1.2 times, respectively). This indicates that the basic index-

Fig. 11. Comparison of speed and stamina methods for the leapfrog filter benchmark circuit: (a) two-fault diagnosis capability and (b) training phase duration.

Fig. 12. Comparison of training phase duration for the three benchmark circuits: (a) speed method and (b) stamina method.


separation approach is scalable, especially when N is very small: although the size of the problem obviously increases exponentially as the number of test vectors increases, the growth in training time is negligible. The proposed evolutionary method is scalable too, because it does not depend on the number of nodes or the number of fault occurrence cases. The only parameter relevant to the scalability of this method is the total number of samples (s). The execution time of the GA does not increase exponentially with circuit size: the activation matrix keeps a fixed dimension of s even though its number of columns grows exponentially, and since the chromosome size equals s, the execution time of the GA does not grow exponentially either.

5. Conclusion

In this paper, we introduced a new procedure to optimize the selection of input voltages for a neural network based fault diagnosis approach for analog and mixed signal circuits. This evolutionary procedure uses a very simple fitness function to obtain a chromosome that conforms to the concept of n-detection test sets; therefore, each case of fault injection will be activated by at least n samples. This results in better fault coverage, and the probability of masking for multiple faults in the circuit decreases. The proposed procedure can optimize the chromosome selection process in two ways. The first, named the "speed method", minimizes the total number of activated samples (while the n-detection concept is still satisfied). A lower training phase time for the neural network is therefore achieved, since the network is trained on a smaller number of samples. But the masking probability increases in the case of multiple faults in the circuit: although each case of fault injection is activated by at least n samples, and N (the number of input voltages) is always greater than n, the samples do not try to activate each fault case with more than n samples in this method.
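The paper does not spell the fitness function out; one plausible sketch, under the assumptions that chromosomes violating the n-detection constraint score zero and that "speed" and "stamina" simply score the total activation count in opposite directions, is:

```python
import numpy as np

def fitness(chromosome, activation, n, mode="speed"):
    """Illustrative fitness for sample selection under the n-detection constraint.

    activation[f, s] == 1 if input sample s activates fault case f;
    chromosome[s] == 1 if sample s is selected.
    """
    hits = activation @ chromosome       # per-fault count of selected activating samples
    if np.any(hits < n):                 # n-detection violated: infeasible chromosome
        return 0.0
    total = float(hits.sum())            # total activations by the selected samples
    if mode == "speed":                  # fewer activations -> shorter training phase
        return 1.0 / (1.0 + total)
    return total                         # "stamina": more activations -> lower masking
```

Both modes reward feasibility first, so every surviving chromosome still activates each fault case at least n times.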
The second, named the "stamina method", maximizes the total number of activated samples. A higher training phase time for the neural network therefore results, since the network is trained on a greater number of samples. But the masking probability decreases in the case of multiple faults in the circuit, because although each case of fault injection is activated by at least n samples, each fault case tends to be activated by the maximum number of samples allowed by N. The results observed in the previous section agree with the reasons cited above. We also conclude that although the index-separation approach has better decision strength in both the stamina and speed methods, when the value of N becomes too small the training phase

duration of this approach will not be the best, and the indexing method has a better training phase time.

References

[1] Fedi G, Manetti S, Piccirilli MC, Starzyk J. Determination of the optimum set of testable components in the fault diagnosis of analog linear circuits. IEEE Trans Circ Syst–I 1999;46(7):779–87.
[2] Devarayanadurg G, Soma M, Goteti P, Huynh SD. Test set selection for structural faults in analog IC's. IEEE Trans Comput-Aid Des Integr Circ Syst 1999;18(7):1026–39.
[3] Chang J-S, Lin C-S. Test set compaction for combinational circuits. IEEE Trans Comput-Aid Des 1995(November):1370–8.
[4] Touba NA. Survey of test vector compression techniques. IEEE Des Test Comput 2006;23(6):294–303.
[5] Manjunath KS, Whitaker S. Optimal test set for stuck-at faults in VLSI. In: Proceedings of the first Great Lakes symposium on VLSI, Kalamazoo, MI, USA; 1991. p. 104–9.
[6] Takhar JS, Gilbert DJ. The derivation of minimal test sets for combinational logic circuits using genetic algorithms. In: Proceedings of the 40th Midwest symposium on circuits and systems, Sacramento, USA; 1997.
[7] Pomeranz I, Reddy SM. On n-detection test sets and variable n-detection test sets for transition faults. IEEE Trans Comput-Aid Des 2000(March):372–83.
[8] Pomeranz I, Reddy SM. A measure of quality for n-detection test sets. IEEE Trans Comput 2004;53(11):1497–503.
[9] Kantipudi KR, Agrawal VD. On the size and generation of minimal N-detection tests. In: 19th international conference on VLSI design, held jointly with 5th international conference on embedded systems and design, vol. 3(4); January 2006. p. 6.
[10] Pomeranz I, Reddy SM. On fault equivalence, fault dominance, and incompletely specified test sets. IEEE Trans Comput-Aid Des Integr Circ Syst 2005;24(8):1271–4.
[11] Polian I, Pomeranz I, Reddy SM, Becker B. Exact computation of maximally dominating faults and its application to n-detection tests for full-scan circuits. IEE Proc Comput Digital Tech 2004;151(3):235–44.
[12] Reddy SM, Pomeranz I, Kajihara S. Compact test sets for high defect coverage. IEEE Trans Comput-Aid Des Integr Circ Syst 1997;16(8):923–30.
[13] Mohammadi K, Seyyed Mahdavi SJ. On improving training time of neural networks in mixed signal circuit fault diagnosis applications. Microelectron Reliab 2008;48(5):781–93.
[14] Pomeranz I, Reddy SM. A cone-based genetic optimization procedure for test generation and its application to n-detections in combinational circuits. IEEE Trans Comput 1999.
[15] Kondagunturi R, Bradley E, Maggard K, Stroud C. Benchmark circuits for analog and mixed-signal testing. In: Proceedings IEEE Southeastcon; 1999. p. 217–20.
[16] Starzyk JA, Liu D. Locating stuck faults in analog circuits. In: IEEE international symposium on circuits and systems, 26–29 May 2002, vol. 3. p. III-153–6.
[17] Goldberg D. Genetic algorithms in search, optimization and machine learning. USA: Addison-Wesley; 1989.
[18] Mitchell M. An introduction to genetic algorithms. Cambridge, MA, USA: The MIT Press; 1996.
[19] Lee KY, El-Sharkawi MA. Modern heuristic optimization techniques. Wiley-IEEE Press; 2008.
[20] Pukkala T, Kurttila M. Examining the performance of six heuristic optimization techniques in different forest planning problems. Silva Fenn 2005;39:67–80.
