
ARTICLE IN PRESS

Chaos, Solitons and Fractals xxx (2005) xxx–xxx www.elsevier.com/locate/chaos

Spatial correlation genetic algorithm for fractal image compression

Ming-Sheng Wu a,*, Wei-Chih Teng a, Jyh-Horng Jeng b, Jer-Guang Hsieh a

a Department of Electrical Engineering, National Sun Yat-Sen University, 70 Lien-Hai Rd., Kaohsiung 804, Taiwan, ROC
b Department of Information Engineering, I-Shou University, Kaohsiung, Taiwan

Accepted 21 June 2005

Abstract

Fractal image compression exploits the self-similarity property of a natural image and utilizes the partitioned iterated function system (PIFS) to encode it. The technique is of great interest both in theory and in application, but the encoding process is time-consuming, which renders it impractical for real-time applications. Most of the time is spent searching a large domain pool for the best-match block. In this paper, a spatial correlation genetic algorithm (SC-GA) is proposed to speed up the encoder. The SC-GA method has two stages. The first stage makes use of spatial correlations in the image, for both the domain pool and the range pool, to exploit local optima. The second stage operates on the whole image to explore more adequate similarities if the local optima are not satisfactory. With the aid of spatial correlation, encoding is 1.5 times faster than with the traditional genetic algorithm method, while the quality of the retrieved image is almost the same. Moreover, about half of the matched blocks come from the correlated space, so fewer bits are required to represent the fractal transform and the compression ratio is therefore also improved.
© 2005 Elsevier Ltd. All rights reserved.

1. Introduction

Fractal image compression was originally proposed by Barnsley [1–3] and first realized by Jacquin in 1992 [4]. Its underlying premise is the PIFS, which utilizes the self-similarity property of the image to achieve compression. To encode an image according to this property, each range block must find the most similar domain block in a large domain pool. For the conventional full search method, the encoding process is time-consuming, since a large number of similarity-measure computations are required to find the best match. Also, in order to achieve global optimization, global offsets have to be recorded, which increases the storage space. Therefore, the focal aims of fractal image compression are speeding up the encoder and increasing the compression ratio.

* Corresponding author. Tel.: +886 7 5254139.
E-mail addresses: d9131806@student.nsysu.edu.tw (M.-S. Wu), m923010008@student.nsysu.edu.tw (W.-C. Teng), jjeng@isu.edu.tw (J.-H. Jeng), jghsieh@mail.ee.nsysu.edu.tw (J.-G. Hsieh).
0960-0779/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.chaos.2005.07.004



In the past, several approaches were presented to reduce the search space in order to speed up the encoder. Fisher's classification method [5] constructed 72 classes of image blocks according to variance and intensity. In Wang et al. [6], four types of range blocks were defined based on the edges of the image; such a method provides a speedup ratio of 1.6 to 5.

Recently, the application of the genetic algorithm (GA) to fractal image compression has attracted the attention of researchers. GA was developed by Holland [7] over the course of the 1960s and 1970s and popularized by Goldberg [8]. The genetic algorithm is a global search technique mimicking natural selection and natural genetics. GA is capable of solving many large, complex problems where other methods experience difficulties, especially when the search space has a very rough landscape riddled with many local optima. Since natural images have such characteristics, GA is well suited to the search for the best match in fractal image compression. Many researchers have focused on this advantage of GA and reported results. For example, Vences and Rudomin [9] and Mitra et al. [10] proposed search strategies using GA in which the chromosome consists of the PIFS of all range blocks. In their methods, the fitness of a chromosome is measured by the distance between the attractor and the given image. Although a speedup is achieved, the retrieved image quality is not good enough because the fitness is defined in terms of the whole image.

In this paper, based on the self-similarity characteristic of natural images, the spatial correlations between neighboring blocks in both the domain pool and the range pool are considered. An SC-GA method composed of two evolutionary processes is proposed. In this method, spatial correlation and local search are incorporated to reduce the encoding time, while the quality of the retrieved image is preserved.
Spatial correlation reveals that neighboring blocks usually have similar properties such as edges, shade, etc., which are embedded in the Dihedral transformation for fractal compression. The first stage of the SC-GA method therefore searches, for the current range block, the neighborhoods of the matched domain blocks of the neighboring range blocks. Since this search space is much smaller than the whole image, the encoding speed is improved. If the local optima are not satisfactory, the second stage is invoked in order to explore more similarities over the whole image; thus the quality of the retrieved image is maintained. The algorithm also improves the compression ratio: since the search space is limited relative to the positions of previously matched blocks, fewer bits are required to record the offset of the domain block instead of its absolute position.

The rest of the paper is organized as follows. Section 2 introduces the conventional fractal image coding scheme. Section 3 describes the proposed SC-GA method. Some experimental simulations are performed in Section 4 to verify the improvement of the proposed algorithm. Finally, a conclusion is made in Section 5.

2. Theoretical basics of fractal image compression

Fractal image compression is based on the local self-similarity property and the PIFS. The related definitions and theorems are stated as follows.

Definition 2.1. Let X be a metric space with metric d. A map w : X → X is said to be contractive with contractivity s, 0 < s < 1, if

  d(w(x), w(y)) ≤ s · d(x, y)  for all x, y ∈ X.  (1)

Definition 2.2. Let X be a metric space with metric d. The distance of a point x ∈ X and a non-empty set A ⊂ X is defined by

  d(x, A) = inf_{a∈A} d(x, a).

The Hausdorff distance between two non-empty sets A and B, A, B ⊂ X, is defined by

  d_H(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(A, b) }.  (2)

Let X be the set of N × N gray-level images. The metric is defined as the usual Euclidean distance by regarding the elements of X as vectors of dimension N × N. Let I be a given image belonging to X. The goal is to find a set of transformations {w_1, w_2, ..., w_n}, each of which is a restricted function satisfying (1), such that the given image I is an approximate attractor. The set {w_1, w_2, ..., w_n} is called a PIFS. The following theorem is an important fact for PIFS.

Theorem 2.1. Consider a PIFS {w_1, w_2, ..., w_n} with w_i : X → X for all i. Let W = ∪ w_i. Then there exists a unique point a ∈ X such that for any point b ∈ X,

  a = W(a) = lim_{n→∞} W^n(b).  (3)


The point a in (3) is called the fixed point or the attractor of the mapping W. Listed next is the famous Collage theorem.

Theorem 2.2. Let {w_1, w_2, ..., w_n} be a PIFS with contractivity s. Let B be any non-empty compact set in X. Then

  d_H(A, B) ≤ (1 − s)^{−1} d_H(B, W(B)),

where A is the attractor of the mapping W and d_H is the Hausdorff metric (2). Let d_H(I, W(I)) ≤ ε, where ε is a small positive real number. By the Collage theorem, one obtains

  d_H(I, A) ≤ ε / (1 − s).  (4)

From (4), one can see that after a large number of iterations, an attractor A is generated which is sufficiently close to the given image I.

For practical implementation, let f be a given 256 × 256 gray-level image. The domain pool D is defined as the set of all possible blocks of size 16 × 16 of the image f, which makes up (256 − 16 + 1) × (256 − 16 + 1) = 58,081 blocks. The range pool R is defined to be the set of all non-overlapping blocks of size 8 × 8, which makes up (256/8) × (256/8) = 1024 blocks. For each block v from the range pool, the fractal transformation is constructed by searching the domain pool D for the most similar block. At each search entry, the domain block is sub-sampled so that it has the same size as the range block. Let u denote a sub-sampled domain block of the same size as v. The similarity of u and v is measured using the mean square error (MSE) defined by

  d(u, v) = (1/64) Σ_{i=0}^{7} Σ_{j=0}^{7} (u(i, j) − v(i, j))².
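As a quick illustration, the block distance above can be computed as follows (a sketch; the function name is ours):

```python
import numpy as np

def block_mse(u, v):
    """Mean square error between two 8x8 gray-level blocks,
    i.e., the distance d(u, v) defined above."""
    u = np.asarray(u, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    return np.mean((u - v) ** 2)

# Identical blocks have distance 0; a uniform difference of c gives c^2.
a = np.zeros((8, 8))
b = np.full((8, 8), 3.0)
print(block_mse(a, a))  # 0.0
print(block_mse(a, b))  # 9.0
```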

The fractal transformation allows the Dihedral transformation of the domain block, i.e., the eight orientations of the block generated by rotating it counter-clockwise through 0°, 90°, 180°, and 270° and flipping it with respect to the line y = x, as shown in Table 1. Thus, for a given block from the range pool, 58,081 × 8 = 464,648 MSE computations are required to find the most similar block in the domain pool, and, in total, 1024 × 464,648 = 475,799,552 MSE computations are needed to encode the whole image using the full search compression method.

For a given range block v, the fractal transformation also allows the adjustment of the contrast p and the brightness q on the sub-sampled domain block u. Here p and q are also referred to as scaling and offset, respectively. The similarity is measured by the quantity d(p_k · u_k + q_k, v) = ‖p_k · u_k + q_k − v‖, where u_k, 0 ≤ k ≤ 7, are the eight orientations of u. The fractal affine transformation Φ of u in the domain pool can be expressed by

  Φ [ i, j, u(i, j) ]ᵀ = [ a_11 a_12 0 ; a_21 a_22 0 ; 0 0 p ] [ i, j, u(i, j) ]ᵀ + [ x, y, q ]ᵀ,

where [ a_11 a_12 ; a_21 a_22 ] is one of the Dihedral transformations in Table 1, and x and y give the absolute position of the domain block. By minimizing d(p_k · u_k + q_k, v), p_k and q_k can be computed directly as

  p_k = ( N⟨u_k, v⟩ − ⟨u_k, 1⟩⟨v, 1⟩ ) / ( N⟨u_k, u_k⟩ − ⟨u_k, 1⟩² ),

  q_k = (1/N) ( ⟨v, 1⟩ − p_k ⟨u_k, 1⟩ ),

where N = 64 and 1 = (1, 1, ..., 1)ᵀ.
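The closed-form p_k and q_k above are the ordinary least-squares fit of v against u_k. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def contrast_brightness(u, v):
    """Least-squares contrast p and brightness q minimizing
    ||p*u + q - v||, following the closed-form expressions above."""
    u = np.asarray(u, dtype=np.float64).ravel()
    v = np.asarray(v, dtype=np.float64).ravel()
    n = u.size  # N = 64 for 8x8 blocks
    su, sv = u.sum(), v.sum()
    denom = n * np.dot(u, u) - su * su
    if denom == 0:          # flat domain block: fall back to mean offset
        return 0.0, sv / n
    p = (n * np.dot(u, v) - su * sv) / denom
    q = (sv - p * su) / n
    return p, q

# If v = 0.5*u + 10 exactly, the recovered (p, q) is (0.5, 10).
rng = np.random.default_rng(0)
u = rng.integers(0, 256, size=(8, 8)).astype(float)
p, q = contrast_brightness(u, 0.5 * u + 10)
print(round(p, 6), round(q, 6))  # 0.5 10.0
```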

Table 1
The eight transformations in the Dihedral group

Rotations:
  T0 (rotate 0°)   = ( 1  0;  0  1)     T1 (rotate 90°)  = ( 0  1; −1  0)
  T2 (rotate 180°) = (−1  0;  0 −1)     T3 (rotate 270°) = ( 0 −1;  1  0)
Flips of the above with respect to the line y = x:
  T4 = ( 0  1;  1  0)     T5 = ( 1  0;  0 −1)
  T6 = ( 0 −1; −1  0)     T7 = (−1  0;  0  1)
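The eight orientations of Table 1 can be generated, for example, with NumPy; this is one common enumeration, and the index order here need not match Table 1 exactly:

```python
import numpy as np

def dihedral_orientations(block):
    """The eight orientations of a square block: four counter-clockwise
    rotations and their flips about the line y = x (the transpose)."""
    rots = [np.rot90(block, k) for k in range(4)]   # rotations
    flips = [r.T for r in rots]                     # flipped rotations
    return rots + flips

b = np.arange(64).reshape(8, 8)
orients = dihedral_orientations(b)
# An asymmetric block yields eight distinct orientations.
print(len({o.tobytes() for o in orients}))  # 8
```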


The position of the most similar domain block, the contrast p, the brightness q, and the orientation k constitute the fractal code of the given range block v. In practice, for a 256 × 256 image, 16 bits are required to represent the position of the domain block; for p, q, and the orientation k, 5, 7, and 3 bits are required, respectively. Finally, as v runs over all 1024 range blocks in the range pool R, the encoding process is completed.

To decode, one first assembles the 1024 affine transformations, i.e., the PIFS {w_1, w_2, ..., w_1024}, from the compression codes and chooses any initial image. One then performs the 1024 affine transformations on the image to obtain a new image, and proceeds recursively. According to Theorems 2.1 and 2.2, the sequence of images converges. The stopping criterion of the recursion is designed according to the user's application, and the final image is the retrieved image of fractal coding.
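The decoding recursion can be sketched as follows, assuming each range block's fractal code is stored as a tuple (x, y, k, p, q); the data layout and names are ours, not the paper's:

```python
import numpy as np

def decode(codes, size=256, n_iter=10):
    """Iteratively apply the PIFS to an arbitrary start image.
    `codes` maps each range-block index (bi, bj) to (x, y, k, p, q):
    domain position, Dihedral orientation, contrast, brightness."""
    img = np.zeros((size, size))          # any initial image works
    for _ in range(n_iter):
        out = np.empty_like(img)
        for (bi, bj), (x, y, k, p, q) in codes.items():
            d = img[x:x + 16, y:y + 16][::2, ::2]   # sub-sample 16x16 -> 8x8
            u = np.rot90(d, k % 4)
            if k >= 4:
                u = u.T                              # flip about y = x
            out[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8] = p * u + q
        img = out
    return img

# Trivial PIFS: every range block uses contrast 0 and brightness 128,
# so any start image converges to a flat gray image in one step.
codes = {(i, j): (0, 0, 0, 0.0, 128.0) for i in range(32) for j in range(32)}
img = decode(codes, n_iter=2)
print(np.allclose(img, 128.0))  # True
```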

3. Genetic algorithm on fractal encoding

The genetic algorithm is a biologically motivated search method mimicking natural selection and natural genetics. It is capable of finding near-optimal solutions because the candidate solutions do not get stuck at local optima. Therefore, GA is especially efficient when the search space of a problem has a very rough landscape riddled with many local optima. GA is suitable for fractal image compression because the search for the best match is highly related to such characteristics. In this section, the conventional genetic algorithm is first introduced; the SC-GA method, which further reduces the encoding time, is then introduced in detail.

3.1. Conventional genetic algorithm

The elementary operations of GA include selection, crossover, and mutation. The block diagram of the algorithm is shown in Fig. 1. First, the chromosome formation and the fitness function are defined. The initial population is then randomly generated at the beginning of the evolutionary process. In each generation, the fitness values of all chromosomes in the current population are calculated, and chromosomes with better fitness are selected into the mating pool. Usually, two parents are selected from the mating pool for crossover, generating new offspring with probability Pc. The mutation operation with probability Pm is then applied to the offspring to maintain the diversity of the population and to avoid premature convergence. Finally, a stopping criterion is tested to determine whether the evolution continues.

Fig. 1. The flow chart of the conventional genetic algorithm: define the chromosome formation and fitness function, create the initial population, evaluate fitness values, test the stopping criterion (stop if met), then select, crossover, and mutate before re-evaluating.
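The loop of Fig. 1 can be sketched generically; the operator signatures and the toy one-max fitness below are our illustrative choices, not the paper's:

```python
import random

def genetic_search(fitness, random_chrom, crossover, mutate,
                   pop_size=50, pc=0.6, pm=0.02, generations=15):
    """Skeleton of the conventional GA: evaluate, select the fitter
    half as the mating pool, crossover with rate pc, mutate with
    rate pm, and repeat for a fixed number of generations."""
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank by fitness
        elite = pop[:pop_size // 2]              # mating pool
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            c = crossover(p1, p2) if random.random() < pc else p1
            children.append(mutate(c) if random.random() < pm else c)
        pop = elite + children
    return max(pop, key=fitness)

# Toy usage: maximize the number of 1-bits in a 16-bit chromosome.
random.seed(1)
best = genetic_search(
    fitness=lambda c: sum(c),
    random_chrom=lambda: [random.randint(0, 1) for _ in range(16)],
    crossover=lambda a, b: [random.choice(g) for g in zip(a, b)],
    mutate=lambda c: [1 - g if random.random() < 0.1 else g for g in c],
)
print(sum(best))
```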


3.2. SC-GA method for fractal image compression

The proposed SC-GA method is performed in a two-stage fashion. The first-stage GA makes use of image correlation to speed up the encoder. If the satisfaction criterion is not met, the second-stage GA is invoked, in which the genetic parameters such as Pc and Pm are adapted according to the fitness.

In the first stage, spatial correlation is used to reduce the search space. Let r_j be the range block to be encoded. Denote the neighboring range blocks of r_j, as depicted in Fig. 2, by r_H, r_V, r_D1, and r_D2, which have already been encoded. These neighbors are the same as those utilized to improve vector quantization (VQ) image coding [11]. Let d_H, d_V, d_D1, and d_D2 be the corresponding matched domain blocks, respectively. The search space of r_j is now restricted to neighborhoods of d_H, d_V, d_D1, and d_D2, denoted by S_H, S_V, S_D1, and S_D2, respectively (Fig. 2).

The underlying mechanism of the neighborhood allocation is related to the neighbor direction of the range blocks and the corresponding fractal codes. For example, r_H is a horizontal neighbor of r_j. If r_j has horizontal edges, r_H is very likely to have horizontal edges too, due to image correlation. Assume the Dihedral transformation of r_H is T1 in Table 1, i.e., d_H has vertical edges. Then the neighborhood S_H of d_H is allocated mainly in the vertical direction, as shown in Fig. 2. The expanded search space S_H is defined by

  S_H = { d ∈ D : −L/2 ≤ (x − x_H) < L/2, −W/2 ≤ (y − y_H) < W/2 }.

Here x and y denote the absolute position of the domain block d in the image, x_H and y_H are the coordinates of d_H, and L and W are the length and width of S_H, respectively. If the best match is found in the neighborhood of d_H, the offset of the best match from d_H, denoted by (Δx_H, Δy_H), is recorded instead of the absolute position in the domain pool.
If L and W are chosen to be 32 and 16, respectively, 5 and 4 bits are needed to represent Δx_H and Δy_H, respectively. This coding scheme has several advantages. First, the compression ratio is improved, since fewer bits are required to represent the fractal code. Second, since the chromosome is composed of the offset value only (Fig. 3), the shorter chromosome reduces the encoding time during the GA evolution. Another important issue is the feasibility problem, which arises commonly in evolutionary computation: crossover and mutation operations may produce illegal candidate solutions, and extra effort is normally needed to eliminate such effects. In the proposed offset coding for neighborhoods, the feasibility problem is avoided automatically, so the feasibility of solutions is assured throughout the crossover and mutation operations. By the same reasoning, d_V, d_D1, and d_D2 are expanded to S_V, S_D1, and S_D2, respectively, according to their corresponding directions and fractal codes. Thus the search space of r_j is restricted to S = {S_H, S_V, S_D1, S_D2}.

Fig. 2. Search space of the first-stage SC-GA for the current range block: the neighborhoods S_H, S_V, S_D1, and S_D2 (of size L × W) around the matched domain blocks d_H, d_V, d_D1, and d_D2 in the domain pool, corresponding to the neighboring range blocks r_H, r_V, r_D1, and r_D2 of r_j in the range pool.


Fig. 3. Structure of a chromosome in the first-stage SC-GA: two offset genes of B_L and B_W bits; the length is l = B_L + B_W.

Once the restricted search space is determined, GA is performed in these regions, employing uniform crossover and mutation. The evolutionary steps performed on each of the four neighborhoods are listed below. The system parameters, including population size, crossover mask, crossover rate P_c,s, mutation mask, and mutation rate P_m,s, are set first. The fitness value is defined as the reciprocal of the MSE value.

1. Initialize the chromosomes according to the normal distribution.
2. Calculate the MSE values of the chromosomes.
3. Rank the chromosomes according to their MSE values.
4. If the stopping criterion is met, stop; otherwise go to the next step.
5. Select the first half of the chromosomes as elite members.
6. Perform uniform crossover on the elite members with probability P_c,s to generate the offspring.
7. Perform the mutation operation with probability P_m,s on these offspring.
8. Calculate the MSE values of the chromosomes of the new generation and go to step 3.
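The steps above can be sketched for a single neighborhood as follows; the parameter values, operator details, and the toy error surface are our illustrative choices, not the paper's:

```python
import numpy as np

def first_stage_ga(mse_of_offset, L=32, W=16, pop_size=16,
                   pc=0.6, pm=0.02, generations=15, seed=0):
    """First-stage SC-GA on one neighborhood: chromosomes are (dx, dy)
    offsets from the neighbor's matched domain block, initialized from
    a normal distribution and clipped to the neighborhood boundaries.
    `mse_of_offset(dx, dy)` is supplied by the encoder."""
    rng = np.random.default_rng(seed)
    pop = np.stack([
        np.clip(np.rint(rng.normal(0, L / 4, pop_size)), -L // 2, L // 2 - 1),
        np.clip(np.rint(rng.normal(0, W / 4, pop_size)), -W // 2, W // 2 - 1),
    ], axis=1).astype(int)
    for _ in range(generations):
        mse = np.array([mse_of_offset(dx, dy) for dx, dy in pop])
        pop = pop[np.argsort(mse)]               # rank: best MSE first
        elite = pop[: pop_size // 2]             # elite members survive
        children = elite.copy()
        for c in children:
            if rng.random() < pc:                # uniform crossover
                mate = elite[rng.integers(len(elite))]
                mask = rng.integers(0, 2, size=2).astype(bool)
                c[mask] = mate[mask]
            if rng.random() < pm:                # mutate within bounds
                c[0] = rng.integers(-L // 2, L // 2)
                c[1] = rng.integers(-W // 2, W // 2)
        pop = np.concatenate([elite, children])
    mse = np.array([mse_of_offset(dx, dy) for dx, dy in pop])
    best = pop[int(np.argmin(mse))]
    return tuple(best), float(mse.min())

# Toy error surface whose best match sits at offset (3, -2).
offset, err = first_stage_ga(lambda dx, dy: (dx - 3) ** 2 + (dy + 2) ** 2)
print(offset, err)
```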

In step 1, the absolute position of d_H is used as the mean of the normal distribution for the neighborhood S_H. The variance is set according to the size of the neighborhood so that most chromosomes are concentrated around d_H; chromosomes initialized outside the neighborhood S_H are placed at its boundaries. In step 5, the ranking selection mechanism is utilized to generate the mating pool for the next generation. When a pre-specified number of iterations is reached, the best chromosome in the four neighborhoods becomes the candidate fractal code. If its MSE value is smaller than a pre-specified threshold T, the encoding of this range block is completed; otherwise the second stage of the SC-GA method is invoked.

In the second stage, GA is performed on the whole image. Based on the characteristics of natural images, a mechanism controlling the mutation is applied to the genetic algorithm in order to exploit the optimal solution and maintain the diversity of the population. The system parameters, including population size, crossover mask, crossover rate P_c,f, mutation mask, and mutation rates P_m,b and P_m,w, are pre-specified. The details of the second-stage GA are given as follows:

1. Generate the initial population of chromosomes randomly.
2. Calculate the MSE values of these chromosomes.
3. Rank the chromosomes according to their MSE values.
4. If the stopping criterion is met, stop; otherwise go to the next step.
5. Let S_b be the better half of the population and S_w be the worse half. Let T_f be the MSE value separating S_b and S_w in the ranking.
6. Perform uniform crossover on chromosomes in S_b to generate the temporary offspring (see Fig. 4) and replace the members of S_w.

Fig. 4. Uniform crossover: a random binary mask determines, gene by gene, which parent contributes to each offspring.


7. Perform the mutation operation on the whole population.
8. Calculate the MSE values of the chromosomes of the new generation and go to step 3.

In step 1, the chromosome is composed of the absolute position (x, y) of the domain block and the Dihedral transformation d (see Fig. 5). If the image is of size 256 × 256, x, y, and d are genes of 8, 8, and 3 bits, respectively. In step 5, the class S_b consists of the better chromosomes, the "superior clan", which constitute the mating pool; the rest form the "inferior clan" S_w. In step 6, since the chromosomes in S_b have good genes, they are temporarily held, and the uniform crossover operation with probability P_c,f is performed on them to generate temporary offspring. Members of the inferior clan usually do not have good genes; from the viewpoint of natural images, for a chromosome in S_w there is little chance of finding a good chromosome in the near neighborhood of the corresponding domain block. Therefore, the chromosomes in S_w are replaced by the temporary offspring.

In step 7, based on the characteristics of natural images, two mutation strategies are used in the evolutionary process. If a chromosome in the new population has an MSE value smaller than T_f, it is a good candidate for the best match, which implies that the neighborhood of this domain block tends to contain good candidates too. Hence the high bits of x and y of such a chromosome may be regarded as good schemata. As pointed out by Holland [7], chromosomes belonging to the same schema contain similar information to a certain degree. Therefore, chromosomes belonging to such good schemata should also have good fitness values, and they must be preserved in further generations. For this reason, such chromosomes are mutated only on the low bits (LB) of x and y, with probability P_m,b, as shown in Fig. 6(a), in order to exploit the optimal solutions.
On the other hand, a chromosome in the new population with an MSE value larger than T_f is mutated on the high bits (HB) of x and y, as well as on the bits of d, with probability P_m,w, as shown in Fig. 6(b), in order to explore the whole space and to maintain the diversity of the population.

Similar to the first stage, when a pre-specified number of iterations is reached, the evolution stops and the best chromosome becomes the candidate fractal code. If its MSE value is better than that of the best chromosome of the first stage, it is taken as the fractal code; otherwise, the best chromosome of the first stage is chosen.

In summary, based on the self-similarity as well as the spatial correlation between a range block and its neighbors in natural images, a spatial correlation GA method, called SC-GA, is proposed for fractal image compression. The method, which speeds up the encoder and improves the compression ratio, is composed of the two-stage evolutionary process described above. The detailed steps of the SC-GA method are as follows:

1. Set j = 0.
2. Define the search space S by {S_H, S_V, S_D1, S_D2} as depicted in Fig. 2.

Fig. 5. Structure of a chromosome in the second-stage SC-GA: the genes x (l_x bits), y (l_y bits), and d (l_d bits); the length is l = l_x + l_y + l_d.

Fig. 6. Mutation operation in the second-stage SC-GA: (a) if the MSE is smaller than T_f, only the low bits (LB) of x and y are mutated; (b) if the MSE is larger than T_f, the high bits (HB) of x and y and the bits of d are mutated.
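The two mutation strategies of Fig. 6 can be sketched for a single 8-bit coordinate gene; the low/high bit split and the per-bit rate are our illustrative choices:

```python
import random

def mutate_position(x, good, n_bits=8, n_low=3, pm=0.5, rng=random):
    """Mutate an 8-bit coordinate gene: flip only the low bits if the
    chromosome is 'good' (MSE < T_f), only the high bits otherwise.
    The split point n_low is an illustrative choice."""
    bits = range(n_low) if good else range(n_low, n_bits)
    for b in bits:
        if rng.random() < pm:
            x ^= 1 << b
    return x

random.seed(7)
x = 0b10110100
good = mutate_position(x, good=True)    # stays in the same neighborhood
bad = mutate_position(x, good=False)    # jumps within the whole image
print((good ^ x) < 8, (bad ^ x) % 8 == 0)  # True True
```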


3. Perform the first-stage GA on r_j. Let d_s^(j) be the best-matched block. If its MSE value is smaller than T, record the fractal code and go to step 5; otherwise go to the next step.
4. Perform the second-stage GA on r_j. Let d_f^(j) be the best-matched block. If its MSE value is smaller than T, record the fractal code. Otherwise, compare the MSE values of d_s^(j) and d_f^(j) and take the better chromosome as the fractal code.
5. Increase j by one. If j equals 1024, stop; otherwise go to step 2.

In step 2, since the range blocks located in the first row and first column do not have all four neighbors, the first-stage GA is not performed on these blocks. If the fractal code comes from the second stage, the absolute position is recorded. On the other hand, when the code comes from the first stage, the range block r_j is referred to as a "hit" block; it can be regarded as an acceptable local minimum. For a hit block, fewer bits are required to record the offset of the domain block instead of the 16-bit absolute position, which improves the compression ratio.

Let N_R and N_H denote the numbers of range blocks and hit blocks, respectively. For hit blocks, B_L + B_W bits are required to record the relative position, where B_L and B_W denote the numbers of bits for the length and width of the expanded neighborhood, respectively. For the N_R − N_H non-hit blocks, B_a bits are required to record the absolute position. Let B_k, B_p, and B_q denote the numbers of bits required to represent the orientation, the contrast, and the brightness, respectively. Then the bit rate bpp (bits per pixel) can be computed directly in terms of the number of hit blocks as

  bpp = [ N_H (1 + (2 + B_L + B_W) + B_p + B_q) + (N_R − N_H)(1 + B_a + B_k + B_p + B_q) ] / N_TP,

where N_TP is the total number of pixels in the image. Note that one bit is required to indicate whether the block is a hit block, and two further bits indicate which neighborhood is selected.
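Plugging the reported numbers into the bpp formula reproduces the rates of Table 3; for instance, for Lena with the 32 × 16 neighborhood (B_L = 5, B_W = 4) and 495 hit blocks. A sketch with our function and parameter names:

```python
def bits_per_pixel(n_hit, n_range=1024, n_pixels=256 * 256,
                   BL=5, BW=4, Bp=5, Bq=7, Ba=16, Bk=3):
    """Bit rate of the SC-GA code stream following the bpp formula:
    hit blocks store a 1-bit flag, 2 neighborhood bits, and an offset;
    non-hit blocks store the flag, absolute position, and orientation."""
    hit_bits = n_hit * (1 + (2 + BL + BW) + Bp + Bq)
    miss_bits = (n_range - n_hit) * (1 + Ba + Bk + Bp + Bq)
    return (hit_bits + miss_bits) / n_pixels

# 495 hit blocks, as reported for Lena with SC-GA (32 x 16):
print(round(bits_per_pixel(495), 4))  # 0.4396
```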

4. Experimental results

The images Lena, Baboon, F-16, and Pepper are tested to demonstrate the speedup rate, bit rate, and retrieved quality of the proposed SC-GA method. For a given image of size 256 × 256, the size of a range block is 8 × 8. Some generic GA parameters are used throughout this section: the population size of the second-stage GA is 160, and for both stages the crossover rate is 0.6 and the mutation rate is 0.02. The fitness for GA is defined as the reciprocal of the MSE. The software simulations are performed using BCB v. 6.0 on a Celeron 2.4 GHz Windows XP PC.

4.1. Scale of neighborhoods and population size in the first stage

The encoding efficiency is directly affected by the scale of the neighborhoods and the population size. In the first stage, the scale of the neighborhoods directly affects the bit rate, because only offset values instead of absolute positions of domain blocks are recorded. The quality of the retrieved images also depends on the size of the neighborhoods depicted in Fig. 2. Table 2 shows the results of this effect for the Lena image; here the number of generations is 15 and the threshold T is 50. In the third row of Table 2, 12 chromosomes are placed in each of the four neighborhoods, and every neighborhood is of

Table 2
PSNR for scale of neighborhoods and population size in the first stage; the tested image is Lena

Population size   Length (L)   Width (W)   PSNR
16 (4 × 4)        32           16          27.41
32 (8 × 4)        32           16          27.42
48 (12 × 4)       32           16          27.41
16 (4 × 4)        32           8           27.29
32 (8 × 4)        32           8           27.34
16 (4 × 4)        32           4           27.34
16 (4 × 4)        16           8           27.29
16 (4 × 4)        16           4           27.24
16 (4 × 4)        8            4           27.14


size 32 × 16. The resulting PSNR is 27.41 dB. Table 2 shows that a broader search space gives a higher probability of obtaining a better match, but for the various population sizes the retrieved qualities are almost the same.

Fig. 7. Number of hit blocks versus threshold T for Lena.

Fig. 8. Number of MSE computations versus threshold T for Lena.

Fig. 9. PSNR versus threshold T for Lena.


Fig. 10. PSNR versus number of generations for Lena.

4.2. Threshold value

The threshold T determines whether the second stage is invoked, so its value affects both the encoding time and the bit rate. The number of hit blocks, the number of MSE computations, and the PSNR versus the threshold T are shown in Figs. 7, 8, and 9, respectively, for Lena. The parameters of the first stage are selected from the first row of Table 2, and the number of iterations for this simulation is 25. When T is larger, the probability of finding the matched block in the first stage is higher; the probability of invoking the second stage is then reduced, so the number of hit blocks increases (Fig. 7) and the number of MSE computations decreases (Fig. 8). However, the PSNR is then worse (Fig. 9). A tradeoff therefore exists between the PSNR and the number of hit blocks, and likewise between the PSNR and the number of MSE computations. Moreover, the choice of

Fig. 11. The comparison of the PSNR versus the number of generations. (a) Lena, (b) Baboon, (c) F-16 and (d) Pepper.


T depends on the original image: if an image has a rough landscape, the proper value of T tends to be larger. Here, T = 50 is a good choice for Lena.

Fig. 12. The comparison of the encoding time versus the number of generations. (a) Lena, (b) Baboon, (c) F-16 and (d) Pepper.

Table 3
The comparison of full search, GA, and SC-GA methods

Image    Method            PSNR    Time (s)   Hit block   bpp      Speedup rate
Lena     Full search       28.91   3141.88    0           0.4844   1
         GA                27.44   23.00      0           0.4844   136.60
         SC-GA (16 × 4)    27.24   13.64      489         0.4179   230.34
         SC-GA (32 × 16)   27.41   13.61      495         0.4396   230.85
Baboon   Full search       20.15   3146.77    0           0.4844   1
         GA                19.74   23.61      0           0.4844   133.28
         SC-GA (16 × 4)    19.67   18.13      304         0.4490   173.76
         SC-GA (32 × 16)   19.71   18.19      303         0.4630   172.99
F-16     Full search       25.21   3154.20    0           0.4844   1
         GA                24.15   22.78      0           0.4844   138.46
         SC-GA (16 × 4)    23.04   13.94      470         0.4211   226.59
         SC-GA (32 × 16)   23.37   13.89      475         0.4420   227.08
Pepper   Full search       29.84   3151.00    0           0.4844   1
         GA                28.26   23.19      0           0.4844   135.88
         SC-GA (16 × 4)    28.13   16.02      384         0.4355   197.31
         SC-GA (32 × 16)   28.19   15.80      399         0.4513   199.43


Fig. 13. The comparison of the number of MSE computations versus the number of generations. (a) Lena, (b) Baboon, (c) F-16 and (d) Pepper.

4.3. Number of generations

The number of generations in the GA is one of the main factors affecting the encoding time and the quality. More generations improve the quality at the expense of encoding speed, though not proportionally. Fig. 10 shows the PSNR versus the number of generations for Lena. As in Section 4.2, the parameters in the first stage are selected from the first row of Table 2 and the MSE threshold value is set to 50. As indicated, the PSNR gradually saturates at about 15 generations.

Table 4
The detailed data for Lena

             SC-GA method                                       GA method
Generation   Time (s)  PSNR   MSE computations  Hit block      Time (s)  PSNR   MSE computations
0              1.6     25.49       104,976         464           2.0     25.32       163,840
1              2.3     26.06       174,464         477           3.6     25.87       283,522
2              3.1     26.17       243,027         486           5.1     26.15       404,288
3              4.0     26.54       315,409         484           6.5     26.31       524,971
4              4.8     26.50       381,882         492           8.0     26.60       645,867
5              5.7     26.76       450,460         495           9.4     26.84       767,457
6              6.5     26.75       523,344         492          10.9     26.84       889,256
7              7.3     26.95       594,852         491          12.3     27.02     1,010,066
8              8.0     27.00       655,654         500          13.7     27.03     1,131,073
9              8.8     27.06       722,284         503          15.1     27.14     1,252,830
10             9.6     27.11       796,289         499          16.4     27.21     1,373,403
11            10.3     27.10       863,363         501          17.8     27.23     1,494,145
12            11.1     27.15       937,766         498          19.2     27.29     1,615,361
13            11.9     27.17     1,007,314         498          20.5     27.32     1,736,021
14            12.5     27.25     1,062,144         506          21.7     27.37     1,854,526
15            13.6     27.41     1,151,408         495          23.0     27.44     1,975,111

4.4. Comparison of SC-GA and traditional GA methods

The traditional GA for fractal encoding is the method described in Section 3, in which only the second stage is adopted. To demonstrate the efficiency of the proposed SC-GA method in comparison with the GA method, further experiments are performed. Fig. 11 shows the PSNR versus the number of generations. The parameters in the first stage are selected from the first row of Table 2 and the number of generations is 15. The MSE threshold values are set to 50, 300, 30, and 30 for the images Lena, Baboon, F-16, and Pepper, respectively. The value for Baboon is chosen as 300 because the image is very rough. The retrieved quality of these two methods is almost the same except for F-16. For

Fig. 14. (a) Original image, Lena of size 256 × 256. (b) Initial image, Baboon of size 256 × 256. (c) Full search method, time used = 3141.88 s, PSNR = 28.91, bit rate = 0.4844. (d) GA method, time used = 23.00 s, PSNR = 27.44, bit rate = 0.4844. (e) SC-GA (16 × 4) method, time used = 13.64 s, PSNR = 27.24, bit rate = 0.4179. (f) SC-GA (32 × 16) method, time used = 13.61 s, PSNR = 27.41, bit rate = 0.4396.


the result of Fig. 11(c), these two methods exhibit rough behavior when the number of generations is less than 15, but when the number of generations is sufficiently large, the two curves stabilize and coincide. Fig. 12 shows the encoding time versus the number of generations. As observed from Figs. 11 and 12, the SC-GA method is faster than the traditional GA method, while the quality is almost the same. Table 3 lists the results of Figs. 11 and 12 at the 15th generation, in which the neighborhood of size 16 × 4 is also included. For Lena, the PSNRs of GA and SC-GA (32 × 16) are almost the same, with only 1.5 dB decay in comparison with the full search method. As for the speedup rate, GA and SC-GA are 136.60 and 230.85 times faster, respectively, than the full search method. As for the compression ratio, the bit rate of SC-GA is improved by 9.25% since there are 495 hit blocks. For SC-GA (16 × 4), the speedup rate is almost the same as that of SC-GA (32 × 16), but the bit rate is further improved, by 13.73%, with only a little quality decay. On average, Table 3 shows that SC-GA is about 1.5 times faster than the GA method. Moreover, SC-GA is about 200 times faster than the full search method. For the compression ratio, the bit rate is improved by 7.31–13.73% for the 16 × 4 scale. Fig. 13 shows the number of MSE computations versus the number of generations. As indicated, the computational complexity of the SC-GA method is lower and therefore the speedup is achieved. The detailed data of Figs. 11–13 for Lena are listed in Table 4. In the last row of Table 4, the PSNRs of the two methods are almost the same, but for the SC-GA method, the encoding time and the number of MSE computations are 59.1% and 58.3% of those of the GA method, respectively. The retrieved images produced by the SC-GA, GA, and full search methods are shown in Fig. 14. Fig. 14(a) is the original Lena image and Fig. 14(b) is the initial image used for fractal decoding. Fig. 14(c) and (d) are the retrieved images obtained by the full search method and the GA method, respectively. Fig. 14(e) and (f) are the retrieved images obtained by the SC-GA method, with the scale of the neighborhoods in the first stage being 16 × 4 and 32 × 16, respectively.
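The quality figures throughout this section are PSNR values in dB. The paper does not spell out its formula; the sketch below assumes the standard definition for 8-bit images, PSNR = 10 log10(255^2 / MSE), which is the usual convention in the image compression literature:

```python
import numpy as np

def psnr(original, retrieved, peak=255.0):
    """Standard PSNR in dB for 8-bit images; higher means better fidelity.
    Assumes the conventional 255^2/MSE definition, not necessarily the
    authors' exact implementation."""
    diff = original.astype(np.float64) - retrieved.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a synthetic 256 x 256 "image" degraded with mild noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```

Under this convention a 1.5 dB gap, such as that between the full search and SC-GA (32 × 16) results for Lena, corresponds to roughly a 40% increase in mean squared error.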

5. Conclusion

In this paper, based on the self-similarity characteristic of natural images, the spatial correlations in both the domain pool and the range pool are taken into account to speed up fractal image compression. The proposed SC-GA method has two stages. The first stage exploits local optima by making use of the spatial correlations between neighboring blocks, and the second stage explores further similarities over the whole image if the local optima are not satisfactory. Since the search space in the first stage is much smaller and about half of the local optima are satisfactory, the desired speedup is achieved. Moreover, since only the relative positions, rather than the absolute positions, of these hit range blocks need to be recorded, the compression ratio is also improved. In comparison with the GA method, the SC-GA method requires about two-thirds of the encoding time and achieves a higher compression ratio, while the quality of the retrieved image is almost the same.
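The two-stage decision summarized above can be sketched per range block as follows. This is an illustrative toy, not the authors' code: the names are hypothetical, stage 1 is shown as an exhaustive local search over the neighborhood, and stage 2 is a simple elitist resampling loop standing in for the full GA with crossover and mutation described in the paper:

```python
import random

def encode_range_block(range_pos, domain_pool, neighborhood, mse, threshold,
                       generations=15):
    """Hypothetical sketch of the SC-GA two-stage search for one range block.

    neighborhood(range_pos) returns the spatially correlated candidate set;
    mse(range_pos, d) is the matching error against domain block d.
    """
    # Stage 1: search only the spatially correlated neighborhood. A hit can
    # be stored as a small relative offset, which needs fewer bits.
    local = [d for d in domain_pool if d in neighborhood(range_pos)]
    best = min(local, key=lambda d: mse(range_pos, d), default=None)
    if best is not None and mse(range_pos, best) < threshold:
        return ('relative', best)

    # Stage 2: explore the whole domain pool when the local optimum is not
    # satisfactory (elitist random resampling as a stand-in for the GA).
    k = min(len(domain_pool), 32)
    population = random.sample(domain_pool, k=k)
    for _ in range(generations):
        population.sort(key=lambda d: mse(range_pos, d))
        population = population[:k // 2] + random.sample(
            domain_pool, k=min(len(domain_pool), k // 2))
    best = min(population, key=lambda d: mse(range_pos, d))
    return ('absolute', best)   # full coordinates must be recorded
```

The returned tag mirrors the bit-rate argument above: 'relative' matches cost fewer bits to encode than 'absolute' ones, so the more range blocks are resolved in stage 1, the lower the bpp.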

References

[1] Barnsley MF, Demko SG. Iterated function systems and the global construction of fractals. Proc Roy Soc London 1985;399:243–75.
[2] Barnsley MF. Fractals everywhere. New York: Academic Press; 1988.
[3] Barnsley MF, Elton JH, Hardin DP. Recurrent iterated function systems. Construct Approx 1989;5:3–31.
[4] Jacquin AE. Image coding based on a fractal theory of iterated contractive image transformations. IEEE Trans Image Process 1992;1:18–30.
[5] Fisher Y. Fractal image compression: theory and application. Berlin: Springer-Verlag; 1995.
[6] Wang Z, Zhang D, Yu Y. Hybrid image coding based on partial fractal mapping. Signal Process Image Commun 2000;15:767–79.
[7] Holland JH. Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press; 1975.
[8] Goldberg DE. Genetic algorithms in search, optimization, and machine learning. Massachusetts: Addison-Wesley; 1989.
[9] Vences L, Rudomin I. Genetic algorithms for fractal image and image sequence compression. Proc Comput Visual 1997:35–44.
[10] Mitra SK, Murthy CA, Kundu MK. Technique for fractal image compression using genetic algorithm. IEEE Trans Image Process 1998;7(4):586–93.
[11] Tsai JC, Hsieh CH. Predictive vector quantization for image compression. Electron Lett 1998;32:2325–6.
