
Information Sciences 250 (2013) 82–112


Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation

Hao Gao a,b, Sam Kwong b,*, Jijiang Yang c, Jingjing Cao b

a College of Automation, Nanjing University of Posts and Telecommunications, Nanjing, China
b Department of Computer Science, City University of Hong Kong, Hong Kong
c Research Institute of Information Technology, Tsinghua University, Beijing, China

Abstract

Particle swarm optimization (PSO) simulates social behavior among individuals (or particles) "flying" through a multidimensional search space. To enhance the local search ability of PSO and guide the search, a region containing the largest number of particles was defined and analyzed in detail. Inspired by ecological behavior, we present a PSO algorithm with an intermediate disturbance searching strategy (IDPSO), which enhances the global search ability of particles and increases their convergence rate. Experimental comparisons of IDPSO with ten known PSO variants on 16 benchmark problems demonstrate the effectiveness of the proposed algorithm. Furthermore, we apply IDPSO to the multilevel image segmentation problem to shorten the computational time. Experimental results on a variety of images show that the new algorithm segments images effectively and faster. © 2013 Elsevier Inc. All rights reserved.

Article history:
Received 27 December 2012
Received in revised form 25 June 2013
Accepted 1 July 2013
Available online 11 July 2013

Keywords:
Particle swarm optimization
Image segmentation
Intermediate disturbance strategy
Partial derivative theory
Monte Carlo method

1. Introduction

Optimization forms an important part of our daily life. Many parameters of scientific, social, economic, and engineering problems can be adjusted to produce a more desirable outcome. Optimization is a process of finding the best possible solution to a problem within a reasonable time limit [47,44]. As an important branch of optimization methodology, nature-inspired computation has attracted much attention in the past decades. Nature serves as a fertile source of concepts, principles, and mechanisms for designing optimization systems to deal with complex computational problems. Among them, the most successful algorithms are evolutionary algorithms (EAs), which simulate the evolution of individuals via the processes of selection, mutation, and reproduction [59,18,23,40,24,43]. More recently, a new variant of EA, known as particle swarm optimization (PSO) [29], which is inspired by the emergent motion of a flock of birds searching for food, has been developed. As a recursive algorithm, PSO simulates social behavior among individuals (or particles) "flying" through a multidimensional search space, where each particle represents a point at the intersection of all search dimensions. PSO has become one of the most popular techniques for solving optimization problems in systems such as power systems [32,54], artificial neural network training [21,57], fuzzy system control [8,26], and others [34,51,61].

* Corresponding author.
E-mail address: cssamk@cityu.edu.hk (S. Kwong).
0020-0255/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.ins.2013.07.005


Accelerating convergence and avoiding local optima are the two most important and appealing goals in PSO research. As mentioned in previous studies [14,62], with the components pbest and gbest, contributed respectively by the best solutions achieved by an individual particle and by all particles, the PSO algorithm has a faster and more stable convergence rate than other EAs. Thus, identifying a competitive region that collects the experience of an individual particle and of all particles can enhance both the local search ability and the convergence rate. Another problem in PSO is premature convergence, which means that the global optimum in the search space may not be found [63,36]. This weakness has restricted PSO from wider applications. A number of PSO variants have been proposed to help particles escape from local optima [5,45,55,30], e.g., new update rules and mutation operators. Kennedy and Mendes [30] investigated the effects of various population topologies on the particle swarm algorithm and presented a von Neumann topology, which performs more consistently. Liang et al. [36] designed a novel learning strategy in which the historical best information of all particles is used to update a particle's velocity, ensuring that the diversity of the swarm is preserved to discourage premature convergence. More recently, approaches for balancing global exploration (global search) and local exploitation (local search) have attracted many PSO researchers. One significant improvement is the introduction of the inertia weight [49], which allows a powerful global search in the early stage and a more precise search in the later stage of the iteration process. However, as in the original PSO, the particles in [49] cannot jump out of local optima, especially in the last stage of iteration, because the inertia weight of the improved algorithm becomes very small by then.
As the Gaussian mutation in EAs is promising for fine-grained search, while chaotic mutation may give particles more opportunities to jump out of local optima, Leandro [33] presented a combined optimization method that incorporated chaotic mapping and a Gaussian distribution into PSO to improve global and local convergence, respectively. Because the search space of a mutated particle in [33] is restricted to the local area of pbest and gbest, the effect of the mutation operator is limited. Sun et al. [50,52] proposed a quantum-behaved PSO (QPSO). Its Delta potential well model makes it easier for particles to escape from local optima [52]. They also used a contraction-expansion coefficient to balance local and global search during the optimization process. The exploration ability of QPSO depends mainly on the Delta potential well model; since the range of values generated by this model is limited, it is not very powerful, especially in the last stage of iteration. By selecting an appropriate dilation parameter and mutation probability, Ling et al. [37] presented an improved PSO (HWPSO) with a dynamic mutating space obtained by incorporating a wavelet function. In the early stage of the search, the solution space is explored with a larger mutating space; in the later stage, the properties of the wavelet lead to a smaller mutating space, so a precise solution is more likely to be fine-tuned. Because the wavelet mutation generates only small values in the later stage of iteration, HWPSO is unlikely to jump out of a local optimum then. Park et al. [42] proposed an improved PSO framework (CCPSO) that employs chaotic sequences combined with the traditional linearly decreasing inertia weights.
As the chaotic inertia weight approach can cover the whole weight domain under the decreasing line in a chaotic manner, the exploration and exploitation capabilities of the particles are balanced [42]. Furthermore, by using a crossover operator that gives particles more opportunity to learn from pbest, CCPSO achieves more favorable results on unimodal functions than the standard PSO algorithm. Gao et al. [16] proposed a PSO algorithm with a moderate-random-search strategy (MRPSO). MRPSO provides a greater chance of searching in the local area and gives the particles a moderate probability of generating long jumps; this algorithm achieves a better balance between global and local search than the traditional PSO algorithm. Based on the search behaviors and population distribution characteristics of PSO, a real-time evolutionary state estimation procedure [60] was proposed to identify one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. It enables automatic control of the algorithmic parameters (e.g., inertia weight and acceleration coefficients) to accelerate convergence and enhance global search ability simultaneously. Using a slightly different social metaphor from the original PSO, a naive PSO (NPSO) was proposed in a previous study [25]. Each particle learns from a better particle and takes warning from a worse one in the swarm [25]. This metaphor helps explore more regions of the search space in the early stage and exploit more precise regions in the later stages. Furthermore, the goal of striking a better balance between global exploration and local exploitation also attracts EA researchers [22,24,27,28,35,43]. In the artificial bee colony (ABC) algorithm [28], a greedy selection scheme is applied between a new solution and the old one, and the better one is kept in the population.
In this way, the information of a good member of the population is distributed among the other members. The ABC algorithm also has a scout phase, which provides diversity by allowing new, random solutions to be inserted into the population. ABC shows a more powerful global search ability than the traditional PSO and has been successfully applied in real time [2]. Self-adaptive control has been a popular strategy for improving the differential evolution (DE) algorithm. Adaptive rules incorporate some form of feedback from the search procedure to guide the parameter adaptation. Qin et al. [43] proposed a self-adaptive DE (SaDE) algorithm, in which both the trial vector generation strategies and their associated parameters are gradually self-adapted by learning. Consequently, compared with conventional DE and several state-of-the-art parameter-adaptive DE variants, the solutions obtained by SaDE are more stable, with a smaller standard deviation, and have a higher success rate. Unlike PSO algorithms, which use both pbest and gbest to guide the evolutionary search, the SaDE algorithm uses only gbest to guide the evolutionary direction of individuals; thus, its exploitation ability is limited. A group search optimization (GSO) [22] employs a producer-scrounger model as its framework. A number of group members are selected as scroungers and producers, who have ample opportunities to search in the local area. The rest of the group members perform random walks to maintain the diversity of the GSO. Compared with GA and PSO algorithms, GSO shows a better


search ability for multimodal functions. However, since GSO does not spread the experience of the swarm, it shows a slower convergence rate than PSO. Based on a theoretical model that depicts the collaboration between global and local search in memetic computation, the quasi-basin class (QBC), which categorizes problems according to the distribution of their local search zones, was adopted [35]. A sub-threshold seeker, taken as a representative archetype of memetic algorithms, was then analyzed on various QBCs to develop a general model for memetic algorithms. The Rosenbrock ABC (RABC) algorithm, which combines Rosenbrock's rotational direction method with the ABC algorithm, was proposed for accurate numerical optimization [27]. RABC alternates between two phases: an exploration phase realized by ABC and an exploitation phase completed by the rotational direction method. The results show that the algorithm is promising in terms of convergence speed, success rate, and accuracy. The intermediate disturbance hypothesis (IDH), first proposed by Grime in 1974 [19], predicts that diversity is maintained because both competitive and opportunistic species coexist at intermediate levels of disturbance. In an ecosystem, disturbances disrupt stable communities and clear species' habitat. As a result, disturbances lead species to newly cleared areas. Once an area is cleared, there is a progressive increase in species richness, and competition starts again. Once disturbance is removed, species richness decreases as competitive exclusion increases. The IDH states that species diversity is maximized when ecological disturbance is neither too rare nor too frequent. In this paper, inspired by the competition and diversity in the IDH, we propose a novel particle swarm algorithm, called intermediate disturbance PSO (IDPSO), aimed primarily at accelerating convergence and avoiding local optima.
Under this framework, concepts and strategies of resource searching from the IDH are adopted metaphorically to design optimum searching strategies. First, by analyzing the limitations of previous works, we identify a new and promising search region in PSO that preserves the experience of the particles in the swarm and keeps it competitive. Second, to overcome premature convergence, we introduce a new operator, called the intermediate disturbance operator, into PSO. Every particle in the improved PSO not only focuses on searching in the promising region but also has more opportunities to jump out of the local search area; thus, the diversity and competition of particles are balanced in IDPSO. Third, since IDPSO is an improved version of PSO, other PSO strategies can also be added to it (e.g., adaptive, topology, and mutation strategies). Finally, we apply IDPSO to image segmentation to solve multilevel threshold selection problems. The IDPSO-based method is more effective than other EA-based methods, and it shortens the computation time of traditional threshold-based segmentation methods. This paper introduces PSO in Section 2, followed by the IDPSO and the details of its implementation in Section 3. Experimental studies of the proposed IDPSO are presented in Section 4, and its application to image segmentation in Section 5. Finally, Section 6 concludes the paper.

2. Particle swarm optimization

PSO works by "flying" a population of cooperating potential solutions, called particles, through a problem's solution space. Each particle in PSO has a position and a velocity, and it is evaluated with the objective function of the optimization problem, whose variables are the particle's position dimensions. The particle update rule aims to move particles to better positions by accelerating them toward pbest and gbest.
The basic PSO algorithm can be described as follows:

v_id(t+1) = w·v_id(t) + c1·rand1·(p_pid - x_id(t)) + c2·rand2·(p_gd - x_id(t))   (1)

x_id(t+1) = x_id(t) + v_id(t+1)   (2)
where c1 and c2 are positive constants representing the acceleration coefficients, and rand1 and rand2 are two random variables within [0, 1]. v_id is the velocity of individual i on dimension d, x_id is i's current position on dimension d, p_pid is the location of the best solution found by i, p_gd is the location of the best particle among all particles in the population on dimension d, and w is the inertia weight that ensures convergence of the PSO algorithm. Generally, a maximum velocity (Vmax) is defined for each component of a particle's velocity vector to control excessive roaming of particles outside the user-defined search space: whenever v_id exceeds the defined limit, it is set to Vmax.

3. Intermediate disturbance particle swarm optimization

To enhance the global search ability and improve the convergence rate of PSO, we present a new PSO algorithm that incorporates the intermediate disturbance strategy (IDS). Optimization, a process of seeking the global optimum in a search space, is analogous to the resource-searching process of species in nature. The mechanics of PSO can also be viewed as competition among species (e.g., birds and fishes). Two or more species (particles in PSO) live in the same environment and retrieve their nutrients (the global optimum) from a common space (the solution space). Because resources are limited, all species living in this space must compete (search for the global optimum). If one species obtains more resources (a local optimum), then, by the theory of survival of the fittest, it excludes other species living in the same space and limits the diversity of the ecosystem. During evolution, by the principle of competitive exclusion, all surviving species end up with very similar properties (premature convergence). In contrast to competition theory, however, real ecosystems contain various species (population diversity).
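Before introducing the disturbance mechanism, the baseline update of Eqs. (1) and (2) can be sketched as follows. This is a minimal Python sketch with names of our own choosing; it uses the convergence settings w = 0.7, c1 = c2 = 1.4 that appear later in this section, and interprets "set to Vmax" as a sign-preserving clamp, which is an assumption.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.4, c2=1.4, vmax=4.0):
    """One velocity/position update per Eqs. (1)-(2), with Vmax clamping.

    x, v, pbest are lists of length n for a single particle; gbest is the
    swarm's best position. The clamp to [-vmax, vmax] is our reading of
    the Vmax rule described in the text.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])
              + c2 * random.random() * (gbest[d] - x[d]))
        vd = max(-vmax, min(vmax, vd))  # limit excessive roaming
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Each dimension draws its own rand1 and rand2, so the pull toward pbest and gbest varies per dimension, which is the standard PSO behavior assumed here.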


The apparent contradiction between competitive exclusion and the species richness found in nature has been a long-standing enigma [9,11]. The IDH was first introduced by Grime [19] and proposed again by Connell in 1978 [12]. It is one of the most frequently suggested non-equilibrium explanations for the maintenance of species diversity in ecological communities [4,15,38]. It predicts a peak in species richness at intermediate intensities and frequencies of disturbance, where a disturbance is an event that produces environmental variability. This theory effectively explains how species diversity is maintained near equilibrium in ecological systems [13,46]. According to the IDH, at low levels of disturbance, more competitive organisms push subordinate species to extinction (premature convergence) and dominate the ecosystem. At high levels of disturbance, due to frequent forest fires or human activity such as deforestation, all species are at risk of extinction. At intermediate levels of disturbance, therefore, both competitive and opportunistic species can coexist. Under the IDH, concepts and strategies for balancing competition and diversity are adopted metaphorically to design optimum searching strategies. In PSO, as proved in previous studies [16,62], each particle converges to its one and only local attractor and is unlikely to jump out of the local region defined by that attractor, especially in the last stage of iteration. Thus, our motivation for introducing the IDH into PSO was to let particles focus on searching in the local region (species competitive exclusion in an ecosystem) while also giving each particle more opportunities to jump out of the local search area (species diversity in an ecosystem). To design an efficient IDH structure, two issues need to be addressed. First, a promising region that contains competitive individuals of the swarm and serves as the main search direction of the particles should be defined.
Second, an intermediate disturbance mechanism should be added to the traditional PSO algorithm. By setting the promising region as the moving direction of the particles, the intermediate disturbance mechanism proposed in our algorithm not only enables most particles to search precisely within this region (species follow the evolutionary direction in an ecological system) but also allows some particles to perform long jumps to escape from it (preserving diversity in an ecological system). Thereby, the new PSO algorithm achieves a balance between exploration and exploitation. In PSO, the direction of an individual's and the population's evolution is conducted by pbest and gbest, respectively. Because of the convergence property, the optima lie in the region visited by the most particles, as defined by pbest and gbest. To improve the convergence rate of PSO, we identify the local region containing the most particles as the main evolutionary direction; the particles in the proposed algorithm then focus their search on this particular region to preserve competition. Clerc and Kennedy [10] presented a complete theoretical analysis of PSO and demonstrated that each particle in the PSO system converges to its one and only local attractor Q = (Q1, Q2, ..., Qn). They also used Q as the main search direction of the improved PSO.

Q_d = (c1·rand1·p_pid + c2·rand2·p_gd) / (c1·rand1 + c2·rand2),  d = 1, ..., n   (3)
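The attractor of Eq. (3) can be computed per dimension as a randomly weighted average of p_pid and p_gd. The sketch below is illustrative; the function name and the guard against the excluded point rand1 = rand2 = 0 are ours.

```python
import random

def attractor(pbest, gbest, c1=1.4, c2=1.4):
    """Eq. (3): the per-dimension local attractor Q of one particle.

    Each Q_d is a stochastic weighted average of pbest_d and gbest_d,
    with weights c1*rand1 and c2*rand2 redrawn for every dimension.
    """
    q = []
    for pd, gd in zip(pbest, gbest):
        w1, w2 = c1 * random.random(), c2 * random.random()
        # The point rand1 = rand2 = 0 is excluded from the domain
        # in the analysis that follows, so resample it away.
        while w1 + w2 == 0.0:
            w1, w2 = c1 * random.random(), c2 * random.random()
        q.append((w1 * pd + w2 * gd) / (w1 + w2))
    return q
```

Since each Q_d is a convex combination of pbest_d and gbest_d, it always lies between them, which is exactly the interval [a, b] derived below.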

To find a promising region to serve as the main search direction, we first use the Partial Derivative Theory to obtain the interval of Q_d, which defines the search space of Q_d. Then, using the Monte Carlo method, we record the frequency with which particles search in this interval during iteration. Finally, inspired by Q_d, we introduce a new region P and compare it with Q_d to determine which is more effective as the main search direction. Since c1 = c2 in the original PSO, Q_d can also be written as follows:

Q_d = (rand1·p_pid + rand2·p_gd) / (rand1 + rand2)

As the Partial Derivative Theory is suitable for investigating the effect of one independent variable on others, we first use it to obtain the interval of Q_d. In addition, we test the frequency with which particles are located in this interval during iteration. From this, we infer whether Q_d is effective as the main moving direction.

Let x = rand1, y = rand2, p = p_pid, g = p_gd, and z = Q_d. Then

z = (px + gy) / (x + y)   (4)

in which x ∈ [0, 1] and y ∈ [0, 1], except the point (x = 0, y = 0).

Lemma 1. The global extreme values of a function f on a domain A occur only at boundaries, non-differentiable points, and stationary points. If x0 is a global extreme value of f, then one of the following is true:

(1) Boundary: x0 is on the boundary of A.
(2) Non-differentiable: f is not differentiable at x0.
(3) Stationary point: x0 is a stationary point of f.

To find the stationary points of z:


Definition 1. The point (x*, y*) is a stationary point of f(x, y) provided that one of the following is true:

(1) ∇f(x*, y*) = 0, i.e., f_x(x*, y*) = 0 and f_y(x*, y*) = 0.
(2) f_x(x*, y*) or f_y(x*, y*) does not exist.

To find the stationary points of z, we need all first-order partial derivatives of (4). These are as follows:

f_x = [p(x + y) - (px + gy)] / (x + y)² = (p - g)y / (x + y)²,
f_y = [g(x + y) - (px + gy)] / (x + y)² = (g - p)x / (x + y)²   (5)

Stationary points are solutions of the following system of equations:

f_x = (p - g)y / (x + y)² = 0  and  f_y = (g - p)x / (x + y)² = 0.

It can be observed that no point in the domain of z satisfies these two equations. The point (x = 0, y = 0), which meets condition (2) of Definition 1, is not in the domain of z. Therefore, z has no stationary point.

To find the non-differentiable points of z:

Lemma 2. Consider a function f(x, y). Assume that its partial derivatives f_x(x, y) and f_y(x, y) exist at (x0, y0) and are both continuous at (x0, y0). Then f(x, y) is differentiable at (x0, y0).

As shown in (5), both f_x and f_y exist and are continuous in the domain of z; according to Lemma 2, z is therefore differentiable throughout its domain. Finally, it is trivial to obtain the two boundaries of z: x = 0, y ∈ (0, 1], and x ∈ (0, 1], y = 0. For x = 0, y ∈ (0, 1], z = g, and for x ∈ (0, 1], y = 0, z = p. In this paper, we set [a = min(p, g), b = max(p, g)]. Recalling Lemma 1, the extreme values of Q_d are a and b, respectively; thus Q_d belongs to [a, b]. Since a and b are determined by pbest_i and gbest, both of which are updated by PSO's random mechanism, it is extremely inconvenient to calculate the values of a and b directly during iteration.
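The conclusion that Q_d ∈ [a, b] is easy to check numerically, in the same Monte Carlo spirit as the experiment described next. The values of p and g below are arbitrary stand-ins of our own choosing.

```python
import random

def z(p, g):
    """Eq. (4): z = (px + gy)/(x + y), x, y ~ U[0, 1), (0, 0) excluded."""
    x, y = random.random(), random.random()
    while x + y == 0.0:
        x, y = random.random(), random.random()
    return (p * x + g * y) / (x + y)

# Monte Carlo check: z never leaves [a, b] = [min(p, g), max(p, g)].
p, g = -3.0, 7.5                      # illustrative pbest/gbest components
a, b = min(p, g), max(p, g)
samples = [z(p, g) for _ in range(100_000)]
assert all(a - 1e-9 <= s <= b + 1e-9 for s in samples)
```

Because z is a convex combination of p and g, every sample falls inside [a, b] (up to floating-point tolerance), matching the boundary analysis above.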

Fig. 1. The distribution of Q with 500 particles on 10-dim Sphere and Rastrigin functions.

Fig. 2. The distribution of P with 500 particles on 10-dim Sphere and Rastrigin functions.


As described in a previous study [6], for problems that are too complex for direct analysis, the Monte Carlo method is an effective way to obtain precise results from a large number of trials. The Monte Carlo method is a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. It is often used in physical and mathematical problems and is most suitable when a deterministic algorithm is infeasible. The Monte Carlo method is mainly applied to three distinct classes of problems: optimization, numerical integration, and generation of samples from a probability distribution, as evidenced by several successful applications in the literature [20,31]. To test the degree to which particles belong to [a, b] during iteration, following Monte Carlo theory, we recorded the positions of 500 particles on the 10-dim Sphere and Rastrigin functions, with the maximum number of iterations set to 4000. This example is shown in Fig. 1; the parameters c1, c2, and w are initialized as 1.4, 1.4, and 0.7, which guarantees the convergence behavior of the PSO algorithm [62]. In this experiment, we split the solution space into the interval [a = min(p, g), b = max(p, g)] and the remaining space, and recorded the total number of x_id located in each during iteration. According to Fig. 1, 36% and 53% of the particles lie within [a, b] on the Sphere and Rastrigin functions, respectively, which shows that Q is a promising search direction for the particles during iteration. However, 64% and 47% of the particles still lie outside [a, b] on the Sphere and Rastrigin functions. Furthermore, according to (3), when pbest_i equals gbest, Q also equals gbest; in this case, Q defines not a region but only a point, which contributes very little to enhancing the local search ability of the particles. Therefore, we infer that setting Q as the main moving direction would confine the local search space of the particles. Locating a more favorable local region should thus improve the local search ability of PSO. In this paper, we propose to use P, which is similar but not equal to Q, as the main moving direction of the particles. It is calculated by Eq. (6).

p_id = gbest_d + rand·(pbest_id - gbest_d) + normrnd·(gbest_d - round(gbest_d))   (6)

The main difference between P and Q is that the former has a disturbance term. Since the area near the current gbest is the most promising and competitive search area, we set it as the disturbance item in P; its formula is as follows:

normrnd·(gbest_d - round(gbest_d))

normrnd is a random value within (-1, 1) drawn from a standard Gaussian distribution, and round() rounds its argument to the nearest integer. According to (6), P belongs to [a′ = a - c, b′ = b + c], in which c = abs(gbest - round(gbest)) and the maximum absolute value of c equals 1. We used Monte Carlo theory to test the degree to which the particles belong to [a′, b′] during iteration. In this experiment, we split the solution space into the intervals [a, b], [b < x < b′], and [a′ < x < a], and the remaining space. According to Fig. 2, 49% and 15% of the particles fall within the [b < x < b′] or [a′ < x < a] intervals on the Sphere and Rastrigin functions, respectively. This illustrates that during iteration many particles are located in this area outside [a, b]; the disturbance item in P therefore defines a promising search region of particles near [a, b]. We also note that in total 85% and 67% of the particles lie in [a′, b′] on the Sphere and Rastrigin functions, respectively. P enlarges the local search area of the particles and provides more opportunities for them to search in the area near Q, which demonstrates that P is more suitable than Q as the main moving direction of particles in PSO systems. Furthermore, even if all pbest converge to gbest, the disturbance item in P continues to give the particles momentum to search, whereas Q stops at gbest. Simulations were carried out on the benchmark problems given in Section 4.6 (all benchmark problems are discussed in Section 4) to demonstrate the role of P. So far, we have presented a new mechanism that not only enhances the global search ability of particles but also confines most particles to search in the local area defined by P. PSO suffers premature convergence because particles have no ability to jump out of a local optimum [63,16]. According to the IDH theory, if we occasionally give

Fig. 3. The probability density distribution of β.


Fig. 4. Pseudo code for the IDPSO algorithm.

particles some disturbances during iteration, enabling them to have more opportunities to search in the global area, it should enhance the global search ability of PSO. Furthermore, letting the particles focus their search on the area near P is an effective way to maintain competition among individuals. In view of the above, we use the following equations:

x_id(t+1) = p_id(t) + α·abs(β)·(mbest_d - x_id(t))   (7)

β = normrnd(0, c)/rand   (8)

mbest = Σ_{i=1}^{popsize} pbest_i / popsize   (9)

The first term on the right side of (7), p_id, is the direction of individual i on dimension d. The second term on the right side of (7) defines the step size of the particle. To overcome the premature convergence of PSO and accelerate the convergence of the particles, this term should not only give particles chances to search the global solution space but also let them search in the local space around p_id. We therefore designed an intermediate disturbance operator, β, whose value is set

Table 1
Benchmark test functions. For each function f1-f16, the table lists its formula, search range, Xmax, global minimum fmin, and minimizer X*. The set includes the Sphere function f1 = Σ x_i² (range [-100, 100]), the Rastrigin function f7 = Σ (x_i² - 10 cos(2πx_i) + 10) ([-5.12, 5.12]), and the Ackley function f8 ([-32, 32]), among others; most functions have fmin = 0 at X* = 0.

Table 2
Comparison between different PSO algorithms on f1 and f2 (Category 1). Entries are mean (standard deviation); the '+' marks follow the original table.

f1, Dim = 10, FEs = 20000: PSO 1.17E-20 (6.3E-20); QPSO 9.97E-41 (4.2E-40); MRPSO 5.34E-72 (3.2E-71); HWPSO 9.77E-15 (2.2E-14); CCPSO 1.17E-59 (7.2E-59); NPSO 2.9E-21 (8.8E-21); GCPSO 5.76E-36 (3.9E-35); CLPSO 1.82E-10 (2.03E-10); ISPSO 1.24E-14 (2.6E-14); IDPSO 3.2E-124+ (2E-123).
f1, Dim = 20, FEs = 40000: PSO 1.83E-16 (5.03E-16); QPSO 4.73E-27 (2.57E-26); MRPSO 1.43E-72 (6.43E-72); HWPSO 1.88E-11 (5.63E-11); CCPSO 2.15E-52 (5.58E-52); NPSO 3.9E-12 (2.72E-11); GCPSO 1.009E-16 (5.57E-16); CLPSO 8.59E-10 (5.69E-10); ISPSO 9.32E-12 (3.27E-11); IDPSO 4.12E-153+ (2.289E-152).
f1, Dim = 30, FEs = 60000: PSO 1.58E-13 (4.17E-13); QPSO 3.5E-21 (8.6E-21); MRPSO 5.7E-69 (2.3E-68); HWPSO 2.04E-9 (2.7E-9); CCPSO 3.3E-45 (1.3E-44); NPSO 9.9E-10 (6.2E-9); GCPSO 2.57E-10 (1.3E-9); CLPSO 1.09E-9 (6.6E-10); ISPSO 3.66E-9 (1.1E-8); IDPSO 2.2E-161+ (9.6E-161).
f1, Dim = 40, FEs = 80000: PSO 2.8E-11 (6.8E-11); QPSO 6.5E-17 (1.5E-16); MRPSO 4.96E-66 (2.6E-65); HWPSO 6.22E-8 (1.02E-7); CCPSO 1.04E-39 (3.38E-39); NPSO 4.68E-4 (0.0033); GCPSO 4.18E-6 (2.35E-5); CLPSO 1.18E-9 (5.66E-10); ISPSO 2.36E-7 (3.5E-7); IDPSO 7E-160+ (4.9E-159).
f2, Dim = 10, FEs = 20000: PSO 6.7E-5 (1.8E-4); QPSO 2.81E-4 (8.3E-4); MRPSO 0.003 (0.0043); HWPSO 0.0026 (0.0042); CCPSO 2.5E-8 (6.6E-8); NPSO 4.6E-5 (1.59E-4); GCPSO 2.9E-11 (1.01E-10); CLPSO 20.14 (11.81); ISPSO 4.9E-4 (0.001); IDPSO 1.71E-45+ (9.3E-45).
f2, Dim = 20, FEs = 40000: PSO 6.8287 (7.0073); QPSO 3.9913 (3.5116); MRPSO 0.1575 (0.1897); HWPSO 5.4435 (4.153); CCPSO 0.1851 (0.174); NPSO 0.1459 (0.1405); GCPSO 0.0105 (0.0255); CLPSO 608.8822 (163.8008); ISPSO 19.902 (16.7941); IDPSO 1.69E-21+ (1.18E-20).
f2, Dim = 30, FEs = 60000: PSO 347.16 (228.77); QPSO 206.15 (97.93); MRPSO 1.5998 (1.167); HWPSO 97.127 (47.88); CCPSO 41.58 (68.45); NPSO 6.281 (5.572); GCPSO 6.004 (9.48); CLPSO 2.87E3 (744.3); ISPSO 593.104 (329.63); IDPSO 1.0992+ (0.937).
f2, Dim = 40, FEs = 80000: PSO 2.17E3 (924.15); QPSO 1.44E3 (646.77); MRPSO 12.387 (7.0692); HWPSO 503.4477 (192.29); CCPSO 653.937 (624.79); NPSO 50.545 (44.92); GCPSO 55.7323 (46.16); CLPSO 7.31E3 (1.63E3); ISPSO 2.86E3 (1.5E3); IDPSO 9.046+ (2.834).
to b = normrnd(0, c)/rand. To estimate the step size of particles, we use the Monte Carlo method to estimate the density distribution of b. The value of c in b is set to 0.35, which is shown to be the best choice of c in Section 4.5. From the 100,000 trials shown in Fig. 3, over 90% of the absolute values of b are found to be <2; the main search region of particles in the IDPSO is therefore closer to pid, because a in (7) is set to 0.5, which is likewise shown to be the best choice in Section 4.5. This gives particles in the IDPSO more opportunities to make appropriate searches in the area near pid. Furthermore, nearly 10% of the absolute values of b shown in Fig. 3 are >2, and some absolute values of b are close to 50,000. This allows particles to appear anywhere in the search space, particularly in the last stage of iterations, which enhances the global search ability of the IDPSO. Simulations were performed on numerical benchmark problems in Section 4.6 to demonstrate the effectiveness of b (see Fig. 4).
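The Monte Carlo estimate described above can be reproduced with a short experiment. The sketch below (plain Python with the standard random module; c = 0.35 as in the text, and the MATLAB-style U(0, 1) divisor is approximated by 1 - random() to avoid division by zero) simply counts what fraction of |b| falls below 2 over 100,000 trials:

```python
import random

def sample_b(rng, c=0.35):
    # b = normrnd(0, c) / rand: a Gaussian step divided by a uniform number
    # in (0, 1], giving mostly small steps with occasional very large ones.
    return rng.gauss(0.0, c) / (1.0 - rng.random())

def fraction_below(threshold=2.0, trials=100_000, c=0.35, seed=1):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if abs(sample_b(rng, c)) < threshold)
    return hits / trials
```

With c = 0.35 the bulk of the mass (roughly 85-90% of |b| in this sketch) falls below 2, consistent with the heavy concentration near pid read from Fig. 3, while the heavy tail of the ratio distribution occasionally produces very large steps.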


H. Gao et al. / Information Sciences 250 (2013) 82–112

Table 3
Comparison between different PSO algorithms on f3 and f4 (Category 1). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f3, Dim = 10, FEs = 20000: 9.38E-20 (3.54E-19), 1.5E-38 (8.1E-38), 4.49E-72 (1.76E-71), 2.25E-14 (5.04E-14), 2.6E-59 (8.5E-59), 2.7E-20 (1.49E-19), 1.01E-32 (7E-32), 1.05E-9 (2.21E-9), 7.48E-14 (2.87E-13), 6.32E-123+ (2.7E-122)
f3, Dim = 20, FEs = 40000: 4.72E-16 (1.43E-15), 6.699E-28 (2.774E-27), 2.55E-69 (1.46E-68), 7.33E-11 (1.402E-10), 9.44E-53 (2.98E-52), 8.31E-13 (3.61E-12), 6.84E-18 (2.55E-17), 6.47E-9 (4.45E-9), 7.34E-11 (1.97E-10), 4.52E-152+ (2.42E-151)
f3, Dim = 30, FEs = 60000: 1.76E-12 (3.3E-12), 1.01E-20 (1.86E-20), 1.7E-67 (1.2E-66), 8.05E-8 (4.1E-7), 5.9E-45 (2.19E-44), 5.22E-5 (3.6E-4), 5.09E-9 (2.4E-8), 1.2E-8 (9.2E-9), 1.07E-7 (3.1E-7), 1.2E-158+ (8.63E-158)
f3, Dim = 40, FEs = 80000: 0.32 (0.5455), 0 (0), 0.02 (0.14), 0.3 (0.539), 3.72 (23.62), 420.46 (452.88), 6.72 (3.837), 0 (0), 0.3 (0.61), 0 (0)
f4, Dim = 10, FEs = 20000: 1.23E-11 (2.58E-11), 2.57E-13 (1.76E-12), 8.54E-37 (6E-36), 2.89E-4 (8.51E-4), 1.36E-33 (3E-33), 7.18E-13 (3E-12), 12.72 (45.4), 6.78E-17 (4E-7), 7.13E-7 (1.99E-6), 2.79E-60+ (1.69E-59)
f4, Dim = 20, FEs = 40000: 1.79E-9 (5.9E-9), 1.587E-13 (1.11E-12), 4.56E-33 (2.38E-32), 3.463 (22.678), 1.33E-29 (8.89E-29), 1.83E-8 (6.75E-8), 30.98 (113.36), 1.9E-6 (9.35E-7), 1.75E-5 (5.38E-5), 7.77E-74+ (5.44E-73)
f4, Dim = 30, FEs = 60000: 0.0041 (0.0284), 3.15E-11 (1.55E-10), 2.57E-36 (1.7E-35), 5.2879 (18.422), 9.84E-27 (2.75E-26), 1.7E-5 (6.2E-5), 178.75 (358.23), 2.086E-6 (7.9E-7), 3.73E-4 (0.0016), 2.88E-76+ (1.73E-75)
f4, Dim = 40, FEs = 80000: 22.582 (45.164), 1.02E-10 (1.53E-10), 8.33E-41 (8.14E-41), 6.8214 (10.6661), 4.39E-22 (8.76E-22), 0.002 (0.0028), 666.432 (783.45), 2.766E-6 (1.19E-6), 2.87E-4 (8.8E-5), 1.09E-54+ (2.17E-54)

The term mbest used in Eq. (9), called the Mean Best Position [50], gives the step size for the particle's movement, and every pbest contributes to the evolution of the particles; this enhances the diversity and the global search ability of the swarm. According to the IDH theory, a disturbance that is too large leads to chaotic particle behavior, while one that is too small contributes little to enhancing the global search ability. Thus, correct choices of a in formula (7) and c in formula (9) are important to keep the disturbance intermediate. On the one hand, larger a and c give particles more opportunities to jump out of a local optimum, but weaker local search ability. On the other hand, smaller a and c give particles a more precise exploitation ability. Simulations were carried out on benchmark problems f1, f2, f7, and f9 on dimensions 20 and 40, as given in Section 4.5, to find the best choices of a and c. The improved PSO algorithm was named IDPSO because the disturbance ability of b is at an intermediate level. The results shown in Section 4.5 demonstrate that the IDPSO achieves a better balance across unimodal and multimodal functions when a = 0.5 and c = 0.35; we therefore considered the disturbance effect to be at an intermediate level. With proper pd and b, it is easier for particles in the IDPSO to find the global optimum and to exploit the already sampled regions of the solution space. The main steps of the IDPSO algorithm are listed as follows:
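Such a loop can be sketched in a few lines of Python. This is a reconstruction under assumptions, not the authors' exact pseudocode: it assumes a QPSO-style update x <- pd + a*b*(mbest - x) with b = normrnd(0, c)/rand, a = 0.5, c = 0.35, a pd built from a random mix of pbest and gbest, and an illustrative small disturbance term added to pd (its exact form is not given in this excerpt).

```python
import random

def idpso_sketch(f, dim, n_particles=20, max_evals=20_000,
                 a=0.5, c=0.35, xmin=-100.0, xmax=100.0, seed=0):
    """Illustrative IDPSO-style loop (a reconstruction, not the authors' code)."""
    rng = random.Random(seed)
    x = [[rng.uniform(xmin, xmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pcost = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[g]), pcost[g]
    evals = n_particles
    while evals + n_particles <= max_evals:
        # mbest: mean of all personal bests (the "Mean Best Position" of Eq. (9))
        mbest = [sum(pbest[i][d] for i in range(n_particles)) / n_particles
                 for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                # attractor between pbest and gbest, plus a small disturbance
                # (the exact form of pd here is an assumption)
                phi = rng.random()
                pd = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                pd += 0.001 * rng.uniform(-1.0, 1.0) * (xmax - xmin)
                # intermediate disturbance step size: b = N(0, c) / U(0, 1]
                b = rng.gauss(0.0, c) / (1.0 - rng.random())
                x[i][d] = min(max(pd + a * b * (mbest[d] - x[i][d]), xmin), xmax)
            cost = f(x[i])
            evals += 1
            if cost < pcost[i]:
                pcost[i], pbest[i] = cost, list(x[i])
                if cost < gcost:
                    gcost, gbest = cost, list(x[i])
    return gbest, gcost
```

The heavy-tailed b keeps most moves near pd (exploitation) while occasionally throwing a particle far away (exploration), which is the intermediate-disturbance idea in miniature.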

4. Performance evaluation

4.1. Experimental setup

To test the performance of the IDPSO, the 16 benchmark functions [3,56] listed in Table 1 were used to compare the standard PSO [49], QPSO [52], MRPSO [16], HWPSO [37], CCPSO [42], NPSO [25], GCPSO [33], CLPSO [36], ISPSO [39], and IDPSO algorithms under the same maximum number of function evaluations (FEs). The parameters of the compared algorithms were set to the values that achieved the best results in their respective papers. The range of population initialization, the global solution X*, the dynamic range of each function, and the global optimum of these functions are shown in Table 1. Van den Bergh and Engelbrecht [64] showed that the effect of population size on the performance of the PSO method is of minimal significance; therefore, all empirical experiments in this study were conducted with a population size of 20. Furthermore, we used the same set of initial random populations to evaluate the different algorithms. The following parameters of the IDPSO were used: (1) number of runs: 50; (2) the parameter a of the IDPSO: 0.5; (3) the parameter c of the IDPSO: 0.35.
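The definitions in Table 1 did not survive extraction intact; as a reference point, here are two of the classic benchmarks named later in the text (the Sphere function, identified as f1 in Section 4.2, and the generalized Rastrigin function, identified as f7 in Section 4.3), written as plain Python. The indexing of the remaining functions is not reproduced here.

```python
import math

def sphere(x):
    # f1: Sphere function, global minimum 0 at x = (0, ..., 0)
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # f7: generalized Rastrigin function, global minimum 0 at x = (0, ..., 0);
    # the cosine term creates a regular grid of local minima.
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

Sphere is smooth and unimodal, so it mainly probes convergence speed; Rastrigin's exponentially many local minima probe the ability to escape them, which is why these two appear in opposite test categories below.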

Table 4
Comparison between different PSO algorithms on f5 and f6 (Category 1). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f5, Dim = 10, FEs = 20000: 5.1E-19 (2.5E-18), 5.81E-21 (4.1E-20), 1.1E-66 (7.5E-66), 6.7E-14 (2.4E-13), 1.1E-57 (7.2E-57), 1.4E-20 (6.4E-20), 1.6E-36 (1E-35), 3.3E-10 (4.9E-10), 1.8E-12 (8E-12), 1.1E-118+ (7.1E-118)
f5, Dim = 20, FEs = 40000: 6.29E-6 (4.402E-5), 1.34E-10 (8.52E-10), 3.03E-66 (2.11E-65), 6.57E-10 (3.73E-9), 5.47E-52 (2.72E-51), 1.712E-13 (8.13E-13), 5.01E-15 (3E-14), 1.64E-9 (1.45E-9), 2.62E-6 (1.83E-5), 4.08E-149+ (1.74E-148)
f5, Dim = 30, FEs = 60000: 3.572 (1.545), 0.004 (0.0203), 9E-65 (4.1E-64), 3.4E-8 (7.02E-8), 1.9E-45 (6.3E-45), 8.7E-10 (4.5E-9), 1.4E-5 (1.01E-4), 2.5E-9 (2.03E-9), 3.2E-6 (1.3E-5), 1.5E-155+ (6.3E-155)
f5, Dim = 40, FEs = 80000: 17.489 (2.84), 0.748 (0.338), 2.067E-5 (1.44E-5), 4.62 (0.6438), 2.255 (0.778), 30.385 (5.188), 2.446 (0.783), 15.62 (1.7772), 20.955 (3.898), 7.54E-30+ (3.88E-29)
f6, Dim = 10, FEs = 20000: 3.2E-5 (3.2E-5), 2.1E-8 (4.7E-8), 5.4E-6 (1.8E-5), 0.0013 (0.001), 1.2E-11 (2.6E-11), 0.4089 (0.5289), 1.1E-10 (6.5E-10), 4.04 (1.5561), 5E-4 (5.8E-4), 6.6E-47+ (4.3E-46)
f6, Dim = 20, FEs = 40000: 0.546 (0.3352), 8.08E-4 (9.73E-4), 3.32E-6 (5.02E-6), 0.548 (0.2053), 3.224E-4 (2.65E-4), 15.3666 (5.0271), 0.0135 (0.0161), 8.768 (1.5606), 1.7862 (1.1303), 3.277E-44+ (9.86E-44)
f6, Dim = 30, FEs = 60000: 7.4 (2.14), 0.07 (0.415), 5.5E-6 (6.1E-6), 2.42 (0.52), 0.16 (0.102), 26.15 (5.34), 0.61 (0.31), 13.1 (1.95), 11.36 (2.67), 4.2E-40+ (2.4E-39)
f6, Dim = 40, FEs = 80000: 72.18 (17.16), 40.55 (9.079), 29.91 (10.257), 72.335 (16.166), 42.306 (11.152), 53.29 (11.178), 84.81 (17.904), 0.2595 (0.4796), 53.15 (14.379), 4.46E-37+ (2.417E-37)


Table 5
Comparison between different PSO algorithms on f7 and f8 (Category 2). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f7, Dim = 10, FEs = 20000: 5.647 (2.6792), 3.21 (1.6981), 4.6031 (3.2184), 5.7106 (3.0493), 2.394 (1.0716), 7.3661 (3.6179), 8.0791 (3.9128), 1.54E-5+ (2.552E-5), 5.4146 (2.7201), 1.97671 (1.6629)
f7, Dim = 20, FEs = 40000: 21.5625 (4.7972), 12.0516 (3.683), 11.4728 (4.8355), 21.71 (7.6649), 9.0342 (4.3861), 21.9222 (6.1959), 33.2714 (11.9015), 0.0201+ (0.1393), 19.7917 (6.9729), 2.34 (1.4506)
f7, Dim = 30, FEs = 60000: 45.043 (10.249), 26.5302 (8.997), 18.942 (5.659), 45.53 (10.853), 23.44 (5.425), 35.367 (8.51), 55.738 (15.453), 0.0801+ (0.2698), 34.9112 (10.848), 3.8251 (1.614)
f7, Dim = 40, FEs = 80000: 72.177 (17.16), 40.55 (9.079), 29.91 (10.257), 72.3345 (16.166), 42.31 (11.153), 53.29 (11.178), 84.81 (17.904), 0.26+ (0.4796), 53.15 (14.379), 3.9967 (2.3919)
f8, Dim = 10, FEs = 20000: 5.87E-11 (2.22E-10), 3.3E-15 (1.36E-15), 0.201 (0.506), 3.18E-8 (4.31E-8), 0.0231 (0.162), 0.7433 (1.2823), 2.21E-14 (1.71E-14), 6.29E-6 (4.34E-6), 5.23E-8 (6.25E-8), 1.17E-15 (1.75E-15)
f8, Dim = 20, FEs = 40000: 7.5497E-9 (2.05E-8), 2.661E-14 (2.979E-14), 9.35E-15 (3.803E-15), 1.0739E-6 (1.058E-6), 0.0231 (0.1617), 3.0067 (1.5516), 0.2825 (0.5447), 1.459E-5 (7.49E-6), 1.0414E-6 (1.85E-6), 1.954E-15+ (1.42E-15)
f8, Dim = 30, FEs = 60000: 0.0462 (0.2264), 3.25E-11 (1.63E-10), 1.588E-14 (3.76E-15), 3.36E-5 (1.07E-4), 0.03 (0.2102), 5.263 (1.796), 1.0827 (0.7376), 1.314E-5 (5.26E-6), 4.2E-5 (6.98E-5), 2.5E-15+ (6.96E-16)
f8, Dim = 40, FEs = 80000: 0.172 (0.476), 3.45E-9 (9.96E-9), 2.15E-14 (4.16E-15), 0.1414 (0.427), 0.0308 (0.2155), 7.687 (2.29), 1.269 (0.7038), 1.15E-5 (3.677E-6), 7.85E-4 (0.0015), 2.59E-15+ (4.97E-16)

To demonstrate the merits of our algorithm, we performed three categories of experiments. In the unimodal-function (Category 1) and multimodal-function (Category 2) tests, the maximum number of FEs was set as 20 × 1000 = 20,000, 20 × 2000 = 40,000, 20 × 3000 = 60,000, and 20 × 4000 = 80,000 for solving 10-D, 20-D, 30-D, and 40-D problems, respectively. For the low-dimensional functions (Category 3), the FEs was set as 10,000, except for f16, for which it was set as 2000. In the test of



Table 6
Comparison between different PSO algorithms on f9 and f10 (Category 2). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f9, Dim = 10, FEs = 20000: 3.59E-23 (9.42E-23), 4.71E-32 (3.28E-47), 0.0746 (0.1593), 2.818E-18 (8.91E-18), 0.0062 (0.0435), 0.0826 (0.165), 4.42E-30 (2.49E-29), 5.04E-12 (5.93E-12), 2.93E-16 (8.78E-16), 4.71E-32 (3.28E-47)
f9, Dim = 20, FEs = 40000: 0.0218 (0.054), 7.8E-20 (5.47E-19), 0.0404 (0.1069), 0.0031 (0.0218), 0.0405 (0.1788), 1.0545 (3.3767), 0.0155 (0.0641), 5.33E-11 (5.889E-11), 0.0124 (0.0524), 2.356E-32+ (1.64E-47)
f9, Dim = 30, FEs = 60000: 0.1456 (0.5097), 0.984 (6.4432), 0.0643 (0.126), 0.0207 (0.0464), 0.056 (0.104), 1.1125 (2.0861), 0.0477 (0.11), 6.618E-11 (7.4E-11), 0.0498 (0.11), 1.57E-32+ (1.09E-47)
f9, Dim = 40, FEs = 80000: 1.8468 (4.588), 2.5551 (8.942), 0.0343 (0.159), 0.0093 (0.0253), 0.0998 (0.292), 3.7275 (7.622), 0.0296 (0.0513), 6.798E-11 (5.07E-11), 0.068 (0.275), 1.178E-32+ (8.21E-48)
f10, Dim = 10, FEs = 20000: 7.55E-18 (4.88E-17), 1.349E-32 (2.74E-48), 18.4238 (37.695), 4.09E-15 (1.22E-14), 9.945 (30.43), 94.5145 (110.267), 1.68E-30 (7.09E-30), 3.08E-9 (7.61E-9), 2.195E-12 (6.82E-12), 1.349E-32 (2.74E-48)
f10, Dim = 20, FEs = 40000: 69.9335 (67.87), 11.5078 (31.893), 16.235 (43.294), 2.3174 (16.222), 63.731 (79.03), 694.403 (302.37), 97.4366 (147.85), 2.27E-8 (1.782E-8), 27.0875 (48.579), 1.349E-32+ (2.74E-48)
f10, Dim = 30, FEs = 60000: 574.668 (241.17), 125.014 (120.8), 14.855 (37.512), 198.96 (208.83), 247.9 (162.115), 1.46E3 (414.97), 606.88 (340.99), 3.347E-8 (2.87E-8), 442.915 (291.423), 1.349E-32+ (2.74E-48)
f10, Dim = 40, FEs = 80000: 1.22E3 (381.34), 377.353 (246.39), 3.2351 (16.1896), 530.87 (356.7), 536.041 (240.787), 2.07E3 (456.98), 1.33E3 (424.33), 4.4E-8 (2.46E-8), 1.3E3 (480.88), 1.349E-32+ (2.74E-48)

Table 7
Comparison between different PSO algorithms on f11 and f12 (Category 2). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f11, Dim = 10, FEs = 20000: -209.953 (0.068), -209.023 (1.291), -209.474 (0.648), -209.952 (0.069), -209.93 (0.2113), -206.931 (3.162), -209.9454 (0.1649), -209.776 (19.754), -209.83 (0.1825), -209.9798 (0.0499)
f11, Dim = 20, FEs = 40000: -838.406 (665.208), -856.974 (573.8), -1.385E3 (157.003), -1.0597E3 (407.496), -1.221E3 (298.496), -793.4835 (498.606), -1.411E3 (107.769), 49.8715 (586.349), -750.758 (752.572), -1.438E3+ (44.183)
f11, Dim = 30, FEs = 60000: 2.193E3 (5.33E3), 1.422E3 (7.39E3), -3.353E3 (1.333E3), -757.312 (3.173E3), -2.158E3 (2.656E3), -869.782 (1.781E3), -2.748E3 (1.484E3), 4.103E3 (4.065E3), 807.748 (3.015E3), -3.705E3+ (694.215)
f11, Dim = 40, FEs = 80000: 8.269E3 (1.824E4), 2.712E4 (2.504E4), -5.595E3 (7.075E3), 4.974E3 (8.365E3), -911.09 (5.839E3), 4.659E3 (4.899E3), -76.965 (3.611E3), 2.042E4 (9.531E3), 8.495E3 (1.06E4), -6.145E3+ (2.04E3)
f12, Dim = 10, FEs = 20000: 8.57E-6 (3E-5), 7.53E-6 (2.82E-5), 0.1555 (0.3842), 0.1 (0.3742), 0 (0), 0.3577 (0.4831), 0.5298 (0.8878), 3.77E-5 (1.18E-5), 2.92E-5 (2.97E-5), 0 (0)
f12, Dim = 20, FEs = 40000: 0.0084 (0.013), 4.19E-6 (6.969E-6), 0.1066 (0.373), 0.2025 (0.7487), 0.1095 (0.3729), 2.3855 (1.6509), 1.5931 (1.2037), 1E-4 (3.13E-5), 0.1267 (0.3738), 0+ (0)
f12, Dim = 30, FEs = 60000: 0.0772 (0.0919), 5.547E-4 (6.82E-4), 0.1088 (0.373), 0.5488 (0.895), 0.1881 (0.4023), 6.3174 (1.6294), 4.2481 (2.373), 1.709E-4 (5.969E-5), 0.2494 (0.4782), 0+ (0)
f12, Dim = 40, FEs = 80000: 0.6028 (0.7849), 0.0051 (0.0077), 0.4308 (0.6547), 1.2198 (1.0475), 0.6798 (0.6914), 10.3212 (2.6282), 7.7085 (3.004), 0.00162 (0.0035), 1.1758 (1.3195), 0.0014 (0.0036)

unimodal functions, we provide not only the final results but also the convergence rate. The average final best value and the standard deviation (in parentheses) over 50 runs are summarized in Tables 2–8; the best result is emphasized in bold. Furthermore, the Welch test is performed to assess whether the differences between algorithms are statistically significant. In the statistical test, a 95% confidence level is adopted to compare the significance between two competing algorithms, with + indicating that the algorithm with the best result is significantly better than all its competitors in the corresponding table.
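The Welch test used here is the unequal-variance two-sample t-test. A minimal self-contained sketch (computing the t statistic and the Welch–Satterthwaite degrees of freedom from two lists of per-run results) looks like this:

```python
import math

def welch_t(a, b):
    """Welch's unequal-variance t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

For two samples of 50 runs, df typically lands between 49 and 98, and |t| above roughly 2.0 corresponds to significance at the 95% level (the exact critical value comes from the t distribution with df degrees of freedom).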

Table 8
Comparison between different PSO algorithms on low-dimension functions (Category 3). Mean (std. dev.); values listed in the order PSO, QPSO, MRPSO, HWPSO, CCPSO, NPSO, GCPSO, CLPSO, ISPSO, IDPSO.
f13, Dim = 2, FEs = 10000: 7.32E-10 (8.959E-10), 2.33E-25 (8.83E-25), 4.69E-32 (1.81E-31), 2.19E-7 (2.509E-7), 1.41E-22 (6.527E-22), 2.11E-8 (1.13E-7), 1.557E-19 (1.089E-19), 9.65E-10 (1.97E-9), 1.1899E-6 (1.042E-6), 1.93E-36+ (4.322E-36)
f14, Dim = 3, FEs = 10000: 5.07E-31 (6.2E-31), 8.16E-103 (7.88E-103), 4.24E-148 (1.08E-147), 2.186E-24 (9.13E-24), 1.72E-101 (4.24E-101), 1.53E-63 (6.79E-63), 5.33E-74 (1.677E-73), 2.83E-24 (7.45E-24), 8.5E-21 (3.65E-20), 1.52E-150+ (5.52E-150)
f15, Dim = 4, FEs = 10000: 5.53E-12 (8.4E-12), 1.833E-9 (2.35E-9), 4.896E-8 (4.81E-8), 1.64E-12 (2.92E-12), 8.89E-11 (1.199E-10), 5.8E-9 (4.04E-8), 5.199 (4.51E-16), 2.84E-6 (5.88E-6), 4.73E-12 (5.14E-12), 1.23E-18+ (1.899E-18)
f16, Dim = 4, FEs = 2000: 2.18E-7 (3.58E-7), 0 (0), 0 (0), 1.02E-5 (3.52E-5), 0 (0), 4.67E-12 (1.5E-11), 4.57E-15 (1.01E-14), 0.0012 (0.0058), 2.655E-5 (4.296E-5), 0 (0)

Fig. 5. Comparisons between different PSO methods for f1 and f2.

4.2. Testing IDPSO on unimodal functions

Tables 2–4 show the performances of the compared PSO algorithms on unimodal functions. f1, the Sphere function, is probably the most widely used unimodal test function. Compared with the other algorithms, the IDPSO gives the best results on this function. Similar results are obtained on the other unimodal functions (Tables 3 and 4). The differences between the compared algorithms on these six unimodal functions suggest that the IDPSO is better at a fine-grained search than the other algorithms. The key difference is mainly due to P and the disturbance operator b in the IDPSO: P offers a promising region to search even when all pbests equal gbest, and b enables particles to focus on the valuable region of the search space. The CCPSO and the MRPSO also obtain favorable results. Since particles of the MRPSO have more opportunities to search near Q, they show a faster convergence rate. The inertia weight in the CCPSO is more likely to take small values, and particles in the CCPSO have more opportunities to learn from the pbests and gbest; thus, the CCPSO obtains favorable results on unimodal functions.



Fig. 6. Comparisons between different PSO methods for f3 and f4.

Fig. 7. Comparisons between different PSO methods for f5 and f6.

The rapid convergence of the IDPSO (Figs. 5–7) supports our explanations. In short, the IDPSO is best for solving unimodal functions among all compared algorithms.

4.3. Testing IDPSO on multimodal functions

f7–f12 are functions whose number of local minima increases exponentially with the dimension of the function. In this comparison, we mainly investigated the global search ability, especially that of the hybrid PSOs. f7 is the generalized Rastrigin function. It is the most commonly used multimodal test function in PSO research, and an algorithm can easily be trapped in one of its local minima. Having more opportunities to search the global solution space, the IDPSO obtains more favorable results than the compared algorithms, except the CLPSO. Because the comprehensive learning strategy used in the CLPSO slows down the convergence of a particle to its own pbest, the particle searches the solution space more thoroughly than in the IDPSO; therefore, the CLPSO obtains better results than the IDPSO on f7. f8 is the Ackley function; according to the results shown in Table 5, the performance of the IDPSO is little influenced by the change of dimension, and it shows the best results on each dimension. f9 and f10 are the Levy and Montalvo functions, which have approximately 5^n and 15^n local minima, respectively. From Table 6, the IDPSO shows the best results on both functions. Moreover, when the dimension of f10 increases to 20, 30, and 40, all of the compared algorithms except the IDPSO and the CLPSO become trapped in local optima. Since the intermediate disturbance operator b gives particles more chances to jump out of a local optimum and the disturbance term of pd enlarges the local search region, the IDPSO obtains more favorable results. The



Fig. 8. The Box Plot for f7 and f8.

Fig. 9. The Box Plot for f9 and f10.

Fig. 10. The Box Plot for f11 and f12.



Fig. 11. Comparisons between different PSO methods for f15 and f16.

The number of local minima of f11 is not known. The global solution and optimum are x*_i = i(n + 1 - i) and f_min = -n(n + 4)(n - 1)/6. As shown in Table 7, the IDPSO achieves the best results on this function. Function f12 is the Weierstrass problem; this function is continuous everywhere but differentiable nowhere. The IDPSO reaches the global optimum on 10, 20, and 30 dimensions. In short, the IDPSO is the best at tackling multimodal functions compared with the other algorithms. The box plots for the different PSOs on 30 dimensions (Figs. 8–10) also support our explanations. Thus, the IDPSO is suitable for tackling multimodal functions.
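The stated optimum matches the Trid function (f11 appears to be Trid, although its definition in Table 1 did not survive extraction; this identification is an assumption). Under that assumption, x*_i = i(n + 1 - i) and f_min = -n(n + 4)(n - 1)/6 can be checked numerically:

```python
def trid(x):
    # Trid function (assumed form of f11): sum (x_i - 1)^2 - sum x_i * x_{i-1}
    n = len(x)
    return (sum((xi - 1.0) ** 2 for xi in x)
            - sum(x[i] * x[i - 1] for i in range(1, n)))

def trid_optimum(n):
    # Closed-form global solution and optimum quoted in the text
    x_star = [i * (n + 1 - i) for i in range(1, n + 1)]
    f_min = -n * (n + 4) * (n - 1) / 6.0
    return x_star, f_min

for n in (2, 10, 30):
    x_star, f_min = trid_optimum(n)
    assert abs(trid(x_star) - f_min) < 1e-6
```

For n = 10 this gives f_min = -210, consistent with the dim-10 values near -209.98 in Table 7; for n = 30, f_min = -4930, which none of the compared algorithms reaches.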

4.4. Testing IDPSO on low-dimension functions

To evaluate the IDPSO in detail, additional benchmark functions were included in our experiments, i.e., f13–f16, for which both the number of local minima and the dimension are small. Table 8 summarizes the average results over 50 runs. We obtained results similar to the previous ones despite the large differences in the dimensionality of the functions. The IDPSO still outperformed the other algorithms significantly, even when the dimension was low. Achieving an effective balance between exploration and exploitation enables the IDPSO to obtain the best results on all functions (Table 8). Although the QPSO, the MRPSO, and the CCPSO also reach the global optimum on f16, according to the comparison shown in

Table 9
Comparison of different a and c in the IDPSO for f1. Mean (std. dev.); values in each line correspond to c = 0.9, 0.7, 0.5, 0.35, 0.3, 0.25, 0.1.
a = 0.9, Dim = 20: 4.19E4 (5.2E3), 4.18E4 (6.07E3), 207.65 (60.84), 3.44E-10 (6.7E-10), 1.7E-26 (7.1E-26), 5.7E-72 (1.6E-71), 0 (0)
a = 0.9, Dim = 40: 9.9E4 (8.3E3), 1E5 (1.1E4), 9.9E4 (9.8E3), 10.09 (2.86), 6.2E-6 (1.1E-5), 1.8E-46 (7.9E-46), 0 (0)
a = 0.7, Dim = 20: 4.17E4 (5.16E3), 1.85E3 (6.2E3), 0.0133 (0.0119), 2.86E-50 (1.2E-49), 5.25E-82 (2.3E-81), 3.8E-154 (8.5E-154), 0 (0)
a = 0.7, Dim = 40: 9.9E4 (8.3E3), 1E5 (1.1E4), 690.1 (222.8), 1.1E-24 (3.2E-24), 8.4E-57 (2.1E-56), 1.5E-163 (0), 0 (0)
a = 0.5, Dim = 20: 203.78 (94.586), 0.0073 (278.03), 7.19E-41 (2.8E-40), 4.1E-153 (2E-152), 3.8E-190 (0), 7.9E-274 (0), 0 (0)
a = 0.5, Dim = 40: 9.9E4 (8.3E3), 740.8 (222.6), 7.7E-15 (1.9E-14), 7E-160 (5E-159), 3E-221 (0), 0 (0), 0.8 (3.4)
a = 0.3, Dim = 20: 4.23E-27 (7E-27), 1.46E-83 (3.342), 1.5E-190 (0), 0 (0), 0 (0), 0 (0), 125.05 (462.15)
a = 0.3, Dim = 40: 7.1E-6 (8.1E-6), 1.5E-57 (3.5E-57), 4.6E-222 (0), 0 (0), 0 (0), 0 (0), 774.1 (917.4)
a = 0.1, Dim = 20: 0 (0), 0 (0), 0 (0), 6.7 (26.1), 0 (0), 30.96 (130.4), 1.3E3 (688.3)
a = 0.1, Dim = 40: 0 (0), 0 (0), 7.2 (31), 114.4 (184), 584.8 (906), 684.7, 5E3 (3E3)

Table 10
Comparison of different a and c in the IDPSO for f2. Mean (std. dev.); values in each line correspond to c = 0.9, 0.7, 0.5, 0.35, 0.3, 0.25, 0.1.
a = 0.9, Dim = 20: 8.04E4 (2.1E4), 6.83E4 (2.5E4), 5.04E3 (3.2E3), 25.02 (6.932), 1.9 (0.93), 0.0501 (0.187), 0.436 (0.68)
a = 0.9, Dim = 40: 2.6E5 (8.6E4), 2.8E5 (9.4E4), 2.9E5 (8.3E4), 9.96E3 (2.9E3), 1.67E3 (624.02), 116.493 (48.79), 22.45 (12.2)
a = 0.7, Dim = 20: 7.9E4 (2.1E4), 2.95E4 (1.3E4), 209.78 (90.87), 34.735 (0.298), 1.5E-5 (6.5E-5), 1.33E-18 (5.3E-18), 48.31 (96.1)
a = 0.7, Dim = 40: 2.6E5 (8.64E4), 2.8E5 (9.4E4), 8.2E4 (4.6E4), 353.76 (102.4), 46.01 (19.7), 14.28 (4.74), 260.32 (359.39)
a = 0.5, Dim = 20: 4.9E3 (2.4E3), 168.86 (60.4), 0.626 (0.43), 1.69E-21 (1.2E-20), 1.1E-20 (4.8E-20), 2.34E-14 (4.9E-14), 690.3 (642.4)
a = 0.5, Dim = 40: 2.5E5 (8.3E4), 7.5E4 (3.9E4), 469.5 (122.9), 9.05 (2.83), 8.25 (3.34), 6.677 (2.1), 3.59E3 (2.4E3)
a = 0.3, Dim = 20: 1.67 (1.049), 3.8E-6 (1.6E-5), 3.26E-31 (1.1E-30), 0.1085 (0.2543), 1.16 (1.03), 40.666 (121.1), 2.28E3 (1.6E3)
a = 0.3, Dim = 40: 1.6E3 (589.8), 45.62 (13.93), 8.74 (3.63), 9.69 (2.62), 23.53 (14.2), 93.753 (66.68), 9.1E3 (4.63E3)
a = 0.1, Dim = 20: 0.958 (1.33), 44.24 (67.99), 934.2 (822.3), 2.19E3 (1.9E3), 2.9E3 (1.8E3), 2.28E3 (2.1E3), 4.04E3 (2.4E3)
a = 0.1, Dim = 40: 22.25 (8.43), 194.6 (265.8), 3.2E3 (2E3), 8.76E3 (4E3), 1.3E4 (6.9E3), 1.22E4 (4.6E3), 1.72E4 (6.3E3)

Table 11
Comparison of different a and c in the IDPSO for f7. Mean (std. dev.); values in each line correspond to c = 0.9, 0.7, 0.5, 0.35, 0.3, 0.25, 0.1.
a = 0.9, Dim = 20: 279 (19.3), 286.3 (26.9), 127.6 (13.4), 47.954 (15.68), 57.66 (17.7), 5.99 (3.57), 10.34 (3.37)
a = 0.9, Dim = 40: 626.5 (25.03), 631.01 (27.3), 620.5 (29.2), 231.4 (30.38), 173.4 (11.44), 39.7 (18.8), 13.29 (6.03)
a = 0.7, Dim = 20: 276.2 (18.7), 152.3 (18.6), 83.3 (13.1), 11.2 (9.45), 20.88 (5.46), 3.03 (1.35), 15.23 (5.56)
a = 0.7, Dim = 40: 626.5 (25.03), 631.01 (27.3), 305.5 (25.36), 72.7 (28.19), 72.59 (10.29), 3.91 (2.557), 16.63 (3.38)
a = 0.5, Dim = 20: 125.2 (12.7), 82.89 (12.8), 25.33 (13.01), 2.34 (1.45), 6.29 (1.78), 7.562 (2.59), 17.56 (5.8)
a = 0.5, Dim = 40: 626.7 (25.03), 307.7 (28.9), 151.9 (39.63), 3.99 (2.39), 23.09 (6.11), 6.22 (2.69), 29.6 (8.33)
a = 0.3, Dim = 20: 36.4 (15.9), 11.63 (5.38), 5.44 (2.89), 7.81 (2.64), 6.57 (3.72), 14.88 (4.55), 26.67 (7.02)
a = 0.3, Dim = 40: 169.6 (27.64), 73.07 (22.86), 31.47 (18.84), 10.1 (2.78), 12.84 (4.11), 16.47 (4.03), 47.67 (11.35)
a = 0.1, Dim = 20: 9.9 (2.89), 13.08 (4.8), 20.85 (6.54), 31.74 (13.33), 16.52 (8.9), 36.17 (12.64), 37.86 (13.37)
a = 0.1, Dim = 40: 11.75 (3.26), 19.6 (6.03), 31.25 (5.95), 40.4 (9.08), 52.09 (7.49), 64.5 (17.3), 73.04 (22.31)

Fig. 11, the IDPSO shows a better convergence rate. The Category 3 experiment proves that the dimension is not the key factor that determines the search behavior of the IDPSO.

4.5. Sensitivity of the parameters for IDPSO

In the IDPSO, a and c control the convergence rate. In this section, we tuned the two parameters by testing them on the unimodal functions f1 and f2 and the multimodal functions f7 and f9 on dimensions 20 and 40. The functions were tested using the values a = 0.9, 0.7, 0.5, 0.3, 0.1 and c = 0.9, 0.7, 0.5, 0.35, 0.3, 0.25, 0.1. The maximum numbers of evaluations were the same as in Section 4.1. The mean cost value and the standard deviation of the IDPSO with different values of a and c are tabulated in Tables 9–12. The smaller the values of a and c, the greater the opportunity for a particle to search in the local space, and the lower the chance of jumping out of a local optimum. The results shown in Tables 9–12 illustrate this phenomenon. On the unimodal functions, smaller a and c mean a small step size for the movement of particles; thus, the particles need more iterations to approach the global optimum. On the contrary, larger a and c leave few particles able to make a precise

Table 12
Comparison of different a and c in the IDPSO for f9. Mean (std. dev.); values in each line correspond to c = 0.9, 0.7, 0.5, 0.35, 0.3, 0.25, 0.1.
a = 0.9, Dim = 20: 5.19E7 (4.7E7), 3.1E7 (3.4E7), 3.1E7 (3.3E7), 3.3E5 (4E5), 6.29E4 (1.7E5), 5.6E-21 (2.4E-20), 2.36E-32 (8.2E-48)
a = 0.9, Dim = 40: 3.1E8 (1.3E8), 2.9E8 (1.1E8), 2.8E8 (1.1E8), 8.2E7 (5.3E7), 3.9E7 (3.6E7), 8.4E5 (1.1E6), 0.0039 (0.017)
a = 0.7, Dim = 20: 2.5E7 (2.6E7), 3.1E6 (4.1E6), 1.1E6 (1.1E6), 3.812 (15.35), 1.9E-14 (8.2E-14), 2.36E-32 (8.2E-48), 0.0156 (0.0467)
a = 0.7, Dim = 40: 3.1E8 (1.3E8), 2.3E8 (1.1E8), 1.9E8 (1.1E8), 4.5E6 (4.2E6), 1.3E6 (1.1E6), 0.0067 (0.029), 0.047 (0.156)
a = 0.5, Dim = 20: 1E5 (1.8E5), 5.8E-28 (5.6E-10), 5.7E-28 (2.5E-27), 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 0.5926 (1.5585)
a = 0.5, Dim = 40: 2.4E7 (2.3E7), 1.3E6 (1.6E6), 3.1E5 (4.1E5), 1.2E-32 (4E-48), 1.2E-32 (4E-48), 1.2E-32 (4E-48), 3.701 (3.171)
a = 0.3, Dim = 20: 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 0.0078 (0.0339), 0.0078 (0.048), 0.0156 (0.0467), 8.7552 (8.341)
a = 0.3, Dim = 40: 1.2E-32 (4E-48), 1.2E-32 (4E-48), 1.2E-32 (4E-48), 1.2E-32 (4E-48), 0.0314 (0.14), 0.0978 (0.147), 13.96 (21.59)
a = 0.1, Dim = 20: 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 2.36E-32 (8.2E-48), 0.4533 (1.056), 1.765 (3.7423), 2.1888 (3.7423), 102.914 (373.39)
a = 0.1, Dim = 40: 1.2E-32 (4E-48), 0.06 (0.172), 0.235 (0.92), 0.57 (0.985), 1.757 (1.84), 3.4853 (2.27), 28.23 (34.95)

Fig. 12. The number of FEs used by different EAs in solving the benchmark functions except f11.

Fig. 13. Comparisons between different IDPSOs for f1 and f2 on 20-dim.

search in the local area of pd. On the multimodal functions, smaller a and c mean that particles have less ability to jump out of a local area; thus, the particles are easily trapped in local optima. According to the results shown in Tables 9–12, when a = 0.5, the results for c = 0.35, 0.3, and 0.25 differ little on the unimodal functions and f9, while c = 0.35 obtains more favorable results on f7. In summary, we



Fig. 14. The comparison of population distribution observed at various stages among IDPSO, IDPSO-Q, and IDPSO-R process. (a) Generation = 1. (b) Generation = 20. (c) Generation = 80. (d) Generation = 100.



selected a = 0.5 and c = 0.35, as this choice yields an intermediate level of disturbance for the IDPSO and shows a better balance between global and local searches. The results of the IDPSO with a = 0.5 and c = 0.35 are emphasized in bold in the comparisons. We also compared our algorithm with the GSO [22], the ABC [27], and the SaDE [41] algorithms; the results are summarized in Fig. 12, which reports the average number of FEs each algorithm required over 50 runs to reach a convergence accuracy of <10^-6 on each function. A smaller number of FEs indicates that an algorithm converges to the global optimum faster. The dimension and the maximum number of FEs were set to 30 and 200,000, respectively. In this comparison, the results for f11 are not shown because none of the compared algorithms reached the accuracy of 10^-6. As shown in Fig. 12, the IDPSO outperformed the other algorithms. In general, from the results shown in this section, the IDPSO achieved a better balance between global and local searches compared to the other algorithms.

4.6. The role of P and b in IDPSO

As described in Section 3, P in the IDPSO is set as the main search direction of particles, and an intermediate disturbance operator b is used to balance the global and local search abilities. To show their impact on the IDPSO, we designed two algorithms, IDPSO-Q and IDPSO-R, to compare with the IDPSO. The difference between the IDPSO and the IDPSO-Q is that the former uses P and the latter uses Q as the main search direction. The IDPSO-R algorithm uses a random operator within [0, 1] instead of b in the IDPSO to define the step size of particles. To ensure that the search space of particles defined by the random operator belongs to (mbestd - xid(t)), a in (7) was set as 1. We used f1, f2, f7, and f9 as the test functions. Furthermore, to illustrate the dynamics of the particle distribution in the different processes, we set the population size of the three algorithms as 50 to solve f7 on 10 dimensions, with the maximum generation set as 100. The population distributions of these algorithms on the first two dimensions were observed (Fig. 14). For the unimodal functions, the convergence rate of the algorithms is of more interest than the final optimization results; therefore, in this comparison, we only show the convergence rate of the tested algorithms on unimodal functions. As the results in Fig. 13 show, the IDPSO performs better than the other two algorithms in terms of convergence rate on f1 and f2. Compared to the IDPSO-Q, since the IDPSO uses P as the main search direction, its particles have more opportunities to make a precise search in the competitive region P. Furthermore, the IDPSO shows a faster convergence rate than the IDPSO-R because the intermediate disturbance operator in the IDPSO makes particles focus their search on P. f7 and f9 are multimodal functions; in this comparison, we focus mainly on which algorithm obtains the more favorable final results.
In Table 13, the IDPSO obtains the best results. The IDPSO-R, which uses a random operator to define the step size of particles, is more easily trapped in a local optimum and has little chance to make a good search in the local area. Compared with the IDPSO-R, the intermediate disturbance operator in the IDPSO achieves a balance between global and local searches, which enables the IDPSO to obtain better results. The results in Table 13 also show that using P in the IDPSO allows particles to search more precisely than Q in the IDPSO-Q. To further evaluate the roles of P and b, we recorded the distribution of particles in the IDPSO, IDPSO-Q, and IDPSO-R processes. Following initialization (Fig. 14(a)), the particles start to explore throughout the search space without a control center. Then, from the twentieth generation (Fig. 14(b–d)), P serves as the main search direction, and the particles swarm together toward the optimal region. We also found that, by using b, the IDPSO algorithm not only enables particles to focus their search on the optimal region but also helps them jump out of a local region. Because the major difference between P and Q is the disturbance term, and the value generated by the disturbance term is very small, the results achieved by the IDPSO and the IDPSO-Q show little difference in this comparison. However, the results in Fig. 13 show that P is more effective than Q when it is set as the moving direction. As depicted in Fig. 14, without the intermediate disturbance operator, the particles in the IDPSO-R have little opportunity to focus their search on P and obtain precise results.

Table 13
Comparison between different IDPSOs for f7 and f9. Average (std. dev.):
f7, 20-dim: IDPSO 2.34 (1.4506); IDPSO-Q 11.3115 (4.537); IDPSO-R 61.0595 (21.2102)
f7, 40-dim: IDPSO 3.9967 (2.3919); IDPSO-Q 33.6104 (13.849); IDPSO-R 625.6663 (29.1477)
f9, 20-dim: IDPSO 2.356E-32 (1.64E-47); IDPSO-Q 0.0933 (0.2547); IDPSO-R 1.724E3 (5.599E3)
f9, 40-dim: IDPSO 1.178E-32 (8.21E-48); IDPSO-Q 0.0343 (0.1659); IDPSO-R 3.04E8 (1E8)



Fig. 15. Test images and their histograms: (a) Lenna, (b) Pepper, (c) Hunter, (d) 24077.jpg, (e) 145086.jpg, (f) 157055.jpg; (a')–(f') are the corresponding histograms.

Table 14
Objective values and thresholds by the OTSU method.
M-1 = 2 objective values: Lena 9.3449E3; Pepper 9.3515E3; Hunter 5.4491E3; 24077 1.925E4; 145086 1.2021E4; 157055 1.3187E4; mean CPU time 2.1478
M-1 = 2 optimal thresholds: Lena 134, 165; Pepper 134, 176; Hunter 102, 146; 24077 168, 255; 145086 150, 252; 157055 166, 205
M-1 = 3 objective values: Lena 1.1334E4; Pepper 1.1269E4; Hunter 6.4261E3; 24077 2.2053E4; 145086 1.4957E4; 157055 1.6106E4; mean CPU time 175.439
M-1 = 3 optimal thresholds: Lena 121, 151, 176; Pepper 113, 158, 184; Hunter 86, 129, 155; 24077 126, 220, 255; 145086 124, 170, 252; 157055 150, 184, 213
M-1 = 4 objective values: Lena 1.2558E4; Pepper 1.2525E4; Hunter 6.9725E3; 24077 2.37E4; 145086 1.664E4; 157055 1.7918E4; mean CPU time 8226.187
M-1 = 4 optimal thresholds: Lena 111, 140, 158, 180; Pepper 103, 140, 167, 189; Hunter 69, 112, 137, 158; 24077 104, 178, 241, 255; 145086 107, 142, 189, 252; 157055 137, 174, 198, 219

Table 15
Objective values and thresholds by the Kapur method.
M-1 = 2 objective values: Lena 12.1199; Pepper 12.4236; Hunter 13.0587; 24077 12.6086; 145086 12.9396; 157055 12.6252; mean CPU time 2.074
M-1 = 2 optimal thresholds: Lena 103, 167; Pepper 84, 150; Hunter 93, 234; 24077 108, 176; 145086 88, 164; 157055 104, 166
M-1 = 3 objective values: Lena 15.1301; Pepper 15.4243; Hunter 16.4327; 24077 15.8409; 145086 16.0505; 157055 15.7539; mean CPU time 159.951
M-1 = 3 optimal thresholds: Lena 103, 168, 234; Pepper 73, 121, 168; Hunter 89, 180, 234; 24077 67, 125, 188; 145086 69, 117, 173; 157055 100, 156, 205
M-1 = 4 objective values: Lena 18.0502; Pepper 18.2124; Hunter 19.6722; 24077 18.9018; 145086 19.0255; 157055 18.6648; mean CPU time 8100.836
M-1 = 4 optimal thresholds: Lena 91, 131, 173, 234; Pepper 58, 91, 131, 171; Hunter 65, 118, 180, 234; 24077 61, 113, 163, 213; 145086 69, 119, 176, 219; 157055 76, 115, 160, 208

This simple investigation reveals that the balance between exploration and exploitation can be maintained in the IDPSO throughout the iterations. From the results shown in this section, we can conclude that, by setting P as the main search direction, the intermediate disturbance operator in the IDPSO keeps a balance between exploration and exploitation.

Table 16
The mean CPU time of the compared EA-based methods on the Kapur algorithm.

Dim     IDPSO-K  ABC-K   ACO-K   GA-K    PSO-K   QPSO-K
2 dim   0.4517   0.4774  0.4836  0.4522  0.4528  0.4537
3 dim   0.4524   0.4789  0.4955  0.4566  0.4546  0.4553
4 dim   0.4532   0.4811  0.5     0.4637  0.4556  0.4577
5 dim   0.4558   0.4814  0.5053  0.471   0.4574  0.459
7 dim   0.7385   0.7928  0.7777  0.7971  0.7545  0.7712
9 dim   0.7418   0.7969  0.7802  0.8152  0.7608  0.7838

Table 17
The mean CPU time of the compared EA-based methods on the OTSU algorithm.

Dim     IDPSO-O  ABC-O   ACO-O   GA-O    PSO-O   QPSO-O
2 dim   0.4325   0.4543  0.4568  0.4328  0.4322  0.4329
3 dim   0.4353   0.4563  0.4728  0.4375  0.4344  0.4352
4 dim   0.4364   0.458   0.4825  0.4422  0.4358  0.4391
5 dim   0.4387   0.4593  0.475   0.4506  0.4363  0.4414
7 dim   0.9042   0.9292  0.9131  0.9384  0.8853  0.8917
9 dim   0.9069   0.9338  0.9144  0.9632  0.8906  0.9037

5. Multilevel thresholding for image segmentation through IDPSO

The goal of image segmentation is to extract meaningful objects from an input image. It is useful in discriminating an object from other objects having distinct gray levels or different colors. Among the existing segmentation techniques, thresholding is one of the most popular because of its simplicity, robustness, and accuracy. Over the years, many thresholding techniques have been proposed. These techniques can be roughly divided into two categories. The first category contains approaches that determine the optimal thresholds by analyzing the profile characteristics of the image histogram. The second category contains techniques that determine the optimal thresholds by optimizing a certain fitness function; such methods include entropy-based thresholding, minimization of Bayesian error, and clustering-based methods. These methods have proven useful in image, video, and computer vision applications. However, the computation time of some methods (e.g., the OTSU and Kapur algorithms) grows exponentially with the number of thresholds because of their exhaustive searching strategy, which limits their use in multilevel thresholding applications. Recently, several EAs have been introduced into the field of image segmentation owing to their fast computing abilities. Tao et al. [53] used an ACO method to obtain optimal parameters of an entropy-based object segmentation approach. For accelerating the convergence rate of the traditional genetic algorithm (GA), Cao et al. [7] proposed a GA with a learning operator that gives chromosomes more opportunities to search in the promising region. When a low-performance chromosome learns from the best-performing one with a certain probability, the average fitness of each generation increases and the stability improves.
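To make the exhaustive-search cost mentioned above concrete, a quick count (assuming L = 256 gray levels and ordered, distinct thresholds, so the search space is the set of C(256, M-1) threshold combinations) shows how quickly the space grows with the number of thresholds:

```python
from math import comb

# Number of distinct threshold vectors an exhaustive search must score
# for a 256-level histogram; this grows roughly like L^(M-1)/(M-1)!,
# i.e. exponentially in the number of thresholds M-1.
L = 256
for m_minus_1 in (2, 3, 4, 5):
    print(m_minus_1, comb(L, m_minus_1))
```

Even at M-1 = 4 there are already over 1.7E8 candidate threshold vectors to evaluate, which is why a population-based search using only a few thousand fitness evaluations is attractive.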
Compared with other EAs except PSO, the ABC algorithm has more chances to learn from favorable results, which gives it a faster convergence rate. Furthermore, if a solution in the ABC cannot be improved by local searches within a limited number of iterations, a new random solution is generated; therefore, compared with the PSO algorithm, ABC has a higher probability of jumping out of a local area. Akay [1] reported that the ABC algorithm performs better than PSO when both are applied to multilevel thresholding segmentation. PSO and its variants have been successfully applied to image segmentation [17,58]. However, how to further balance the convergence rate and global search ability of PSO still needs to be investigated. In this paper, we propose a new optimal thresholding algorithm that is particularly suitable for multilevel image segmentation, as it uses the IDPSO technique.

5.1. OTSU criterion-based measure

The OTSU criterion [41] proposed by Otsu has been widely employed to determine whether an optimal thresholding method provides image segmentation with satisfactory results. The original OTSU algorithm can be described as follows. Consider an image containing N pixels with gray levels from 0 to L; h(i) denotes the number of pixels at the ith gray level and PRi denotes the probability of level i. Then, we obtain

PRi = h(i)/N    (10)


Table 18
Objective value and standard deviation (in parentheses) by the compared EA-based methods on the OTSU algorithm.

Image    M-1  IDPSO-O                ABC-O                 ACO-O                  GAL-O                  PSO-O                 QPSO-O
LENNA    2    9.3449E3 (5.46E-12)    9.3449E3 (5.46E-12)   9.3446E3 (0.82)        9.3409E3 (6.2918)      9.3449E3 (5.46E-12)   9.3449E3 (0.0542)
         3    1.1334E4+ (9.095E-12)  1.1333E4 (0.4378)     1.1333E4 (4.8772)      1.13E4 (21.1507)       1.1333E4 (0.9798)     1.1333E4 (0.2394)
         4    1.2558E4+ (0.4336)     1.2553E4 (5.3372)     1.2499E4 (61.4296)     1.2402E4 (71.9929)     1.2539E5 (21.8875)    1.2556E4 (3.0216)
PEPPER   2    9.3515E3 (1.27E-11)    9.3515E3 (1.27E-11)   9.3513E3 (0.4969)      9.3491E3 (2.9651)      9.3515E3 (1.27E-11)   9.3515E3 (1.27E-11)
         3    1.1269E4+ (5.46E-12)   1.1267E4 (2.5133)     1.1258E4 (26.5569)     1.1221E4 (42.46)       1.1268E5 (1.8339)     1.1269E4 (0.2371)
         4    1.2525E4+ (5.45E-12)   1.2518E4 (9.213)      1.2478E4 (68.3956)     1.2368E4 (80.0083)     1.2514E4 (11.6047)    1.2517E4 (11.39)
HUNTER   2    5.4491E3 (0)           5.4491E3 (0)          5.4485E3 (1.9238)      5.4452E3 (3.5445)      5.4491E3 (0)          5.4491E3 (0)
         3    6.4260E3+ (0.1774)     6.4247E3 (3.1846)     6.417E3 (8.5332)       6.3869E3 (19.8339)     6.4244E3 (3.0875)     6.4232E3 (4.0864)
         4    6.9721E3+ (1.4463)     6.9705E3 (3.9025)     6.9318E3 (26.7875)     6.8747E3 (40.5011)     6.9541E3 (19.1643)    6.9683E3 (7.4146)
24077    2    1.925E4 (3.64E-12)     1.925E4 (3.64E-12)    1.8941E4 (915.65)      1.9246E4 (11.6326)     1.925E4 (3.64E-12)    1.925E4 (3.64E-12)
         3    2.2053E4 (7.28E-12)    2.2053E4 (0.5491)     2.1993E4 (203.9826)    2.2001E4 (55.9708)     2.2053E4 (0.1689)     2.2053E4 (0.3694)
         4    2.37E4 (2.6719)        2.37E4 (2.0471)       2.3594E4 (175.684)     2.3511E4 (116.2485)    2.37E4 (2.9525)       2.3697E4 (6.9711)
145086   2    1.2021E4 (9.09E-12)    1.2021E4 (9.09E-12)   1.0248E4 (848.32)      1.2005E4 (42.1637)     1.2021E4 (9.09E-12)   1.2021E4 (9.09E-12)
         3    1.4957E4 (0)           1.4957E4 (0)          1.3716E4 (1.91E3)      1.4821E4 (156.83)      1.4957E4 (0)          1.4957E4 (0.1361)
         4    1.664E4+ (0.2012)      1.6639E4 (0.7188)     1.5764E4 (1.31E3)      1.639E4 (198.68)       1.6639E4 (2.39)       1.6639E4 (2.8905)
157055   2    1.3187E4 (9.09E-12)    1.3187E4 (9.09E-12)   1.3186E4 (1.9038)      1.3178E4 (9.1442)      1.3187E4 (9.09E-12)   1.3187E4 (9.09E-12)
         3    1.6106E4 (2.64E-11)    1.6105E4 (1.7938)     1.6098E4 (11.1012)     1.6058E4 (31.549)      1.6105E4 (1.6998)     1.6106E4 (0.5066)
         4    1.7918E4+ (0.1636)     1.7917E4 (1.5336)     1.7822E4 (169.6271)    1.7691E4 (116.7908)    1.7882E4 (106.187)    1.7908E4 (11.2889)

in which N = Σ(i=0..L) h(i). Assuming that there are M-1 thresholds {t1, t2, ..., tM-1} that divide the original image into M classes (C1 for [0, t1], C2 for [t1, t2], ..., and CM for [tM-1, L]), the optimal thresholds {t1, t2, ..., tM-1} chosen by the OTSU method are given as follows:

{t1*, t2*, ..., tM-1*} = arg max { σB²(t1, t2, ..., tM-1) }    (11)

where σB² = Σ(k=1..M) ωk (μk − μT)², with ωk = Σ(i∈Ck) PRi and μk = Σ(i∈Ck) i·PRi / ωk, k = 1, 2, ..., M, and μT the mean gray level of the whole image.
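For illustration, Eqs. (10) and (11) can be computed directly on a gray-level histogram; the function below is a minimal Python sketch (the function name and histogram layout are ours, not from the paper):

```python
import numpy as np

def otsu_fitness(hist, thresholds):
    """Between-class variance sigma_B^2 of Eq. (11) for pixel counts h(i)."""
    pr = hist / hist.sum()                       # PR_i = h(i)/N, Eq. (10)
    levels = np.arange(len(pr))
    mu_t = (levels * pr).sum()                   # mean gray level of the whole image
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(pr)]
    sigma_b2 = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):  # one pass per class C_k
        w_k = pr[lo:hi].sum()                    # class probability omega_k
        if w_k > 0:
            mu_k = (levels[lo:hi] * pr[lo:hi]).sum() / w_k
            sigma_b2 += w_k * (mu_k - mu_t) ** 2
    return sigma_b2
```

A particle's position is a threshold vector {t1, ..., tM-1}, and this value serves as its fitness; the OTSU method keeps the vector that maximizes it.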

5.2. Kapur criterion-based measure

Another efficient segmentation technique is entropy-based thresholding, which relies on the probability distribution of the gray-level histogram [48]. The entropy is maximal when the optimal thresholds separate the classes properly; the aim is therefore to find the optimal thresholds that yield the maximum entropy. For multilevel thresholding, Kapur's entropy may be described as follows:

Hi = − Σ(j∈Ci) (PRj/ωi) ln(PRj/ωi)    (12)

where ωi = Σ(j∈Ci) PRj is the probability of class Ci.

Table 19
Objective value and standard deviation (in parentheses) by the compared EA-based methods on the Kapur algorithm.

Image    M-1  IDPSO-K               ABC-K                 ACO-K                 GAL-K                 PSO-K                 QPSO-K
LENNA    2    12.1199 (1.78E-15)    12.1199 (1.78E-15)    12.1198 (3.24E-4)     12.1184 (0.0018)      12.1199 (1.78E-15)    12.1199 (1.78E-15)
         3    15.0836 (0.0172)      15.0836 (0.041)       15.0521 (0.03)        15.0725 (0.0322)      15.0658 (0.0379)      15.0513 (0.0215)
         4    17.9731 (0.0468)      18.0197 (0.0744)      17.7881 (0.1685)      17.9244 (0.088)       17.9487 (0.15)        17.8126 (0.1562)
PEPPER   2    12.4236 (8.88E-15)    12.4236 (8.88E-15)    12.4233 (5.97E-4)     12.4229 (7.14E-4)     12.4236 (8.88E-15)    12.4236 (8.88E-15)
         3    15.4243 (5.3E-15)     15.4241 (5.45E-4)     15.4208 (0.0134)      15.4161 (0.0062)      15.4241 (6.37E-4)     15.4242 (2.27E-4)
         4    18.2095 (0.0024)      18.2094 (0.0032)      18.1792 (0.0236)      18.1689 (0.0233)      18.206 (0.0103)       18.2038 (0.0144)
HUNTER   2    13.0587 (7.11E-15)    13.0587 (7.11E-15)    13.0408 (0.0331)      13.0554 (0.0033)      13.0587 (7.11E-15)    13.0587 (7.11E-15)
         3    16.4170 (0.0342)      16.4173 (0.032)       16.3458 (0.0507)      16.3455 (0.0341)      16.4168 (0.261)       16.3943 (0.032)
         4    19.6652 (0.0098)      19.6507 (0.0183)      19.5024 (0.1304)      19.482 (0.0904)       19.6579 (0.0129)      19.6491 (0.0328)
24077    2    12.6085 (4.1E-4)      12.6084 (5.4E-4)      12.6059 (0.0025)      12.607 (0.0011)       12.6085 (4.1E-4)      12.6081 (0.0017)
         3    15.8405 (7.1E-4)      15.8401 (9.6E-4)      15.8328 (0.0109)      15.83 (0.0061)        15.8401 (0.0012)      15.8405 (0.0011)
         4    18.9018 (3.55E-15)    18.8953 (0.0045)      18.8809 (0.0315)      18.8484 (0.0231)      18.8969 (0.0077)      18.8981 (0.0042)
145086   2    12.9396 (3.55E-15)    12.9396 (3.55E-15)    12.9379 (0.0011)      12.9372 (0.0014)      12.9395 (4.48E-4)     12.9395 (3.69E-4)
         3    16.0503 (2.13E-4)     16.0502 (4.66E-4)     16.0326 (0.0376)      16.0339 (0.0082)      16.0503 (3.86E-4)     16.0503 (6.16E-4)
         4    19.0224 (0.0267)      19.0193 (0.0061)      19.009 (0.0364)       18.9627 (0.0274)      19.0192 (0.0072)      19.0219 (0.0053)
157055   2    12.6252 (5.33E-15)    12.6252 (5.33E-15)    12.6216 (0.0058)      12.6237 (0.0017)      12.6252 (5.33E-15)    12.6252 (5.33E-15)
         3    15.7535 (7.9E-4)      15.753 (0.0012)       15.7509 (0.002)       15.743 (0.006)        15.7527 (0.0022)      15.753 (0.0012)
         4    18.6648 (3.55E-15)    18.6619 (0.002)       18.632 (0.0412)       18.6239 (0.0229)      18.6616 (0.0042)      18.664 (0.0015)

The optimal thresholds {t1, t2, ..., tM-1} are obtained from (13), which maximizes the objective function:

{t1*, t2*, ..., tM-1*} = arg max { Σ(i=1..M-1) Hi }    (13)
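Analogously to the OTSU sketch, Eqs. (12) and (13) reduce to summing per-class entropies of the normalized histogram. Below is a minimal Python rendering (our own naming; it sums the entropy of every class induced by the thresholds, the usual reading of the multilevel Kapur criterion):

```python
import numpy as np

def kapur_fitness(hist, thresholds):
    """Sum of class entropies H_i (Eqs. (12)-(13)) for pixel counts h(i)."""
    pr = hist / hist.sum()                       # PR_i = h(i)/N
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(pr)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):  # one pass per class C_i
        w = pr[lo:hi].sum()                      # class probability omega_i
        if w > 0:
            p = pr[lo:hi][pr[lo:hi] > 0] / w     # in-class distribution PR_j / omega_i
            total += -(p * np.log(p)).sum()      # H_i
    return total
```

For a uniform histogram, a threshold at mid-range splits the levels into two equally likely halves and maximizes the total entropy, matching the intuition behind the criterion.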

According to the descriptions of OTSU and Kapur, both algorithms are exhaustive searching methods. When they are applied to multilevel thresholding, the computation time grows exponentially with the number of thresholds; in particular, the computational complexity of an exhaustive search is O(L^(M-1)). As shown earlier, the IDPSO not only inherits the fast convergence rate of PSO, which shortens the computation time of the two methods, but also overcomes the premature convergence of PSO. Therefore, we employed the IDPSO for solving multilevel thresholding image segmentation.

5.3. Experimental setup

For evaluating the performance of the proposed algorithm, we implemented it on a wide variety of images. These images include the popular test images used in previous studies [53,7,1,58,17] and the 24077.jpg, 145086.jpg, and 157055.jpg images


Table 20
Objective value and standard deviation (in parentheses) by the compared EA-based methods on the OTSU algorithm.

Image    M-1  IDPSO-O                ABC-O                 ACO-O                  GAL-O                  PSO-O                 QPSO-O
LENNA    5    1.3399E4+ (0.0201)     1.3389E4 (7.9827)     1.3327E4 (102.6248)    1.318E4 (80.5775)      1.3346E4 (37.9331)    1.3374E4 (19.4197)
         7    1.4418E4+ (4.2087)     1.4407E4 (9.637)      1.419E4 (227.11)       1.411E4 (97.9498)      1.426E4 (93.5188)     1.4367E4 (41.1998)
         9    1.4984E4+ (6.457)      1.4974E4 (9.7015)     1.4631E4 (187.875)     1.4603E4 (77.131)      1.4808E4 (93.062)     1.4904E4 (64.8451)
PEPPER   5    1.3366E4+ (0.5019)     1.3358E4 (9.1973)     1.3311E4 (57.8985)     1.3105E4 (104.8415)    1.3328E4 (40.0075)    1.3350E4 (19.1558)
         7    1.4293E4+ (11.9248)    1.4264E4 (16.0893)    1.4123E4 (131.3038)    1.39E4 (119.2665)      1.4119E4 (95.4666)    1.4216E4 (57.5307)
         9    1.4792E4+ (9.3027)     1.4766E4 (19.195)     1.449E4 (154.2071)     1.4373E4 (93.659)      1.4644E4 (70.3313)    1.4701E4 (68.2625)
HUNTER   5    7.3501E3+ (5.1693)     7.3442E3 (6.8386)     7.2793E3 (89.7807)     7.202E3 (45.9427)      7.3003E3 (34.2612)    7.34E3 (9.5728)
         7    7.7522E3+ (9.7143)     7.7428E3 (10.5264)    7.5178E3 (138.666)     7.5193E3 (150.307)     7.6616E3 (86.1592)    7.707E3 (49.1256)
         9    7.9743E3+ (16.2031)    7.9674E3 (14.7989)    7.7093E3 (102.3678)    7.6988E3 (62.8399)     7.8655E3 (39.131)     7.9076E3 (42.4057)
24077    5    2.4793E4+ (1.4902)     2.478E4 (7.9103)      2.4589E4 (283.8571)    2.4453E4 (211.3208)    2.4788E4 (15.3314)    2.4779E4 (10.5842)
         7    2.5984E4+ (5.7525)     2.5976E4 (7.9865)     2.5676E4 (326.5765)    2.5589E4 (137.5233)    2.5974E4 (15.5702)    2.5966E4 (21.127)
         9    2.6563E4+ (6.8932)     2.655E4 (14.7976)     2.626E4 (237.7194)     2.6239E4 (93.6424)     2.6523E4 (29.6001)    2.6488E4 (49.1653)
145086   5    1.7656E4+ (0.7331)     1.7652E4 (3.3981)     1.7075E4 (1.18E3)      1.7242E4 (243.75)      1.7649E4 (10.4318)    1.765E4 (7.8796)
         7    1.8772E4+ (1.6985)     1.8762E4 (7.0687)     1.8169E4 (967.628)     1.8342E4 (147.684)     1.8747E4 (28.1634)    1.874E4 (27.2211)
         9    1.9374E4+ (7.9294)     1.9359E4 (10.6268)    1.8893E4 (496.404)     1.8893E4 (109.1568)    1.929E4 (52.3812)     1.9285E4 (54.8401)
157055   5    1.9048E4+ (3.1508)     1.9046E4 (9.7591)     1.8916E4 (146.5981)    1.8745E4 (124.7023)    1.8944E4 (85.4991)    1.9021E4 (59.767)
         7    2.0914E4+ (3.7347)     2.0873E4 (29.1414)    2.0484E4 (238.492)     2.0185E4 (169.307)     2.0831E4 (66.7213)    2.084E4 (63.305)
         9    2.1925E4+ (10.1591)    2.1905E4 (21.4712)    2.1339E4 (386.8378)    2.1026E4 (208.846)     2.1755E4 (93.2118)    2.1787E4 (73.031)

provided by the Berkeley segmentation data set (available at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/). The size of Lena, Pepper, and Hunter is 512 × 512; the other test images are 481 × 321. Each image has a unique gray-level histogram, and most of the images are difficult to segment because of the multimodality of their histograms. The performance of the proposed algorithm is evaluated by comparing its results with those of other popular algorithms from the literature: the ACO algorithm [53], the GA-L algorithm [7], the ABC algorithm [1], the PSO algorithm [58], and the QPSO algorithm [17]. All of the tested algorithms, except the traditional OTSU [41] and Kapur [48] methods, are EA-based thresholding algorithms. The parameters of these algorithms are set as described in their original papers, where they achieved the best results. Because the classical OTSU and Kapur methods are exhaustive searching algorithms, we regarded their results as the baseline for comparison. The EA-based algorithms used for image segmentation, including the IDPSO, can only accelerate the computation; they cannot improve on the optimal result. Therefore, we strive to obtain large fitness values with fast computation. Since the optimal thresholds in image segmentation are positive integers, the pid item in the IDPSO is amended as follows:

pid = gbest + rand × (pbestid(t) − gbest) + round(4 × rand − 2)    (14)

where the round() function rounds its argument toward the nearest integer.
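A minimal sketch of the amended attractor of Eq. (14) follows (the interface is assumed for illustration; the clamp to the valid gray-level range [0, L] is our added safeguard, not part of the equation):

```python
import random

def amended_pid(gbest, pbest_id, L=255):
    """p_id of Eq. (14): move from gbest toward pbest_id by a random
    fraction, then add the integer disturbance round(4*rand - 2), which
    takes values in {-2, -1, 0, 1, 2} (the gbest +/- 2 neighborhood
    suggested in [17])."""
    p = gbest + random.random() * (pbest_id - gbest) + round(4 * random.random() - 2)
    return min(max(p, 0), L)  # clamp: added safeguard, not in Eq. (14)
```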

Table 21
Objective value and standard deviation (in parentheses) by the compared EA-based methods on the Kapur algorithm.

Image    M-1  IDPSO-K               ABC-K                 ACO-K                 GAL-K                 PSO-K                 QPSO-K
LENNA    5    20.6830 (0.0076)      20.6832 (0.0023)      20.4277 (0.2197)      20.4892 (0.095)       20.6668 (0.0752)      20.6109 (0.1335)
         7    25.4186 (0.0318)      25.4029 (0.0307)      24.9281 (0.2238)      24.8544 (0.1609)      25.3412 (0.0878)      25.281 (0.1473)
         9    29.4914 (0.0273)      29.4643 (0.0449)      28.8916 (0.2827)      28.7148 (0.1938)      29.198 (0.2004)      29.1678 (0.2565)
PEPPER   5    20.9058 (2.76E-4)     20.9045 (0.0015)      20.8689 (0.0565)      20.8263 (0.0382)      20.9035 (0.0035)      20.9035 (0.0034)
         7    25.6966 (0.0326)      25.6947 (0.0075)      25.5263 (0.1103)      25.4006 (0.0937)      25.6416 (0.0591)      25.6523 (0.0742)
         9    29.8398 (0.0139)      29.8102 (0.0164)      29.535 (0.2094)       29.3473 (0.1113)      29.5987 (0.1302)      29.6871 (0.1298)
HUNTER   5    22.5681 (0.0013)      22.5631 (0.0029)      22.3735 (0.1468)      22.3262 (0.1005)      22.5602 (0.0132)      22.5526 (0.0226)
         7    27.6633 (0.0157)      27.6549 (0.0177)      27.3196 (0.2079)      27.2536 (0.1119)      27.5883 (0.0646)      27.6047 (0.0783)
         9    32.3371 (0.0706)      32.3239 (0.0477)      31.7712 (0.2182)      31.6547 (0.1628)      32.0394 (0.1629)      32.0616 (0.1919)
24077    5    21.6599 (0.0043)      21.6513 (0.0064)      21.6323 (0.0391)      21.5733 (0.0311)      21.6418 (0.0191)      21.6455 (0.0153)
         7    26.9099 (0.0029)      26.897 (0.0071)       26.7989 (0.089)       26.7112 (0.0727)      26.8617 (0.0514)      26.8761 (0.0452)
         9    31.6583 (0.0592)      31.6113 (0.0232)      31.3531 (0.2442)      31.1651 (0.1072)      31.4183 (0.1422)      31.4624 (0.1363)
145086   5    21.8095 (0.0103)      21.799 (0.0108)       21.7497 (0.0567)      21.6825 (0.0385)      21.7768 (0.0264)      21.7972 (0.0214)
         7    26.9205 (0.0227)      26.8988 (0.0128)      26.7987 (0.1108)      26.5537 (0.0697)      26.875 (0.0472)       26.8616 (0.0623)
         9    31.5341 (0.0488)      31.5156 (0.0386)      31.2091 (0.2162)      31.0451 (0.1316)      31.3181 (0.1072)      31.3379 (0.1394)
157055   5    21.4102 (5.53E-4)     21.398 (0.0077)       21.3755 (0.0566)      21.279 (0.0527)       21.3887 (0.0271)      21.3999 (0.0149)
         7    26.238 (0.0083)       26.2264 (0.0102)      26.1681 (0.0912)      25.9829 (0.0854)      26.185 (0.0534)       16.2042 (0.0444)
         9    30.5298 (0.0093)      30.5058 (0.0227)      30.2966 (0.186)       30.0365 (0.1458)      30.3323 (0.1084)      30.387 (0.1293)

The difference between pid as defined in Eqs. (6) and (14) is the intermediate term. The value generated by the disturbance term in Eq. (6) is a random decimal within [−1, 1], whereas the values used in the thresholding segmentation problem must be positive integers; therefore, the original intermediate term defined in Eq. (6) is not suitable for thresholding segmentation. As suggested in a previous study [17], setting gbest ± 2 is a good choice for guiding the local search in the region, and we therefore set it as the disturbance term in Eq. (14). The numbers of thresholds M − 1 investigated in the experiments were 2, 3, 4, 5, 7, and 9. Because no original EA-based algorithm has been tested with M − 1 > 5, we tested the compared algorithms with M − 1 = 7, 9 to show which algorithm is more suitable for multilevel image segmentation. The population size is 20. When M − 1 = 2, 3, 4, 5, the maximum number of FEs is 2000, so that the convergence rate can be tested. When M − 1 = 7, 9, the maximum number of FEs is 4000, so that all of the compared algorithms have sufficient time to search on high-dimensional problems. Since the EA-based algorithms are randomized, all the experiments were repeated 50 times for each image and each M − 1 value. In all experiments, to ensure that the initial values of each randomized algorithm are equal, we used the MATLAB command rand('state', sum(i*30)); the value of i used at the beginning of each run is the same in all algorithms. Fig. 15 presents the six original images and their histograms.
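Outside MATLAB, the same "equal initial random state for every algorithm in every run" discipline can be mirrored with per-run seeds; the harness below is a hypothetical sketch (the function names and the algorithm interface are ours, for illustration only):

```python
import random

def run_all_algorithms(algorithms, image, m1, runs=50):
    """Repeat each stochastic algorithm `runs` times, reseeding before
    every call so that all methods start run i from the same random
    state (the role rand('state', ...) plays in the paper's setup)."""
    results = {name: [] for name in algorithms}
    for i in range(1, runs + 1):
        for name, alg in algorithms.items():
            random.seed(30 * i)          # identical state for every method in run i
            results[name].append(alg(image, m1))
    return results
```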


Fig. 16. The comparison of the thresholded images by IDPSO-based and PSO-based algorithms (M − 1 = 9): (a)/(a') Lenna IDPSO-O/PSO-O, (b)/(b') Pepper IDPSO-O/PSO-O, (c)/(c') Hunter IDPSO-O/PSO-O.

5.4. Multilevel thresholding results and efficiency of different methods with M − 1 = 2, 3, 4

We used Eqs. (11) and (13) as the fitness functions to compare performance. Because the classical OTSU and Kapur methods are exhaustive searching algorithms, we regarded their results as the baseline for comparison. Tables 14 and 15 present the fitness function values, mean computation time, and corresponding optimal thresholds (with M − 1 = 2, 3, 4) attained by these two methods. As the CPU time consumed was extremely long when M − 1 > 4, we have not listed the corresponding values in our experiment; the remaining results were obtained by the EA-based algorithms and compared. Because real-time applications require a competitive running time in addition to high performance, the CPU times of the EA-based methods were analyzed. Since the maximum number of FEs used by each of the compared EA-based methods is the same on each dimension, the difference in CPU time between the EA-based methods is not obvious (Tables 16 and 17), and all of them are sufficiently efficient and practical in terms of time complexity for high-dimensional image segmentation problems. Therefore, in comparison with the traditional OTSU and Kapur algorithms, all EA-based methods effectively


Fig. 16 (continued): (d)/(d') 24077 IDPSO-K/PSO-K, (e)/(e') 145086 IDPSO-K/PSO-K, (f)/(f') 157055 IDPSO-K/PSO-K.

shorten the computation time through their population-based search strategy; we therefore introduced the EA-based methods into image segmentation. The IDPSO-based algorithm obtains results equal or close to those of the traditional methods when M − 1 = 2, 3, 4 (Tables 18 and 19). Compared with the exhaustive searching strategy, the IDPSO uses a stochastic, population-based search mechanism, which gives the IDPSO-based method faster computing ability; it therefore consumes less computing time than the traditional methods. Furthermore, the IDPSO-based algorithm achieves the best results among the EA-based methods in most cases, mainly because the IDPSO keeps a good balance between global and local search abilities. Compared with the other EA-based algorithms, the IDPSO uses P as the main searching direction of particles, which enables each particle to make a precise search in the promising area; the IDS further helps the particles to search the solution space more thoroughly. Therefore, the IDPSO has a more powerful and faster global search ability, which enables it to perform better than the other EA-based algorithms within limited FEs.

5.5. Multilevel thresholding results and efficiency of different methods with M − 1 = 5, 7, 9

For comparing the searching ability of each EA-based method in high dimensions, we tested these methods on each image with M − 1 = 5, 7, 9. Tables 20 and 21 show the average fitness value and standard deviation (in parentheses) that were


obtained by each of the EA-based algorithms. In this comparison, the larger the fitness value attained by an algorithm, the better its achievement. With growing dimension, the gap between the compared algorithms widens, which can be mainly attributed to the different characteristics of the EA-based methods. Although it uses a strongest-schema scheme extracted from several of the highest-performing genes, the mutation and crossover operators used in the GAL-based algorithm slow down its convergence rate; therefore, the GAL-based algorithm obtains the worst results within the limited iterations. Compared with the GA, the pheromone used in the ACO-based method provides ants with the ability to learn from the population, which is the main reason why the ACO-based method gets better results than the GAL-based algorithm. Depending on both the population and individual experiences, the PSO-based method shows faster convergence than the ABC-based, GAL-based, and ACO-based algorithms. However, premature convergence hinders the PSO-based algorithm from finding the global optimum. The Delta potential well used in the QPSO algorithm gives particles more opportunities to search the solution space than PSO; thus, the QPSO-based method gets better results than the PSO-based method in most cases. By using employed bees (bees applying a greedy selection scheme between the new and old solutions) and onlooker bees (bees that decide on a food source to exploit based on the information shared by the employed bees), the ABC-based method has relatively stronger exploitation ability than the GA-based and ACO-based algorithms. Furthermore, it uses scout bees to search the global solution space; therefore, the ABC-based method shows better exploration ability than the PSO-based and QPSO-based methods. The IDPSO-based algorithm achieves an effective balance between global and local searches by using the IDS method.

The results shown in Tables 20 and 21 prove that the IDPSO-based algorithm is more suitable for solving multilevel image segmentation problems. Segmented images produced by the IDPSO and PSO methods with M − 1 = 9 are shown in Fig. 16. As mentioned above, the EA-based methods use different strategies to conduct global and local searches, and a good optimization algorithm should keep a good balance between exploration and exploitation. Our experimental results show that the IDPSO-based method outperforms the other EA-based segmentation algorithms, especially on high-dimensional thresholding (M − 1 = 5, 7, 9). Moreover, it shortens the computational time for low-dimensional segmentation problems (M − 1 = 2, 3, 4) when compared with the traditional OTSU and Kapur methods.

6. Conclusion

Inspired by the balance of competition and diversity in ecology, we have presented a novel PSO algorithm incorporating the IDS strategy, based on identifying a special search region containing more particles. The objective of the proposed strategy is not only to accelerate the convergence speed but also to avoid the local optima of PSO. As the intermediate disturbance operator enables particles in the IDPSO to appear anywhere during the iterations, it enhances the global search ability. Furthermore, the intermediate disturbance item in the IDPSO also improves the convergence rate of the particles by focusing their searches on the competitive regions. As a result, the proposed algorithm showed promising results among the compared algorithms. Finally, we applied the IDPSO algorithm to the multilevel image segmentation problem, where it obtained more favorable results than the other compared methods. The use of the IDPSO in other real-world problems merits further research in our future work.

Acknowledgement

The authors acknowledge support from the City University of Hong Kong Strategic Research Grant (No. 7002826), the Introduction Foundation for the Talent of Nanjing University of Posts and Telecommunications (No. NY212025), and the National Natural Science Foundation of China (No. 61203270).

References

[1] B. Akay, A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding, Applied Soft Computing 13 (2012) 3066–3091.
[2] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Information Sciences 192 (2012) 120–142.
[3] M.M. Ali, C. Khompatraporn, Z.B. Zabinsky, A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, Journal of Global Optimization 31 (2005) 635–672.
[4] F. Altermatt, S. Schreiber, M. Holyoak, Interactive effects of disturbance and dispersal directionality on species richness and composition in metacommunities, Ecology 92 (2011) 859–870.
[5] P.S. Andrews, An investigation into mutation operators for particle swarm optimization, in: IEEE Congress on Evolutionary Computation, Vancouver, Canada, 2006, pp. 1044–1051.
[6] P.P. Boyle, A Monte Carlo approach, Journal of Financial Economics 4 (1977) 323–338.
[7] L. Cao, P. Bao, Z.K. Shi, The strongest schema learning GA and its application to multilevel thresholding, Image and Vision Computing 26 (2008) 716–724.
[8] K.Y. Chan, T.S. Dillon, C.K. Kwong, Modeling of a liquid epoxy molding process using a particle swarm optimization-based fuzzy regression approach, IEEE Transactions on Industrial Informatics 7 (2011) 148–158.
[9] J.S. Clark, Individuals and the variation needed for high species diversity in forest trees, Science 327 (2010) 1129–1132.
[10] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (2002) 58–73.
[11] D.C. Culver, Competition and community, Nature 261 (1976) 527.
[12] J.H. Connell, Diversity in tropical rain forests and coral reefs, Science 199 (1978) 1302–1310.
[13] M.W. Denny, B. Gaylord, Marine ecomechanics, Annual Review of Marine Science 2 (2010) 89–114.
[14] R.C. Eberhart, Y.H. Shi, Comparison between genetic algorithms and particle swarm optimization, Evolutionary Programming VII 1447 (1998) 611–616.
[15] B.C. Emerson, N. Kolm, Species diversity can drive speciation, Nature 434 (2005) 1015–1017.
[16] H. Gao, W.B. Xu, A new particle swarm optimization and its globally convergent modifications, IEEE Transactions on Systems, Man, and Cybernetics, Part B 41 (2011) 1334–1351.
[17] H. Gao, W.B. Xu, J. Sun, et al., Multilevel thresholding for image segmentation through an improved quantum-behaved particle swarm algorithm, IEEE Transactions on Instrumentation and Measurement 59 (2010) 934–946.
[18] D.E. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning, Addison-Wesley, 1989.
[19] J.P. Grime, Competitive exclusion in herbaceous vegetation, Nature 242 (1973) 344–347.
[20] M.G. Hall, D.C. Alexander, Convergence and parameter choice for Monte-Carlo simulations of diffusion MRI, IEEE Transactions on Medical Imaging 28 (2009) 1354–1364.
[21] M. Han, J.C. Fan, J. Wang, A dynamic feedforward neural network based on Gaussian particle swarm optimization and its application for predictive control, IEEE Transactions on Neural Networks 22 (2011) 1457–1468.
[22] S. He, Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Transactions on Evolutionary Computation 13 (2009) 973–990.
[23] G. Iacca, F. Neri, E. Mininno, Y.S. Ong, M.H. Lim, Ockham's Razor in memetic computing: three stage optimal memetic exploration, Information Sciences 188 (2012) 17–43.
[24] D. Jia, G.X. Zheng, M.K. Khan, An effective memetic differential evolution algorithm based on chaotic local search, Information Sciences 181 (2011) 3175–3187.
[25] Q. Jin, Z.J. Liang, A naïve particle swarm optimization, in: IEEE Congress on Evolutionary Computation, Brisbane, Australia, 2012, pp. 1–8.
[26] C.F. Juang, C.M. Hsiao, C.H. Hsu, Hierarchical cluster-based multispecies particle swarm optimization for fuzzy system optimization, IEEE Transactions on Fuzzy Systems 18 (2010) 14–26.
[27] F. Kang, J.J. Li, Z.Y. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions, Information Sciences 181 (2011) 3508–3531.
[28] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[29] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Western Australia, 1995, pp. 1942–1948.
[30] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: IEEE Congress on Evolutionary Computation, Hawaii, America, 2002, pp. 1671–1676.
[31] D. Kristensen, A. Mele, Adding and subtracting Black–Scholes: a new approach to approximating derivative prices in continuous-time models, Journal of Financial Economics 102 (2011) 390–415.
[32] A. Kusiak, Z.J. Zhang, Adaptive control of a wind turbine with data mining and swarm intelligence, IEEE Transactions on Sustainable Energy 2 (2011) 28–36.
[33] L. dos Santos Coelho, An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications, Reliability Engineering and System Safety 94 (2009) 830–837.
[34] H.S. Li, T. Shen, X.L. Huang, Approximately global optimization for robust alignment of generalized shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (2011) 1116–1131.
[35] J.Y. Lin, Y.P. Chen, Analysis on the collaboration between global search and local search in memetic computation, IEEE Transactions on Evolutionary Computation 15 (2011) 608–623.
[36] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (2006) 281–295.
[37] S.H. Ling, H.H.C. Iu, K.Y. Chan, et al., Hybrid particle swarm optimization with wavelet mutation and its industrial applications, IEEE Transactions on Systems, Man, and Cybernetics, Part B 38 (2008) 743–763.
[38] J.F. Molino, D. Sabatier, Tree diversity in tropical rain forests: a validation of the intermediate disturbance hypothesis, Science 294 (2001) 1702–1704.
[39] M.A. Montes de Oca, T. Stützle, K. Van den Enden, M. Dorigo, Incremental social learning in particle swarms, IEEE Transactions on Systems, Man, and Cybernetics, Part B 41 (2011) 368–384.
[40] Q.H. Nguyen, Y.S. Ong, M.H. Lim, A probabilistic memetic framework, IEEE Transactions on Evolutionary Computation 13 (2009) 604–623.
[41] N. Otsu, A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics 9 (1979) 62–66.
[42] J.B. Park, Y.W. Jeong, J.R. Shin, K.Y. Lee, An improved particle swarm optimization for nonconvex economic dispatch problems, IEEE Transactions on Power Systems 25 (2010) 156–166.
[43] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Transactions on Evolutionary Computation 13 (2009) 398–417.
[44] J.S. Rao, Optimization, History of Mechanism and Machine Science 20 (2011) 341–351.
[45] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (2004) 240–255.
[46] R.E. Ricklefs, S.S. Renner, Global correlations in tropical tree species richness and abundance reject neutrality, Science 335 (2012) 464–467.
[47] A. Ruszczynski, Nonlinear Optimization, Princeton University Press, 2011.
[48] P.K. Sahoo, J.N. Kapur, A.K.C. Wong, A new method for gray-level picture thresholding using the entropy of the histogram, Computer Vision, Graphics, and Image Processing 29 (1985) 273–285.
[49] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: IEEE International Conference on Evolutionary Computation, Alaska, America, 1998, pp. 69–73.
[50] J. Sun, B. Feng, W.B. Xu, Particle swarm optimization with particles having quantum behavior, in: IEEE Congress on Evolutionary Computation, 2004, pp. 325–331.
[51] J. Sun, X.J. Wu, W. Fang, et al., Multiple sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved particle swarm optimization, Information Sciences 182 (2012) 93–114.
[52] J. Sun, W.B. Xu, A global search strategy of quantum-behaved particle swarm optimization, in: IEEE International Conference on Cybernetics and Intelligent Systems, Singapore, 2004, pp. 111–116.
[53] W.B. Tao, H. Jin, L.M. Liu, Object segmentation using ant colony optimization algorithm and fuzzy entropy, Pattern Recognition Letters 28 (2008) 788–796.
[54] E.M. Voumvoulakis, N.D. Hatziargyriou, A particle swarm optimization method for power system dynamic security control, IEEE Transactions on Power Systems 25 (2010) 1031–1041.
[55] H. Wang, Y. Liu, Z. Wu, H. Sun, et al., An improved particle swarm optimization with adaptive jumps, in: IEEE Congress on Evolutionary Computation, Hong Kong, China, 2008, pp. 392–397.
[56] X. Yao, Y. Liu, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (1999) 82–102.
[57] C.Y. Yeh, W.R. Jeng, S.J. Lee, Data-based system modeling using a type-2 fuzzy neural network with a hybrid learning algorithm, IEEE Transactions on Neural Networks 22 (2011) 2296–2309.
[58] P.Y. Yin, Multilevel minimum cross entropy threshold selection based on particle swarm optimization, Applied Mathematics and Computation 184 (2007) 503–513.
[59] Q. Yuan, F. Qian, W.L. Du, A hybrid genetic algorithm with the Baldwin effect, Information Sciences 180 (2010) 640–652.
[60] J. Zhang, Y. Li, H.S.-H. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B 39 (2009) 1362–1381.
[61] S.-Z. Zhao, M.W. Iruthayarajan, S. Baskar, P.N. Suganthan, Multi-objective robust PID controller tuning using two lbests multi-objective particle swarm optimization, Information Sciences 181 (2011) 3323–3335.
[62] F. van den Bergh, An Analysis of Particle Swarm Optimizers, Ph.D. dissertation, University of Pretoria, Pretoria, South Africa, 2001.
[63] F. van den Bergh, A.P. Engelbrecht, A study of particle swarm optimization particle trajectories, Information Sciences 176 (2006) 937–971.
[64] F. van den Bergh, A.P. Engelbrecht, Effect of swarm size on cooperative particle swarm optimizers, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), San Francisco, USA, 2001, pp. 892–899.
