Research Article  Open Access
Bai Li, Ligang Gong, Wenlun Yang, "An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning", The Scientific World Journal, vol. 2014, Article ID 232704, 10 pages, 2014. https://doi.org/10.1155/2014/232704
An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning
Abstract
Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capability to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied to this optimization scheme. In this new algorithm, convergence information gathered during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that the BEABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms.
1. Introduction
Developments in automated and unmanned flight technologies have become an irresistible trend in many countries. As a matter of fact, unmanned combat aerial vehicles (UCAVs) have been of great importance to many military organizations throughout the world due to their capability to work in remote and hazardous environments [1]. Path planning is a critical aspect of the autonomous control module of a UCAV, which aims to provide an optimal path from the starting point to the desired destination with artificial threats and natural constraints considered.
For the UCAV path planning scheme, an optimal solution corresponds to one path that minimizes the flight route length, average altitude, fuel consumption, exposure to radar or artillery, and so forth [2]. With the development of ground defending weapons, the difficulty of describing these artificial threats becomes significantly larger. Therefore, in order to deal with the increasing complexity when modeling a combat field, researchers have gradually shifted their interests away from deterministic algorithms [3–5].
Like most real-world optimization problems, finding the global optimum is enormously difficult. To avoid exhaustive enumeration for the global optimum, evolutionary algorithms (EAs) have been well investigated and developed as a primary branch of the heuristic algorithms, such as the genetic algorithm (GA) [6], differential evolution (DE) [7], ant colony optimization (ACO) [8], particle swarm optimization (PSO) [9], and the artificial bee colony algorithm (ABC) [10]. Although these intelligent algorithms do not necessarily guarantee global convergence, satisfying results can be acquired after all. That is why studies in recent years have concentrated on developing new algorithms or modifying existing ones [11]. For this UCAV path planning scheme, algorithms such as chaos-theory-based ABC (CABC) [12], immune GA (IGA) [13], PSO [14], quantum-behaved PSO (QPSO) [15], master-slave parallel vector-evaluated GA (MPVGA) [16], and the intelligent water drops algorithm (IWD) [17] have been applied.
ABC algorithm is a swarm intelligence algorithm motivated by the foraging behaviors of bee swarms. In this algorithm, the bee swarm mainly consists of three components: the employed bees, the onlooker bees, and the scout bees [18]. Employed bees start the global exploration; then the qualified employed bees will be capable of attracting onlooker bees to follow. At this point, following means exploiting locally around an employed bee. The qualification of each employed bee is determined by the roulette selection strategy. In the long run of the iterations, unqualified employed bees eventually perish and scout bees take their places.
Applications of ABC algorithm span the areas of image processing [19], structure identification [20], bioinformatics [21], neural network training [22], and so forth. At the same time, it is believed that ABC algorithm works well in global exploration but poorly in local exploitation [23]. Generally speaking, two prevailing ways have been taken to improve the conventional ABC algorithm. In the first way, strategies from the outside world are introduced, for instance, Rosenbrock’s rotational direction strategy [24], quantum theory [25], chaos theory [26], and the Boltzmann selection strategy [27]. The second way mainly focuses on combining ABC algorithm with some other intelligent algorithms; DE-ABC [28], PSO-ABC [29], and QEA-ABC [30] are typical examples. Apart from the two ways mentioned above, efforts have also been made on revising the crossover and mutation equations in the conventional ABC algorithm [21, 31, 32]. In particular, an improved ABC algorithm named IABC has shown fast convergence speed and accurate convergence precision in comparison with ABC algorithm and is regarded as a state-of-the-art version of ABC [32].
Reviewing the improvements ever made to the ABC algorithm, attention has seldom been paid to fully utilizing the convergence messages hidden in the iteration system [33, 34]. In other words, apart from the roulette selection strategy, the convergence status of a previous cycle may be regarded as feedback information to guide a subsequent cycle. In addition, it is noted that the generation of scout bees is mainly intended for the escape from premature convergence, but scout bees have been confirmed to be ineffective in some numerical cases [35]. Therefore, new rules may be needed to guide the scout bees so as to perform more efficiently. Moreover, it is believed that the exploration and the exploitation procedures need to match each other so as to cooperate efficiently. That is, it is essential for an intelligent algorithm to capture a balance between global exploration and local exploitation.
In this paper, a novel ABC algorithm modified by a balance-evolution strategy is applied to this path planning scheme. In this new algorithm (which is named BEABC), the convergence status in the iteration is fully utilized so as to manipulate the exploration/exploitation accuracy and to make a trade-off between local exploitation and global exploration. Besides, the rule guiding the scout bees is modified according to an overall degradation procedure. This work carries out intensive experiments to evaluate the performance of BEABC algorithm in this UCAV path planning scheme, in comparison with two recent state-of-the-art modifications of ABC.
The remainder of this paper is organized as follows. In Section 2, the mathematical model of the combat field is given. In Section 3, the principles of four versions of ABC are introduced in detail. Simulations and the corresponding results are shown in Section 4, together with some remarks. Conclusions are drawn in the last section.
2. Combat Field Modeling for UCAV Path Planning
There are some threatening installations in the combat field, for instance, missiles, radars, and anti-aircraft artilleries. The effects of such installations are represented by circles in the combat field with different radiuses and threat weights [36]. If part of its path falls within a circle, a UCAV will be vulnerable to the corresponding ground defense installation. To be more precise, the damage probability of a UCAV grows as its distance from the threat center shrinks, and when the flight path is outside a circle, the probability of being attacked is 0. Let us define a starting point and a terminal point in the combat field (see Figure 1). The UCAV flight mission is to calculate an optimal path from the starting point to the terminal point, with all the given threat regions in the combat field and the fuel consumption considered.
To make this problem more concrete, let us first draw a segment connecting the starting and terminal points. Afterwards, this segment is divided into equal portions by vertical dash lines, as plotted in Figure 1. These lines are taken as new axes; then the discrete points along such axes (see the small rectangles in Figure 1) are connected in sequence to form a feasible path from the starting point to the terminal point. In this sense, the corresponding coordinates of these points are the variables to be optimized so as to acquire an optimal flight path.
To accelerate the processing speed, it is encouraged to take the direction from the starting point to the terminal point as the horizontal axis [37]. In this way, point movement along each vertical line can be described more easily. Therefore, the first step before the computation of the optimal flight path is the transfer of axes. Any point (x, y) on the original combat field gets transferred to (x', y') in the new axes as defined in (1), where θ denotes the angle between the original horizontal axis and the direction from the starting point (x_s, y_s) to the terminal point, as follows:

x' = (x − x_s) cos θ + (y − y_s) sin θ,
y' = −(x − x_s) sin θ + (y − y_s) cos θ. (1)
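As a concrete illustration of this transfer of axes, the rotation in (1) can be sketched as follows; the function and coordinate names are illustrative assumptions rather than the paper's notation.

```python
import math

def to_path_axes(x, y, start, target):
    """Transfer a combat-field point into axes whose horizontal axis runs
    from the starting point toward the terminal point, as in (1).
    `start` and `target` are (x, y) tuples; names are illustrative."""
    xs, ys = start
    xt, yt = target
    # angle between the original horizontal axis and the start-target direction
    theta = math.atan2(yt - ys, xt - xs)
    dx, dy = x - xs, y - ys
    xp = dx * math.cos(theta) + dy * math.sin(theta)    # along-track coordinate
    yp = -dx * math.sin(theta) + dy * math.cos(theta)   # cross-track coordinate
    return xp, yp
```

With this transfer, the terminal point itself maps onto the new horizontal axis, so only the vertical coordinates along the dashed lines remain to be optimized.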
Regarding the evaluation of one candidate flight path, the threat cost and the fuel consumption are taken into consideration [1], as shown in

min J = k ∫_0^L w_t dl + (1 − k) ∫_0^L w_f dl, (2)

where J is the weighted sum of flight cost for this flight path, k ∈ [0, 1] represents the weighting parameter, w_t and w_f are the threat and fuel costs related to every instantaneous position on the path, and L denotes the total length of this candidate flight path.
To simplify the integral operations, the flight cost of each sub-path (i.e., the segment between two adjacent discrete points) is calculated at five sample points [38], as shown in Figure 2.
If the flight path shown above falls into a threat region, the threat cost is calculated as follows:

w_t = Σ_{k=1}^{N_t} t_k · (L_sub / 5) · (1/d_{0.1,k}^4 + 1/d_{0.3,k}^4 + 1/d_{0.5,k}^4 + 1/d_{0.7,k}^4 + 1/d_{0.9,k}^4), (3)

where N_t denotes the number of threatening circles that the path involves, L_sub refers to the sub-path length, d_{m,k} stands for the distance between the sample point located at fraction m of the sub-path and the k-th threat center, and t_k is regarded as the threat grade of the k-th threat. In this work, it is assumed that the velocity of the UCAV is a constant. Then the fuel consumption is considered in direct proportion to the path length (i.e., w_f is treated as a constant).
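The cost evaluation of (2) and (3) can be sketched as below. The 1/d^4 damage model and the sampling fractions 0.1, 0.3, 0.5, 0.7, 0.9 follow the five-sample-point rule described above, but the exact constants are assumptions; each threat is represented as a tuple (center x, center y, radius, grade).

```python
import math

def threat_cost(p1, p2, threats):
    """Threat cost of the sub-path p1 -> p2 sampled at five points along
    the segment, using an assumed 1/d^4 damage model. A sample point
    contributes nothing when it lies outside the threat circle."""
    L = math.dist(p1, p2)
    cost = 0.0
    for cx, cy, radius, grade in threats:
        acc = 0.0
        for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
            px = p1[0] + frac * (p2[0] - p1[0])
            py = p1[1] + frac * (p2[1] - p1[1])
            d = math.hypot(px - cx, py - cy)
            if d < radius:                       # zero threat outside the circle
                acc += 1.0 / max(d, 1e-9) ** 4   # guard against d == 0
        cost += grade * (L / 5.0) * acc
    return cost

def total_cost(path, threats, k):
    """Weighted sum in (2): k * threat cost + (1 - k) * fuel cost,
    with fuel taken proportional to total path length."""
    jt = sum(threat_cost(path[i], path[i + 1], threats) for i in range(len(path) - 1))
    jf = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    return k * jt + (1 - k) * jf
```

A candidate path is then just the list of discretized points, and this weighted sum is the objective handed to the bee colony.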
3. Principles of ABC-Relevant Algorithms
The preceding section holds a description of the combat field model. The optimal vector of path coordinates is expected to be derived using the intelligent algorithms that will be introduced in detail in this section. The conventional ABC algorithm and three state-of-the-art versions of ABC are applied to the UCAV flight path optimization scheme.
3.1. Review of Conventional ABC Algorithm
In ABC, three kinds of bees cooperate to search for the optimal nectar sources in the space, namely, the employed bees, the onlooker bees, and the scout bees [18].
Let X_i = (x_{i,1}, x_{i,2}, …, x_{i,D}) represent a candidate position searched by the i-th employed bee in the D-dimensional search space. Here, a position refers to a feasible solution for the optimization problem. The number of employed bees is set to SN. In each iteration, onlooker bees search locally around those qualified employed bees. The number of onlooker bees is commonly set to SN as well. Here, the qualification standard concerns the roulette selection strategy and will be introduced later in this section. Those employed bees that cannot make any progress for some time will die out, and randomly initialized scout bees will take their places.
At first, all the employed bees need to be randomly initialized in the feasible solution space. In detail, the j-th element of the i-th solution is initialized using

x_{i,j} = x_j^min + rand(0, 1) · (x_j^max − x_j^min), (4)

where x_j^min and x_j^max denote the lower and upper boundaries of this element, j = 1, 2, …, D, and D denotes the dimension of any feasible solution.
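The random initialization in (4) amounts to a few lines; the function name is illustrative.

```python
import random

def init_employed_bees(sn, lower, upper):
    """Initialize SN employed bees as in (4):
    x[i][j] = lower[j] + rand(0,1) * (upper[j] - lower[j])."""
    d = len(lower)
    return [[lower[j] + random.random() * (upper[j] - lower[j]) for j in range(d)]
            for _ in range(sn)]
```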
In each cycle of the iterations, an employed bee executes a crossover and mutation process to share information with one randomly chosen companion and searches the new position V_i utilizing

v_{i,j} = x_{i,j} + φ_{i,j} · (x_{i,j} − x_{k,j}). (5)

In this equation, the i-th employed bee exchanges information with the k-th employed bee in the j-th element, and φ_{i,j} is a random number uniformly distributed in [−1, 1]. Here, j is a randomly selected integer from 1 to D. Similarly, k is a randomly selected integer from 1 to SN while it satisfies the condition that k ≠ i.
After such crossover and mutation process for the employed bees, the greedy selection strategy will be implemented. If V_i is better (i.e., its corresponding objective function value is lower), the previous position X_i is discarded; otherwise, the employed bee remains at X_i.
Afterwards, an index named P_i is calculated as the qualification reflection for each of the employed bees according to

P_i = fit_i / Σ_{n=1}^{SN} fit_n, with fit_i = 1 / (1 + f_i), (6)

where f_i denotes the objective function value (assumed non-negative) of the position that the i-th employed bee currently stays in.
Each of the onlooker bees needs an employed bee to follow. In this case, a random number rand(0, 1) is generated; if rand(0, 1) < P_1, the 1st employed bee is chosen for the specific onlooker bee; otherwise, a similar comparison between a fresh random number and P_2 will be carried on. If all the P_i are smaller than their random numbers, such process goes over again until one employed bee is selected. In this way, all SN onlooker bees determine the corresponding employed bees to follow. In this sense, to follow means to search around locally using

v_{i,j} = x_{i,j} + φ_{i,j} · (x_{i,j} − x_{k,j}). (7)
In this equation, the companion employed bee k and the element j are still randomly chosen. Again, the greedy selection strategy is implemented. If the position searched by the onlooker bee is more qualified than the position of the i-th employed bee (i.e., if its objective function value is lower), the employed bee directly moves to the better place.
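The qualification index of (6) and the roulette selection described above can be sketched as follows; drawing a fresh random number for each comparison is an assumption about the exact selection loop.

```python
import random

def fitness(f):
    """fit_i = 1 / (1 + f_i) for a non-negative objective value, as in (6)."""
    return 1.0 / (1.0 + f)

def roulette_pick(objective_values):
    """Roulette selection: an onlooker compares a fresh rand(0,1) against
    P_1, P_2, ... in turn and restarts the sweep when no bee is chosen."""
    fits = [fitness(f) for f in objective_values]
    total = sum(fits)
    probs = [ft / total for ft in fits]
    while True:
        for i, p in enumerate(probs):
            if random.random() < p:
                return i
```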
During the iteration, once an employed bee searches globally but finds no better position, or once an onlooker bee exploits around an employed bee without finding a better position, the invalid trial counter trial_i adds one. On the other hand, when any better position can be found for the i-th employed bee, the corresponding trial_i is set to zero instantly. At the end of each iteration, it is necessary to check whether any trial_i exceeds a certain threshold named limit. If trial_i > limit, the i-th employed bee will be directly replaced by a scout bee. A scout bee simply refers to a randomly initialized position in the food source space generated using (4).
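Putting the phases above together, one cycle of the conventional ABC employed-bee phase plus the scout-bee rule might look like the sketch below; the boundary clipping and the parameter names are illustrative assumptions.

```python
import random

def abc_step(bees, trials, f, lower, upper, limit):
    """One employed-bee phase of conventional ABC plus the scout rule:
    mutate one random element against a random companion as in (5),
    keep the better position greedily, and re-initialize any bee whose
    trial counter exceeds `limit`."""
    sn, d = len(bees), len(bees[0])
    for i in range(sn):
        k = random.choice([n for n in range(sn) if n != i])   # companion, k != i
        j = random.randrange(d)
        v = bees[i][:]
        v[j] = bees[i][j] + random.uniform(-1, 1) * (bees[i][j] - bees[k][j])
        v[j] = min(max(v[j], lower[j]), upper[j])             # clip to the search space
        if f(v) < f(bees[i]):        # greedy selection: lower objective wins
            bees[i], trials[i] = v, 0
        else:
            trials[i] += 1
    for i in range(sn):              # scout bees replace exhausted positions
        if trials[i] > limit:
            bees[i] = [lower[j] + random.random() * (upper[j] - lower[j]) for j in range(d)]
            trials[i] = 0
    return bees, trials
```

The onlooker phase repeats the same mutation and greedy selection around the bees chosen by the roulette procedure.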
3.2. Principle of IABC Algorithm
IABC algorithm was proposed by Li et al. in 2012 [32]. It differs from the conventional ABC merely in the crossover and mutation equations, given as (8)–(11).
Equation (8) is designed for global exploration and replaces (5) of the conventional ABC. Similarly, (9) replaces (7) for the exploitation process. In (10) and (11), the j-th element of the best position that has ever emerged during the cycles of iteration is incorporated to guide the search. It is notable that, as in (11), the scaling coefficient is randomly determined by the initialization conditions in the first iteration and is usually a relatively small positive number.
3.3. Principle of IFABC Algorithm
IFABC algorithm mainly differs from the conventional ABC algorithm in the utilization of the invalid trial counter trial_i as internal feedback information and in the abandonment of the roulette selection strategy [21].
Before the iteration process, SN employed bees are randomly sent out to explore the nectar source space. In particular, the process to initialize the j-th element of the i-th solution is described in (4). Afterwards, the iteration process starts.
In each cycle of iteration, an employed bee executes a crossover and mutation procedure to share information with one randomly selected companion. This procedure is expressed in (5). Then, the greedy selection strategy is implemented so as to select the better position between X_i and V_i (i.e., to select the one with a relatively lower objective function value). Different from the conventional ABC algorithm, the number of elements involved in such crossover and mutation procedure is considered flexible. In other words, (5) should be implemented on each of the employed bees a number of times determined by trial_i and limit, which will be discussed later. Then the searching procedure by the employed bees in this current cycle is completed, which is usually regarded as the global exploration procedure.
Afterwards, onlooker bees take over the searching process. In the IFABC algorithm, each of the employed bees is given a chance to be followed by an onlooker regardless of whether it is “qualified” or not, pursuing more chances (i.e., more dynamics and diversity) for evolution and fighting against premature convergence. However, qualifications of the employed bees should be taken into account after all. IFABC algorithm seeks a new way to evaluate the searching performance.
Now that the roulette selection strategy is discarded in IFABC, the onlookers will directly choose their corresponding employed bees and search locally using (12), in which the convergence factor is defined in (13). Here, the companion bee and the element item (involved in the crossover and mutation procedure) are still randomly selected. Afterwards, the greedy selection strategy is applied on the onlookers to choose between X_i and V_i.
For each of the employed bees together with the corresponding onlookers, the parameter trial_i represents the number of inefficient searching times before even one better position is derived. If the employed bee or the onlooker bee finds a better position, trial_i is directly reset to 1 (but not 0 here); otherwise, it is added by 1. If trial_i is larger than limit, the current position should be replaced by a reinitialized position using (4).
Since trial_i ≥ 1, it is expected that more than one element in a candidate feasible solution may be affected by the exploration process. But when it comes to onlooker bees, only one element is involved, because here we believe that a multi-crossover process contributes little to local search ability [33]. Note that a convergence factor appears in (12), which is designed to manipulate the exploitation accuracy according to the current convergence status of the pair of employed bee and onlooker bee. As shown in (13), this factor decreases exponentially to 0.1 as trial_i gradually approaches limit. Here, 0.1 is a user-specified lower boundary of the convergence scale, but the selection of such a constant can be flexible according to the users. In this sense, the exploitation around one certain employed bee should be gradually intensified before this pair of bees is eventually discarded and replaced by means of reinitialization (when trial_i equals limit).
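One plausible realization of such a convergence factor is sketched below: it equals 1 for a freshly successful pair (counter at 1) and decays exponentially to the 0.1 floor as the counter approaches limit. The exact expression of (13) is not reproduced here; only the qualitative shape is.

```python
import math

def convergence_factor(trial, limit, floor=0.1):
    """A sketch of the convergence factor in (13): 1 at trial = 1,
    decaying exponentially to the user-specified floor as `trial`
    approaches `limit`. The precise formula is an assumption."""
    # choose the decay rate so the factor hits the floor exactly at trial = limit
    rate = math.log(1.0 / floor) / max(limit - 1, 1)
    return max(floor, math.exp(-rate * (trial - 1)))
```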
To briefly conclude, the variable trial_i in IFABC is utilized to manipulate the local exploitation accuracy and to guide the crossover and mutation process in global exploration. Here, convergence performances of the bees are measured not by the corresponding objective function values but by whether each newly searched position is better than the previous one. In this sense, such a change intends for the exploitation of positions where unqualified employed bees stay.
3.4. Principle of BEABC Algorithm
BEABC algorithm mainly differs from IFABC algorithm in two aspects. One is the utilization of the counter trial_i to manipulate the exploration/exploitation accuracy, and the other is a new strategy for the generation of scout bees [39].
Regarding the exploration procedure, a convergence factor is added in the crossover and mutation equation to manipulate the exploration accuracy. Besides, the number of elements involved in the crossover and mutation process is designed to be adaptable.
In detail, the selected elements of the i-th employed bee change according to the crossover and mutation equation defined in (14), in which the number of involved elements is denoted by M_i and each perturbation is scaled by a convergence factor as in (15).
M_i is not necessarily equal to 1, as shown in (14). Here, such modification intends to provide more dynamics for the global exploration. In addition, it is also notable that the lower boundary of trial_i is 1 (like that in IFABC). Then trial_i is regarded as a manipulator for the exploration, which reflects the convergence efficiency of the i-th employed bee. Besides, the upper boundary of each trial_i is set to limit; then it is required that as many as M_i out of the D elements will be involved in such crossover and mutation procedure for the i-th employed bee. Such an idea may be intuitively interpreted as follows: a relatively large trial_i corresponds to a relatively inefficient employed bee. Therefore, by changing more elements at one time, the employed bee gradually becomes distrustful of its current position. It is similar regarding the definition of the convergence factor. If trial_i is small, the perturbation scale will be relatively small (when the other terms are temporarily fixed). In this sense, the exploration process is more likely to be an exploitation process. In this work, it is believed that there should not be an explicit difference between the exploration and exploitation procedures. In other words, when the exploration/exploitation ability needs to be enhanced, the searching system should be capable of adaptively catering for such demands. In this sense, the proposed BEABC algorithm aims to effectively capture a balance between the exploration and the exploitation, so as to make the evolution efficient.
After such crossover and mutation process for the employed bees, the greedy selection strategy will be implemented. If V_i (defined in (14)) is more qualified, the previous position is discarded and trial_i is set to 1; otherwise, the employed bee remains at X_i.
Regarding the onlooker bees, only one element in the solutions will be changed at one time using (16) so as to guarantee that such a procedure is sufficiently “local.” Similarly, it is assumed that the local exploitation needs to be intensified (by means of the convergence factor) when trial_i is large. Afterwards, the greedy selection is implemented. If V_i is more qualified, the previous position is discarded and trial_i is set to 1; otherwise, trial_i adds one.
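As a rough illustration of how the counter could steer both the number of mutated elements and the perturbation scale in BEABC, consider the sketch below; both the element-count rule and the linear convergence factor are illustrative assumptions, not reproductions of the paper's (14)–(16).

```python
import math
import random

def be_abc_explore(bees, i, trials, limit):
    """Sketch of a BEABC-style exploration step: an inefficient bee
    (large trial counter) perturbs more elements of its position, while
    each perturbation is scaled by an assumed convergence factor."""
    d = len(bees[i])
    # more invalid trials -> mutate more elements, capped at half the dimension
    m = max(1, math.ceil(d / 2 * trials[i] / limit))
    omega = max(0.1, 1.0 - trials[i] / limit)   # assumed convergence factor
    v = bees[i][:]
    for j in random.sample(range(d), m):
        k = random.choice([n for n in range(len(bees)) if n != i])
        v[j] = bees[i][j] + omega * random.uniform(-1, 1) * (bees[i][j] - bees[k][j])
    return v
```

The point of the sketch is the coupling: a single counter simultaneously widens the mutation (more elements) and, through the factor, shapes the step size, so exploration and exploitation blend into one adaptive procedure.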
In each cycle of iteration, after the exploration and the exploitation procedures, any trial_i that exceeds limit will be reset to limit (rather than generating scout bees as in the conventional ABC). Before proceeding to the next cycle, the average convergence efficiency over the colony (measured from the counters trial_i) is compared with a user-specified threshold. If the average is smaller, which indicates that the overall iteration system is not functioning efficiently, the overall degradation procedure will be carried out. If not, it directly proceeds to the next cycle of iteration. Here, the threshold is a user-specified parameter to determine the degree of the inefficient evolution.
In detail, a number of randomly selected employed bees will be reinitialized using (4) in such an overall degradation procedure. At the same time, the corresponding counters trial_i will be reset to 1 as well. This idea is named the overall degradation strategy, in contrast with the conventional rule for scout bees in ABC.
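The overall degradation strategy can be sketched as follows; the fraction of employed bees to re-initialize is treated as a user-specified assumption, since the text leaves this quantity open.

```python
import random

def overall_degradation(bees, trials, lower, upper, frac=0.5):
    """Sketch of the overall degradation strategy: when the colony as a
    whole evolves inefficiently, re-initialize a randomly selected subset
    of employed bees with (4) and reset their counters to 1.
    `frac` is an assumed user-specified parameter."""
    sn, d = len(bees), len(bees[0])
    for i in random.sample(range(sn), max(1, int(frac * sn))):
        bees[i] = [lower[j] + random.random() * (upper[j] - lower[j]) for j in range(d)]
        trials[i] = 1
    return bees, trials
```

Unlike the per-bee scout rule of conventional ABC, this acts on the colony level, replacing several stagnant positions at once.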
The pseudocode of BEABC for numerical optimization is given in Pseudocode 1.

4. Experimental Results and Discussions
In order to investigate the efficiency and robustness of these ABC-relevant algorithms, three simulation cases have been investigated. All the contrast experiments involved in this section were implemented with MATLAB R2010a, and each kind of experiment was repeated 50 times with different random seeds. The common control parameters, including the colony size and the limit threshold, were kept identical across all four algorithms.
In the first case, the starting and terminal points are set following [12]. In the second and third cases, the starting points are identical while the terminal points differ. The threat locations and threat grades for the three cases are listed in Table 1. Some typical simulation results are demonstrated in Figures 3–10. In detail, Figures 3 and 4 show the comparative simulation results under two different dimension settings for Case 1. The corresponding convergence curves are shown in Figures 5 and 6. Similarly, the paths optimized by different algorithms in Cases 2 and 3 are shown in Figures 7 and 9, respectively. Their corresponding convergence curves are plotted in Figures 8 and 10. Complete evaluations of convergence performances (i.e., the mean and the standard deviation of the threat cost function values) are listed in Table 2.

 
Those bold values denote the best solutions (mean or S.D.) among four algorithms in every single case. 
Regarding the paths plotted in Figures 3, 4, 7, and 9, the trajectories optimized by BEABC are usually smoother and more advantageous. Specifically, as shown in Figure 7, the path optimized by the conventional ABC happened to be a local optimum.
Viewing the results in Table 2 and the comparative curves in Figures 5, 6, 8, and 10, we may notice that the superiority of BEABC gradually shows up as the dimension increases. The situation is similar but less significant for IFABC. This is because the roulette selection strategy is discarded in IFABC, abandoning the feedback information hidden in the objective function values. In a way, IFABC sacrifices part of its ability to converge fast for the competence to converge globally [33]. In contrast, the improvements made in BEABC are relatively moderate and mild, which may account for its good convergence performance.
In the meantime, in some early cycles of iteration, the convergence speed of BEABC tends to be roughly the same as that of the conventional ABC. Such a phenomenon may be due to the fact that, during the early cycles of iteration, it is relatively easy for each of the employed bees to evolve; that is, the counters trial_i are not large in general. Therefore, BEABC is similar to the conventional ABC in convergence performance at this stage. However, as the iteration process continues, the efficiency of ABC will be hampered by local optimums. At that point, the modifications in the BEABC algorithm take effect.
5. Conclusion
In this paper, BEABC algorithm is applied for the UCAV path planning optimization problem. Simulation results clearly indicate that BEABC shows more stability and efficiency in this twodimensional flight path planning optimization scheme than ABC, IABC, and IFABC.
BEABC algorithm intends to fully utilize the convergence status within the iteration system so as to manipulate the searching accuracy and to strike a balance between local exploitation and global exploration. Previous studies concerning improvements of ABC have largely focused on remedies from the outside world, neglecting the true convergence status hidden in the internal iteration process. In this sense, this work can be regarded as advocacy for such an idea. Our future work will cover further comparisons with more state-of-the-art intelligent algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by the School of Advanced Engineering (SAE) in Beihang University and sponsored by the 5th and 6th National College Students’ Innovation and Entrepreneurial Training Programs in China.
References
[1] Y. Zhang, Y. Jun, G. Wei, and L. Wu, “Find multi-objective paths in stochastic networks via chaotic immune PSO,” Expert Systems with Applications, vol. 37, no. 3, pp. 1911–1919, 2010.
[2] V. Roberge, M. Tarbouchi, and G. Labonte, “Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning,” IEEE Transactions on Industrial Informatics, vol. 9, pp. 132–141, 2013.
[3] Y. Zhang, P. Agarwal, V. Bhatnagar, S. Balochian, and J. Yan, “Swarm intelligence and its applications,” The Scientific World Journal, vol. 2013, Article ID 528069, 3 pages, 2013.
[4] B. Li, L. G. Gong, and C. H. Zhao, “Unmanned combat aerial vehicles path planning using a novel probability density model based on Artificial Bee Colony algorithm,” in Proceedings of the International Conference on Intelligent Control and Information Processing (ICICIP '13), pp. 620–625, Beijing, China, June 2013.
[5] D. G. Macharet, A. A. Neto, and M. F. M. Campos, “Feasible UAV path planning using genetic algorithms and Bézier curves,” in Proceedings of the Advances in Artificial Intelligence Conference (SBIA '10), vol. 6404, pp. 223–232, Springer, Berlin, Germany, 2010.
[6] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[7] R. Storn and K. Price, “Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[8] A. Colorni, M. Dorigo, and V. Maniezzo, “Distributed optimization by ant colonies,” in Proceedings of the 1st European Conference on Artificial Life, pp. 134–142, Paris, France, 1991.
[9] R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
[10] D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
[11] L. Guo, G. Wang, H. Wang, and D. Wang, “An effective hybrid firefly algorithm with harmony search for global numerical optimization,” The Scientific World Journal, vol. 2013, Article ID 125625, 9 pages, 2013.
[12] C. Xu, H. Duan, and F. Liu, “Chaotic artificial bee colony approach to Uninhabited Combat Air Vehicle (UCAV) path planning,” Aerospace Science and Technology, vol. 14, no. 8, pp. 535–541, 2010.
[13] Z. Cheng, Y. Sun, and Y. Liu, “Path planning based on immune genetic algorithm for UAV,” in Proceedings of the International Conference on Electric Information and Control Engineering (ICEICE '11), pp. 590–593, Wuhan, China, April 2011.
[14] Y. Bao, X. Fu, and X. Gao, “Path planning for reconnaissance UAV based on particle swarm optimization,” in Proceedings of the 2nd International Conference on Computational Intelligence and Natural Computing (CINC '10), pp. 28–32, Wuhan, China, September 2010.
[15] Y. Fu, M. Ding, and C. Zhou, “Phase angle-encoded and quantum-behaved particle swarm optimization applied to three-dimensional route planning for UAV,” IEEE Transactions on Systems, Man, and Cybernetics A: Systems and Humans, vol. 42, no. 2, pp. 511–526, 2012.
[16] D. M. Pierre, N. Zakaria, and A. J. Pal, “Master-slave parallel vector-evaluated genetic algorithm for unmanned aerial vehicle's path planning,” in Proceedings of the 11th International Conference on Hybrid Intelligent Systems (HIS '11), pp. 517–521, Melacca, Malaysia, December 2011.
[17] H. Duan, S. Liu, and J. Wu, “Novel intelligent water drops optimization approach to single UCAV smooth trajectory planning,” Aerospace Science and Technology, vol. 13, no. 8, pp. 442–449, 2009.
[18] Z. Yin, X. Liu, and Z. Wu, “A multi-user detector based on artificial bee colony algorithm for DS-UWB systems,” The Scientific World Journal, vol. 2013, Article ID 547656, 8 pages, 2013.
[19] B. Li and Y. Yao, “An edge-based optimization method for shape recognition using atomic potential function,” Engineering Applications of Artificial Intelligence, submitted.
[20] H. Sun, H. Lus, and R. Betti, “Identification of structural models using a modified Artificial Bee Colony algorithm,” Computers and Structures, vol. 116, pp. 59–74, 2013.
[21] B. Li, Y. Li, and L. Gong, “Protein secondary structure optimization using an improved artificial bee colony algorithm based on AB off-lattice model,” Engineering Applications of Artificial Intelligence, vol. 27, pp. 70–79, 2014.
[22] R. Irani and R. Nasimi, “Application of artificial bee colony-based neural network in bottom hole pressure prediction in underbalanced drilling,” Journal of Petroleum Science and Engineering, vol. 78, no. 1, pp. 6–12, 2011.
[23] B. Li and Y. Li, “BE-ABC: hybrid artificial bee colony algorithm with balancing evolution strategy,” in Proceedings of the International Conference on Intelligent Control and Information Processing (ICICIP '12), pp. 217–222, Dalian, China, June 2012.
[24] F. Kang, J. Li, and Z. Ma, “Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions,” Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[25] H.-B. Duan, C.-F. Xu, and Z.-H. Xing, “A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems,” International Journal of Neural Systems, vol. 20, no. 1, pp. 39–50, 2010.
[26] B. Alatas, “Chaotic bee colony algorithms for global numerical optimization,” Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
[27] D. Haijun and F. Qingxian, “Artificial bee colony algorithm based on Boltzmann selection strategy,” Computer Engineering and Applications, vol. 45, no. 32, pp. 53–55, 2009.
[28] Y. Li, Y. Wang, and B. Li, “A hybrid artificial bee colony assisted differential evolution algorithm for optimal reactive power flow,” International Journal of Electrical Power and Energy Systems, vol. 52, pp. 25–33, 2013.
[29] M. S. Kiran and M. Gündüz, “A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems,” Applied Soft Computing, vol. 13, pp. 2188–2203, 2013.
[30] H. Duan, Z. Xing, and C. Xu, “An improved quantum evolutionary algorithm based on artificial bee colony optimization,” in Advances in Computational Intelligence, vol. 116, pp. 269–278, Springer, Berlin, Germany, 2009.
[31] A. Banharnsakun, B. Sirinaovakul, and T. Achalakul, “Job shop scheduling with the Best-so-far ABC,” Engineering Applications of Artificial Intelligence, vol. 25, no. 3, pp. 583–593, 2012.
[32] G. Li, P. Niu, and X. Xiao, “Development and investigation of efficient artificial bee colony algorithm for numerical function optimization,” Applied Soft Computing, vol. 12, no. 1, pp. 320–332, 2012.
[33] B. Li, L. G. Gong, and Y. Yao, “On the performance of internal feedback artificial bee colony algorithm (IF-ABC) for protein secondary structure prediction,” in Proceedings of the 6th International Conference on Advanced Computational Intelligence, pp. 33–38, Hangzhou, China, 2013.
[34] B. Li, “Research on WNN modeling based on an improved artificial bee colony algorithm for gold price prediction,” Computational Intelligence and Neuroscience, vol. 2014, Article ID 270658, 10 pages, 2014.
[35] D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[36] G. Wang, L. Guo, H. Duan, L. Liu, and H. Wang, “A bat algorithm with mutation for UCAV path planning,” The Scientific World Journal, vol. 2012, Article ID 418946, 15 pages, 2012.
[37] Y. Zhang, L. Wu, and S. Wang, “UCAV path planning by fitness-scaling adaptive chaotic particle swarm optimization,” Mathematical Problems in Engineering, vol. 2013, Article ID 705238, 9 pages, 2013.
[38] D. Rodic and A. P. Engelbrecht, “Social networks in simulated multi-robot environment,” International Journal of Intelligent Computing and Cybernetics, vol. 1, pp. 110–127, 2008.
[39] B. Li and R. Chiong, “A new artificial bee colony algorithm based on a balance-evolution strategy for numerical optimization,” submitted.
Copyright
Copyright © 2014 Bai Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.