
Intelligent algorithms: the dung beetle optimizer (DBO), with detailed principles, formulas, and MATLAB code

The dung beetle optimizer (Dung Beetle Optimizer, DBO) is a recent swarm-intelligence optimization algorithm inspired by the ball-rolling, dancing, foraging, stealing, and breeding behaviors of dung beetles. It features strong evolutionary capability, fast search speed, and strong optimization ability. The work was first published online in 2022 in the well-known SCI journal The Journal of Supercomputing (print issue 2023), and according to Google Scholar it has currently been cited 102 times.


Through five main operations, namely ball rolling, dancing, breeding, foraging, and stealing, the DBO algorithm simulates the survival behaviors of dung beetles and finally returns the best solution found.

Algorithm principles

(1) Ball-rolling behavior

While rolling its dung ball, the beetle navigates by celestial cues. The rolling path is influenced by the environment, and environmental changes in turn alter the beetle's position:

$$x_i(t+1) = x_i(t) + \alpha\, k\, x_i(t-1) + b\, \Delta x, \qquad \Delta x = \lvert x_i(t) - X^{w} \rvert$$

where $t$ is the current iteration number; $x_i(t)$ is the position of the $i$-th dung beetle at iteration $t$; $k \in (0, 0.2]$ is a constant denoting the deflection coefficient; $b$ is a constant in $(0,1)$; $\alpha$ is a natural coefficient equal to $-1$ or $1$; $X^{w}$ is the global worst position; and $\Delta x$ simulates the change of the environment.
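A minimal MATLAB sketch of this update for a single beetle is given below; the variable names x_curr, x_prev, and worst are illustrative (not from the original code), while k = 0.1 and b = 0.3 match the constants used in the core code later in this article.

% Ball-rolling update (Equation (1)), minimal sketch for one beetle
dim    = 5;
x_curr = rand(1, dim);           % x_i(t): current position
x_prev = rand(1, dim);           % x_i(t-1): previous position
worst  = rand(1, dim);           % X^w: global worst position (random here for illustration)
k = 0.1;                         % deflection coefficient in (0, 0.2]
b = 0.3;                         % constant in (0, 1)
if rand > 0.1, alpha = 1; else, alpha = -1; end   % natural coefficient: 1 or -1
dx    = abs(x_curr - worst);     % Delta x: simulates environmental change
x_new = x_curr + alpha*k*x_prev + b*dx;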

(2) Dancing behavior

When a dung beetle runs into an obstacle and cannot move forward, it reorients itself by dancing to obtain a new route. The new rolling direction is obtained with the tangent function, with the angle $\theta$ taking values in $[0, \pi]$; once the new direction is determined, ball rolling continues:

$$x_i(t+1) = x_i(t) + \tan(\theta)\,\lvert x_i(t) - x_i(t-1) \rvert$$

If $\theta$ equals $0$, $\pi/2$, or $\pi$, the beetle's position is not updated.
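A minimal MATLAB sketch of the dancing update; the variable names x_curr and x_prev are illustrative and not from the original code.

% Dancing update (Equation (2)), minimal sketch for one beetle
dim    = 5;
x_curr = rand(1, dim);           % x_i(t): current position
x_prev = rand(1, dim);           % x_i(t-1): previous position
deg    = randperm(180, 1);       % random integer angle in degrees, 1..180
if deg == 90 || deg == 180       % position is not updated at 0, pi/2, pi
    x_new = x_curr;
else
    theta = deg * pi/180;
    x_new = x_curr + tan(theta) .* abs(x_curr - x_prev);
end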

(3) Breeding

After rolling the dung ball to a safe area, the beetle hides it. A suitable spawning site is crucial for the offspring, so a boundary selection strategy is proposed to model the egg-laying region of a female dung beetle:

$$Lb^{*} = \max\!\left(X^{*}(1-R),\ Lb\right), \qquad Ub^{*} = \min\!\left(X^{*}(1+R),\ Ub\right)$$

where $X^{*}$ is the current local best position; $Lb^{*}$ and $Ub^{*}$ are the lower and upper bounds of the spawning region; $R = 1 - t/T_{\max}$ is the dynamic selection factor, with $T_{\max}$ the maximum number of iterations; and $Lb$ and $Ub$ are the lower and upper bounds of the optimization problem.

Once the spawning region is determined, each female dung beetle lays one egg (brood ball) per iteration. Because the spawning region changes dynamically with the iteration number, the positions of the brood balls are also dynamic:

$$B_i(t+1) = X^{*} + b_1 \times \left(B_i(t) - Lb^{*}\right) + b_2 \times \left(B_i(t) - Ub^{*}\right)$$

where $B_i(t)$ is the position of the $i$-th brood ball at iteration $t$; $b_1$ and $b_2$ are two independent random vectors of size $1 \times D$; and $D$ is the dimension of the optimization problem.
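A minimal MATLAB sketch of the spawning region and brood-ball update; all variable names and the bounds are illustrative, not from the original code.

% Spawning region and brood-ball update (Equations (3) and (4)), minimal sketch
dim  = 5;  t = 10;  Tmax = 100;
Lb   = -100*ones(1, dim);  Ub = 100*ones(1, dim);   % problem bounds (illustrative)
Xstar = rand(1, dim);            % X*: current local best position
Bi    = rand(1, dim);            % B_i(t): current brood-ball position
R     = 1 - t/Tmax;              % dynamic selection factor
LbStar = max(Xstar*(1 - R), Lb); % lower bound of the spawning region
UbStar = min(Xstar*(1 + R), Ub); % upper bound of the spawning region
b1 = rand(1, dim);  b2 = rand(1, dim);   % independent 1-by-D random vectors
Bi_new = Xstar + b1.*(Bi - LbStar) + b2.*(Bi - UbStar);
Bi_new = min(max(Bi_new, LbStar), UbStar);   % keep the brood ball inside the spawning region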

(4) Foraging behavior

After the larvae emerge from the eggs, an optimal foraging region must be defined to guide the small dung beetles. Its boundaries are:

$$Lb^{b} = \max\!\left(X^{b}(1-R),\ Lb\right), \qquad Ub^{b} = \min\!\left(X^{b}(1+R),\ Ub\right)$$

where $Lb^{b}$ and $Ub^{b}$ are the lower and upper bounds of the optimal foraging region and $X^{b}$ is the global best position. During foraging, the position of a small dung beetle is updated by:

$$x_i(t+1) = x_i(t) + C_1 \times \left(x_i(t) - Lb^{b}\right) + C_2 \times \left(x_i(t) - Ub^{b}\right)$$

where $C_1$ is a random number following the normal distribution and $C_2$ is a random vector with entries in $(0,1)$.
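A minimal MATLAB sketch of the foraging-region bounds and the foraging update; variable names and bounds are illustrative.

% Foraging update (Equations (5) and (6)), minimal sketch for one small beetle
dim   = 5;  t = 10;  Tmax = 100;
Lb    = -100*ones(1, dim);  Ub = 100*ones(1, dim);  % problem bounds (illustrative)
Xbest = rand(1, dim);            % X^b: global best position
xi    = rand(1, dim);            % x_i(t): current position
R     = 1 - t/Tmax;
Lbb   = max(Xbest*(1 - R), Lb);  % lower bound of the optimal foraging region
Ubb   = min(Xbest*(1 + R), Ub);  % upper bound of the optimal foraging region
C1 = randn;                      % normally distributed random number
C2 = rand(1, dim);               % random vector in (0, 1)
xi_new = xi + C1.*(xi - Lbb) + C2.*(xi - Ubb);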

(5) Stealing behavior

Stealing is a common competitive behavior in dung beetle populations. The position of a thief dung beetle is updated by:

$$x_i(t+1) = X^{b} + S \times g \times \left( \lvert x_i(t) - X^{*} \rvert + \lvert x_i(t) - X^{b} \rvert \right)$$

where $S$ is a constant and $g$ is a $1 \times D$ random vector following the normal distribution.
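A minimal MATLAB sketch of the stealing update; the value S = 0.5 is illustrative (in the core code below the same factor shows up as a division by 2).

% Stealing update (Equation (7)), minimal sketch for one thief beetle
dim   = 5;
Xbest = rand(1, dim);            % X^b: global best position
Xstar = rand(1, dim);            % X*: current local best position
xi    = rand(1, dim);            % x_i(t): current position of the thief
S = 0.5;                         % constant (illustrative value)
g = randn(1, dim);               % 1-by-D normally distributed random vector
xi_new = Xbest + S*g.*(abs(xi - Xstar) + abs(xi - Xbest));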

Results

Taking the CEC2005 test function set as an example, the results are shown below:

[Result figures: DBO on the CEC2005 test functions]

MATLAB core code

function [fMin, bestX, Convergence_curve] = DBO(pop, M, c, d, dim, fobj)
P_percent = 0.2;                  % proportion of ball-rolling (producer) beetles in the population
pNum = round(pop * P_percent);    % number of ball-rolling beetles
lb = c .* ones(1, dim);           % lower bound vector
ub = d .* ones(1, dim);           % upper bound vector

% Initialization
for i = 1 : pop
    x(i, :) = lb + (ub - lb) .* rand(1, dim);
    fit(i) = fobj(x(i, :));
end
pFit = fit;
pX = x;
XX = pX;                          % positions from the previous iteration, x_i(t-1)
[fMin, bestI] = min(fit);         % fMin denotes the global optimum fitness value
bestX = x(bestI, :);              % bestX denotes the global optimum position corresponding to fMin

% Start updating the solutions.
% Note: the fixed indices 12, 13, 19 and 20 below assume pop = 30, as in the reference implementation.
for t = 1 : M
    [fmax, B] = max(fit);
    worse = x(B, :);              % global worst position X^w
    r2 = rand(1);

    % Ball-rolling and dancing beetles
    for i = 1 : pNum
        if r2 < 0.8
            a = rand(1, 1);
            if a > 0.1
                a = 1;
            else
                a = -1;
            end
            x(i, :) = pX(i, :) + 0.3*abs(pX(i, :) - worse) + a*0.1*(XX(i, :));   % Equation (1)
        else
            aaa = randperm(180, 1);
            if aaa == 0 || aaa == 90 || aaa == 180
                x(i, :) = pX(i, :);   % position is not updated at these angles
            else
                theta = aaa * pi/180;
                x(i, :) = pX(i, :) + tan(theta) .* abs(pX(i, :) - XX(i, :));     % Equation (2)
            end
        end
        x(i, :) = Bounds(x(i, :), lb, ub);
        fit(i) = fobj(x(i, :));
    end

    [fMMin, bestII] = min(fit);   % fMMin denotes the current optimum fitness value
    bestXX = x(bestII, :);        % bestXX denotes the current optimum position
    R = 1 - t/M;

    % Spawning region around the current local best (Equation (3))
    Xnew1 = bestXX .* (1 - R);
    Xnew2 = bestXX .* (1 + R);
    Xnew1 = Bounds(Xnew1, lb, ub);
    Xnew2 = Bounds(Xnew2, lb, ub);

    % Foraging region around the global best (Equation (5))
    Xnew11 = bestX .* (1 - R);
    Xnew22 = bestX .* (1 + R);
    Xnew11 = Bounds(Xnew11, lb, ub);
    Xnew22 = Bounds(Xnew22, lb, ub);

    % Brood balls (Equation (4))
    for i = (pNum + 1) : 12
        x(i, :) = bestXX + (rand(1, dim) .* (pX(i, :) - Xnew1) + rand(1, dim) .* (pX(i, :) - Xnew2));
        x(i, :) = Bounds(x(i, :), Xnew1, Xnew2);
        fit(i) = fobj(x(i, :));
    end

    % Small (foraging) beetles (Equation (6))
    for i = 13 : 19
        x(i, :) = pX(i, :) + (randn(1) .* (pX(i, :) - Xnew11) + rand(1, dim) .* (pX(i, :) - Xnew22));
        x(i, :) = Bounds(x(i, :), lb, ub);
        fit(i) = fobj(x(i, :));
    end

    % Thief beetles (Equation (7))
    for j = 20 : pop
        x(j, :) = bestX + randn(1, dim) .* (abs(pX(j, :) - bestXX) + abs(pX(j, :) - bestX)) ./ 2;
        x(j, :) = Bounds(x(j, :), lb, ub);
        fit(j) = fobj(x(j, :));
    end

    % Update the individual best fitness values and the global best fitness value
    XX = pX;
    for i = 1 : pop
        if fit(i) < pFit(i)
            pFit(i) = fit(i);
            pX(i, :) = x(i, :);
        end
        if pFit(i) < fMin
            fMin = pFit(i);
            bestX = pX(i, :);
        end
    end
    Convergence_curve(t) = fMin;
end

% Application of simple limits/bounds
function s = Bounds(s, Lb, Ub)
% Apply the lower bound vector
temp = s;
I = temp < Lb;
temp(I) = Lb(I);
% Apply the upper bound vector
J = temp > Ub;
temp(J) = Ub(J);
% Update this new move
s = temp;

function S = Boundss(SS, LLb, UUb)
% Same as Bounds (kept from the original code, currently unused)
temp = SS;
I = temp < LLb;
temp(I) = LLb(I);
J = temp > UUb;
temp(J) = UUb(J);
S = temp;
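A minimal usage sketch of the DBO function above; the sphere objective and all parameter values are illustrative assumptions (note that the hard-coded group indices in the code assume pop = 30).

% Example call (illustrative settings)
pop  = 30;                       % population size (the hard-coded indices in DBO assume 30)
M    = 500;                      % maximum number of iterations
dim  = 30;                       % problem dimension
c    = -100;  d = 100;           % lower and upper bounds
fobj = @(x) sum(x.^2);           % sphere test function
[fMin, bestX, curve] = DBO(pop, M, c, d, dim, fobj);
semilogy(curve); xlabel('Iteration'); ylabel('Best fitness'); title('DBO convergence');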

References

[1] Xue J, Shen B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization[J]. The Journal of Supercomputing, 2023, 79(7): 7305-7336.

How to obtain the complete code: reply with the keyword TGDM990 in the official-account backend.
