The gray wolf optimization algorithm (GWO) is a pack intelligence optimization algorithm designed by Mirjalili [54]. It was inspired by the social stratification and the hunting and trapping behavior of gray wolf packs, and it offers strong convergence performance, a simple structure, and easy implementation. A self-adaptively adjusted convergence factor and an information feedback mechanism balance local and global search, which gives the algorithm good solution accuracy. Gray wolves are divided into four social classes: α, β, δ, and ω. After sorting according to the fitness function, the α wolves represent the optimal solutions. The β wolves and δ wolves represent the suboptimal solutions, and their role is to assist the α wolves in making decisions. The remaining candidate solutions are defined as ω wolves. The α, β, and δ wolves command the hunting behavior, and the ω wolves follow these higher-level wolves in hunting. Since the position of the prey (the optimal solution) in the solution space is not known, it is assumed that the positions of the α, β, and δ wolves are closest to the prey.
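As a minimal sketch (not the authors' code), this ranking step can be written in MATLAB as follows; the population matrix, fitness vector, and all variable names are hypothetical:

```matlab
% Hypothetical illustration of the GWO social-hierarchy ranking.
% pop     : N-by-d matrix of candidate solutions (one wolf per row)
% fitness : N-by-1 vector of objective values (lower is better here)
[~, order] = sort(fitness);      % rank the wolves by fitness
Xalpha = pop(order(1), :);       % best solution
Xbeta  = pop(order(2), :);       % second-best solution
Xdelta = pop(order(3), :);       % third-best solution
omega  = order(4:end);           % remaining wolves follow the three leaders
```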
As shown in Figure 8 [54], the hunting process in which the gray wolves encircle the prey is represented by the following mathematical model:

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$$

$$\vec{A} = 2a \cdot \vec{r}_1 - a, \qquad \vec{C} = 2 \cdot \vec{r}_2$$

where A and C are coefficient vectors determined by the random vectors r₁ and r₂ in [0, 1] together with the convergence factor a, and X_p(t) and X(t) represent the position vector of the prey and the position vector of the gray wolf at the t-th iteration, respectively [54].
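A compact MATLAB sketch of this encircling step, under the standard GWO formulation cited above, might look like the following (variable names are illustrative and not taken from the paper):

```matlab
% One encircling-prey update for a single wolf (illustrative sketch).
% a  : convergence factor, decreased linearly from 2 to 0 over the iterations
% Xp : 1-by-d position vector of the prey (current best estimate)
% X  : 1-by-d position vector of the wolf being updated; d is the dimension
A = 2*a*rand(1, d) - a;   % coefficient vector A, componentwise in [-a, a]
C = 2*rand(1, d);         % coefficient vector C, componentwise in [0, 2]
D = abs(C.*Xp - X);       % distance between the wolf and the prey
Xnew = Xp - A.*D;         % updated wolf position
```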
Since the location of the optimal solution in the solution space is not known, it is assumed that the locations of the α, β, and δ wolves are closest to the optimal solution. After recording the positions of these three wolves, the remaining wolves are ordered to approach them. During each iteration, the position of each wolf is updated with respect to the α wolf, the β wolf, and the δ wolf via the formulae shown in Equation (12) [54]:

$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|$$

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$$
The positions obtained with respect to the α wolf, the β wolf, and the δ wolf are then averaged, and the result, $\vec{X}(t+1) = (\vec{X}_1 + \vec{X}_2 + \vec{X}_3)/3$, is regarded as the final position after this iteration. When |A| < 1 in Equation (12), the next generation of gray wolves can be located anywhere near the prey, and constantly repeating this behavior drives the pack toward the prey. It is worth mentioning that, in order to prevent falling into local optimal solutions during the optimization process, the gray wolf chooses to move away from the prey when |A| > 1.
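Putting these pieces together, one generation of the position update can be sketched in MATLAB as follows; this is an illustrative sketch of the standard GWO update, not the authors' implementation, and all variable names are assumptions:

```matlab
% One generation of the GWO position update (illustrative sketch).
% pop : N-by-d population matrix; t, maxIter : current / maximum iteration
% Xalpha, Xbeta, Xdelta : 1-by-d positions of the three leading wolves
[N, d]  = size(pop);
a       = 2 - 2*t/maxIter;                % convergence factor decreases from 2 to 0
leaders = [Xalpha; Xbeta; Xdelta];
for i = 1:N
    X    = pop(i, :);
    Xnew = zeros(1, d);
    for k = 1:3                           % move with respect to each leader
        A = 2*a*rand(1, d) - a;
        C = 2*rand(1, d);
        D = abs(C.*leaders(k, :) - X);    % D_alpha, D_beta, D_delta
        Xnew = Xnew + (leaders(k, :) - A.*D);   % accumulate X1, X2, X3
    end
    pop(i, :) = Xnew/3;                   % average of X1, X2, X3
end
```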
To enable the gray wolf optimization algorithm to perform multi-objective optimization, Mirjalili added two components [55]. The first is an external archive responsible for storing the non-dominated Pareto-optimal solutions, which implements the storage of several optimal solutions at once. The second is a leader-selection strategy: Pareto dominance can rank some solutions against each other, but solutions that do not dominate one another cannot be compared directly, so this component selects the new leaders (the α, β, and δ wolves) from the least crowded regions of the non-dominated solutions in the archive according to the roulette wheel method.
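A rough MATLAB sketch of this roulette-wheel leader selection is given below; the grid-cell crowding measure and every variable name are assumptions based on the MOGWO description, not code from the paper:

```matlab
% Hypothetical leader selection from the external archive of non-dominated
% solutions: less crowded grid cells receive a higher selection probability.
% cellIdx : archiveSize-by-1 vector assigning each archive member to a grid cell
occupied = unique(cellIdx);                         % occupied grid cells
counts   = arrayfun(@(c) sum(cellIdx == c), occupied);
prob     = (1./counts) / sum(1./counts);            % favour sparse cells
pick     = find(rand <= cumsum(prob), 1, 'first');  % roulette-wheel spin
members  = find(cellIdx == occupied(pick));
leader   = members(randi(numel(members)));          % archive index of the new leader
```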
In this study, two objective functions were chosen, subject to the bounds on the design parameters as constraints; the expressions are shown in Equation (14), where P and HI denote the random forest pressure generation prediction model and the random forest HI prediction model, respectively, x_i represents the i-th design parameter, and x_i^u and x_i^l are the upper and lower bounds of each design parameter, respectively. All code in this paper was written in the MATLAB environment; the population size in each iteration was set to 100, the maximum size of the external archive was set to 100, and a total of 400 iterations were run on a PC with an Intel(R) Core(TM) i7-12700K CPU at 5.00 GHz and 32 GB of RAM.
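To make this setup concrete, a hypothetical MATLAB wrapper for the two random-forest surrogates and the run parameters is sketched below; rfP, rfHI, and the commented-out mogwo solver call are placeholders for code that is not given in the text:

```matlab
% Hypothetical wrapper turning the two trained random-forest surrogates into
% a two-objective function for the multi-objective GWO (names are illustrative).
% rfP, rfHI : trained regression forests (e.g., TreeBagger objects) predicting
%             the generated pressure and the hemolysis index HI, respectively
objFun = @(x) [predict(rfP, x), predict(rfHI, x)];   % objective values for design x

nPop     = 100;   % population size in each iteration
nArchive = 100;   % maximum size of the external archive
maxIter  = 400;   % total number of iterations
% Each design parameter x_i is kept within its bounds [x_i^l, x_i^u] during the
% search, as stated in Equation (14).
% paretoSet = mogwo(objFun, lb, ub, nPop, nArchive, maxIter);  % hypothetical solver call
```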