1. Introduction
As a class of stochastic approximate optimization techniques, metaheuristic algorithms use search strategies inspired by natural laws and human society to solve optimization problems [1,2]. Research on metaheuristic algorithms can be traced back to 1957, when Fraser proposed concepts such as the genetic algorithm (GA) based on the theory of evolution [3]. The term "metaheuristic" was coined by Fred Glover in 1986 [4]. Subsequently, metaheuristic algorithms developed rapidly, and scholars continued to propose algorithms with various characteristics. Inspired by the foraging behavior of ants, Colorni et al. proposed the ant colony optimization algorithm in 1991 and successfully applied it to the traveling salesman problem (TSP) [5]. Ant colony optimization combined with fuzzy C-means clustering has also been applied to the collaborative multi-task reallocation of heterogeneous UAVs [6]. In 1995, Kennedy and Eberhart, inspired by the foraging behavior of bird flocks, jointly proposed particle swarm optimization (PSO) [7]. Abbass et al., inspired by the reproductive behavior of bees, proposed a bee colony optimization algorithm in 2001 [8]. Karaboga et al., inspired by the honey-collecting mechanism of bees, proposed the artificial bee colony optimization algorithm [9]. Inspired by the phototaxis of bacteria, Pan et al. proposed a bacterial phototaxis optimization algorithm [10]. Tang et al. proposed an improved artificial electric field algorithm (I-AEFA) and applied it to robot three-dimensional path planning [11].
The particle swarm optimization algorithm is one of the most well-known metaheuristic algorithms and has received widespread attention in academia and industry. Improvements to the particle swarm optimization algorithm mainly cover three aspects: population initialization, parameter adaptation, and hybrid optimization. Haupt et al. pointed out that the solution accuracy and convergence speed of metaheuristics are affected by the initial population: the higher the diversity of the population, the stronger the global optimization ability of the particle swarm optimization algorithm [12]. To improve population diversity, scholars have designed many population initialization strategies, such as those based on opposition-based learning [13] and chaos mapping [14]. The opposition-based learning strategy is mainly used to expand the algorithm's search area and has been successfully applied in swarm intelligence optimization algorithms such as the whale optimization algorithm [15] and the gray wolf optimization algorithm [16]. Qin et al. introduced adaptive inertia weights to improve solution accuracy and convergence speed [17]. Chauhan et al. proposed three nonlinear strategies for selecting inertia weights to improve the quality of the algorithm's solutions [18]. Sekyere et al. applied nonlinear adaptive dynamic inertia weights to improve the exploration capability of the algorithm [19]. Hybrid optimization combines other optimization methods with the particle swarm optimization algorithm to overcome the limitations of the individual algorithms and improve overall performance. Li et al. designed a simulated annealing search mechanism within the particle swarm optimization algorithm, which effectively improved its exploitation capability [20]. Blas et al. applied differential evolution algorithms to improve the optimization mechanism of the particle swarm optimization algorithm and enhanced its optimization ability [21]. Zhang et al. proposed an improved hybrid algorithm based on particle swarm optimization and gray wolf optimization that combines their advantages and applied it to clustering problems [22]. To improve the diversity of the initial population, this paper adopts a population initialization strategy based on opposition-based learning. Parameter adaptation mainly improves the velocity update operator in the particle swarm optimization algorithm, coordinating the exploration and exploitation capabilities of the algorithm by adjusting the parameters of the velocity update operator.
The application of the particle swarm optimization algorithm is very extensive. Li et al. proposed a particle swarm optimization algorithm based on fast density peak clustering and successfully applied it to dynamic optimization problems [23]. Lu et al. designed a multi-level particle swarm optimization algorithm and successfully applied it to market-driven workflow scheduling problems on heterogeneous cloud resources with deadline constraints [24]. Aljohani et al. combined random forest and particle swarm optimization algorithms, used the particle swarm optimization algorithm to eliminate redundant features, and successfully and efficiently realized pothole detection on the road [25].
In order to further improve the performance of the particle swarm optimization algorithm, we focused on enhancing the exploration of the particle swarm optimization algorithm to improve its global optimization ability and proposed an improved particle swarm optimization algorithm based on grouping (IPSO) in 2023 [26]. However, the local development ability of the algorithm still has room for improvement, and the algorithm lacks the ability to perform constrained and discrete optimization. In order to further improve the performance and application value of the IPSO algorithm, this paper focuses on improving the local development ability of the IPSO algorithm and its ability to solve discrete problems with constraints. The contribution of this study lies in two aspects. First, on the basis of the IPSO algorithm, we propose a local development strategy based on variable neighborhood search to obtain a new algorithm, VN-IPSO, which improves the algorithm's optimization ability in solving multi-peak functions and composite functions. Second, we design a constrained 0-1 integer programming solver, which enables VN-IPSO to solve constrained discrete optimization problems. This solver is also suitable for other metaheuristic algorithms, expanding their ability to solve discrete problems with constraints.
The rest of this paper is organized as follows. Section 2 introduces the optimization and neighborhood definitions. In Section 3, we describe the improved particle swarm optimization algorithm based on variable neighborhood search. Section 4 introduces the experiment results and analysis. Finally, Section 5 summarizes this paper and describes future research.
2. Definitions and Preliminaries
Optimization is generally understood as the process of exploring all potential values of a problem to find the best solution [27]. Specifically, it is to maximize or minimize a multidimensional function within a given set of constraints, which can be expressed as follows [28]:
$$\mathrm{minimize}\ f(X),\quad \mathrm{s.t.}\ g(X)\le 0,\ X\in {R}^{n}$$
where $X=\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is an n-dimensional solution, ${R}^{n}$ is the domain, $f\left(X\right)$ is the objective function, and $g\left(X\right)\le 0$ is the constraint condition.
A neighborhood is a set of candidate solutions defined by an operator; it contains possible solutions in the problem space that are relatively close to the current solution. Operators are strategies or methods applied to a given solution to generate new solutions in its neighborhood. These operators can be realized in various ways, such as modifying some parameters, exchanging elements, or applying specific rules, so as to explore possible directions of improvement and provide broader choices for solving optimization problems. By using different operators, the algorithm can effectively traverse the whole solution space and improve the chances of finding high-quality solutions.
Definition 1
(Neighborhoods [29]). A neighborhood is a set of candidate solutions defined by an operator.
Definition 2
(Operators [29]). An operator is an operation or method that generates a neighbor solution from a given solution. For the solution ${X}_{0}$, define an operator $f$; the set of all $f\left({X}_{0}\right)$ is a neighborhood, as shown in Figure 1.
The variable neighborhood search algorithm is based on a variety of different neighborhood structures, alternating between different neighborhood structures for searching; its core idea is to use a small neighborhood for rapid improvement and a large neighborhood for deep optimization. The process of the variable neighborhood search algorithm is as follows: (1) when there is no solution better than the current solution in the current neighborhood search, search the next larger neighborhood; and (2) when there is a solution better than the current solution in the current neighborhood search, update the current solution in time, and start the search again from the first neighborhood based on the updated current solution, as shown in Figure 2.
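The two rules above can be sketched as a generic variable neighborhood search loop (a minimal Python illustration; `neighborhoods` is a hypothetical ordered list of operator functions, smallest neighborhood first, and `objf` is assumed to be minimized):

```python
def variable_neighborhood_search(x, f, objf, neighborhoods, max_iter=100):
    """Alternate between neighborhood structures, smallest first.

    On improvement (rule 2), restart from the first neighborhood; otherwise
    (rule 1), move on to the next, larger neighborhood.
    """
    for _ in range(max_iter):
        k = 0
        improved = False
        while k < len(neighborhoods):
            x_new = neighborhoods[k](x)       # generate a neighbor solution
            f_new = objf(x_new)
            if f_new < f:                     # rule (2): update, restart from k = 0
                x, f = x_new, f_new
                improved = True
                k = 0
            else:                             # rule (1): try the next neighborhood
                k += 1
        if not improved:                      # all neighborhoods exhausted
            break
    return x, f
```

Here the restart from the first (smallest) neighborhood after every improvement is what distinguishes variable neighborhood search from a plain multi-operator local search.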
In the knapsack problem, we are given a series of items, each with two attributes, price and volume. Given a limit on the total volume, items must be chosen so that the total price of the items in the bag is the highest [30]. The knapsack problem is NP-complete, and the correctness of a solution can be verified by a deterministic Turing machine in polynomial time. The knapsack problem is widely used in real-world scenarios, such as packing problems, portfolio investment problems with limited funds, and warehousing problems. The knapsack problem can be expressed as shown in Equation (2).
$$\mathrm{max}{\displaystyle \sum}_{j\in J}{w}_{j}{x}_{j},\quad \mathrm{s.t.}\ {\displaystyle \sum}_{j\in J}{v}_{ij}{x}_{j}\le {V}_{i},\ \forall i\in I$$
Here, $J$ denotes the set of items and $I$ the set of knapsacks; ${w}_{j}$ is the price of the ${j}^{th}$ item; ${V}_{i}$ is the capacity of the ${i}^{th}$ knapsack; ${v}_{ij}$ is the volume occupied by the ${j}^{th}$ item when it is put into the ${i}^{th}$ knapsack; and ${x}_{j}\in \left\{0,1\right\}$ indicates whether the ${j}^{th}$ item is selected (1 if selected, 0 otherwise).
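A candidate solution to Equation (2) can be evaluated as follows (a sketch; rejecting infeasible solutions with `-inf` is an illustrative constraint-handling choice, not the scheme used later in the paper):

```python
import numpy as np

def knapsack_value(x, w, v, V):
    """Multidimensional 0-1 knapsack of Eq. (2).

    x : 0-1 selection vector over items (length |J|)
    w : item prices (length |J|)
    v : volume matrix, v[i, j] = volume of item j in knapsack i
    V : knapsack capacities (length |I|)
    """
    x = np.asarray(x)
    if np.any(v @ x > V):      # any capacity constraint violated?
        return -np.inf         # infeasible: simple rejection
    return float(w @ x)        # total price of selected items
```

For example, with two items priced 3 and 4, one knapsack of capacity 4, and volumes 2 and 3, selecting only the first item is feasible, while selecting both violates the capacity constraint.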
3. Methods
We propose an improved particle swarm optimization algorithm based on variable neighborhood search, building on the IPSO algorithm, and design a constrained 0-1 integer programming solver. The flow diagram for the complete work is shown in Figure 3. We introduce the idea of variable neighborhood search and design a local development strategy based on it, which aims to deeply exploit the current optimal particle in each iteration. We design small and large neighborhood search operators, Operator1 and Operator2, to improve the local exploitation capability of the IPSO algorithm. We combine IPSO with the local development strategy based on variable neighborhood search to form VN-IPSO, which has strong exploration and exploitation capabilities for solving continuous optimization functions. Additionally, we propose a coding scheme based on 0-1 integer programming to extend the algorithm's ability to address constrained discrete problems, using the sigmoid function as the mapping scheme.
3.1. Local Development Strategy Based on Variable Neighborhood Search
In [26], a large number of experiments were carried out to verify and analyze the performance of the IPSO algorithm. The IPSO algorithm has extremely strong global exploration and global optimization capabilities, but its local development capabilities still have room for improvement. Therefore, it is worthwhile to improve the local search capabilities of the IPSO algorithm and thereby improve its convergence accuracy and overall solution performance. To improve the local development capabilities of the IPSO algorithm, this paper introduces the idea of variable neighborhood search and designs a local development strategy based on it, as shown in Algorithm 1.
Algorithm 1 Local development strategy based on variable neighborhood search.
1: Input $N$, $X$, $f$, dim, objf(), $ub$, $lb$, steps
2: Define two neighborhood operators, $operato{r}_{1}()$ and $operato{r}_{2}()$
3: Define the IPSO method
4: for $i=1$ to $N$ do
5:  Use the IPSO algorithm to obtain the current global optimal particle gbest and its fitness value $f$
6:  ${X}_{1}$, ${f}_{1}$ = $operato{r}_{1}\left(gbest\right)$
7:  if ${f}_{1}<f$:
8:   gbest = ${X}_{1}$
9:   $f={f}_{1}$
10:   continue
11:  ${X}_{2}$, ${f}_{2}$ = $operato{r}_{2}\left(gbest\right)$
12:  if ${f}_{2}<f$:
13:   gbest = ${X}_{2}$
14:   $f={f}_{2}$
15:   continue
16: end for
17: Output gbest, $f$
The local development strategy based on variable neighborhood search constructs a variable neighborhood search operator by designing multiple neighborhood search operators in a targeted manner. After each round of iteration, the variable neighborhood search is used to start the search from the current global optimal particle. Once a particle better than the current global optimal particle is found, the search is immediately stopped and the search information is returned to update the current global optimal particle and enter the next round of iteration; if a particle better than the current global optimal particle is not found after executing the entire neighborhood space, the next round of iteration is directly entered. This adds more local development operations in each round of iteration, which can theoretically improve the local development capability of the PSO algorithm.
We design two neighborhood operators to improve the local exploitation capability of the IPSO algorithm: the small neighborhood quick exploitation operator (Operator1) and the large neighborhood deep exploration operator (Operator2). Operator1 performs rapid improvement within a small neighborhood, carrying out a locally refined search that strives to discover useful information around the current particle. Operator2 performs deep optimization within a large neighborhood, carrying out a locally coarse search that complements Operator1 and prevents the search process from falling into a local optimum, thereby improving the overall search capability.
(1) Small neighborhood quick exploitation operator
The purpose of Operator1 is to conduct a refined search around the current global optimal particle and mine the useful information around it, as shown in Algorithm 2. Accordingly, its movement amplitude is very small. First, a list of moving steps is defined, $steps=\left[0.1,0.01,0.001,0.0001\right]$. Second, each dimension of the current solution is traversed; each dimension is moved in turn by the step lengths in the list to obtain a new candidate solution, and the fitness value ${f}_{1}$ of the new candidate solution is immediately calculated. If ${f}_{1}>f$, the search continues; if ${f}_{1}<f$, the search is stopped, and the new candidate solution is recorded. Finally, if a new candidate solution has been found, its information is returned.
Algorithm 2 Small neighborhood quick exploitation operator.
1: Input $X$, $f$, dim, objf(), $ub$, $lb$, steps
2: for $i=1$ to dim do
3:  for $j=1$ to len(steps) do
4:   ${X}^{\prime}=X$
5:   ${X}^{\prime}\left(i\right)={X}^{\prime}\left(i\right)+steps\left[j\right]$
6:   ${X}^{\prime}=numpy.clip\left({X}^{\prime},lb,ub\right)$
7:   ${f}_{1}=objf\left({X}^{\prime}\right)$
8:   ${X}^{\prime}=X$
9:   ${X}^{\prime}\left(i\right)={X}^{\prime}\left(i\right)-steps\left[j\right]$
10:   ${X}^{\prime}=numpy.clip\left({X}^{\prime},lb,ub\right)$
11:   ${f}_{2}=objf\left({X}^{\prime}\right)$
12:   Select the first improving neighbor solution ${X}^{\prime}$ that appears in the neighborhood
13:  end for
14: end for
15: Output ${X}^{\prime}$
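Algorithm 2 can be sketched in Python as follows (a first-improvement variant, assuming `objf` is to be minimized and that `lb` and `ub` are scalar bounds; names are illustrative):

```python
import numpy as np

def operator1(X, f, objf, lb, ub, steps=(0.1, 0.01, 0.001, 0.0001)):
    """Small-neighborhood operator: nudge each dimension by +/- each step
    and return the first improving neighbor (cf. Algorithm 2)."""
    for i in range(X.size):
        for s in steps:
            for delta in (+s, -s):
                Xp = X.copy()
                Xp[i] += delta                 # move one dimension by one step
                Xp = np.clip(Xp, lb, ub)       # keep within the search bounds
                fp = objf(Xp)
                if fp < f:                     # first improvement: stop searching
                    return Xp, fp
    return X, f                                # neighborhood exhausted, no gain
```

On a sphere objective starting from (0.5, 0.5), the first improving neighbor is found by the -0.1 move of the first dimension.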
(2) Large neighborhood deep exploration operator
The purpose of Operator2 is to search a larger area around the current global optimal particle, perform deep optimization, prevent the entire search process from falling into a local optimum, and improve the algorithm's ability to jump out of local optima, as shown in Algorithm 3. Accordingly, its search range is larger and more random. First, we define the search step size. Second, we define the large neighborhood deep exploration operator, which contains two functions: a selection function, which selects the mutation position of the current solution, and a mutation function, which mutates the selected position to generate a neighbor solution. The search terminates and the neighbor solution information is returned if an improving neighbor solution is found; otherwise, the search continues.
Algorithm 3 Large neighborhood deep exploration operator.
1: Input $X$, $f$, dim, objf(), $ub$, $lb$, $nums$
2: for $i=1$ to $nums$ do
3:  ${X}^{\prime}=X$
4:  $index$ = randint(1, dim)
5:  ${X}^{\prime}\left[index\right]$ = rand * (ub − lb) + lb
6:  ${X}^{\prime}$ = clip(${X}^{\prime}$, lb, ub)
7:  ${f}^{\prime}$ = objf(${X}^{\prime}$)
8:  Find the first occurrence of a better neighbor solution ${X}^{\prime}$
9: end for
10: Output ${X}^{\prime}$
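Algorithm 3 can likewise be sketched in Python (again a first-improvement variant under the same minimization and scalar-bound assumptions; `nums` is the neighborhood sampling budget):

```python
import numpy as np

def operator2(X, f, objf, lb, ub, nums=20, rng=None):
    """Large-neighborhood operator: reset one random dimension to a random
    value in [lb, ub]; return the first improving neighbor (cf. Algorithm 3)."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(nums):
        Xp = X.copy()
        idx = rng.integers(0, X.size)            # random mutation position
        Xp[idx] = lb + rng.random() * (ub - lb)  # random restart of that dimension
        Xp = np.clip(Xp, lb, ub)
        fp = objf(Xp)
        if fp < f:                               # first improvement: stop
            return Xp, fp
    return X, f                                  # no improving neighbor found
```

The random full-range reset gives this operator its larger, more random reach compared with the fixed small steps of Operator1.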
3.2. The Improved Particle Swarm Optimization Algorithm Based on Variable Neighborhood Search
In Section 3.2.1, we describe the complete process and key steps of the VN-IPSO algorithm. Then, in Section 3.2.2, we give a coding scheme to expand the application scope of the VN-IPSO algorithm, from solving continuous problems to solving discrete problems.
3.2.1. The Flow of the VN-IPSO Algorithm
As in the IPSO algorithm, each particle has two attributes, namely, velocity and geometric position. In each iteration, on the one hand, the local development strategy based on variable neighborhood search is used to deeply optimize the current global optimal solution; on the other hand, each particle updates its velocity and position by Equations (3) and (4) to constantly approach the optimal position. Figure 3 shows the process of the VN-IPSO algorithm. The concrete implementation steps of the improved particle swarm optimization algorithm based on variable neighborhood search are as follows.
1. Population initialization
Firstly, the initial velocities and geometric positions of $N$ particles are randomly generated in the solution space. ${V}_{i}\left(0\right)$ represents the initial velocity of the ${i}^{th}$ particle, which is a D-dimensional vector. $X{1}_{i}\left(0\right)$ represents the initial position of the ${i}^{th}$ particle in population X1.
$${V}_{i}\left(0\right)=\left({v}_{1}\left(0\right),{v}_{2}\left(0\right),\cdots ,{v}_{D}\left(0\right)\right),i=1,2,\cdots ,N$$
$$\begin{array}{c}X{1}_{i}\left(0\right)=\left({x}_{1}\left(0\right),{x}_{2}\left(0\right),\cdots ,{x}_{D}\left(0\right)\right),i=1,2,\cdots ,N\end{array}$$
Secondly, the $N$ opposite points of the initial geometric positions in $X1$ are computed, forming the population $X2$. Finally, from the particle set composed of the $N$ particles in $X1$ and the $N$ opposite points in $X2$, the $N$ particles with the best fitness values are selected as the initial population, denoted $X$.
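The opposition-based initialization can be sketched as follows (assuming minimization and scalar bounds `lb`, `ub`; a simplified illustration, not the exact IPSO implementation):

```python
import numpy as np

def opposition_init(N, dim, lb, ub, objf, seed=0):
    """Keep the N fittest particles from X1 and its opposite population X2."""
    rng = np.random.default_rng(seed)
    X1 = lb + rng.random((N, dim)) * (ub - lb)   # random initial positions
    X2 = lb + ub - X1                            # opposite points of X1
    pool = np.vstack([X1, X2])                   # 2N candidate particles
    fit = np.array([objf(x) for x in pool])
    return pool[np.argsort(fit)[:N]]             # N best by fitness (minimization)
```

Selecting from the combined pool of 2N candidates is what raises the diversity and quality of the initial population compared with purely random initialization.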
2. Grouping
We take $X=\left({X}_{1}\left(0\right),{X}_{2}\left(0\right),\cdots ,{X}_{N}\left(0\right)\right)$ as the data set (the dimension of the data set is: $N\times dim$), use the K-Means algorithm to group the $N$ particles in the initial population, and set the group number $K=\mathrm{max}\left(int\left({\displaystyle \frac{N}{40}}\right),1\right)$. Then, we obtain $X=\left({X}_{k1}\left(0\right),{X}_{k2}\left(0\right),\cdots ,{X}_{ki}\left(0\right),\cdots ,{X}_{kN}\left(0\right)\right)$, where ${X}_{ki}\left(0\right)$ means that the particle $i$ belongs to the ${k}^{th}$ group.
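The grouping step can be sketched with a plain NumPy K-Means (in practice a library implementation such as scikit-learn's `KMeans` would serve; this self-contained version is illustrative):

```python
import numpy as np

def kmeans_group(X, iters=20, seed=0):
    """Group N particles into K = max(int(N/40), 1) clusters with K-Means."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    K = max(int(N / 40), 1)                                # group count rule
    centers = X[rng.choice(N, K, replace=False)]           # random initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)                          # nearest-center assignment
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)   # recompute centroids
    return labels, K
```

With the population size of 200 used later in the experiments, this rule yields K = 5 groups.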
3. Choosing
Firstly, we find the optimal position ${P}_{ki}\left({p}_{i1},{p}_{i2},\cdots ,{p}_{in}\right)$ found by each particle; secondly, we find the optimal position ${P}_{kg}\left({p}_{i1},{p}_{i2},\cdots ,{p}_{in}\right)$ found by each group; and finally, we find the optimal position ${P}_{g}\left(t\right)\in \left\{{P}_{k1}\left(t\right),{P}_{k2}\left(t\right),\cdots ,{P}_{kN}\left(t\right)\right\}$ found by the current population so far.
4. Local development search
The purpose of this step is to locally develop the current optimal particle ${P}_{g}\left(t\right)$ of the population obtained in Step (3), thereby improving the overall development capability and convergence speed. According to the variable neighborhood search algorithm process shown in Figure 2, the two operators Operator1 and Operator2 are executed in sequence on the current optimal particle ${P}_{g}\left(t\right)$ of the population. If a better particle ${{P}^{\prime}}_{g}\left(t\right)$ is found during the process, the search is stopped immediately, and the current optimal particle of the population is updated to ${P}_{g}\left(t\right)={{P}^{\prime}}_{g}\left(t\right)$.
5. Update operator
For each particle, we update the velocity and position:
$$\begin{array}{c}{V}_{i}\left(t+1\right)=\omega \times {V}_{i}\left(t\right)+{c}_{1}\times {r}_{1}\left(t\right)\left({P}_{ki}\left(t\right)-{X}_{i}\left(t\right)\right)+\\ {c}_{2}\times {r}_{2}\left(t\right)\left({P}_{kg}\left(t\right)-{X}_{i}\left(t\right)\right)+{c}_{3}\times {r}_{3}\left(t\right)\left({P}_{g}\left(t\right)-{X}_{i}\left(t\right)\right)\end{array}$$
$${X}_{i}\left(t+1\right)={X}_{i}\left(t\right)+\alpha \times {V}_{i}\left(t+1\right)$$
where $\omega $ is the inertia factor; ${c}_{1}$, ${c}_{2}$, and ${c}_{3}$ are acceleration constants; and ${r}_{1}\left(t\right)$, ${r}_{2}\left(t\right)$, and ${r}_{3}\left(t\right)$ are independent random numbers between 0 and 1 at time $t$.
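Equations (3) and (4) translate directly into code (a sketch; the velocity limit `v_max` reflects the $Max(V)=6$ setting reported in Section 4, and `alpha` is the step-size factor of Equation (4)):

```python
import numpy as np

def update_particle(X_i, V_i, P_ki, P_kg, P_g, w, c, r=None, alpha=1.0, v_max=6.0):
    """One VN-IPSO velocity/position update, Eqs. (3) and (4)."""
    if r is None:
        r = np.random.default_rng().random(3)   # r1, r2, r3 in [0, 1)
    r1, r2, r3 = r
    c1, c2, c3 = c
    V_new = (w * V_i
             + c1 * r1 * (P_ki - X_i)           # attraction to personal best
             + c2 * r2 * (P_kg - X_i)           # attraction to group best
             + c3 * r3 * (P_g - X_i))           # attraction to global best
    V_new = np.clip(V_new, -v_max, v_max)       # assumed velocity limit Max(V) = 6
    X_new = X_i + alpha * V_new                 # position update, Eq. (4)
    return X_new, V_new
```

The extra group-best term $c_2 r_2 (P_{kg} - X_i)$ is what distinguishes this grouped update from the classic two-term PSO velocity rule.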
6. Termination inspection
If $X\left(t+1\right)$ contains an approximate solution that meets the required accuracy, or the maximum number of iterations $T$ has been reached, we stop the calculation and output the current optimal position of the population.
$$X\left(t+1\right)=\left({X}_{1}\left(t+1\right),{X}_{2}\left(t+1\right),\cdots ,{X}_{N}\left(t+1\right)\right)$$
Otherwise, we set $t\leftarrow t+1$ and go to Step (3). The process of the improved particle swarm optimization algorithm based on variable neighborhood search is shown in Figure 4.
3.2.2. Coding Scheme for Discrete Optimization
VN-IPSO by itself can only solve continuous optimization problems, not discrete ones. In this section, a coding scheme is studied to establish a mapping between the element value of each particle dimension and {0, 1}, so that VN-IPSO can be applied to 0-1 integer programming problems. In this paper, the sigmoid function is modified to perform the mapping, as shown in Equation (8) [31].
$${x}_{i}^{binary}=\begin{cases}1, & {x}_{i}\ge ub\\ 1, & lb<{x}_{i}<ub,\ rand\le {\displaystyle \frac{1}{1+{e}^{-{x}_{i}}}}\\ 0, & lb<{x}_{i}<ub,\ rand>{\displaystyle \frac{1}{1+{e}^{-{x}_{i}}}}\\ 0, & {x}_{i}\le lb\end{cases}$$
where ${x}_{i}$ represents the value of the particle's ${i}^{th}$ dimension, $rand$ is a random number within [0,1], and $ub$ and $lb$ are real numbers within [0,1]. Simulation experiments on several ($ub$, $lb$) pairs show that the mapping works best when $ub=0.8$ and $lb=0.2$. The correspondence between ${x}_{i}$ and ${x}_{i}^{binary}$ is shown in Figure 5: a red particle indicates that the value in that dimension is mapped to 1, and a black particle indicates that it is mapped to 0. When ${x}_{i}\le 0.2$, the value in that dimension is 0; when $0.2<{x}_{i}<0.8$, the value is 1 with probability ${p}_{1}={\displaystyle \frac{1}{1+{e}^{-{x}_{i}}}}$ and 0 with probability ${p}_{2}=1-{\displaystyle \frac{1}{1+{e}^{-{x}_{i}}}}$; and when ${x}_{i}\ge 0.8$, the value is 1. Equation (8) guarantees that the probability of taking 1 in a dimension increases with ${x}_{i}$, while retaining a degree of randomness.
In the process of the algorithm solving the 0-1 integer programming problem, the mapping between ${x}_{i}^{binary}$ and ${x}_{i}$ only occurs during particle evaluation and the optimal solution output. When evaluating the particle, the values of each dimension of the particle are mapped and the corresponding evaluation function value is calculated.
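Equation (8) can be implemented in vectorized form (a sketch using the $ub=0.8$, $lb=0.2$ values reported above; applied only at particle evaluation and output time, as described):

```python
import numpy as np

def binary_map(x, ub=0.8, lb=0.2, rng=None):
    """Map a continuous particle vector to a 0-1 vector per Eq. (8)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    p = 1.0 / (1.0 + np.exp(-x))       # sigmoid probability of taking 1
    r = rng.random(x.shape)
    b = (r <= p).astype(int)           # stochastic region: lb < x < ub
    b[x >= ub] = 1                     # saturate high values to 1
    b[x <= lb] = 0                     # saturate low values to 0
    return b
```

The continuous particle itself is never altered; only its evaluation passes through this mapping, so the continuous search dynamics of VN-IPSO are preserved.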
4. Results
To evaluate the performance of the VN-IPSO algorithm and the effectiveness of the coding scheme, we compare VN-IPSO with several strong comparative algorithms on 23 classic benchmark functions [32], 6 knapsack problem benchmark sets [33], and 10 CEC2017 composite functions [34]. In Section 4.1, the 23 classic benchmark functions, 6 knapsack problem benchmark sets, 10 CEC2017 composite functions, and the comparison algorithms are described. In Section 4.2, the solution process of the improved particle swarm optimization algorithm based on variable neighborhood search is qualitatively analyzed from the perspectives of search history (population position changes), the trajectory of the first particle, the average fitness of the population, and convergence. In Section 4.3, the result data of each algorithm on the 23 benchmark functions and 6 knapsack problem benchmark sets are analyzed in detail. The population size and maximum number of iterations of all optimization algorithms are set to 200 and 1500, respectively; other parameter settings are consistent with the sources of each algorithm. Each algorithm is run independently 10 times on each benchmark problem, and the average performance is reported.
4.1. Benchmark Functions and Comparison Algorithms
The benchmark functions used in this section are divided into two parts. The first includes the 23 classic benchmark functions and 6 knapsack problem benchmark sets; the former are continuous optimization problems, and the latter are discrete optimization problems with constraints. The second part includes the CEC2017 composite functions, which are complex function-solving problems. Through the two parts of the test problems, the performance of the VN-IPSO algorithm is comprehensively evaluated. The Friedman test [35], as a non-parametric test method, can detect significant differences in the performances of several algorithms, and the average ranking of each algorithm on the benchmark functions can be obtained through its use. Among the 23 benchmark functions, F1–F7 are unimodal optimization problems, and F8–F23 are multimodal optimization problems. Unimodal functions are generally used to test the exploitation ability of an algorithm, while multimodal functions are generally used to test its exploration ability. For the six knapsack problem benchmark sets, detailed information is shown in Table 1, including the name, number of knapsacks, number of items, item prices, knapsack capacities, constraint coefficient matrix, and optimal value of each benchmark set. The detailed data on the item prices and constraint matrix coefficients are shown in the attachment. In addition, the mathematical model of each knapsack problem can be obtained by substituting its parameters into Equation (2). Among the CEC2017 functions, C20–C29 are composite functions, which are used to test the algorithm's ability to solve complex problems.
When solving the first part of the benchmark functions, we select six representative algorithms that simulate biological behaviors, including Harris hawk optimization (HHO) [36], the osprey optimization algorithm (OOA) [37], the sparrow search algorithm (SSA) [38], and dung beetle optimization (DBO) [39]. When solving the second part of the benchmark functions, we choose PSO, PSO with comprehensive learning and a modified dynamic multi-swarm strategy (CLDMSL-PSO) [40], guided adaptive search-based PSO (GuASPSO) [41], and success-history based parameter adaptation for differential evolution (SHADE) [42]. CLDMSL-PSO and GuASPSO are improved PSO algorithms that have shown excellent performance in recent years. SHADE is a winning algorithm of the CEC competition that is widely used to compare the performances of algorithms on single-objective functions. Comparison algorithms from these different sources can show the performance of the VN-IPSO algorithm more comprehensively.
In the experiment, the parameters of VN-IPSO are set as follows: $Max(V)=6$; $Max(\omega )=0.8$; $Min(\omega )=0.2$; $Min({c}_{1})=0.1$; $Min({c}_{2})=0.5$; $Min({c}_{3})=0.2$; $Max({c}_{1})=2$; $Max({c}_{2})=2.5$; $Max({c}_{3})=3$. On the one hand, these parameter values and their upper and lower limits are based on the original PSO settings and on improved PSO parameter settings from the literature. On the other hand, the value ranges were refined through extensive experiments to ensure the excellent performance of the algorithm.
4.2. Qualitative Analysis
In order to conduct a qualitative analysis of the solving performance of VN-IPSO, we analyze four well-known indicators in the field: search history (population position changes), the movement trajectory of the first particle, the average fitness of the population, and the best fitness value of the population. By recording the initial, intermediate, and final states of the population, the search history shows the position changes of all particles during the iterative process and can reflect the movement trend of the population. The trajectory of the first particle is monitored by recording the change in the first dimension of the first particle during the iterations. The average fitness of the population is recorded in each generation to reflect the change in overall quality and population stability. The best fitness value of the population, recorded in each generation, reflects the convergence of the algorithm.
One benchmark function is selected from each class of the 23 benchmark functions for display and analysis, and two benchmark test sets, WEING1 and WEING2, are selected from the 6 benchmark test sets. Every complete qualitative analysis consists of four indicators: search history (population position changes), the movement trajectory of the first particle, the average fitness of the population, and convergence. The search history consists of green, blue, and red dots, which represent the initial, intermediate, and final states of the population, respectively. In the first-particle trajectory diagram, the $x$ axis is the number of iterations and the $y$ axis is the value of the first dimension of the first particle. In the population average fitness graph, the $x$ axis is the number of iterations and the $y$ axis is the average fitness value of the population. In the population best fitness diagram, the $x$ axis is the number of iterations and the $y$ axis is the best fitness value of the population. Figure 5 shows the qualitative analysis results of VN-IPSO for single-peak optimization problems (F1) and multi-peak optimization problems (F9, F18). Figure 6 shows the qualitative analysis results of VN-IPSO when solving constrained 0-1 integer programming problems (WEING1 and WEING2).
As can be seen from the search histories in Figure 6a–c and Figure 7a,b, the population gradually gathers from the initial randomly dispersed green state into the intermediate blue state during the iterative process and finally converges to the red points. This reflects that the population of the VN-IPSO algorithm has good diversity in the initial stage, and the convergence of VN-IPSO gradually appears as the iterations advance. In addition, comparing the search history of the VN-IPSO algorithm with that of IPSO, the intermediate state of VN-IPSO is more concentrated, indicating that the convergence speed of the VN-IPSO algorithm is improved over that of the IPSO algorithm.
It can be seen from the first-particle trajectories in Figure 6a–c and Figure 7a,b that the first particle moves with a high frequency and large amplitude in the beginning phase; as the number of iterations increases, the position of the first particle tends to be stable and finally settles at one position. This shows that the VN-IPSO algorithm has a convergence tendency and can find a stable point. In addition, comparing the first-particle trajectory of the VN-IPSO algorithm with that of the IPSO algorithm, the trajectory of VN-IPSO is more stable, indicating higher stability and a faster convergence speed than those of the IPSO algorithm.
From the population average fitness curves (the average fitness of all particles) in Figure 6a–c and Figure 7a,b, it can be seen that the population average fitness value declines with some fluctuation, slowly at first and then rapidly. This reflects that the overall quality of the population in the VN-IPSO algorithm improves with slight fluctuations and finally tends to be stable.
According to the convergence curves in Figure 6a–c and Figure 7a,b, the best fitness value of the population decreases rapidly at first and then remains stable as the number of iterations increases, indicating that the VN-IPSO algorithm converges. In addition, comparing the population best fitness graphs of the VN-IPSO and IPSO algorithms, on benchmark function F9 the VN-IPSO algorithm converges faster, while on the other benchmark functions there is little difference between the two, indicating that the overall convergence ability of the VN-IPSO algorithm is stronger than that of the IPSO algorithm.
By analyzing the search history, the trajectory of the first particle, the average fitness value of the population, and the best fitness value of the population, it can be seen that the VN-IPSO algorithm performs well in terms of population diversity, convergence speed, optimization ability, and robustness in both continuous optimization problems and constrained 0-1 integer programming problems, including the six benchmark test sets of knapsack problems. The results show that the 0-1 integer programming solution designed in this paper helps VN-IPSO to solve constrained discrete optimization problems to some extent.
4.3. Quantitative Analysis
In this section, the VN-IPSO algorithm and the comparison algorithms are each run 10 times independently on the 23 benchmark test functions and the 6 benchmark test sets of knapsack problems, yielding 10 experimental results for each algorithm on each benchmark problem. The experimental results are then processed and analyzed. The population size and maximum number of iterations of all optimization algorithms are set to 200 and 1500, respectively; the other parameter settings are consistent with the sources of each algorithm.
In the experiments, the average error, standard deviation, and Friedman ranking are used as quantitative indicators to evaluate the performance of the VN-IPSO algorithm. Table 2 records the average error and standard deviation of each algorithm on the 23 benchmark functions; the ranking of the eight algorithms is marked on each benchmark function, and the first-ranked algorithm is marked in bold. In addition, the number of first places each algorithm wins across the 23 benchmark functions, on the unimodal functions, and on the multi-modal functions is counted, with the winner in each of the three cases marked in bold. The end of Table 2 also records the Friedman ranking of the eight algorithms: VN-IPSO ranks first, IPSO second, SSA third, HHO fourth, PSO fifth, OOA sixth, DBO seventh, and GA eighth. The results show that the VN-IPSO algorithm is highly competitive against IPSO and the other classical algorithms and that its comprehensive solving performance is high, demonstrating that the improvement over the IPSO algorithm is effective.
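The Friedman ranking used here assigns each algorithm its rank on every problem (lower error is better, ties averaged) and then averages those ranks across problems. A minimal sketch of that computation follows; the layout of the `errors` matrix (problems as rows, algorithms as columns) is an assumption for illustration.

```python
import numpy as np

def friedman_ranks(errors):
    """Average Friedman rank per algorithm.

    errors: (n_problems, n_algorithms) array of mean error values,
    lower is better. Tied values receive the average of the tied ranks.
    Returns an (n_algorithms,) array of average ranks (smaller is better).
    """
    errors = np.asarray(errors, dtype=float)
    n_problems, n_algos = errors.shape
    ranks = np.empty_like(errors)
    for i in range(n_problems):
        order = errors[i].argsort()
        r = np.empty(n_algos)
        r[order] = np.arange(1, n_algos + 1)
        # average the ranks over tied error values
        for val in np.unique(errors[i]):
            tied = errors[i] == val
            r[tied] = r[tied].mean()
        ranks[i] = r
    return ranks.mean(axis=0)
```

Sorting the resulting average ranks gives the overall ordering reported at the end of Table 2.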
Among the experiments on the 23 benchmark functions in Table 2, the VN-IPSO algorithm obtained 18 first places, the IPSO algorithm 13, the SSA algorithm 5, the PSO algorithm 3, the HHO algorithm 2, the DBO algorithm 2, and the OOA algorithm 1 (a tie for first place is counted for every tied algorithm), while the GA algorithm did not win first place in any experiment. The VN-IPSO algorithm performed best on most of the benchmark functions, which demonstrates its excellent performance. We also counted the number of first places for each algorithm on the unimodal and multi-modal functions separately, as shown at the end of Table 2.
Among the seven unimodal functions, the VN-IPSO algorithm won first place six times, the IPSO algorithm twice, and the SSA algorithm once. This indicates that the VN-IPSO algorithm has better exploitation ability than IPSO: the local development strategy effectively improves the exploitation ability.
Among the 16 multi-modal functions, the VN-IPSO algorithm won first place 12 times, the IPSO algorithm 11 times, the SSA algorithm 4 times, the PSO algorithm 3 times, the HHO algorithm 2 times, the DBO algorithm 2 times, and the OOA algorithm once, showing that the exploration ability of VN-IPSO exceeds that of the other comparison algorithms.
Therefore, compared with the IPSO algorithm and the other six comparison algorithms, the VN-IPSO algorithm surpassed the others on both the unimodal and the multi-modal functions. Unimodal functions verify the exploitation ability of an algorithm, and multi-modal functions verify its exploration ability, so these results indicate that the VN-IPSO algorithm has excellent exploration and exploitation capabilities and achieves a balance between them. They also indicate that the local development strategy based on variable neighborhood search designed in this paper is effective for improving the IPSO algorithm: it not only improves the local exploitation ability of IPSO but also balances exploration and exploitation, further improving the performance of the algorithm.
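To make the variable-neighborhood local development idea concrete, the following is a rough illustration, not the authors' exact operators or neighborhood structures: the current best solution is repeatedly perturbed inside Gaussian neighborhoods of shrinking radius; an improvement sends the search back to the widest neighborhood, and the refinement stops when all neighborhoods fail in turn or an evaluation budget runs out. The radii schedule, number of tries, and budget are illustrative choices.

```python
import numpy as np

def vns_refine(f, x0, radii=(1.0, 0.1, 0.01), tries=20, budget=300, seed=0):
    """Variable-neighborhood refinement of a single solution (sketch)."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float)
    f_best = f(best)
    k, evals = 0, 0
    while k < len(radii) and evals < budget:
        improved = False
        for _ in range(tries):
            # sample a neighbor from the k-th (Gaussian) neighborhood
            cand = best + rng.normal(0.0, radii[k], size=best.shape)
            f_cand = f(cand)
            evals += 1
            if f_cand < f_best:
                best, f_best = cand, f_cand
                improved = True
        # on improvement restart from the widest neighborhood,
        # otherwise move on to the next, smaller one
        k = 0 if improved else k + 1
    return best, f_best
```

In a VN-IPSO-style loop, such a refinement step would be applied to the population's current best particle in each iteration, which is what deepens exploitation without touching the rest of the swarm.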
In addition, the detailed experimental results of each algorithm on the six knapsack problem benchmark test sets are recorded in Table 3. VN-IPSO's results on most datasets are better than those of the other seven algorithms: on the six discrete benchmark test sets, VN-IPSO won four first places, surpassing the other seven algorithms. According to the Friedman ranking, VN-IPSO ranks first, GA second, IPSO third, OOA fourth, HHO fifth, PSO sixth, SSA seventh, and DBO eighth. The experimental results not only prove the high solving performance of the VN-IPSO algorithm but also verify the effectiveness of the 0-1 integer programming solution: the comprehensive solving performance of VN-IPSO is better than that of the comparison algorithms, and, combined with the 0-1 integer programming solution, it is also applicable to constrained 0-1 integer programming problems. From the above analysis, VN-IPSO is superior to IPSO, GA, DBO, PSO, SSA, HHO, and OOA on most problems; it has high computational efficiency as well as strong optimization, convergence, global optimization, and exploration and exploitation abilities. When solving constrained 0-1 integer programming problems, the VN-IPSO algorithm also shows strong competitiveness, indicating that it can not only efficiently solve unconstrained continuous optimization problems but also effectively solve constrained 0-1 integer programming problems.
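Applying a continuous optimizer to these knapsack sets requires mapping continuous particle positions to 0-1 decision vectors and handling the capacity constraints. The sketch below shows one common construction (a threshold mapping plus a penalty term); it is an illustration of the general idea, not necessarily the exact scheme designed in this paper, and the function names and penalty weight are assumptions.

```python
import numpy as np

def to_binary(x, threshold=0.5):
    # Map a continuous position in [0, 1]^m to a 0-1 decision vector.
    return (np.asarray(x) > threshold).astype(int)

def knapsack_fitness(x_bin, prices, weights, capacities, penalty=1e6):
    """Penalized objective for a multi-constraint 0-1 knapsack.

    prices: (m,) item profits; weights: (n, m) constraint matrix;
    capacities: (n,) knapsack capacities. Returns a value to MINIMIZE,
    so feasible high-profit solutions score lowest.
    """
    x_bin = np.asarray(x_bin)
    profit = float(prices @ x_bin)
    overflow = np.maximum(weights @ x_bin - capacities, 0.0)
    # infeasible solutions are pushed away by the penalty term
    return -profit + penalty * float(overflow.sum())
```

With such a wrapper, each of the WEING/WEISH instances in Table 1 becomes an unconstrained minimization problem that any of the compared swarm algorithms can attack directly.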
For these experiments, each comparison algorithm is run 30 times independently on each test function, with a maximum of 500 iterations per run and a population size of 200. To further demonstrate the performance of the proposed VN-IPSO algorithm on composition functions, we compared VN-IPSO with the particle swarm optimization algorithm, three improved particle swarm algorithms proposed in recent years, and SHADE, a winning algorithm of the CEC competition, on the composition test functions of CEC2017. The results are shown in Table 4, which records the average error and standard deviation of each algorithm on the 10 composition functions and marks the ranking of the six algorithms on each function; the first-ranked algorithm is marked in bold, and the number of first places each algorithm wins across the 10 composition functions is counted. Table 4 also gives the Friedman scores and rankings of the six algorithms; according to the Friedman ranking, VN-IPSO ranks first and IPSO second. The results show that the VN-IPSO algorithm is highly competitive against SHADE and multiple improved particle swarm algorithms and performs excellently on composition functions.
We compare and analyze the convergence of VN-IPSO and the other five algorithms; Figure 8 plots the convergence curves on the 10 composition test functions in an independent test. There are large differences in search speed, solution accuracy, and convergence speed among the algorithms. On the C21, C22, C24, C25, C26, and C28 functions, VN-IPSO converges quickly and with high accuracy, with obvious advantages among the six algorithms. On the C20 and C29 test functions, SHADE converges quickly and has the highest accuracy, with VN-IPSO second. On the C23 test function, SHADE has the fastest convergence speed and the highest accuracy, CLDMSL-PSO ranks second, PSO third, VN-IPSO fourth, and IPSO and GuASPSO fifth. On the C27 test function, SHADE has the fastest convergence speed and the highest accuracy, CLDMSL-PSO is second, PSO is third, IPSO and GuASPSO are both fourth, and VN-IPSO is sixth. Overall, VN-IPSO has a better convergence performance than the other algorithms on most composition functions, and its convergence ability is very stable.
We compare and analyze the robustness of VN-IPSO and the five other algorithms using the box plots of the six algorithms on the 10 composition functions shown in Figure 9. There are great differences in solving accuracy and robustness among the algorithms. On the C26 and C28 functions, VN-IPSO has the highest robustness and precision. On the C20, C22, and C27 functions, SHADE has the highest robustness and precision, with VN-IPSO second. On the C23 and C29 functions, the data distribution of VN-IPSO is relatively concentrated and its performance is average. On the C21, C24, and C25 functions, the data distribution of VN-IPSO is wide and its robustness is weak. VN-IPSO and SHADE perform best on most test functions, with relatively concentrated data distributions. CLDMSL-PSO performs mediocrely on some test functions, with a wide data distribution and a small number of outliers. GuASPSO, PSO, and IPSO perform averagely on most test functions, with wide data distributions and some outliers.
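The quantities a box plot summarizes (median, interquartile range, and outliers under Tukey's 1.5 × IQR rule) can be computed directly from the repeated runs, which is useful when reading robustness off Figure 9 numerically. The sketch below is generic and not tied to the paper's plotting code; smaller IQR and fewer outliers correspond to higher robustness.

```python
import numpy as np

def box_stats(runs):
    """Tukey box-plot statistics for one algorithm's repeated run results."""
    runs = np.asarray(runs, dtype=float)
    q1, med, q3 = np.percentile(runs, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = runs[(runs < lo) | (runs > hi)]
    return {"median": float(med), "iqr": float(iqr),
            "outliers": outliers.tolist()}
```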
We perform a computational complexity analysis of VN-IPSO. First, the time complexity: the initialization stage involves population initialization, initial fitness calculation, and k-means clustering, with a time complexity of O(N × D). The main loop includes fitness evaluation, updating of the individual and population optima, velocity and position updates, and other operations that iterate over the population several times, with a time complexity of O(T × K × N × D). Therefore, the time complexity of the overall algorithm is approximately O(T × K × N × D). Next, the space complexity: storing the positions, velocities, and opposite points of the initial population requires O(2 × N × D) space; storing the number of samples, labels, and other information for each cluster requires O(K × N × D) space; and storing parameter variables, nearest neighbors, and optimal values requires O(N × D) space. Therefore, the overall space complexity of the algorithm is approximately O(N × D + K × N × D). The overall time complexity of the original particle swarm optimization algorithm is approximately O(T × N × D) and its space complexity O(N × D), so the difference in time and space complexity between our algorithm and the original PSO is mainly determined by the number of clusters, and the difference is small when the number of clusters is small.
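As a sanity check on the analysis, the dominant operation counts can be written down directly from the loop structure (symbols as above: T iterations, K clusters, N population size, D dimension; constants are dropped in O-notation, so the function below is only a coarse mirror of the derivation, not a measurement).

```python
def vn_ipso_cost(T, K, N, D):
    """Coarse operation counts mirroring the complexity analysis.

    Returns (time_count, space_count): the initialization plus main-loop
    work, and the dominant storage terms, respectively.
    """
    init = N * D                # initialization, opposition points: O(N x D)
    loop = T * K * N * D        # per-iteration group-wise updates: O(T x K x N x D)
    space = N * D + K * N * D   # positions/velocities + per-cluster bookkeeping
    return init + loop, space
```

Setting K = 1 collapses the loop term to T × N × D, recovering the cost of the original PSO, which is why the overhead relative to PSO scales with the number of clusters.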
5. Conclusions
The local development ability of the IPSO algorithm still has room for improvement, and it lacks constrained and discrete optimization capabilities. Therefore, we mainly improve the local development ability of the IPSO algorithm and its ability to solve constrained discrete problems. We propose an improved particle swarm optimization algorithm based on variable neighborhood search and design a solution scheme for constrained 0-1 integer programming problems. In terms of improvement strategies, the VN-IPSO algorithm retains the three improvement strategies of the IPSO algorithm (the population initialization strategy, parameter adaptation strategy, and grouping strategy) and adds a local development strategy based on variable neighborhood search, which improves the exploitation ability by deeply exploiting the current optimal particle of the population in each iteration.
In addition, we conducted extensive experiments on the 23 benchmark test functions, 6 benchmark test sets of knapsack problems, and 10 composition functions, and analyzed the experimental data both qualitatively and quantitatively. The results show that the VN-IPSO algorithm not only has better local development ability than the IPSO algorithm but also ranks first in the Friedman rankings on the 23 benchmark functions, the 6 knapsack benchmark sets, and the 10 composition functions. Therefore, the VN-IPSO algorithm proposed in this paper is very competitive, with strong exploration and exploitation capabilities, convergence ability, global optimization ability, and robustness. Moreover, the VN-IPSO algorithm can not only efficiently solve unconstrained continuous optimization problems but also, in combination with the CBPS scheme, effectively solve constrained 0-1 integer programming problems, and it performs well on composition functions as well. In future research, we will consider applying VN-IPSO to multi-objective optimization problems to further expand its scope of application.
Author Contributions
Conceptualization, H.L. and J.Z.; methodology, H.L., J.Z., Z.Z. and H.W.; software, H.L.; validation, H.L., J.Z., Z.Z. and H.W.; formal analysis, H.L. and J.Z.; investigation, J.Z.; resources, Z.Z. and H.W.; data curation, H.L.; writing—original draft preparation, H.L. and J.Z.; writing—review and editing, J.Z. and Z.Z.; visualization, H.L.; supervision, J.Z., Z.Z. and H.W.; project administration, H.L. and J.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original data and analytical data can be obtained from the author Hao Li ([emailprotected]).
Conflicts of Interest
Author Jianjun Zhan was employed by the company Cainiao Network. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
- Osaba, E.; Villar-Rodriguez, E.; Del Ser, J.; Nebro, A.J.; Molina, D.; LaTorre, A.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. A Tutorial on the Design, Experimentation and Application of Metaheuristic Algorithms to Real-World Optimization Problems. Swarm Evol. Comput. 2021, 64, 100888. [Google Scholar] [CrossRef]
- Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
- Fraser, A.S. Simulation of Genetic Systems by Automatic Digital Computers II. Effects of Linkage on Rates of Advance under Selection. Aust. J. Biol. Sci. 1957, 10, 492–500. [Google Scholar] [CrossRef]
- Glover, F. Future Paths for Integer Programming and Links to Artificial Intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
- Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed Optimization by Ant Colonies. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991; Volume 142, pp. 134–142. [Google Scholar]
- Tang, J.; Chen, X.; Zhu, X.; Zhu, F. Dynamic Reallocation Model of Multiple Unmanned Aerial Vehicle Tasks in Emergent Adjustment Scenarios. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 1139–1155. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN′95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Abbass, H.A. MBO: Marriage in Honey Bees Optimization-A Haplometrosis Polygynous Swarming Approach. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), Seoul, Republic of Korea, 27–30 May 2001; Volume 1, pp. 207–214. [Google Scholar]
- Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-tr06; Erciyes University, Faculty of Engineering, Computer Engineering Department: Talas, Turkey, 2005. [Google Scholar]
- Pan, Q.; Tang, J.; Zhan, J.; Li, H. Bacteria Phototaxis Optimizer. Neural Comput. Appl. 2023, 35, 13433–13464. [Google Scholar] [CrossRef]
- Tang, J.; Pan, Q.; Chen, Z.; Liu, G.; Yang, G.; Zhu, F.; Lao, S. An Improved Artificial Electric Field Algorithm for Robot Path Planning. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 2292–2304. [Google Scholar] [CrossRef]
- Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2004. [Google Scholar]
- Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC′06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
- Arora, S.; Anand, P. Chaotic Grasshopper Optimization Algorithm for Global Optimization. Neural Comput. Appl. 2019, 31, 4385–4405. [Google Scholar] [CrossRef]
- Ewees, A.A.; Abd Elaziz, M.; Oliva, D. A New Multi-Objective Optimization Algorithm Combined with Opposition-Based Learning. Expert Syst. Appl. 2021, 165, 113844. [Google Scholar] [CrossRef]
- Yu, X.; Xu, W.; Li, C. Opposition-Based Learning Grey Wolf Optimizer for Global Optimization. Knowl.-Based Syst. 2021, 226, 107139. [Google Scholar] [CrossRef]
- Qin, Z.; Yu, F.; Shi, Z.; Wang, Y. Adaptive Inertia Weight Particle Swarm Optimization. In Artificial Intelligence and Soft Computing–ICAISC 2006: Proceedings of the 8th International Conference, Zakopane, Poland, 25–29 June 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 450–459. [Google Scholar]
- Chauhan, P.; Deep, K.; Pant, M. Novel Inertia Weight Strategies for Particle Swarm Optimization. Memetic Comput. 2013, 5, 229–251. [Google Scholar] [CrossRef]
- Sekyere, Y.O.; Effah, F.B.; Okyere, P.Y. An Enhanced Particle Swarm Optimization Algorithm via Adaptive Dynamic Inertia Weight and Acceleration Coefficients. J. Electron. Electr. Eng. 2024, 3, 50–64. [Google Scholar] [CrossRef]
- Li, C.; You, F.; Yao, T.; Wang, J.; Shi, W.; Peng, J.; He, S. Simulated Annealing Particle Swarm Optimization for High-Efficiency Power Amplifier Design. IEEE Trans. Microw. Theory Tech. 2021, 69, 2494–2505. [Google Scholar] [CrossRef]
- Gómez Blas, N.; Arteta Albert, A.; de Mingo López, L.F. Differential Evolution-Particle Swarm Optimization. Int. J. Inf. Technol. Knowl. 2011, 5, 77–84. [Google Scholar]
- Zhang, X.; Lin, Q.; Mao, W.; Liu, S.; Dou, Z.; Liu, G. Hybrid Particle Swarm and Grey Wolf Optimizer and Its Application to Clustering Optimization. Appl. Soft Comput. 2021, 101, 107061. [Google Scholar] [CrossRef]
- Li, F.; Yue, Q.; Liu, Y.C.; Ouyang, H.B.; Gu, F.Q. A Fast Density Peak Clustering Based Particle Swarm Optimizer for Dynamic Optimization. Expert Syst. Appl. 2024, 236, 121254. [Google Scholar] [CrossRef]
- Lu, C.; Zhu, J.; Huang, H.; Sun, Y. A Multi-Hierarchy Particle Swarm Optimization-Based Algorithm for Cloud Workflow Scheduling. Future Gener. Comput. Syst. 2024, 153, 125–138. [Google Scholar] [CrossRef]
- Aljohani, A. Optimized Convolutional Forest by Particle Swarm Optimizer for Pothole Detection. Int. J. Comput. Intell. Syst. 2024, 17, 7. [Google Scholar] [CrossRef]
- Zhan, J.; Tang, J.; Pan, Q.; Li, H. Improved Particle Swarm Optimization Algorithm Based on Grouping and Its Application in Hyperparameter Optimization. Soft Comput. 2023, 27, 8807–8819. [Google Scholar] [CrossRef]
- Eiben, A.E.; Smith, J. From Evolutionary Computation to the Evolution of Things. Nature 2015, 521, 476–482. [Google Scholar] [CrossRef]
- Panigrahy, D.; Samal, P. Modified Lightning Search Algorithm for Optimization. Eng. Appl. Artif. Intell. 2021, 105, 104419. [Google Scholar] [CrossRef]
- Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic Algorithms: A Comprehensive Review. Comput. Intell. Multimed. Big Data Cloud Eng. Appl. 2018, 185–231. [Google Scholar] [CrossRef]
- Cacchiani, V.; Iori, M.; Locatelli, A.; Martello, S. Knapsack Problems—An Overview of Recent Advances. Part II: Multiple, Multidimensional, and Quadratic Knapsack Problems. Comput. Oper. Res. 2022, 143, 105693. [Google Scholar] [CrossRef]
- Abdel-Basset, M.; El-Shahat, D.; Sangaiah, A.K. A Modified Nature Inspired Meta-Heuristic Whale Optimization Algorithm for Solving 0–1 Knapsack Problem. Int. J. Mach. Learn. Cybern. 2019, 10, 495–514. [Google Scholar] [CrossRef]
- Yao, X.; Liu, Y.; Lin, G. Evolutionary Programming Made Faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
- Khuri, S.; Bäck, T.; Heitkötter, J. An Evolutionary Approach to Combinatorial Optimization Problems. In Proceedings of the ACM Conference on Computer Science, Phoenix, AZ, USA, 8–10 March 1994; pp. 66–73. [Google Scholar]
- Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
- Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Dehghani, M.; Trojovskỳ, P. Osprey Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm for Solving Engineering Optimization Problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
- Xue, J.; Shen, B. A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
- Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
- Wang, R.; Hao, K.; Chen, L.; Liu, X.; Zhu, X.; Zhao, C. A Modified Hybrid Particle Swarm Optimization Based on Comprehensive Learning and Dynamic Multi-Swarm Strategy. Soft Comput. 2024, 28, 3879–3903. [Google Scholar] [CrossRef]
- Rezaei, F.; Safavi, H.R. GuASPSO: A New Approach to Hold a Better Exploration–Exploitation Balance in PSO Algorithm. Soft Comput. 2020, 24, 4855–4875. [Google Scholar] [CrossRef]
- Tanabe, R.; Fukunaga, A. Success-History Based Parameter Adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
Figure 1. Diagram of neighborhood and operator.
Figure 2. Variable neighborhood search algorithm process.
Figure 3. The flow diagram for the complete work.
Figure 4. The process of the VN-IPSO algorithm.
Figure 5. Mapping relationship between integer variable ${x}_{i}^{binary}$ and continuous variable ${x}_{i}$.
Figure 6. Qualitative analysis results of VN-IPSO on some benchmark functions.
Figure 7. Qualitative analysis results of VN-IPSO on some benchmark test sets of knapsack problems.
Figure 8. Iteration curves of VN-IPSO and comparison algorithms on the CEC2017 composition functions.
Figure 9. Box plots of VN-IPSO and its comparison algorithms on the CEC2017 composition functions.
Table 1. Knapsack problem benchmark set.
Dataset | Backpack Quantity $(n)$ | Item Quantity $(m)$ | Commodity Price $(m \times 1)$ | Backpack Capacity $(n \times 1)$ | Reduced Matrix Coefficient $(n \times m)$ | Optimal Value
---|---|---|---|---|---|---
WEING1 | 2 | 28 | P1 | (600, 600) | A1 | 141,278 |
WEING2 | 2 | 28 | P2 | (500, 500) | A2 | 130,883 |
WEING3 | 2 | 28 | P3 | (300, 300) | A3 | 95,677 |
WEING4 | 2 | 28 | P4 | (300, 600) | A4 | 119,337 |
WEISH01 | 5 | 30 | P5 | (400, 500, 500, 600, 600) | A5 | 4554 |
WEISH02 | 5 | 30 | P6 | (370, 650, 460, 980, 870) | A6 | 4536 |
Table 2. Optimization results of VN-IPSO and comparison algorithms on 23 benchmark functions.
Function | Index | GA | DBO | PSO | SSA | HHO | OOA | IPSO | VN-IPSO
---|---|---|---|---|---|---|---|---|---
F1 | Mean | 442.790359 | 10.1385099 | 2.8216 × 10^{−28} | 7.14 × 10^{−242} | 1.566 × 10^{−221} | 0.00619342 | 7.7598 × 10^{−66} | 1.13 × 10^{−66}
F1 | Std | 137.182308 | 16.9797411 | 7.0889 × 10^{−28} | 0 | 0 | 0.00092278 | 2.0045 × 10^{−65} | 2.7073 × 10^{−66}
F1 | Rank | 8 | 7 | 5 | 1 | 2 | 6 | 4 | 3
F2 | Mean | 12.8168562 | 1.1851585 | 2 | 7.3222 × 10^{−11} | 1.076 × 10^{−130} | 0.03226565 | 5.434 × 10^{−116} | 5.703 × 10^{−141}
F2 | Std | 1.8351542 | 0.79065973 | 4.21637021 | 2.2433 × 10^{−10} | 3.401 × 10^{−130} | 0.00147625 | 1.599 × 10^{−115} | 1.197 × 10^{−131}
F2 | Rank | 8 | 6 | 7 | 4 | 2 | 5 | 3 | 1
F3 | Mean | 4384.54913 | 2347.923 | 0.09788436 | 4.588 × 10^{−196} | 8.3933 × 10^{−7} | 0.01034093 | 3.705 × 10^{−174} | 6.501 × 10^{−223}
F3 | Std | 1638.95376 | 3006.00198 | 0.06384631 | 0 | 1.8972 × 10^{−6} | 0.00211017 | 0 | 0
F3 | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1
F4 | Mean | 22.2240848 | 0.56392872 | 0.03420377 | 7.636 × 10^{−225} | 1.5528 × 10^{−5} | 0.03162702 | 8.254 × 10^{−106} | 3.202 × 10^{−246}
F4 | Std | 4.13359783 | 0.66019147 | 0.0185348 | 0 | 1.5743 × 10^{−5} | 0.00184941 | 2.491 × 10^{−105} | 0
F4 | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1
F5 | Mean | 84531.9336 | 406.338099 | 58.1867974 | 6.0048 × 10^{−5} | 24.5223222 | 28.6349398 | 0.00030569 | 1.092 × 10^{−10}
F5 | Std | 61559.4176 | 947.446188 | 30.8338529 | 0.00010472 | 18.4911938 | 0.06766764 | 0.000602 | 3.563 × 10^{−5}
F5 | Rank | 8 | 7 | 6 | 2 | 4 | 5 | 3 | 1
F6 | Mean | 521.620021 | 23.2563521 | 1.3476 × 10^{−27} | 7.8357 × 10^{−7} | 1.4713 × 10^{−7} | 0.0939151 | 0 | 0
F6 | Std | 139.335016 | 11.8244259 | 2.1736 × 10^{−27} | 4.1079 × 10^{−7} | 1.2245 × 10^{−7} | 0.02678423 | 0 | 0
F6 | Rank | 8 | 7 | 3 | 5 | 4 | 6 | 1 | 1
F7 | Mean | 0.2614798 | 0.03384663 | 0.54813195 | 6.3318 × 10^{−5} | 9.1721 × 10^{−6} | 0.00224541 | 1.8619 × 10^{−6} | 1.8619 × 10^{−6}
F7 | Std | 0.07496412 | 0.02673147 | 1.12939895 | 4.3605 × 10^{−5} | 7.3833 × 10^{−6} | 0.00079055 | 1.2622 × 10^{−6} | 1.2622 × 10^{−6}
F7 | Rank | 7 | 6 | 8 | 4 | 3 | 5 | 1 | 1
F8 | Mean | 543.761125 | 3318.55481 | 5282.89406 | 4409.98554 | 6404.45814 | 8341.22323 | 0.00127062 | 0.00127062
F8 | Std | 152.557438 | 1470.59194 | 870.566862 | 1171.64832 | 705.745341 | 622.572248 | 0.00194463 | 0.00194463
F8 | Rank | 3 | 4 | 6 | 5 | 7 | 8 | 1 | 1
F9 | Mean | 33.6295793 | 55.6864159 | 48.6044005 | 0 | 0.00309443 | 29.7381907 | 0 | 0
F9 | Std | 10.348397 | 66.9578772 | 34.9259932 | 0 | 0.00027821 | 4.14773298 | 0 | 0
F9 | Rank | 6 | 8 | 7 | 1 | 4 | 5 | 1 | 1
F10 | Mean | 6.34186004 | 2.82354519 | 2.2471 × 10^{−14} | 4.4409 × 10^{−16} | 1.1102 × 10^{−14} | 0.01911995 | 4.4409 × 10^{−16} | 4.4409 × 10^{−16}
F10 | Std | 0.80583363 | 0.80136395 | 8.67 × 10^{−15} | 0 | 3.7449 × 10^{−15} | 0.00147602 | 0 | 0
F10 | Rank | 8 | 7 | 5 | 1 | 4 | 6 | 1 | 1
F11 | Mean | 4.9866637 | 1.05190314 | 0.00984062 | 0 | 0.00935129 | 0.01004718 | 0 | 0
F11 | Std | 0.83576603 | 0.77290385 | 0.01285125 | 0 | 0.01320962 | 0.00195946 | 0 | 0
F11 | Rank | 8 | 7 | 5 | 1 | 4 | 6 | 1 | 1
F12 | Mean | 8.30049403 | 0.152749 | 1.2445 × 10^{−30} | 3.6485 × 10^{−6} | 2.0575 × 10^{−8} | 0.00772934 | 1.5705 × 10^{−32} | 1.5705 × 10^{−32}
F12 | Std | 3.09958946 | 0.18841611 | 1.4798 × 10^{−30} | 3.4859 × 10^{−6} | 2.3272 × 10^{−8} | 0.00234762 | 2.885 × 10^{−48} | 2.885 × 10^{−48}
F12 | Rank | 8 | 7 | 3 | 5 | 4 | 6 | 1 | 1
F13 | Mean | 1640.51486 | 1.96310445 | 0.00109874 | 2.4381 × 10^{−5} | 4.8619 × 10^{−7} | 0.26095617 | 1.3498 × 10^{−32} | 1.3498 × 10^{−32}
F13 | Std | 1940.80195 | 2.32048283 | 0.00347451 | 1.5846 × 10^{−5} | 5.2673 × 10^{−7} | 0.06978139 | 2.885 × 10^{−48} | 2.885 × 10^{−48}
F13 | Rank | 8 | 7 | 5 | 4 | 3 | 6 | 1 | 1
F14 | Mean | 0.00199616 | 0.01426632 | 0.00199616 | 1.37134178 | 0.00199616 | 0.19680925 | 0.00199616 | 0.0018897
F14 | Std | 2.7696 × 10^{−10} | 0.05142648 | 0 | 3.06179728 | 1.4478 × 10^{−13} | 0.41911861 | 0 | 0.00033665
F14 | Rank | 4 | 6 | 1 | 8 | 5 | 7 | 1 | 1
F15 | Mean | 0.00039297 | 0.00042169 | 0.00024161 | 1.4617 × 10^{−5} | 7.5099 × 10^{−6} | 8.5978 × 10^{−6} | 9.9055 × 10^{−5} | 0.00031928
F15 | Std | 0.00010013 | 0.00034143 | 0.00032791 | 6.0474 × 10^{−6} | 2.1452 × 10^{−8} | 1.028 × 10^{−6} | 0.00028957 | 0.00051208
F15 | Rank | 7 | 8 | 5 | 3 | 1 | 2 | 4 | 6
F16 | Mean | 1.4095 × 10^{−5} | 6.6738 × 10^{−6} | 2.8453 × 10^{−5} | 2.8453 × 10^{−5} | 2.8453 × 10^{−5} | 2.8434 × 10^{−5} | 2.8453 × 10^{−5} | 2.8453 × 10^{−5}
F16 | Std | 1.3187 × 10^{−5} | 0.00011108 | 0 | 6.4355 × 10^{−15} | 1.1322 × 10^{−15} | 2.5584 × 10^{−8} | 0 | 0
F16 | Rank | 8 | 1 | 2 | 6 | 5 | 7 | 2 | 2
F17 | Mean | 9.4618 × 10^{−5} | 0.00011264 | 0.00011264 | 0.00011264 | 0.00011264 | 0.00011217 | 0.00011264 | 0.00011264
F17 | Std | 1.8407 × 10^{−5} | 0 | 0 | 1.4103 × 10^{−10} | 1.4998 × 10^{−11} | 4.871 × 10^{−7} | 0 | 0
F17 | Rank | 8 | 1 | 1 | 6 | 5 | 7 | 1 | 1
F18 | Mean | 0.00034627 | 0.00016835 | 7.816 × 10^{−14} | 5.4 | 7.1498 × 10^{−14} | 2.3473 × 10^{−7} | 7.816 × 10^{−14} | 7.816 × 10^{−14}
F18 | Std | 0.00073058 | 0.00053238 | 0 | 11.3841996 | 6.8126 × 10^{−15} | 3.0929 × 10^{−7} | 0 | 0
F18 | Rank | 7 | 6 | 1 | 8 | 4 | 5 | 1 | 1
F19 | Mean | 0.00277949 | 0.00278215 | 0.00278215 | 0.00133492 | 0.00278215 | 0.00278191 | 0.00278215 | 0.00278215
F19 | Std | 2.0463 × 10^{−6} | 7.4015 × 10^{−16} | 9.3622 × 10^{−16} | 0.00090259 | 4.018 × 10^{−10} | 1.8426 × 10^{−7} | 9.3622 × 10^{−16} | 9.3622 × 10^{−16}
Rank | 8 | 3 | 4 | 1 | 7 | 2 | 4 | 4 | |
F20 | Mean | 0.05826583 | 0.10609445 | 0.08836169 | 0.0118129 | 0.05748756 | 0.05936371 | 0.02181996 | 0.0099352 |
Std | 0.06251318 | 0.0803198 | 0.06544614 | 0.04352062 | 0.06263712 | 0.07964247 | 0.05011052 | 0.03758289 | |
Rank | 5 | 8 | 7 | 2 | 4 | 6 | 3 | 1 | |
F21 | Mean | 3.75694836 | 2.93954421 | 0.50524307 | 1.50454592 | 3.05881205 | 0.00137885 | 1.01048583 | 0.74703425 |
Std | 2.30963523 | 2.93079437 | 1.59771787 | 3.17186063 | 2.6325841 | 0.00076302 | 2.1302905 | 2.3623287 | |
Rank | 8 | 6 | 2 | 5 | 7 | 1 | 4 | 3 | |
F22 | Mean | 0.04700383 | 2.13817714 | 0.53138631 | 0.66772346 | 4.25209492 | 0.532823 | 0.00014057 | 0.00014057 |
Std | 0.04106891 | 2.69727141 | 1.68083556 | 2.11197148 | 2.24107285 | 1.68035026 | 1.8724 × 10^{−15} | 1.8724 × 10^{−15} | |
Rank | 3 | 7 | 4 | 6 | 8 | 5 | 1 | 1 | |
F23 | Mean | 1.80819855 | 3.49275691 | 5.40782013 | 3.83015478 | 0.00010982 | 0.00168152 | 0.00010982 | 0.00010982 |
Std | 2.81784607 | 2.12754648 | 7.086 × 10^{−7} | 4.08236367 | 1.8724 × 10^{−15} | 0.00111246 | 1.8724 × 10^{−15} | 1.0256 × 10^{−15} | |
Rank | 5 | 6 | 8 | 7 | 1 | 4 | 1 | 1 | |
Number of first-place finishes | 0 | 2 | 3 | 5 | 2 | 1 | 13 | 18
First-place finishes on unimodal functions | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 6
First-place finishes on multimodal functions | 0 | 2 | 3 | 4 | 2 | 1 | 11 | 12
Average ranking | 6.9130 | 6.0434 | 4.6521 | 3.8695 | 4.1739 | 5.2173 | 2.0000 | 1.5652
Friedman ranking | 8 | 7 | 5 | 3 | 4 | 6 | 2 | 1
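The summary rows above can be reproduced from the per-function rank rows: ties share the lowest rank (e.g., two algorithms tied for first both get rank 1 and the next gets rank 3), and the average ranking is the mean rank over all benchmark functions. A minimal sketch of that aggregation, using illustrative values rather than the paper's data:

```python
# Sketch of the rank-aggregation used in the summary rows: competition-style
# ranks (ties share the lowest rank), first-place counts, and average rank.
# The toy results matrix below is illustrative, not the paper's data.

def competition_ranks(row):
    """Rank values ascending (lower mean error is better); tied values
    share the lowest rank, e.g. 1, 1, 3, ..."""
    return [1 + sum(1 for other in row if other < v) for v in row]

def summarize(results):
    """results: one row per benchmark function, one mean error per algorithm.
    Returns per-algorithm first-place counts and average ranks."""
    n_algos = len(results[0])
    rank_matrix = [competition_ranks(row) for row in results]
    first_places = [sum(1 for ranks in rank_matrix if ranks[j] == 1)
                    for j in range(n_algos)]
    avg_ranks = [sum(ranks[j] for ranks in rank_matrix) / len(results)
                 for j in range(n_algos)]
    return first_places, avg_ranks

# Toy example: 3 functions, 3 algorithms; two algorithms tie on the first row.
results = [
    [0.5, 0.0, 0.0],
    [1.0, 2.0, 0.1],
    [3.0, 0.2, 0.2],
]
firsts, avgs = summarize(results)  # firsts == [0, 2, 3]
```

The Friedman ranking in the last row then orders the algorithms by these average ranks.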
Table 3. Optimization results of VN-IPSO and comparison algorithms on 6 knapsack problem benchmark test sets.
Problem | Index | GA | DBO | PSO | SSA | HHO | OOA | IPSO | VN-IPSO |
---|---|---|---|---|---|---|---|---|---
WEING1 | Mean | 200.1000 | 9082.3000 | 3728.9000 | 7004.1000 | 7259.7000 | 3337.1000 | 696.5000 | 356.6000 |
Std | 287.0310 | 4770.0680 | 2805.2030 | 2245.2070 | 6549.3000 | 3338.3440 | 480.9738 | 317.0052 | |
Rank | 1 | 8 | 5 | 6 | 7 | 4 | 3 | 2 | |
WEING2 | Mean | 178.6000 | 12,774.8000 | 10,804.8000 | 13,278.8000 | 9922.6000 | 3539.3000 | 3508.3000 | 1038.4000 |
Std | 291.2209 | 6020.9120 | 2676.6370 | 2885.7830 | 7062.7600 | 2307.0160 | 3072.1480 | 2202.4610 | |
Rank | 1 | 7 | 6 | 8 | 5 | 4 | 3 | 2 | |
WEING3 | Mean | 8043.1000 | 22,077.1000 | 8975.3000 | 20,667.7000 | 5011.5000 | 3988.0000 | 6816.9000 | 814.5000 |
Std | 5651.3250 | 11888.0200 | 6745.8950 | 3678.2950 | 5117.2880 | 2206.0130 | 7075.3860 | 860.7260 | |
Rank | 5 | 8 | 6 | 7 | 3 | 2 | 4 | 1 | |
WEING4 | Mean | 400.4000 | 10,854.5000 | 8884.5000 | 12,168.9000 | 6920.5000 | 5598.1000 | 4713.8000 | 40.9000 |
Std | 1096.0960 | 4093.4210 | 3606.9780 | 3607.7080 | 2532.0730 | 2870.2640 | 2459.3220 | 129.3372 | |
Rank | 2 | 7 | 6 | 8 | 5 | 4 | 3 | 1 | |
WEISH01 | Mean | 74.2000 | 749.5000 | 496.0000 | 766.9000 | 405.9000 | 226.9000 | 93.2000 | 0.0000 |
Std | 64.5149 | 727.3459 | 221.0354 | 302.2238 | 201.6022 | 94.4651 | 84.7792 | 0.0000 | |
Rank | 2 | 7 | 6 | 8 | 5 | 4 | 3 | 1 | |
WEISH01 | Mean | 6.9000 | 815.9000 | 299.8000 | 557.1000 | 216.4000 | 181.4000 | 128.9000 | 0.0000 |
Std | 11.0900 | 503.2822 | 145.0738 | 236.1254 | 99.2373 | 116.2241 | 90.4967 | 0.0000 | |
Rank | 2 | 8 | 6 | 7 | 5 | 4 | 3 | 1 | |
Number of first-place finishes | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 4
Average ranking | 2.1667 | 7.5000 | 5.8333 | 7.3333 | 5.0000 | 3.6667 | 3.1667 | 1.3333
Friedman ranking | 2 | 8 | 6 | 7 | 5 | 4 | 3 | 1
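A Mean/Std cell in Table 3 can be read as run statistics of the optimality gap, assuming each run's score is the difference between the best-known profit and the profit the algorithm achieved (so a mean of 0.0000, as VN-IPSO obtains on WEISH01, means the optimum was found in every run). A hedged sketch of that computation, with illustrative profit values rather than the paper's data:

```python
# Sketch: mean and sample standard deviation of optimality gaps over
# independent runs. The best-known profit and run results are illustrative.
import statistics

def gap_stats(best_known, run_profits):
    """Mean and sample std of (best_known - achieved_profit) over runs."""
    gaps = [best_known - p for p in run_profits]
    return statistics.mean(gaps), statistics.stdev(gaps)

# Toy example: hypothetical best-known profit and three run results.
mean_gap, std_gap = gap_stats(10000, [10000, 9900, 9800])  # mean_gap == 100
```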
Table 4. Optimization results of VN-IPSO and comparison algorithms on 10 CEC2017 composition functions.
Function | Index | SHADE | CLDMSL-PSO | GuASPSO | PSO | IPSO | VN-IPSO |
---|---|---|---|---|---|---|---
C20 | Mean | 0 | 21.15365761 | 33.29898954 | 63.20034247 | 33.29898954 | 5.657683372 |
Std | 0 | 41.73793238 | 15.11951136 | 53.90405061 | 15.11951136 | 6.103723013 | |
Rank | 1 | 3 | 4 | 6 | 5 | 2 | |
C21 | Mean | 133.6615744 | 162.9370387 | 132.2818833 | 146.7459032 | 132.2818833 | 134.845604 |
Std | 44.33402988 | 53.86230956 | 54.48766648 | 57.94062949 | 54.48766648 | 54.19161521 | |
Rank | 3 | 6 | 1 | 5 | 1 | 4 | |
C22 | Mean | 100 | 101.9906485 | 93.98976272 | 96.92634814 | 93.98976272 | 93.24087653 |
Std | 0 | 1.998406051 | 21.1866303 | 19.85974134 | 21.1866303 | 23.02059206 | |
Rank | 5 | 6 | 2 | 4 | 3 | 1 | |
C23 | Mean | 304.0516673 | 313.4525772 | 335.5633963 | 322.8265642 | 335.5633963 | 333.8308895 |
Std | 1.429478872 | 6.628703125 | 9.534433986 | 10.16884169 | 9.534433986 | 9.04433487 | |
Rank | 1 | 2 | 5 | 3 | 6 | 4 | |
C24 | Mean | 299.6394768 | 309.1760025 | 257.6516403 | 284.6512984 | 257.6516403 | 254.3574406 |
Std | 72.84316848 | 83.59799634 | 123.2355665 | 107.1925837 | 123.2355665 | 120.0454233 | |
Rank | 5 | 6 | 2 | 4 | 3 | 1 | |
C25 | Mean | 416.6076647 | 442.4497437 | 413.1410394 | 418.3482144 | 413.1410394 | 411.6593503 |
Std | 23.22847765 | 9.633182774 | 21.82754355 | 22.95072834 | 21.82754355 | 21.1496311 | |
Rank | 4 | 6 | 2 | 5 | 3 | 1 | |
C26 | Mean | 300 | 306.6918764 | 276.6666667 | 341.4100252 | 276.6666667 | 278.2383118 |
Std | 0 | 74.36461414 | 77.38543627 | 189.9198848 | 77.38543627 | 78.34845454 | |
Rank | 4 | 5 | 1 | 6 | 1 | 3 | |
C27 | Mean | 389.483826 | 398.3006532 | 399.5479121 | 403.5229496 | 399.5479121 | 396.7648786 |
Std | 0.130016213 | 5.4671525 | 6.954158072 | 19.56179933 | 6.954158072 | 6.785615744 | |
Rank | 1 | 3 | 4 | 6 | 5 | 2 | |
C28 | Mean | 469.6364644 | 537.3506923 | 344.5934027 | 439.4083219 | 344.5934027 | 332.2310026 |
Std | 149.5091467 | 112.8156237 | 91.22837107 | 141.101867 | 91.22837107 | 87.83661403 | |
Rank | 5 | 6 | 2 | 4 | 3 | 1 | |
C29 | Mean | 231.6811918 | 266.2303566 | 288.8946768 | 292.2126569 | 288.8946768 | 279.9698013 |
Std | 1.704362748 | 21.51989583 | 22.47162985 | 46.9460656 | 22.47162985 | 18.93510266 | |
Rank | 1 | 2 | 4 | 6 | 5 | 3 | |
Number of first-place finishes | 4 | 0 | 2 | 0 | 2 | 4
Average ranking | 3.0 | 4.5 | 3.2 | 4.9 | 3.2 | 2.2
Friedman ranking | 2 | 5 | 3 | 6 | 3 | 1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).