A small project written in college; it may not be updated in the future.
[introduction] PSO Lab is designed to assist particle swarm optimization research: algorithm verification and joint algorithm comparison.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Dependencies used by the environment; items marked with * are optional:
- MATLAB Live Editor (version R2016a or above)
- *Parallel Computing Toolbox (not provided; if needed, purchase and install it yourself, see the official website)
- GUI Layout Toolbox (version 2.3.5 is provided; higher versions can be installed by yourself)
- Change directory to PSO_Lab
- Type the following commands in the command window and complete any manual installation steps according to the pop-up prompts:
cd PSO_Lab
run_lab setup
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Currently available commands include: setup, help, zip, tar, clear, select.
- setup
Installs the environment and related dependencies.
- help
Type the following command in the command window to view the help documentation.
run_lab help
- zip, tar
Compresses PSO Lab. Files in the following directories will not be preserved in the archive:
.\.temp .\data .\test
Type the following command in the command window to compress:
run_lab zip
run_lab tar
- clear
Clears the Lab's cache (excluding log files). If the cache contains run data that may not have been saved, the user will be prompted before deletion.
Type the following command in the command window to clear the Lab's cache:
run_lab clear
- select
Selects the configuration file and mode via pop-up windows. Type the following command in the command window:
run_lab select
- syntax
run_lab [config] [mode]
- description
PSO Lab will run in config and mode.
*config: full path of the config file (recommended) or a relative path
*mode: one of optimize, par-optimize, search
e.g. Run PSO_Lab\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m by typing the following command in the command window:
run_lab .\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m optimize
e.g. or, if you know the full path of the config file:
run_lab E:\PSO_Lab\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m optimize
see How to write a configuration file
see How to choose a mode
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The configuration file takes the form of a function file; the configuration information is passed by returning a structure.
*Parameters marked with * support search in search mode.
- fitness function
Specifically, the fitness function here is supposed to be a function handle that can operate on vectors.
e.g.1
function stc = config
    ...
    fit_function_handle = @(x) x(5) + sum(abs(x)) - x(6);
    stc.fit_fun = fit_function_handle;
    ...
end
e.g.2
function stc = config
    ...
    stc.fit_fun = @fit_function;
    ...
end

function val = fit_function(x)
    val = sum(abs(x(1:end - 1) - x(2:end)));
end
Note that a fitness function handle that does not meet this specification will be detected and the program will be interrupted.
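As a quick manual sanity check (independent of PSO Lab's own validation, whose internals are not shown here), you can apply a candidate handle to a column vector of the intended dimension before configuring it:

```matlab
%Quick manual check: the handle should accept a dim*1 column vector
%and return a scalar fitness value.
fit = @(x) sum(abs(x(1:end - 1) - x(2:end)));  %candidate fitness function
x0 = zeros(30, 1);                             %30-dimensional test point
val = fit(x0);
assert(isscalar(val) && isnumeric(val))        %should pass silently
```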
*If you need to test an algorithm, you can use the test functions encapsulated in the PSO_Lab\lib\util\math_util library.
There are four test levels: primary, medium, advanced, precision; the test function types are divided into unimodal and multimodal.
A note on test levels:
1)primary
dimension: 30
scope of the variable: [-100, 100]
test function: selectable
2)medium
dimension: 100
scope of the variable: [-1000, 1000]
test function: selectable
3)advanced
dimension: 1000
scope of the variable: [-10000, 10000]
test function: selectable
4)precision
dimension: 1000
scope of the variable: [9994.88, 10005.12]
To choose a test level, you can use the following code:
testfun_stc = math_util().test_function("primary");
testfun_stc = math_util().test_function("medium");
testfun_stc = math_util().test_function("advanced");
testfun_stc = math_util().test_function("precision");
A note on function type:
1)unimodal
2)multimodal
test function 1: minimum:-12569.5
*There are some problems with the minimum value of this test function.
Once the test level and function type are determined, the variable range and dimension are also determined.
e.g. Select the primary level, unimodal test function 1, with 1000 particles
function stc = config
    testfun_stc = math_util().test_function("primary");
    ...
    %fitness function
    stc.fit_fun = testfun_stc.unimodal().fun1.function;
    ...
    %size [number of dimensions, number of particles]
    stc.size = [testfun_stc.varset.dim, 1000];
    ...
end
*For a fitness function that needs to satisfy constraints, add the corresponding constraint code to the corresponding algorithm.
(PSO_Lab\algorithm\... method: checkRestrictions() )
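A minimal sketch of adding constraint code by overriding checkRestrictions() in a subclass; the property names follow the BasePSO property list below, but the class name and the clamping rule are illustrative assumptions, not the library's own constraint handling:

```matlab
classdef MyConstrainedPSO < BasePSO
    methods(Access = protected)
        %Override: clamp particle positions back into the variable range
        %(illustrative constraint; replace with your own constraint logic)
        function checkRestrictions(obj)
            lb = obj.var_range(:, 1);   %lower bounds: dimension * 1
            ub = obj.var_range(:, 2);   %upper bounds: dimension * 1
            %bsxfun keeps this compatible with R2016a (no implicit expansion)
            obj.pst = bsxfun(@min, bsxfun(@max, obj.pst, lb), ub);
        end
    end
end
```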
- save data
If you need to save the data after the end of the run, set save to true:
function stc = config
    ...
    stc.save = true;
    ...
end
- parallel processing
see Parallel Computing
- algorithm
function stc = config
    ...
    stc.algorithm = "BasePSO";
    ...
end
Algorithms currently available include:
BasePSO
PPSO (Perturbation PSO)
PCPSO (Perturbation-based Cauchy-variation PSO)
ERPSO (Experience Replay PSO)
TSPSO (exTremum disturbed and Simple PSO)
CPSO (Chaos PSO)
*You can also write your own algorithm and save it under a unique file name in the directory: PSO_Lab\algorithm\.
Then add the code that generates the corresponding algorithm optimizer in the file: PSO_Lab\pso_lab\lab_generate_optimizer.m
see Algorithm coding specification
see Algorithm optimizer coding specification
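A hypothetical sketch of the wiring step, assuming lab_generate_optimizer.m dispatches on the algorithm name (the actual file structure may differ; MyPSO is a placeholder, and the constructor signature is taken from the BasePSO constructor shown below):

```matlab
%Hypothetical dispatch inside PSO_Lab\pso_lab\lab_generate_optimizer.m:
%add a case for your new algorithm next to the existing ones.
switch algorithm_name
    case "BasePSO"
        optimizer = BasePSO(num_particle, num_dim, idvl, socl, wr, vr, varr, ...
            num_Iter, fit_fun, evalu_times, fit_tolerance, max_tolerance_times);
    case "MyPSO"  %your new algorithm
        optimizer = MyPSO(num_particle, num_dim, idvl, socl, wr, vr, varr, ...
            num_Iter, fit_fun, evalu_times, fit_tolerance, max_tolerance_times);
    otherwise
        error("Unknown algorithm: %s", algorithm_name);
end
```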
There are two configuration forms for the algorithm in the configuration file:
a single algorithm for optimize mode and par-optimize mode
an algorithm vector for search mode
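The two forms can be sketched as config fragments (algorithm names taken from the list above):

```matlab
%optimize / par-optimize mode: a single algorithm
stc.algorithm = "BasePSO";

%search mode: a vector of algorithms to compare
stc.algorithm = ["BasePSO", "PPSO", "CPSO"];
```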
- size of the iteration
function stc = config
    ...
    stc.size = [30, 1000];
    ...
end
The size is defined by a 1*2 row vector: [number of dimensions, number of particles].
- Maximum number of iterations
function stc = config
    ...
    stc.num_Iter = 1000;
    ...
end
This is the final termination condition, applied when no other exit condition is met.
- Print evaluation cycle
function stc = config
    ...
    stc.evalu_times = 100;
    ...
end
- Exit condition
function stc = config
    ...
    stc.fit_tolerance = 1e-8;
    stc.max_tolerance_times = 100;
    ...
end
The exit condition is not necessary and can be set to an empty array.
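The two settings side by side (based on the field names, the condition appears to count how many times the fitness change stays within the tolerance; that interpretation is an assumption):

```matlab
%Conditional exit enabled: stop once the fitness change stays below
%fit_tolerance for max_tolerance_times checks (assumed semantics)
stc.fit_tolerance = 1e-8;
stc.max_tolerance_times = 100;

%Conditional exit disabled: run until num_Iter is reached
stc.fit_tolerance = [];
stc.max_tolerance_times = [];
```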
- Basic algorithm parameter settings
function stc = config
    %dimensions: 30
    ...
    stc.idvlLrnFactor = 1.5;
    stc.soclLrnFactor = 2.05;
    ...
    stc.w_range = [0.4; 0.9];
    ...
    stc.var_range = repmat([-100, 100], 30, 1);
    ...
    d_var = abs(stc.var_range(:, 1) - stc.var_range(:, 2));
    stc.v_range = 0.05 * [-d_var, d_var];
    ...
end
individual learning factor (default 1.5)
social learning factor (default 2.05)
*To adapt to a specific fitness function, more suitable values for the two parameters above can be found
using search mode before the final determination.
e.g. Determine the best individual learning factor of BasePSO at the primary test level, unimodal test function 1 (social learning factor fixed at 2.05):
@PSO_Lab\config\benchmark\parameter\basepso\idvlLrnFactor
run_lab .\config\benchmark\parameter\basepso\idvlLrnFactor\basepso_primary_unimodal_1_idvlLrnFactor_benchmark_config.m search
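In such a search config, the parameter under test is given as a vector of candidate values; a sketch (the candidate values below are illustrative, not taken from the benchmark file):

```matlab
%Search-mode config fragment: candidate individual learning factors
%(illustrative values; the social learning factor is held fixed)
stc.idvlLrnFactor = 0.5:0.25:2.5;
stc.soclLrnFactor = 2.05;
```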
Inertia weight range
Specification: 2*1 column vector [min; max].
variable range
Specification: a (dimension * 2) matrix; the range of the i-th dimension is the i-th row [min, max].
speed range
The same specification as variable range.
- Specific algorithm parameter configuration
see below:
BasePSO
PPSO (Perturbation PSO)
PCPSO (Perturbation-based Cauchy-variation PSO)
ERPSO (Experience Replay PSO)
TSPSO (exTremum disturbed and Simple PSO)
CPSO (Chaos PSO)
- To ensure compatibility with the environment, configuration function files are best saved under the directory: PSO_Lab\config
@PSO_Lab\config\benchmark\joint\joint_all_primary_unimodal_1_benchmark_config.m (code for search mode)
%Joint simulation, all algorithms, primary, unimodal test function 1
function stc = joint_all_primary_unimodal_1_benchmark_config
    testfun_stc = math_util().test_function("primary");
    %Save data
    stc.save = true;
    %Parallel processing
    stc.para_enable = true;
    stc.para_num = 4;
    %% Basic configuration
    %Fitness function
    stc.fit_fun = testfun_stc.unimodal().fun1.function;
    %Algorithms used
    stc.algorithm = ["BasePSO", "PPSO", "PCPSO", "ERPSO", "TSPSO", "CPSO"];
    %Size
    stc.num_dim = testfun_stc.varset.dim;
    stc.num_particle = 1000;
    %Maximum iterations (1000 by default)
    stc.num_Iter = 1000;
    %Individual factor
    stc.idvlLrnFactor = 1.5;
    %Social factor
    stc.soclLrnFactor = 2.05;
    %Inertia weight range [lower bound; upper bound] (default 0.4~0.9)
    stc.w_range = [0.4; 0.9];
    %Variable range [lower bound, upper bound; ...] (dimension * 2)
    stc.var_range = testfun_stc.varset.var_range;
    %Speed range [lower bound, upper bound; ...] (dimension * 2)
    d_var = abs(stc.var_range(:, 1) - stc.var_range(:, 2));
    stc.v_range = 0.05 * [-d_var, d_var];
    %Evaluation cycle (100 by default; set empty for no evaluation)
    stc.evalu_times = 100;
    %Exit conditions (minimum fitness change tolerance, number of times to reach it; empty if unused)
    stc.fit_tolerance = [];
    stc.max_tolerance_times = [];
    %% BasePSO
    stc.basepso.enable = true;
    %% PPSO
    stc.ppso.enable = true;
    %% PCPSO
    stc.pcpso.enable = true;
    %Tolerance of fitness for opening Cauchy variation
    stc.pcpso.variation_tolerance = inf;
    %% TSPSO
    stc.tspso.enable = true;
    %Individual and global stagnation step thresholds (3 and 5 by default)
    stc.tspso.idvlStagnantTh = 3;
    stc.tspso.soclStagnantTh = 5;
    %% ERPSO
    stc.erpso.enable = true;
    %Experience pool size
    stc.erpso.memory_size = 16384;
    %Experience learning factor (default 0.3)
    stc.erpso.expLrnFactor = 0.3;
    %Experience learning opening tolerance
    stc.erpso.exp_tolerance = 1e+5;
    %% CPSO
    stc.cpso.enable = true;
    %Chaotic iteration control parameter
    stc.cpso.chaos_mu = 4;
    %Length of chaotic sequence
    stc.cpso.chaos_length = 500;
end
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Algorithms are stored in the directory: PSO_Lab\algorithm
PSO Lab recommends that every algorithm be a subclass of BasePSO, so that you only need to redefine some methods.
Properties of BasePSO:
It is better not to change the dimensions of inherited properties, to avoid hard-to-diagnose errors.
classdef BasePSO < handle
    ...
    properties
        %-----Particle properties-----
        pst                 %Particle position: dimension * number of particles
        v                   %Particle velocity: dimension * number of particles
        fitness             %Fitness: 1 * number of particles
        w                   %Inertia weight of each particle: 1 * number of particles
        %-----Iterative process record-----
        idvlPstBest         %Best historical position of each particle: dimension * number of particles
        idvlFitBest         %Best historical fitness of each particle: 1 * number of particles
        soclPstBest         %Best collective historical position: dimension * 1
        soclFitBest         %Best collective historical fitness: 1 * 1
        idvlFitBestAvgArray %Record of the average best fitness of particles
        soclFitBestArray    %Record of the global optimal fitness
        %-----Iteration parameters-----
        num_particles       %Number of particles
        num_dim             %Number of dimensions / number of variables
        idvlLrnFactor       %Individual learning factor
        soclLrnFactor       %Social learning factor
        cttFactor           %Contraction factor
        w_range             %Inertia weight range (lower bound, upper bound): 2 * 1
        v_range             %Speed range (lower bound, upper bound): dimension * 2
        var_range           %Variable range (lower bound, upper bound): dimension * 2
        evalu_enable        %Enable evaluation
        evalu_times         %Iterations per evaluation
        num_Iter            %Maximum number of iterations
        %-----Exit conditions-----
        exit_enable         %Allow conditional exit
        fit_tolerance       %Minimum fitness change tolerance
        max_tolerance_times %Number of times to reach the tolerance
        fit_fun             %Fitness function
    end
    ...
end
Methods of BasePSO:
For specific implementation, please refer to the code: PSO_Lab\algorithm\BasePSO
classdef BasePSO < handle
    ...
    %Base functions
    methods
        function obj = BasePSO(num_particle, num_dim, idvl, socl, wr, vr, varr, ...
                num_Iter, fit_fun, evalu_times, fit_tolerance, max_tolerance_times)
            ...
        end
        %Test the fitness function
        function result = test_fitfun(~, fitfun)
            ...
        end
        %Iterative optimization
        function stc = search(obj, disp_stc, tnum)
            ...
        end
    end
    %Iteration-related functions
    methods(Access = protected)
        %Calculate fitness
        function cal_fitness(obj)
            ...
        end
        %Update fitness-related parameters
        function update_fit(obj)
            ...
        end
        %Update particle parameters
        function update_particle(obj)
            ...
        end
        %Update adaptive parameters
        function update_adappara(obj)
            ...
        end
        %Initialization
        function reset(obj)
            ...
        end
        %Restrictions
        function checkRestrictions(~)
        end
        %Check exit conditions
        function result = check_exit(obj)
            ...
        end
        %Pack data
        function stc = data_backup(obj)
            ...
        end
    end
    ...
end
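Following the recommendation above, a new algorithm can subclass BasePSO and override only the methods it changes; a minimal sketch (MyPSO is a placeholder name and the overridden body is purely illustrative):

```matlab
classdef MyPSO < BasePSO
    methods(Access = protected)
        %Override only the particle update step; everything else
        %(fitness evaluation, exit checks, data backup) is inherited.
        function update_particle(obj)
            update_particle@BasePSO(obj);            %reuse the base update
            %Illustrative extra step: small random perturbation
            obj.pst = obj.pst + 0.01 * randn(size(obj.pst));
        end
    end
end
```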
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*Dependency: Parallel Computing Toolbox
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Modify the configuration file to enable parallel processing.
function stc = config
    ...
    stc.para_enable = true;
    stc.para_num = 4;
    ...
end
para_num: the number of parallel workers, limited by the local machine; 4 or a divisor of 4 is recommended.
For a non-search configuration (like any configuration runnable in optimize mode), par-optimize mode uses multiple threads to run the same configuration and pick acceptable results.
e.g. So the configuration ( @PSO_Lab\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m ) can be run in two modes:
%single-threaded operation (run with Parallel Computing disabled)
run_lab .\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m optimize
%multi-threaded operation (run with Parallel Computing enabled)
run_lab .\config\benchmark\single\basepso\basepso_primary_unimodal_1_benchmark_config.m par-optimize
A configuration may contain a search item, but only one search item is allowed per configuration.
Currently supported search configurations include:
- Algorithm
- Individual learning factors
- Social Learning Factors
- *Tolerance of fitness for opening Cauchy variation(PCPSO)
- *Individual Stagnation Steps Threshold(TSPSO)
- *Global Stagnation Steps Threshold(TSPSO)
- *Chaotic Iterative Control Parameters(CPSO)
- *Length of chaotic sequence(CPSO)
Items marked with * are search configurations for specific algorithms.
Search specification: the searched parameter is represented as a vector.
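For instance, a search over one specific algorithm's parameter would look like the following fragment (the values are illustrative):

```matlab
%Search config fragment: candidate individual stagnation thresholds
%for TSPSO (illustrative values)
stc.tspso.idvlStagnantTh = [2, 3, 4, 5];
```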
e.g. ( @PSO_Lab\config\benchmark\joint\joint_all_primary_unimodal_1_benchmark_config.m ) search configuration for Algorithm; see the code for details.
run_lab .\config\benchmark\joint\joint_all_primary_unimodal_1_benchmark_config.m search