Add optimizers from PySwarms #639
Conversation
@janosg could you review this?
A few points:
- Could you check the failing run on Python 3.11? I’m not sure what’s causing the issue.
- PySwarms has stochastic algorithms but doesn’t expose a seed parameter, so the tests were treating them as deterministic. I’ve added a seed parameter backed by a random number generator so they are treated as stochastic (see the sketch below).
- The default value CONVERGENCE_FTOL_REL = 2e-9 is causing the algorithm to converge earlier than expected (how should we handle this?).
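For context, here is a minimal sketch of what the seeding could look like, assuming the rng only drives PySwarms' `init_pos` argument (names and values are illustrative, not taken from the PR):

```python
import numpy as np
import pyswarms as ps

lower = np.array([-5.0, -5.0])
upper = np.array([5.0, 5.0])

# Reproducible starting positions inside the bounds; this is the only
# part controlled by the seed.
rng = np.random.default_rng(1234)
init_pos = rng.uniform(lower, upper, size=(20, 2))

optimizer = ps.single.GlobalBestPSO(
    n_particles=20,
    dimensions=2,
    options={"c1": 0.5, "c2": 0.3, "w": 0.9},
    bounds=(lower, upper),
    init_pos=init_pos,
)
```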
> @janosg could you review this?
> A few points:
> - Could you check the failing run on Python 3.11? I’m not sure what’s causing the issue.
I also don't see the reason but it seems completely unrelated to your PR so you can ignore it for now.
> - PySwarms has stochastic algorithms but doesn’t expose a seed parameter, so the tests were treating them as deterministic. I’ve added a seed parameter backed by a random number generator so they are treated as stochastic.

If I see correctly, you only use the rng for initial positions. Is this the only stochastic part of the algorithm, or are there other stochastic parts that make the optimization non-deterministic even if a seed is set?

> - The default value CONVERGENCE_FTOL_REL = 2e-9 is causing the algorithm to converge earlier than expected (how should we handle this?).

You can deviate from this default if there are good reasons. Especially for a global optimizer, I don't think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?
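For illustration, a quick user-side way to check this (a sketch that assumes the harmonized option names and optimagic's `algo_options` interface):

```python
import numpy as np
import optimagic as om

res = om.minimize(
    fun=lambda x: np.sum(x**2),  # toy sphere objective
    params=np.array([2.0, 3.0]),
    algorithm="pyswarms_global_best",
    bounds=om.Bounds(lower=np.full(2, -5.0), upper=np.full(2, 5.0)),
    algo_options={"convergence_ftol_rel": 0, "stopping_maxiter": 200},
)
# Inspect res to see whether the run now exhausts the iteration
# budget instead of stopping early.
print(res)
```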
I don't have time to completely finish the review but this should give you some pointers to continue working on the PR.
> If I see correctly, you only use the rng for initial positions. Is this the only stochastic part of the algorithm, or are there other stochastic parts that make the optimization non-deterministic even if a seed is set?
Stochastic parts in PySwarms:
- Random topology and random bounds handling strategy.
- Random components (r1, r2) in the velocity update equation, making the movement inherently stochastic.
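For reference, this is where r1 and r2 enter the standard PSO velocity update (a simplified sketch of the equation, not PySwarms' actual code):

```python
import numpy as np

def velocity_update(velocity, position, personal_best, global_best,
                    w=0.9, c1=0.5, c2=0.3):
    # Standard PSO update: r1 and r2 are fresh uniform draws on every
    # call, taken from NumPy's global random state as PySwarms does,
    # so the trajectory stays stochastic even when init_pos is seeded.
    r1, r2 = np.random.uniform(size=2)
    return (w * velocity
            + c1 * r1 * (personal_best - position)
            + c2 * r2 * (global_best - position))
```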
> You can deviate from this default if there are good reasons. Especially for a global optimizer, I don't think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?

If we set convergence_ftol_rel to 0, it disables early convergence and forces the algorithm to run for STOPPING_MAXITER iterations by default if stopping_maxiter is not set. However, running the optimization for STOPPING_MAXITER iterations, i.e. 1_000_000, takes a long time.
> Stochastic parts in PySwarms:
> - Random topology and random bounds handling strategy.
> - Random components (r1, r2) in the velocity update equation, making the movement inherently stochastic.

Then we need a very big warning that the seed does not make the algorithm deterministic. Would setting a global NumPy seed help, i.e. np.random.seed(123)? If so, the warning should explain this workaround, but we would not want to set a global seed in optimagic.
>> You can deviate from this default if there are good reasons. Especially for a global optimizer, I don't think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?

> If we set convergence_ftol_rel to 0, it disables early convergence and forces the algorithm to run for STOPPING_MAXITER iterations by default if stopping_maxiter is not set. However, running the optimization for STOPPING_MAXITER iterations, i.e. 1_000_000, takes a long time.
As I said, you can deviate from defaults where necessary. Of course, we would not leave maxiter at a million if this is a bad default. Maybe 1000 would be a good idea? We just can't have a variable called "STOPPING_MAXITER_GLOBAL" (analogous to "STOPPING_MAXFUN_GLOBAL") in the algo options because what an iteration is changes between optimizers.
The failing test should be fixed after you update the branch.
> Then we need a very big warning that the seed does not make the algorithm deterministic. Would setting a global NumPy seed help, i.e. np.random.seed(123)? If so, the warning should explain this workaround, but we would not want to set a global seed in optimagic.

I meant that there are stochastic parts to it, but setting the global seed does make it deterministic. Yes, setting the global seed works. I will add a warning if seed is set.
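Something along these lines, perhaps (a sketch; the helper name and wording are assumptions, not code from the PR):

```python
import warnings

def _warn_if_seeded(seed):
    # Hypothetical helper: the seed only fixes the initial particle
    # positions, while PySwarms' r1/r2 draws come from NumPy's global
    # random state.
    if seed is not None:
        warnings.warn(
            "`seed` only makes the initial particle positions "
            "reproducible. For a fully deterministic run, call "
            "np.random.seed(...) before optimizing; optimagic does "
            "not set the global seed for you."
        )
```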
> As I said, you can deviate from defaults where necessary. Of course, we would not leave maxiter at a million if this is a bad default. Maybe 1000 would be a good idea? We just can't have a variable called "STOPPING_MAXITER_GLOBAL" (analogous to "STOPPING_MAXFUN_GLOBAL") in the algo options because what an iteration is changes between optimizers.

Yes, 1000 is a reasonable number for PySwarms; that is why I had introduced STOPPING_MAXITER_GLOBAL earlier. I will remove it and just default to 1000 iterations.
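A sketch of how the resulting defaults could look (class and field names are assumed from optimagic's conventions, not copied from the PR):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PySwarmsGlobalBest:  # illustrative stand-in for the wrapper class
    # Literal defaults instead of a shared STOPPING_MAXITER_GLOBAL
    # constant, since an "iteration" means different things across
    # optimizers.
    stopping_maxiter: int = 1_000
    convergence_ftol_rel: float = 0.0  # rely on maxiter, not ftol
    n_particles: int = 20
```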
I'm super sorry for the late review. This looks great and we can merge.
Hi, no worries. It's been some time; give me a bit to check again that everything's good to merge.
Sure! You can click on merge after checking!
This PR introduces the following optimizers from PySwarms:
- pyswarms_global_best
- pyswarms_local_best
- pyswarms_general
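As a usage sketch (assuming optimagic's standard minimize interface; exact defaults may differ), any of the three can be selected by name:

```python
import numpy as np
import optimagic as om

# "pyswarms_local_best" or "pyswarms_general" work the same way.
res = om.minimize(
    fun=lambda x: np.sum(x**2),
    params=np.arange(3.0),
    algorithm="pyswarms_global_best",
    bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
)
print(res.params)
```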