Speedup with GPU #42
I am running my project on my local machine and am wondering if a GPU workstation might speed up the runtime of GA.run(). If so, could you please let me know the dependencies and any helpful hints for configuration? Thank you very much!
I did not try using a GPU with PyGAD. I tried to parallelize the processing and did not find any speedup in the execution time. Unfortunately, the time increased compared to not using parallel processing. The reason is that there is no single long-running operation in the genetic algorithm. For example, parent selection, mutation, and crossover use little CPU time.
The only thing that could make a difference is the fitness function. Because the fitness function changes for each problem, it may or may not benefit from parallel processing. Check this article for an example where the fitness function is parallelized: https://hackernoon.com/how-genetic-algorithms-can-compete-with-gradient-descent-and-backprop-9m9t33bq
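For instance, here is a minimal sketch of what parallelizing inside the fitness function could look like (not taken from the linked article; the dataset, cost function, and pool size are invented for illustration):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Invented stand-in for a large dataset that makes each fitness call expensive.
DATA = np.random.rand(1_000_000)

def chunk_error(args):
    solution, chunk = args
    # Hypothetical expensive per-chunk computation.
    return float(np.sum((chunk - solution.sum()) ** 2))

def fitness_func(solution, solution_idx):
    # Split the heavy work across processes. On Windows/macOS this must
    # run under an `if __name__ == "__main__":` guard.
    chunks = np.array_split(DATA, 8)
    with ProcessPoolExecutor(max_workers=8) as pool:
        total_error = sum(pool.map(chunk_error, [(solution, c) for c in chunks]))
    return 1.0 / (total_error + 1e-8)  # higher fitness = lower error
```

Note that spawning a pool on every call is itself overhead; in practice you would create the pool once and reuse it, otherwise you reproduce exactly the slowdown described above.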
Got it. Thank you for the quick response!
Awesome!
For complicated reasons I sent a reply from a spurious account; please ignore it, best to delete it frankly. No security risk, just a dead account that lingered in my setup.
I have run fast CPU parallelism. I could try GPU. What's the use case (the broad type of maths problem being solved), what hardware, and how does Ahmed want it configured and submitted? If it has to be GPU, is the hardware anything other than Linux and NVIDIA?
I have a fan repo called "props to pygad" where I can post stuff. I already have a small speed optimisation there.
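To make the question concrete, here is a minimal sketch (my invention, not an agreed design) of a fitness function that pushes its heavy arithmetic onto an NVIDIA GPU with CuPy; the dataset and cost function are made up for illustration:

```python
import cupy as cp

# Invented stand-in for a large dataset, kept resident on the GPU.
TARGET = cp.random.rand(5_000_000)

def fitness_func(solution, solution_idx):
    sol = cp.asarray(solution)                 # host -> device copy
    # Hypothetical element-wise computation, executed on the GPU.
    error = cp.sum((TARGET - sol.sum()) ** 2)
    return float(1.0 / (error + 1e-8))         # device -> host scalar
```

Whether this wins depends on the per-call transfer cost versus the arithmetic saved; for small genomes and cheap fitness maths, the copies alone can dominate.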
I deleted the comment. No problem.
FYI, I already experimented with supporting this feature in the library itself and failed, with every approach I tried, to achieve any reduction in runtime. It would be great if you could help with that.
PyGAD is designed to be a general-purpose optimization library, so it should work with various types of problems.
You may start with limited hardware support until you are sure things are working properly. I was planning to make PyGAD 100% compatible with Android and Raspberry Pi, but this optional feature would simply be excluded on those platforms.
But to save your time: I expect this new feature to reduce the runtime of the examples posted on GitHub. If it only worked in some special cases, then I do not think supporting it would be of interest.
This feature should be optional. Would you add a flag and all needed parameters to the run() method of the pygad.GA class? This method accepts no arguments at this time.
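To make the request concrete, here is a purely hypothetical sketch of the shape being asked for; the names use_gpu and gpu_device are invented and nothing like them exists in pygad.GA today. One way to prototype it without touching the library is a thin subclass:

```python
import pygad

class GpuGA(pygad.GA):
    # Hypothetical opt-in parameters; pygad.GA.run() itself takes none.
    def run(self, use_gpu=False, gpu_device=0):
        if use_gpu:
            self.gpu_device = gpu_device  # placeholder for real device setup
        super().run()
```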
Ok boss. No promises... day job is busy. Thanks for the steer on the config.
Thanks @keithreid-sfw!