Disable Subscription-Polling Sleep while queue items available #1202
Conversation
...requests add subscription_interval as config for dbal subscription consumer
jprangenbergde commented Sep 30, 2021
That would be great!
Hey @makasim, do you think this change will be merged into the next version? We are already using it in production (we patch the change into our files) and it works like a charm. :)
What if there is one huge queue? You would be stuck on it for a long time and the other queues wouldn't get a chance to be processed at all. That's the reason it was implemented this way in the first place: the subscription consumer is fair to all the queues it has subscribed to.
If you have a queue with higher priority, you can create a dedicated consumer just for it (maybe a pool of consumers).
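The fairness argument above can be sketched as a simple round-robin loop (hypothetical helper names, not the actual enqueue dbal consumer code): each pass takes at most one message per subscribed queue, so one huge queue cannot starve the others, and the consumer only sleeps when every queue came up empty.

```python
import time

def consume_round_robin(queues, receive_one, polling_interval=1.0):
    """Poll each subscribed queue in turn, taking at most one message
    per queue per pass, so a huge queue cannot starve the small ones.
    `queues` and `receive_one` are illustrative stand-ins, not the
    real dbal subscription consumer internals."""
    while True:
        got_any = False
        for queue in queues:
            message = receive_one(queue)  # returns None when the queue is empty
            if message is not None:
                got_any = True
                yield queue, message
        if not got_any:
            time.sleep(polling_interval)  # all queues empty: back off
```

With two messages in queue `a` and one in queue `b`, the consumer alternates between queues instead of draining `a` first.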
The PR is unlikely to be merged in its current state because it introduces a BC break.
@makasim if you have a huge queue, and some lower priority jobs are never executed, then something is wrong with the load that is put on the consuming processes.
Symfony messenger separates tasks like that by putting them into separate queues.
@makasim based on the published_at column, the queue is effectively ignored. First in, first out! As long as the items have the same priority.
I understand why it was implemented that way, but in my opinion "First In, First Out" is more important for a job system like this.
jprangenbergde commented Oct 7, 2021
First in, first out is in my opinion the correct way. Same priority? Then published_at decides. Why should a job "A", which has the same priority as job "B" but was published after job "B", get executed first? First in, first out on the same priority is the way to go.
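The ordering argued for here can be expressed as a single sort key (illustrative only; the field and helper names are assumptions, not the actual dbal schema): highest priority wins, and within the same priority the oldest published_at goes first.

```python
from dataclasses import dataclass

@dataclass
class Message:
    body: str
    priority: int       # higher value = more urgent (assumed convention)
    published_at: int   # publish timestamp, e.g. unix milliseconds

def next_message(messages):
    """Pick the next message to process: highest priority first; among
    equal priorities, the oldest published_at wins (first in, first out)."""
    return min(messages, key=lambda m: (-m.priority, m.published_at))
```

Given two same-priority messages, the one published earlier is picked, while a higher-priority message beats both regardless of age.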
The priority works on a per-queue basis. There is no way to apply priority to messages from several different queues. It works for dbal because of its nature, but it contradicts the general idea of MQ and the library API. The current approach plays well with per-queue priority.
Consider my example with one huge queue and several small ones where all messages have the same priority. The small queues won't get a chance to be served with this PR's approach.
@makasim But what if the jobs of the small queue are published after the jobs of the huge queue?
@jprangenbergde sorry, I don't get the point
@makasim I have added logic to keep the behavior similar to what you had before, but now the consumer moves directly to the next queue if there is no message for the first one in the array.
The array is re-sorted after a message is found: the queue name of the received message is moved to the end of the queueNames list. That way the consumer keeps working as long as it finds something, while still taking the order of queueNames into consideration.
@makasim would the current solution be fine for you? I changed the logic so that it now loops through the queues.
@makasim I would like to come back to this topic. The issue I have is that the SubscriptionPollingDelay happens when switching between queues, and not only when all queues are empty. My current solution should still meet your requirement of looping through the queues, without causing the issue I am currently facing.
@makasim I changed the main code back to the original version and added a check whether a queue was consumed. If a queue was consumed, the consumer continues without waiting, because we expect more jobs. If no queue was consumed, it waits for the configured polling interval.
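The final approach described in this comment can be sketched as follows (hypothetical names; the real consumer's API differs): run a full pass over all queues, and only sleep for the polling interval when the pass consumed nothing.

```python
import time

def consume_loop(queues, drain_pass, polling_interval=1.0, max_passes=None):
    """Sketch of 'skip the polling sleep while queue items are available'.
    `drain_pass` is an illustrative callback that polls every queue once
    and returns True if any message was consumed. If something was
    consumed, loop again immediately (more jobs are likely waiting);
    only an empty pass triggers the polling-interval sleep."""
    passes = 0
    while max_passes is None or passes < max_passes:
        consumed = drain_pass(queues)
        if not consumed:
            time.sleep(polling_interval)  # all queues empty: back off
        passes += 1
```

This keeps the original per-pass queue fairness while removing the sleep between busy passes.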
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@makasim please let us talk about this change again, because otherwise there could be unexpected behaviour.
This is a more performant solution than PR #1201.
It simply keeps working until there are no more messages in the queue. Priority and published_at determine what will be worked on next.