Increase the message consumption rate at the consumer end using a thread pool in a Spring application (or any other recommended way, if possible), with RabbitMQ as the message broker #805

miPlodder started this conversation in General

AIM: Increase the message consumption rate at the consumer end using a thread pool in a Spring application (or any other recommended way, if possible), with RabbitMQ as the message broker.

Scenario: Our application takes around 15 seconds to consume one message, while we produce one message per second. So by the time the first message has been consumed (after 15 seconds), 14 more messages have queued up, and the backlog keeps growing.

Is there any way to close this gap between the producer and consumer sides by increasing consumption at the consumer end?

Existing Understanding:

I tried increasing the thread pool so that each consumer has 15 threads. This should ensure:

00:00:01 - 1st msg, picked by thread 1
00:00:02 - 2nd msg, picked by thread 2
00:00:03 - 3rd msg, picked by thread 3
00:00:04 - 4th msg, picked by thread 4
... and so on
00:00:15 - 15th msg, picked by thread 15
00:00:16 - 16th msg, picked by thread 1 (processing of 1st msg done after 15 seconds)
00:00:17 - 17th msg, picked by thread 2 (processing of 2nd msg done after 15 seconds)
... and so on
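This interleaving can be sketched with plain java.util.concurrent (a toy simulation only, not the broker path; in a real listener container, deliveries must first reach the executor, which depends on the container's own concurrency and prefetch settings). The handler time is scaled down to 100 ms so the demo runs quickly:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates 15 "messages" whose handlers each block for 100 ms.
// With a 15-thread pool they run in parallel, so total wall time is
// close to one handler's duration rather than 15x that.
public class ConsumerPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(15);
        AtomicInteger processed = new AtomicInteger();
        long start = System.nanoTime();
        for (int i = 0; i < 15; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(100); // stand-in for the slow handler (e.g. a REST call)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                processed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("processed=" + processed.get());
        System.out.println("parallel=" + (elapsedMs < 1000)); // far below 15 x 100 ms
    }
}
```

Run serially on one thread, the same 15 tasks would take roughly 15 × 100 ms; the pool brings the total close to a single task's duration.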

Existing Implementation:

SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setTaskExecutor(Executors.newFixedThreadPool(15));

With that understanding, I implemented the above, but I don't see any significant improvement in the consumption rate at the consumer end. The consumption rate appears to be independent of the thread pool size.

Is the above implementation correct, or is something missing? Are there other ways to solve this issue?

I also posted this on Stack Overflow (https://stackoverflow.com/q/72798968/5662835) because I'm not sure which channel the RabbitMQ team actively monitors.



Consider using discussions for questions.

Increasing the number of consumer work pool threads will help in some cases and won't make any difference or be a net negative in others. We cannot suggest much without knowing how many cores the consumer application has available to it and what it does. In fact, only benchmarking of your consumer will tell the whole story.


Profile your consumer and see what takes time.

Everything is okay; it's just that the logic contains a REST call that takes around 15-30 seconds in the worst case. I want to make the system resilient when the majority of messages involve such a REST call, by introducing a consumer thread pool or by any other means possible.

  • The only way I have found so far to improve the consumption rate is to increase the number of consumers. Does a high number of consumers have any drawbacks or concerns?
  • I will try playing around with the prefetch settings and the size of the consumer thread pool. Is it correct to say that the prefetch count should be greater than the consumer thread pool size?

How would 15 threads parallelize work over one message?

I was thinking of a case where the library might internally use a divide-and-conquer approach (or something similar) to optimize a particular flow.


The only way I have found so far to improve the consumption rate is to increase the number of consumers. Does a high number of consumers have any drawbacks or concerns?

It's hard to say, but you can use a few dozen channels on a given connection without problems.

You can start experimenting with a few channels (10, 20, etc.) and one consumer per channel. Since message processing is long compared to the actual delivery, set QoS (prefetch count) to 2; this way you should have a message ready when you're done processing. Set the size of the thread pool to the number of channels.
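A sketch of that suggestion in Spring AMQP terms (assuming spring-rabbit on the classpath; the bean wiring and the counts 10 and 2 are illustrative, not prescriptive):

```java
import java.util.concurrent.Executors;

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConsumerConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setConcurrentConsumers(10);      // "a few channels", one consumer per channel
        factory.setMaxConcurrentConsumers(10);
        factory.setPrefetchCount(2);             // one message in flight + one ready
        // Pool sized to the number of consumers, as suggested above
        factory.setTaskExecutor(Executors.newFixedThreadPool(10));
        return factory;
    }
}
```

Spring AMQP's SimpleMessageListenerContainer uses one channel per concurrent consumer, so setConcurrentConsumers(10) corresponds to the "10 channels, one consumer per channel" suggestion.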


There seems to be a relation between maximum concurrency and thread pool size, which I had not set properly as a pair. Setting them together makes the thread pool work as expected. I discovered this by trial and error on my configs; I did not find it in the documentation. I wrote an answer on Stack Overflow about it for other devs: https://stackoverflow.com/a/72869716/5662835
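The pairing described in that answer can be sketched as follows (a hypothetical configuration fragment; the key point is that the executor should have at least as many threads as maxConcurrentConsumers, since each concurrent consumer occupies one executor thread for its receive loop):

```java
import java.util.concurrent.Executors;

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;

// If maxConcurrentConsumers exceeds the executor's thread count, the extra
// consumers never get a thread to run on and throughput stops scaling.
// Conversely, setting only the executor (as in the original snippet) leaves
// the container at its default concurrency, so extra threads sit idle.
int consumers = 15;
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConcurrentConsumers(consumers);
factory.setMaxConcurrentConsumers(consumers);
factory.setTaskExecutor(Executors.newFixedThreadPool(consumers)); // pool >= consumers
```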


Thanks for the follow-up and the answer on SO; users will appreciate it for sure.
You can also have a look at the prefetchCount (QoS) setting for fine-tuning. The default is 250, but you may want to use a smaller value, as suggested in the note.


Yes, the prefetch count can also be configured, and it should be low when processing a single message is time-consuming: with the default of 250, the other 249 prefetched messages are held for that consumer and get a chance late, while newer messages may be processed before them by some other consumer.
I have one question: I'm not able to visualize a scenario where tuning the prefetch count value alone can increase the consumption rate. Could you give an example?


This discussion was converted from issue #804 on June 29, 2022 10:21.
