Lettuce Redis: Does Command Timeout Include Connection Pool Wait Time?

I'm using LettuceConnection with a connection pool to connect my application to a Redis server. During load testing, however, I ran into a significant number of command timeout errors. At first I suspected the Redis server was struggling with the traffic, but it turned out to have plenty of headroom: CPU usage was around 5% and memory usage only about 10%.

This made me wonder: when exactly does the command timeout clock start? Does it include the time spent waiting to borrow a connection from the pool (in which case increasing the pool size might help), or does it start only once the command has been sent to the Redis server? I reviewed the Lettuce documentation, but I couldn't find a clear statement of how the command timeout is measured. Can anyone clarify?
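
To narrow this down, I'm planning to time the pool checkout separately from the command round trip, using the factory from the configuration below. This is only my own diagnostic sketch: the probe method is mine, and I'm assuming the pooled connection is actually borrowed inside getConnection() rather than lazily on the first command.

import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

// Diagnostic sketch: measure how long the pool checkout takes versus the
// command itself, to see which phase is blowing past 50 ms.
void probe(LettuceConnectionFactory connectionFactory) {
    long t0 = System.nanoTime();
    RedisConnection connection = connectionFactory.getConnection(); // may wait on the pool
    long checkoutMs = (System.nanoTime() - t0) / 1_000_000;

    long t1 = System.nanoTime();
    connection.ping(); // a real round trip to Redis, bounded (I assume) by commandTimeout
    long commandMs = (System.nanoTime() - t1) / 1_000_000;
    connection.close();

    System.out.printf("pool checkout: %d ms, command: %d ms%n", checkoutMs, commandMs);
}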

Here's the configuration I'm using:

import java.time.Duration;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;

RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration("host");

GenericObjectPoolConfig<?> poolConfig = new GenericObjectPoolConfig<>();
poolConfig.setMaxWaitMillis(3000); // max time to block waiting to borrow from the pool
poolConfig.setMinIdle(2);
poolConfig.setMaxIdle(2);
poolConfig.setMaxTotal(4);

LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
        .poolConfig(poolConfig)
        .commandTimeout(Duration.ofMillis(50)) // the timeout that keeps firing
        .build();

return new LettuceConnectionFactory(configuration, clientConfig);
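
Separately, to rule the pool in or out, I'm thinking of running the same load against a single dedicated connection created with plain Lettuce and the same 50 ms timeout. A minimal sketch of that comparison (host and port are placeholders; I'm assuming the RedisURI timeout acts as the default command timeout here):

import java.time.Duration;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;

// No-pool comparison: one dedicated connection with the same 50 ms command
// timeout. If this stays fast under the same load, the pool wait is the
// likely source of the timeouts.
RedisURI uri = RedisURI.builder()
        .withHost("host") // placeholder
        .withPort(6379)
        .withTimeout(Duration.ofMillis(50))
        .build();
RedisClient client = RedisClient.create(uri);
StatefulRedisConnection<String, String> connection = client.connect();
String pong = connection.sync().ping(); // should return "PONG"
connection.close();
client.shutdown();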