Why do I get such a difference in Bitrate with iPerf when running a single connection compared to running 10 simultaneous connections?

When I run iPerf with 10 simultaneous connections:

iperf3 -p 32770 -R -t 180 -P 10 -c <remote_ip>

I get the following output:

Connecting to host <remote_ip>, port 32770
Reverse mode, remote host <remote_ip> is sending
[ 6] local <local_ip> port 54283 connected to <remote_ip> port 32770
[ 8] local <local_ip> port 54284 connected to <remote_ip> port 32770
[ 10] local <local_ip> port 54285 connected to <remote_ip> port 32770
[ 12] local <local_ip> port 54286 connected to <remote_ip> port 32770
[ 14] local <local_ip> port 54287 connected to <remote_ip> port 32770
[ 16] local <local_ip> port 54288 connected to <remote_ip> port 32770
[ 18] local <local_ip> port 54289 connected to <remote_ip> port 32770
[ 20] local <local_ip> port 54290 connected to <remote_ip> port 32770
[ 22] local <local_ip> port 54291 connected to <remote_ip> port 32770
[ 24] local <local_ip> port 54292 connected to <remote_ip> port 32770
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-180.03 sec 546 MBytes 25.5 Mbits/sec 689 sender
[ 6] 0.00-180.01 sec 546 MBytes 25.4 Mbits/sec receiver
[ 8] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 109 sender
[ 8] 0.00-180.01 sec 0.00 Bytes 0.00 bits/sec receiver
[ 10] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 16 sender
[ 10] 0.00-180.01 sec 0.00 Bytes 0.00 bits/sec receiver
[ 12] 0.00-180.03 sec 543 MBytes 25.3 Mbits/sec 730 sender
[ 12] 0.00-180.01 sec 542 MBytes 25.3 Mbits/sec receiver
[ 14] 0.00-180.03 sec 368 MBytes 17.2 Mbits/sec 846 sender
[ 14] 0.00-180.01 sec 367 MBytes 17.1 Mbits/sec receiver
[ 16] 0.00-180.03 sec 746 MBytes 34.7 Mbits/sec 634 sender
[ 16] 0.00-180.01 sec 744 MBytes 34.7 Mbits/sec receiver
[ 18] 0.00-180.03 sec 469 MBytes 21.8 Mbits/sec 727 sender
[ 18] 0.00-180.01 sec 468 MBytes 21.8 Mbits/sec receiver
[ 20] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 112 sender
[ 20] 0.00-180.01 sec 128 KBytes 5.83 Kbits/sec receiver
[ 22] 0.00-180.03 sec 507 MBytes 23.6 Mbits/sec 624 sender
[ 22] 0.00-180.01 sec 506 MBytes 23.6 Mbits/sec receiver
[ 24] 0.00-180.03 sec 7.50 MBytes 349 Kbits/sec 671 sender
[ 24] 0.00-180.01 sec 7.38 MBytes 344 Kbits/sec receiver
[SUM] 0.00-180.03 sec 3.11 GBytes 148 Mbits/sec 5158 sender
[SUM] 0.00-180.01 sec 3.11 GBytes 148 Mbits/sec receiver

but when I run it with only one connection:

iperf3 -p 32770 -R -t 180 -P 1 -c <remote_ip>

I get the following:

Connecting to host 18.132.14.20, port 32770
Reverse mode, remote host 18.132.14.20 is sending
[ 6] local 192.168.1.177 port 55808 connected to 18.132.14.20 port 32770
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 6] 0.00-180.00 sec 7.00 MBytes 326 Kbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-180.02 sec 7.00 MBytes 326 Kbits/sec 635 sender
[ 6] 0.00-180.00 sec 7.00 MBytes 326 Kbits/sec receiver

A little bit of background

I am currently experiencing intermittent speed issues with my ISP. I normally get 150 Mbps according to fast.net and speedtest.net, but my recent results vary between 325 Kbps and 150 Mbps. To investigate this further, I set up an iPerf3 server in AWS and ran the iPerf3 client on my desktop and laptop. The results above are similar whether I run the iPerf3 client over 5 GHz wireless, 1000BASE-T over Powerline, or 1000BASE-T plugged directly into the router.

I expected the single connection to use the full 150 Mbps, or at least to match the bitrate of one of the streams in the 10-connection test.

asked Feb 22, 2025 at 18:24

2 Answers

Assuming it isn't simply luck of timing, seeing higher throughput with more streams implies that the TCP window size settings at the sender and the receiver are not large enough to fill the bandwidth-delay product with a smaller number of streams (e.g. a single stream).

There are plenty of references on TCP tuning for high bandwidth-delay-product networks. One that touches on the topic, and which is near and dear to my heart for some reason :), is at https://services.google.com/fh/files/misc/considerations_when_benchmarking_tcp_bulk_flows.pdf
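As a rough illustration of the window-size point, you can estimate the bandwidth-delay product yourself. The 80 ms round-trip time to the AWS region is an assumed figure here (measure your own with ping); the `-w` flag is standard iperf3:

```shell
# Sketch: estimate the TCP window needed to fill a 150 Mbit/s link.
RATE_BITS=150000000   # target rate: 150 Mbit/s
RTT_SEC=0.080         # assumed round-trip time in seconds (measure with ping)

# BDP (bytes) = rate in bytes/sec * RTT
BDP=$(awk -v r="$RATE_BITS" -v t="$RTT_SEC" 'BEGIN { printf "%d", r / 8 * t }')
echo "bandwidth-delay product: $BDP bytes"

# A single stream needs a socket buffer of at least the BDP (~1.5 MB here).
# iperf3's -w flag requests a matching window/buffer size:
# iperf3 -p 32770 -R -t 180 -P 1 -w 2M -c <remote_ip>
```

If the OS caps the receive buffer below the BDP, one stream can never keep the pipe full, while ten parallel streams each need only a tenth of that window, which would be consistent with the behaviour in the question.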

answered Apr 14, 2025 at 22:00

Unfortunately I do not have a clear answer for this, as my ISP resolved the issue. The ISP said there were issues with the switches within the data centres. Even though the BGP links were up on their side and showing the correct configuration and speed, they acknowledged that several properties in the area (hundreds, apparently) were experiencing symptoms similar to mine. The issue took 3 weeks to resolve.

If I run the iperf3 tests now, I get textbook output with similar bitrates across both the single- and multiple-stream tests.

answered Apr 16, 2025 at 6:17
