
This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Socket accept exhaustion during high TCP traffic
Type: Stage:
Components: asyncio Versions: Python 3.6, Python 3.4, Python 3.5
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: yselivanov Nosy List: gvanrossum, kevinconway, python-dev, vstinner, yselivanov
Priority: normal Keywords: patch

Created on 2016-08-31 01:57 by kevinconway, last changed 2022-04-11 14:58 by admin.

Files
File name            Uploaded                       Description
testservice.zip      kevinconway, 2016-08-31 01:57  Service example for testing
multi-accept.patch   kevinconway, 2016-08-31 01:58  Patch file with proposed solution
multi-accept-2.patch kevinconway, 2016-08-31 04:09  Patch file with test review
multi-accept-3.patch kevinconway, 2016-09-01 13:11  Patch file with test and comment review
Messages (16)
msg273989 - Author: (kevinconway) * Date: 2016-08-31 01:57
My organization noticed this issue after launching several asyncio services that would receive either a sustained high number of incoming connections or regular bursts of traffic. Our monitoring showed a loss of between 4% and 6% of all incoming requests. On the client side we see a socket read error "Connection reset by peer". On the asyncio side, with debug turned on, we see nothing.
After some more investigation we determined asyncio was not calling 'accept()' on the listening socket fast enough. To further test this we put together several hello-world type examples and put them under load. I've attached the project we used to test. Included are three Docker files that will run the services under different configurations. One runs the service as an aiohttp service, another runs the aiohttp worker behind gunicorn, and the third runs the aiohttp service with the proposed asyncio patch in place. For our testing we used 'wrk' to generate traffic and collect data on the OS/socket errors.
For anyone attempting to recreate our experiments, we ran three test batteries against each endpoint of the service using:
wrk --duration 30s --timeout 10s --latency --threads 2 --connections 10 <URL>
wrk --duration 30s --timeout 10s --latency --threads 2 --connections 100 <URL>
wrk --duration 30s --timeout 10s --latency --threads 2 --connections 1000 <URL>
The endpoints most valuable for us to test were the ones that replicated some of our production logic:
<URL>/ # Hello World
<URL>/sleep?time=100 # Every request is delayed by 100 milliseconds and returns an HTML message.
<URL>/blocking/inband # Every request performs a bcrypt hash with cost factor 10, doing the CPU-blocking work on the event loop thread.
Our results varied based on the available CPU cycles, but we consistently recreated the socket read errors from production using the above tests.
Our proposed solution, attached as a patch file, is to put the socket.accept() call in a loop that is bounded by the listening socket's backlog. We use the backlog value as an upper bound to prevent the reverse situation of starving active coroutines while the event loop continues to accept new connections without yielding. With the proposed patch in place our loss rate disappeared.
For further comparison, we reviewed the socket accept logic in Twisted, ran similar tests against it, and encountered no loss. We found that Twisted already runs the socket accept in a bounded loop to prevent this issue (https://github.com/twisted/twisted/blob/trunk/src/twisted/internet/tcp.py#L1028).
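The bounded accept loop described in this message can be sketched as follows. This is a minimal illustration of the idea, not asyncio's actual internals; `accept_pending`, `handle`, and the `BACKLOG` constant are hypothetical names:

```python
import socket

BACKLOG = 100  # assumed listen() backlog, reused as the loop bound

def accept_pending(server_sock, handle):
    """Drain the accept queue in one pass, bounded by the backlog.

    Instead of accepting a single connection per readiness event, keep
    accepting until the queue is empty or the backlog bound is reached.
    The bound prevents the reverse problem the message describes:
    starving active handlers while the loop accepts new connections
    without yielding.
    """
    accepted = 0
    for _ in range(BACKLOG):
        try:
            conn, addr = server_sock.accept()
        except BlockingIOError:
            break  # accept queue drained
        conn.setblocking(False)
        handle(conn, addr)
        accepted += 1
    return accepted
```

This is the same pattern the message observes in Twisted: one readiness event may service many queued connections, so a connection burst cannot outpace a one-accept-per-wakeup loop.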
msg273990 - Author: (kevinconway) * Date: 2016-08-31 01:58
Attaching the patch file
msg273991 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-08-31 02:22
Thanks -- IIRC this was even brought up during asyncio's early implementation phase, but we didn't have enough experience to warrant the extra logic. It seems like now is the time to do this!
I hope you won't need it for 3.4, because that version is out of support. However I would like to have this upstream in https://github.com/python/asyncio.
Is there any chance you can write a unit test for this functionality? Or is the only way to test it really to do a load test?
msg273992 - Author: (kevinconway) * Date: 2016-08-31 03:07
I'll dig into the existing asyncio unit tests and see what I can come up with. I'm not sure, yet, exactly what I might test for.
The variables involved with reproducing the error are mostly environmental. CPU speed of the host, amount of CPU bound work happening in handler coroutines, and the rate of new connections are the major contributors we've identified. I'm not sure how I might simulate those in a unit test.
Would it be sufficient to add a test that ensures _accept_connection calls .accept() on the listening socket 'backlog' number of times in the event there are no OS errors?
msg273993 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-08-31 03:10
Yeah, the tests are often full of mocks, and I trust that the test
suite you wrote verifies that this fixes the problem. The unit tests
are more to ensure that future changes to the code won't accidentally
break the code you wrote (which is why we should really have test
coverage, but that's another story).
msg273996 - Author: (kevinconway) * Date: 2016-08-31 04:09
I've added a unit test to the patch that asserts sock.accept() is called the appropriate number of times.
Worth a note, I had to adjust one of the existing tests to account for the new backlog argument. There is a default value for the argument to preserve backwards compat for any callers, but the mock used in the test was not tolerant of having an extra arg available.
msg274121 - Author: (kevinconway) * Date: 2016-09-01 13:11
Added a comment to the .accept() loop with a reference to the issue.
msg274143 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-09-01 15:33
I'll try to get to this during the core dev sprint next week.
msg276498 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-09-14 23:52
Sorry, we didn't get to this. @Yury, do you think you could get to this before b1 goes out? It's a pure optimization (and a good one!).
msg276499 - Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-09-15 01:02
> @Yury, do you think you could get to this before b1 goes out? It's a pure optimization (and a good one!).
LGTM. Will commit tomorrow.
msg276501 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-09-15 01:30
(Of course I meant b2.)
msg276526 - Author: STINNER Victor (vstinner) * (Python committer) Date: 2016-09-15 07:53
> On the asyncio side, with debug turned on, we see nothing.
Hum, I'm concerned by this issue. I would expect that in debug mode a message would be logged. Maybe something like "accept queue saturated! we may lose incoming connections".
Does someone see a way to log a message in such a case? Maybe only in debug mode?
I spent time making sure that asyncio logs network-related events, to help understand what is happening. It's hard to decide which events should be logged, because logging everything is likely to flood the logs.
For example, _SelectorSocketTransport._read_ready() logs an event for:
* EOF (when recv() returns an empty string)
* the transport pausing reading
* the transport resuming reading
msg276582 - Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2016-09-15 17:57
> > On the asyncio side, with debug turned on, we see nothing.
> Does someone see a way to log a message in such case? Maybe only in debug mode?
I'm not sure we can do anything here. @kevinconway, did you try to find out if it's possible to log this somehow?
msg276584 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2016-09-15 18:03
Maybe with the new code it could log a message if it doesn't exit the
loop via a break? Because that would mean the listen buffer is full.
It would encourage people to configure a larger number.
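Guido's suggestion maps naturally onto Python's for/else: the else branch runs only when the loop was never broken out of, i.e. `backlog` connections were accepted without draining the queue. A hedged sketch (illustrative only, not the committed code; the names and log message are hypothetical):

```python
import logging

logger = logging.getLogger("asyncio")

def accept_loop(sock, handle, backlog):
    # If we never hit `break`, we accepted `backlog` connections in one
    # pass without emptying the queue: the listen buffer may be full,
    # so warn the operator that a larger backlog could help.
    for _ in range(backlog):
        try:
            conn, addr = sock.accept()
        except BlockingIOError:
            break  # queue drained: the normal exit path
        handle(conn, addr)
    else:
        logger.warning(
            "accepted %d connections in one pass without draining the "
            "queue; consider a larger listen() backlog", backlog)
```

The break path stays silent, so a healthy server under light load logs nothing.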
msg276586 - Author: Roundup Robot (python-dev) (Python triager) Date: 2016-09-15 18:15
New changeset 1dcfafed3cb0 by Yury Selivanov in branch '3.5':
Issue #27906: Fix socket accept exhaustion during high TCP traffic.
https://hg.python.org/cpython/rev/1dcfafed3cb0
New changeset 62948164ff94 by Yury Selivanov in branch '3.6':
Merge 3.5 (issue #27906)
https://hg.python.org/cpython/rev/62948164ff94
New changeset ac13bf1967b5 by Yury Selivanov in branch 'default':
Merge 3.6 (issue #27906)
https://hg.python.org/cpython/rev/ac13bf1967b5 
msg276592 - Author: STINNER Victor (vstinner) * (Python committer) Date: 2016-09-15 19:04
Guido van Rossum: "Maybe with the new code it could log a message if
it doesn't exit the loop via a break? Because that would mean the
listen buffer is full. It would encourage people to configure a larger
number."
Hum, yes, maybe. But I would suggest logging at most one such message per
server, rather than filling the logs with the same message when the
server is stressed.
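A once-per-server guard along the lines Victor suggests might look like this (the class and flag names are hypothetical; the point is only that the saturation condition repeats on every event loop pass under sustained load, so the warning must be rate-limited):

```python
import logging

logger = logging.getLogger("asyncio")

class AcceptSaturationLogger:
    """Emit the accept-queue-saturated warning at most once per server."""

    def __init__(self):
        self._warned = False  # hypothetical per-server flag

    def warn_saturated(self, backlog):
        # Repeated calls after the first are no-ops, so a stressed
        # server produces one warning instead of flooding the logs.
        if self._warned:
            return
        self._warned = True
        logger.warning(
            "accept queue saturated (backlog=%d); consider a larger "
            "listen() backlog", backlog)
```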
History
Date User Action Args
2022-04-11 14:58:35  admin             set     github: 72093
2017-04-01 05:47:54  serhiy.storchaka  set     pull_requests: - pull_request933
2017-03-31 16:36:19  dstufft           set     pull_requests: + pull_request933
2016-09-15 19:04:30  vstinner          set     messages: + msg276592
2016-09-15 18:15:40  python-dev        set     nosy: + python-dev; messages: + msg276586
2016-09-15 18:03:55  gvanrossum        set     messages: + msg276584
2016-09-15 17:57:23  yselivanov        set     messages: + msg276582
2016-09-15 07:53:13  vstinner          set     messages: + msg276526
2016-09-15 01:30:19  gvanrossum        set     messages: + msg276501
2016-09-15 01:02:03  yselivanov        set     assignee: yselivanov; messages: + msg276499
2016-09-14 23:52:13  gvanrossum        set     messages: + msg276498
2016-09-01 15:33:41  gvanrossum        set     messages: + msg274143
2016-09-01 13:11:30  kevinconway       set     files: + multi-accept-3.patch; messages: + msg274121
2016-08-31 04:09:27  kevinconway       set     files: + multi-accept-2.patch; messages: + msg273996
2016-08-31 03:10:11  gvanrossum        set     messages: + msg273993
2016-08-31 03:07:31  kevinconway       set     messages: + msg273992
2016-08-31 02:22:18  gvanrossum        set     messages: + msg273991
2016-08-31 01:58:04  kevinconway       set     files: + multi-accept.patch; keywords: + patch; messages: + msg273990
2016-08-31 01:57:24  kevinconway       create
