This issue tracker has been migrated to GitHub and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.
Created on 2013-12-16 11:42 by Lukasa, last changed 2022-04-11 14:57 by admin. This issue is now closed.
| Files | | | | |
|---|---|---|---|---|
| File name | Uploaded | Description | Edit | |
| hdrs.patch | Lukasa, 2013-12-16 20:13 | Fix for current tip | review | |
| hdrs_27.patch | Lukasa, 2013-12-18 21:17 | | review | |
| Messages (14) | |||
|---|---|---|---|
| msg206294 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 11:55 | |
Initially spotted on the Requests GitHub bug tracker: https://github.com/kennethreitz/requests/issues/1804

On receiving an HTTP response with an invalid header, httplib stops parsing the headers and attempts to receive the rest of the message as body content. Normally that would be fine, but problems occur if "Transfer-Encoding: chunked" is declared later in the headers. This leads to a hang while reading the body content until the remote end forcibly closes the connection. This bug certainly affects versions 2.7 through 3.3.

To reproduce (note that we need to request gzip to get the server to send the bad header):

    import http.client
    h = http.client.HTTPConnection('www.sainsburysbank.co.uk')
    h.request('GET', '/', headers={'Accept-Encoding': 'gzip'})
    r = h.getresponse()
    hdrs = r.getheaders()
    body = r.read()  # Hang here.

cURL configured equivalently doesn't exhibit this problem; that is, the following works fine:

    curl --compressed http://www.sainsburysbank.co.uk/

It's not clear to me that this behaviour is wrong. The server is definitely violating RFC 2616, which expressly forbids empty header names. I'm open to consultation about what the correct fix should be here (which may be nothing at all). |
|||
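The failure mode above can be reproduced without depending on the (since-fixed) Sainsbury's Bank server by serving a canned malformed response from a local socket. This is only a sketch - the response bytes and port handling are illustrative, and on interpreters that predate the eventual fix the final read would hang until the timeout rather than returning the body:

```python
import http.client
import socket
import threading

# Canned response mimicking the misbehaving server: an empty header
# name (": oops") appears before "Transfer-Encoding: chunked".
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b": oops\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"5\r\nhello\r\n0\r\n\r\n"   # one 5-byte chunk, then the terminator
)

def serve_once(listener):
    conn, _ = listener.accept()
    conn.recv(65536)             # consume the request; its content is irrelevant
    conn.sendall(RESPONSE)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # pick any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

h = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
h.request("GET", "/", headers={"Accept-Encoding": "gzip"})
r = h.getresponse()
body = r.read()                  # hangs on unfixed versions
print(body)
```

On an interpreter with the fix, the empty header name is skipped, Transfer-Encoding is honoured, and the chunked body decodes to b"hello".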
| msg206306 | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2013-12-16 13:31 | |
Well, having it hang forever is a potential DoS attack, so something needs to be fixed, I think. |
|||
| msg206315 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 14:12 | |
The easiest way to 'fix' the DoS problem is to throw an exception if an invalid header is parsed. That's a backwards-compatibility problem, though: things that previously 'worked' now won't. That presumably limits the ability to back-apply this fix to 2.7.7.

An alternative option is to speculatively attempt to parse the next line for headers or the end of the header block. I'm not sure this is a great idea: at this stage all we know is that the header block is malformed, so it's not clear that 'doing our best' is a good idea either, especially since that attitude got us here to begin with.

The best 'middle of the road' option is to abort message parsing at this stage without throwing an exception. This leads to truncated headers and no body, where previously we'd have got truncated headers and a body that potentially included the missing headers. We could also potentially add a warning about the problem.

Are there any preferences for a fix here, or a better solution than the above (none of which I'm wild about)? |
|||
| msg206318 | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2013-12-16 14:35 | |
I haven't looked at the code, but could we preserve the existing behavior but apply a timeout to mitigate the DoS? On the other hand, the fact that curl manages to return something indicates there is probably an error recovery strategy that would work.

I'm not sure if we have an error reporting mechanism in httplib if we do error recovery. We do in the email module, and httplib uses the email code for headers, I think, so there might be a way to leverage that if there is no existing mechanism. But of course even deciding to do that requires some discussion :) |
|||
| msg206320 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 15:16 | |
Maybe. If we do it, we have to apply that timeout to all the socket actions on that HTTP connection. This would have the effect of changing the default value of the timeout parameter on the HTTPConnection object from socket._GLOBAL_DEFAULT_TIMEOUT to whatever value was chosen. We could do this for reads only, and avoid applying the timeout to connect() calls, but that's kind of weird. We hit the same problem though: by default, HTTPConnections block indefinitely on all socket calls; we'd be changing that default to some finite timeout instead. Does that sound like a good way to go?

As for curl's error recovery strategy, I'm pretty sure it just keeps parsing the header block. That can definitely be done here. We do have an error reporting mechanism as well (sort of): we set the HTTPMessage.status field to some error string. We could do that and continue to parse the header block: that's probably the least destructive way to fix this. |
|||
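Whatever library-level fix lands, callers can already bound the worst case themselves via the existing timeout parameter discussed above, which applies to every blocking socket operation on the connection. A minimal sketch (the hostname is illustrative; no request is actually made here):

```python
import http.client

# With a finite timeout, a stalled read raises socket.timeout (an
# OSError subclass in Python 3) instead of blocking indefinitely.
conn = http.client.HTTPConnection("www.example.com", timeout=10)
print(conn.timeout)  # 10
```

The trade-off is exactly the one raised in the message above: a finite default would change long-standing blocking behaviour for all callers, which is why it only works as an opt-in mitigation.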
| msg206350 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 19:12 | |
An update: in Python 2.7 through 3.3, fixing this should only affect http.client/httplib, because they do most of their header parsing themselves.

Fixing this in later versions of Python is more interesting, as http.client got rewritten to use email.parser (which uses email.feedparser). This means any change to fix this problem in HTTP will also affect anything else that uses this module. Not sure how problematic that is yet. |
|||
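The coupling can be seen through http.client.parse_headers (an internal helper, so its location may vary between versions), which feeds the raw header block to email.parser and returns an http.client.HTTPMessage, a subclass of email.message.Message - so a change in email.feedparser directly changes what HTTP clients see. A small sketch with a well-formed header block:

```python
import io
import http.client

# Raw header block as it would arrive off the wire, ending with the
# blank line that terminates the headers.
raw = (b"Content-Type: text/html\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n")

# parse_headers reads the block and hands it to email.parser internally.
msg = http.client.parse_headers(io.BytesIO(raw))
print(type(msg).__name__)        # HTTPMessage
print(msg["Transfer-Encoding"])  # chunked
```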
| msg206353 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 19:18 | |
Actually, that might be OK. I don't know the email package at all, but I suspect being able to handle empty header keys (by ignoring them) is a reasonable thing to do in the email case as well. Thoughts? |
|||
| msg206360 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-16 20:13 | |
Alright, here's a patch for the current tip. I'll need to prepare a different patch for earlier versions of Python, which will take me a little while longer to do (maybe not today). I've also signed a contributor agreement, but it doesn't look like that's propagated here yet. |
|||
| msg206395 | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2013-12-17 02:17 | |
Heh. A missing header *name* was something I never considered in the email package tests. So yeah, that's worth handling one way or another. I'll put reviewing this on my TODO list, since I'm the maintainer of the email package.

I'm updating the versions per our usual practice: the versions in which we want to fix it. |
|||
| msg206551 | Author: Cory Benfield (Lukasa) * | Date: 2013-12-18 21:17 | |
OK, here's a patch for 2.7 as well. I decided to allow the empty header names in rfc822.py as well, if only because I wanted the changed parsing code to match. If anyone thinks that's an excessive change, I'll happily remove it. |
|||
| msg207930 | Author: Cory Benfield (Lukasa) * | Date: 2014-01-12 07:40 | |
Is there anything I can do to help move this forward? I appreciate you're all busy, so I'm happy for this to take as long as it takes; I just wanted to make sure it's not blocked behind me. |
|||
| msg221682 | Author: brian yardy (brianyardy) | Date: 2014-06-27 12:09 | |

    import http.client
    h = http.client.HTTPConnection('www.einstantloan.co.uk')
    h.request('GET', '/', headers={'Accept-Encoding': 'gzip'})
    r = h.getresponse()
    hdrs = r.getheaders()
    body = r.read()  # Hang here.

    curl --compressed http://www.einstantloan.co.uk/
|
|||
| msg224545 | Author: Jason Robinson (jaywink) * | Date: 2014-08-02 09:27 | |
I took the patches and verified that:

* running the new tests without the changed code in Lib/email/feedparser.py (head) and Lib/httplib.py, Lib/rfc822.py (2.7) fails both of the new tests.
* running the new tests with the changed code passes the tests (on both head and 2.7).

The Sainsbury's Bank site seems to have been fixed, so the verification in the first comment no longer works - but I think the test emulates it well enough. |
|||
| msg234714 | Author: Roundup Robot (python-dev) (Python triager) | Date: 2015-01-26 04:35 | |
New changeset 25ecf3d0ea03 by Benjamin Peterson in branch '3.4': handle headers with no key (closes #19996) https://hg.python.org/cpython/rev/25ecf3d0ea03

New changeset 29923a9987be by Benjamin Peterson in branch 'default': merge 3.4 (#19996) https://hg.python.org/cpython/rev/29923a9987be

New changeset 9a4882b12218 by Benjamin Peterson in branch '2.7': simply ignore headers with no name (#19996) https://hg.python.org/cpython/rev/9a4882b12218 |
|||
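On interpreters that include these changesets, the new behaviour is visible directly through email.parser: a header line with no name is skipped, a defect is recorded on the message, and parsing continues, so later headers such as Transfer-Encoding are no longer swallowed into the body:

```python
from email.parser import Parser

# Malformed header block: the first line has an empty header name.
text = (": oops\r\n"
        "Transfer-Encoding: chunked\r\n"
        "\r\n"
        "hello")
msg = Parser().parsestr(text)

print(msg["Transfer-Encoding"])  # chunked - still parsed despite the bad line
print(msg.get_payload())         # hello   - the body is no longer misplaced
print(msg.defects)               # non-empty: the nameless header is flagged
```

On unpatched versions, header parsing stops at the bad line instead, and everything after it (including Transfer-Encoding) ends up in the payload.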
| History | | | |
|---|---|---|---|
| Date | User | Action | Args |
| 2022-04-11 14:57:55 | admin | set | github: 64195 |
| 2015-01-26 04:35:11 | python-dev | set | status: open -> closed; nosy: + python-dev; messages: + msg234714; resolution: fixed; stage: resolved |
| 2014-08-02 09:27:20 | jaywink | set | nosy: + ezio.melotti, jaywink; messages: + msg224545; versions: + Python 3.5 |
| 2014-06-27 12:09:16 | brianyardy | set | nosy: + brianyardy; messages: + msg221682 |
| 2014-01-12 07:40:51 | Lukasa | set | messages: + msg207930 |
| 2013-12-18 21:17:53 | Lukasa | set | files: + hdrs_27.patch; messages: + msg206551 |
| 2013-12-17 02:17:49 | r.david.murray | set | versions: + Python 3.4, - Python 3.1, Python 3.2; nosy: + barry; messages: + msg206395; components: + email |
| 2013-12-16 20:14:01 | Lukasa | set | files: + hdrs.patch; keywords: + patch; messages: + msg206360 |
| 2013-12-16 19:18:11 | Lukasa | set | messages: + msg206353 |
| 2013-12-16 19:13:00 | Lukasa | set | messages: + msg206350 |
| 2013-12-16 15:16:49 | Lukasa | set | messages: + msg206320 |
| 2013-12-16 15:08:58 | Arfrever | set | nosy: + Arfrever |
| 2013-12-16 14:35:38 | r.david.murray | set | messages: + msg206318 |
| 2013-12-16 14:12:44 | Lukasa | set | messages: + msg206315 |
| 2013-12-16 13:31:54 | r.david.murray | set | nosy: + r.david.murray, christian.heimes; messages: + msg206306 |
| 2013-12-16 11:55:45 | Lukasa | set | messages: + msg206294 |
| 2013-12-16 11:42:30 | Lukasa | create | |