This issue tracker has been migrated to GitHub,
and is currently read-only.
For more information,
see the GitHub FAQs in the Python Developer's Guide.
Created on 2011-03-09 20:07 by rosslagerwall, last changed 2022-04-11 14:57 by admin. This issue is now closed.
| Files | | | |
|---|---|---|---|
| File name | Uploaded | Description | Edit |
| issue11454.diff | ezio.melotti, 2012-09-16 04:23 | | review |
| issue11454_benchmarks.py | ezio.melotti, 2012-09-19 19:58 | | |
| email_import_speedup.patch | r.david.murray, 2012-09-19 20:28 | | |
| Messages (29) | |||
|---|---|---|---|
| msg130461 - (view) | Author: Ross Lagerwall (rosslagerwall) (Python committer) | Date: 2011-03-09 20:07 | |
While importing most modules has little effect on the start-up time, importing urllib.request seems to take a considerable time. E.g.:

without importing urllib.request:
real 0m0.072s
user 0m0.070s
sys 0m0.000s

with importing urllib.request:
real 0m0.127s
user 0m0.120s
sys 0m0.010s |
|||
| msg130483 - (view) | Author: Martin v. Löwis (loewis) * (Python committer) | Date: 2011-03-10 02:58 | |
What operating system is that on? |
|||
| msg130487 - (view) | Author: Ross Lagerwall (rosslagerwall) (Python committer) | Date: 2011-03-10 04:08 | |
Ubuntu 10.10. I haven't investigated whether it is actually urllib.request that is causing the long import time or a module it depends on. |
|||
| msg130505 - (view) | Author: Ross Lagerwall (rosslagerwall) (Python committer) | Date: 2011-03-10 13:40 | |
OK, running this:

import base64
import bisect
import hashlib
import io
import os
import posixpath
import random
import re
import socket
import sys
import time
import collections
import io
import os
import socket
import collections
import warnings
import warnings
from io import StringIO, TextIOWrapper
import re
import uu
import base64
import binascii
import warnings
from io import BytesIO, StringIO

which is most of the imports that are generated when importing urllib.request, takes about 0.62s.

Running this:

import email.message
import email.parser
import email
from email.feedparser import FeedParser
from email.message import Message
from email import utils
from email import errors
from email import header
from email import charset as _charset

which is the rest of the generated imports, takes 0.105s.

It seems like importing the email module adds considerable time, affecting a bunch of other modules like urllib.request and http.client. Looking at the code, a fair number of regular expressions are compiled when the email module is imported, causing the long import time. I wonder if this could be improved somehow? |
|||
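The whole-interpreter timings above come from the shell's `time` command; the same kind of per-module measurement can be sketched in-process by clocking `importlib.import_module` directly. This is a minimal illustrative harness, not part of the issue, and the module names are just examples:

```python
import importlib
import time

def timed_import(name):
    """Import a module by name and return the elapsed wall-clock seconds.

    Note: if the module (or its dependencies) is already cached in
    sys.modules, the import is nearly free, so only the first
    measurement in a fresh interpreter is meaningful.
    """
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

if __name__ == "__main__":
    for mod in ("email", "urllib.request"):
        print(f"{mod}: {timed_import(mod) * 1000:.1f} ms")
```

Recent CPython versions also offer `python -X importtime`, which prints a per-module import-time breakdown without any instrumentation.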
| msg170545 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-16 04:23 | |
I tried to remove a few unused regexes and inline some of the others (the re module has its own caching anyway, and they don't seem to be documented), but it didn't get much faster (see attached patch).
I then put the second list of email imports from the previous message in a file and ran it with cProfile; these are the results:
=== Without patch ===
$ time ./python -m issue11454_imp2
[69308 refs]
real 0m0.337s
user 0m0.312s
sys 0m0.020s
$ ./python -m cProfile -s time issue11454_imp2.py
15130 function calls (14543 primitive calls) in 0.191 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
26 0.029 0.001 0.029 0.001 {built-in method loads}
1248 0.015 0.000 0.018 0.000 sre_parse.py:184(__next)
3 0.010 0.003 0.015 0.005 sre_compile.py:301(_optimize_unicode)
48/17 0.009 0.000 0.037 0.002 sre_parse.py:418(_parse)
30/1 0.008 0.000 0.191 0.191 {built-in method exec}
82 0.007 0.000 0.024 0.000 {built-in method __build_class__}
25 0.006 0.000 0.024 0.001 sre_compile.py:207(_optimize_charset)
8 0.005 0.001 0.005 0.001 {built-in method load_dynamic}
1122 0.005 0.000 0.022 0.000 sre_parse.py:209(get)
177 0.005 0.000 0.005 0.000 {built-in method stat}
107 0.005 0.000 0.012 0.000 <frozen importlib._bootstrap>:1350(find_loader)
2944/2919 0.004 0.000 0.004 0.000 {built-in method len}
69/15 0.003 0.000 0.028 0.002 sre_compile.py:32(_compile)
9 0.003 0.000 0.003 0.000 sre_compile.py:258(_mk_bitmap)
94 0.002 0.000 0.003 0.000 <frozen importlib._bootstrap>:74(_path_join)
=== With patch ===
$ time ./python -m issue11454_imp2
[69117 refs]
real 0m0.319s
user 0m0.304s
sys 0m0.012s
$ ./python -m cProfile -s time issue11454_imp2.py
11281 function calls (10762 primitive calls) in 0.162 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
21 0.022 0.001 0.022 0.001 {built-in method loads}
3 0.011 0.004 0.015 0.005 sre_compile.py:301(_optimize_unicode)
708 0.008 0.000 0.010 0.000 sre_parse.py:184(__next)
30/1 0.008 0.000 0.238 0.238 {built-in method exec}
82 0.007 0.000 0.023 0.000 {built-in method __build_class__}
187 0.005 0.000 0.005 0.000 {built-in method stat}
8 0.005 0.001 0.005 0.001 {built-in method load_dynamic}
107 0.005 0.000 0.012 0.000 <frozen importlib._bootstrap>:1350(find_loader)
29/8 0.005 0.000 0.020 0.002 sre_parse.py:418(_parse)
11 0.004 0.000 0.020 0.002 sre_compile.py:207(_optimize_charset)
643 0.003 0.000 0.012 0.000 sre_parse.py:209(get)
5 0.003 0.001 0.003 0.001 {built-in method dumps}
94 0.002 0.000 0.003 0.000 <frozen importlib._bootstrap>:74(_path_join)
257 0.002 0.000 0.002 0.000 quoprimime.py:56(<genexpr>)
26 0.002 0.000 0.116 0.004 <frozen importlib._bootstrap>:938(get_code)
1689/1676 0.002 0.000 0.002 0.000 {built-in method len}
31 0.002 0.000 0.003 0.000 <frozen importlib._bootstrap>:1034(get_data)
256 0.002 0.000 0.002 0.000 {method 'setdefault' of 'dict' objects}
119 0.002 0.000 0.003 0.000 <frozen importlib._bootstrap>:86(_path_split)
35 0.002 0.000 0.019 0.001 <frozen importlib._bootstrap>:1468(_find_module)
34 0.002 0.000 0.015 0.000 <frozen importlib._bootstrap>:1278(_get_loader)
39/6 0.002 0.000 0.023 0.004 sre_compile.py:32(_compile)
26/3 0.001 0.000 0.235 0.078 <frozen importlib._bootstrap>:853(_load_module)
The time spent in sre_compile.py:301(_optimize_unicode) most likely comes from email.utils._has_surrogates (there's a further speedup when it's commented away):
_has_surrogates = re.compile('([^\ud800-\udbff]|\A)[\udc00-\udfff]([^\udc00-\udfff]|\Z)').search
This is used in a number of places, so it can't be inlined. I wanted to optimize it but I'm not sure what it's supposed to do. It matches lone low surrogates, but not lone high ones, and matches some invalid sequences, but not others:
>>> _has_surrogates('\ud800') # lone high
>>> _has_surrogates('\udc00') # lone low
<_sre.SRE_Match object at 0x9ae00e8>
>>> _has_surrogates('\ud800\udc00') # valid pair (high+low)
>>> _has_surrogates('\ud800\ud800\udc00') # invalid sequence (lone high, valid high+low)
>>> _has_surrogates('\udc00\ud800\ud800\udc00') # invalid sequence (lone low, lone high, valid high+low)
<_sre.SRE_Match object at 0x9ae0028>
FWIW this was introduced in email.message in 1a041f364916 and then moved to email.utils in 9388c671d52d.
|
|||
| msg170546 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-16 04:29 | |
It detects whether a string contains any characters that have been surrogate-escaped by the surrogateescape error handler. I disliked using it, but I didn't know of any better way to do that detection. It's on my long list of things to come back to eventually and try to improve :) |
|||
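For readers unfamiliar with the mechanism being described: the surrogateescape handler (PEP 383) smuggles undecodable bytes through str as lone low surrogates in U+DC80..U+DCFF. A small self-contained demonstration (the byte string is just an example):

```python
# PEP 383: undecodable bytes >= 0x80 become lone surrogates
# U+DC80..U+DCFF when decoding with the surrogateescape handler.
raw = b"caf\xe9"  # Latin-1 "café"; the 0xE9 byte is invalid UTF-8
text = raw.decode("utf-8", "surrogateescape")
print(ascii(text))  # the 0xE9 byte surfaces as '\udce9'

# Encoding back with the same handler round-trips the bytes exactly.
assert text.encode("utf-8", "surrogateescape") == raw

# A strict encode fails on the escaped byte -- this is the property
# the email module's surrogate check relies on.
try:
    text.encode("utf-8")
    clean = True
except UnicodeEncodeError:
    clean = False
assert clean is False
```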
| msg170549 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-16 05:28 | |
Given that high surrogates are U+D800..U+DBFF, and low ones are U+DC00..U+DFFF, '([^\ud800-\udbff]|\A)[\udc00-\udfff]([^\udc00-\udfff]|\Z)' means "a low surrogate that is not preceded by a high one (or is at the start of the string), and is not followed by another low one (or is at the end of the string)". PEP 383 says "With this PEP, non-decodable bytes >= 128 will be represented as lone surrogate codes U+DC80..U+DCFF".

If I change the regex to _has_surrogates = re.compile('[\udc80-\udcff]').search, the tests still pass but there's no improvement on startup time (note: the previous regex was matching all the surrogates in this range too, however I'm not sure how well this is tested).

If I change the implementation to

_pep383_surrogates = set(map(chr, range(0xDC80, 0xDCFF+1)))
def _has_surrogates(s):
    return any(c in _pep383_surrogates for c in s)

the tests still pass and the startup is ~15ms faster here:

$ time ./python -m issue11454_imp2
[68837 refs]
real 0m0.305s
user 0m0.288s
sys 0m0.012s

However, using this function instead of the regex is ~10x slower at runtime. Using the shorter regex is ~7x faster, but there are no improvements on the startup time. Assuming the shorter regex is correct, it can still be called inside a function or used with functools.partial. This will result in an improved startup time and a ~2x improvement on runtime (so it's a win-win). See attached patch for benchmarks. This is a sample result:

17.01 usec/pass <- re.compile(current_regex).search
2.20 usec/pass <- re.compile(short_regex).search
148.18 usec/pass <- return any(c in surrogates for c in s)
106.35 usec/pass <- for c in s: if c in surrogates: return True
8.40 usec/pass <- return re.search(short_regex, s)
8.20 usec/pass <- functools.partial(re.search, short_regex) |
|||
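The runtime comparison above can be reproduced in spirit with a short timeit harness. The two implementations below are paraphrases of the candidates being discussed, not the attached patch:

```python
import re
import timeit

# Candidate 1: precompiled short regex (fast at runtime, pays a
# compile cost at import time).
_search = re.compile('[\udc80-\udcff]').search

def has_surrogates_re(s):
    return _search(s) is not None

# Candidate 2: set-membership scan (no compile cost, slower per call).
_surrogates = set(map(chr, range(0xDC80, 0xDD00)))

def has_surrogates_set(s):
    return any(c in _surrogates for c in s)

if __name__ == "__main__":
    sample = "A" * 1000  # a typical all-clean payload
    for fn in (has_surrogates_re, has_surrogates_set):
        total = timeit.timeit(lambda: fn(sample), number=1000)
        print(f"{fn.__name__}: {total * 1000:.2f} usec/pass")
```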
| msg170553 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-16 13:31 | |
Considering how often that test is done, I would consider the compiled version of the short regex the clear winner based on your numbers. I wonder if we could precompile the regex and load it from a pickle. |
|||
| msg170697 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 02:46 | |
re.compile seems twice as fast as pickle.loads:
import re
import pickle
import timeit
N = 100000
s = "r = re.compile('[\\udc80-\\udcff]')"
t = timeit.Timer(s, 'import re')
print("%6.2f <- re.compile" % t.timeit(number=N))
s = "r = pickle.loads(p)"
p = pickle.dumps(re.compile('[\udc80-\udcff]'))
t = timeit.Timer(s, 'import pickle; from __main__ import p')
print("%6.2f <- pickle.loads" % t.timeit(number=N))
Result:
5.59 <- re.compile
11.04 <- pickle.loads
See also #2679.
|
|||
| msg170712 - (view) | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-09-19 08:32 | |
> If I change the regex to _has_surrogates = re.compile('[\udc80-\udcff]').search, the tests still pass but there's no improvement on startup time (note: the previous regex was matching all the surrogates in this range too, however I'm not sure how well this is tested).
What about
_has_surrogates = re.compile('[^\udc80-\udcff]*\Z').match
?
|
|||
| msg170713 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 08:49 | |
> What about _has_surrogates = re.compile('[^\udc80-\udcff]*\Z').match ?
The runtime is a bit slower than re.compile('[\udc80-\udcff]').search, but otherwise it's faster than all the other alternatives. I haven't checked the startup-time, but I suspect it won't be better -- maybe even worse.
|
|||
| msg170714 - (view) | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-09-19 09:03 | |
> I haven't checked the startup-time, but I suspect it won't be better -- maybe even worse.

I suppose it will be much better. |
|||
| msg170715 - (view) | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-09-19 09:29 | |
Startup-time:
$ ./python -m timeit -s 'import re' 're.compile("([^\ud800-\udbff]|\A)[\udc00-\udfff]([^\udc00-\udfff]|\Z)").search; re.purge()'
100 loops, best of 3: 4.16 msec per loop
$ ./python -m timeit -s 'import re' 're.purge()' 're.compile("[\udc80-\udcff]").search'
100 loops, best of 3: 5.72 msec per loop
$ ./python -m timeit 'h=lambda s, p=set(map(chr, range(0xDC80, 0xDCFF+1))): any(c in p for c in s)'
10000 loops, best of 3: 60.5 usec per loop
$ ./python -m timeit -s 'import re' 're.purge()' 're.compile("(?![^\udc80-\udcff])").search'
1000 loops, best of 3: 401 usec per loop
$ ./python -m timeit -s 'import re' 're.purge()' 're.compile("[^\udc80-\udcff]*\Z").match'
1000 loops, best of 3: 427 usec per loop
Runtime:
$ ./python -m timeit -s 'import re; h=re.compile("([^\ud800-\udbff]|\A)[\udc00-\udfff]([^\udc00-\udfff]|\Z)").search; s = "A"*1000' 'h(s)'
1000 loops, best of 3: 245 usec per loop
$ ./python -m timeit -s 'import re; h=re.compile("[\udc80-\udcff]").search; s = "A"*1000' 'h(s)'
10000 loops, best of 3: 30.1 usec per loop
$ ./python -m timeit -s 'h=lambda s, p=set(map(chr, range(0xDC80, 0xDCFF+1))): any(c in p for c in s); s = "A"*1000' 'h(s)'
10000 loops, best of 3: 164 usec per loop
$ ./python -m timeit -s 'import re; h=re.compile("(?![^\udc80-\udcff])").search; s = "A"*1000' 'h(s)'
10000 loops, best of 3: 98.3 usec per loop
$ ./python -m timeit -s 'import re; h=re.compile("[^\udc80-\udcff]*\Z").match; s = "A"*1000' 'h(s)'
10000 loops, best of 3: 34.6 usec per loop
|
|||
| msg170718 - (view) | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-09-19 09:40 | |
Faster set-version:

$ ./python -m timeit -s 'h=lambda s, hn=set(map(chr, range(0xDC80, 0xDD00))).isdisjoint: not hn(s); s = "A"*1000' 'h(s)'
10000 loops, best of 3: 43.8 usec per loop |
|||
| msg170761 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 19:58 | |
Attached new benchmark file.
Results:
Testing runtime of the _has_surrogates functions
Generating chars...
Generating samples...
1.61 <- re.compile(current_regex).search
0.24 <- re.compile(short_regex).search
15.13 <- return any(c in surrogates for c in s)
10.21 <- for c in s: if c in surrogates: return True
0.85 <- return re.search(short_regex, s)
0.83 <- functools.partial(re.search, short_regex)
20.86 <- for c in map(ord, s): if c in range(0xDC80, 0xDCFF+1): return True
19.68 <- for c in map(ord, s): if 0xDC80 <= c <= 0xDCFF: return True
0.28 <- re.compile('[^\udc80-\udcff]*\Z').match
7.00 <- return not set(map(chr, range(0xDC80, 0xDCFF+1))).isdisjoint(s)
Testing startup time
0.57 <- r = re.compile('[\udc80-\udcff]').search
0.59 <- r = re.compile('[^\udc80-\udcff]*\Z').match
199.79 <- r = re.compile('[\udc80-\udcff]').search; purge()
22.62 <- r = re.compile('[^\udc80-\udcff]*\Z').match; purge()
1.12 <- r = pickle.loads(p)
|
|||
| msg170762 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-19 20:01 | |
So by your measurements the short search is the clear winner? |
|||
| msg170763 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 20:09 | |
Yes, however it has a startup cost that the function that returns re.search(short_regex, s) and the one with functools.partial don't have, because with those the compilation happens at the first call.

If we use one of these two, the startup time will be reduced a lot, and the runtime will be ~2x faster. If we use re.compile(short_regex).search, the startup time won't be reduced as much, but the runtime will be ~8x faster. Given that here we are trying to reduce the startup time and not the runtime, I think using one of those two functions is better.

Another possible solution to improve the startup time is trying to optimize _optimize_unicode -- not sure how much can be done there though. |
|||
| msg170765 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-19 20:19 | |
This issue may be about reducing the startup time, but this function is a hot spot in the email package so I would prefer to sacrifice startup time optimization for an increase in speed.
However, given the improvements to import locking in 3.3, what about a self replacing function?
def _has_surrogates(s):
    import email.utils
    f = re.compile('[\udc80-\udcff]').search
    email.utils._has_surrogates = f
    return f(s)
|
|||
| msg170767 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 20:25 | |
That might work. To avoid the overhead of the cache lookup I was thinking about something like

regex = None
def _has_surrogates(s):
    global regex
    if regex is None:
        regex = re.compile(short_regex)
    return regex.search(s)

but I discarded it because it's not very pretty and still has the overhead of the function call and an additional if. Your version solves both problems in a more elegant way. |
|||
| msg170768 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-19 20:28 | |
It passed the email test suite. Patch attached. |
|||
| msg170770 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-19 20:33 | |
It would be better to add/improve the _has_surrogates tests before committing. The patch I attached is also still valid if you want a further speed-up. |
|||
| msg170772 - (view) | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-09-19 21:26 | |
def _has_surrogates(s):
    try:
        s.encode()
        return False
    except UnicodeEncodeError:
        return True

Results:

0.26 <- re.compile(short_regex).search
0.06 <- try encode |
|||
| msg171073 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-23 18:22 | |
I'm really not willing to inline any of those pre-compiled regular expressions. They are precompiled because for a program processing lots of email, they are hot spots. We could use the same "compile on demand" dodge on them, though. Can you explain your changes to the |
|||
| msg171074 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-23 18:23 | |
Woops. Can you explain your changes to the ecre regex (keeping in mind that I don't know much about regex syntax). |
|||
| msg171075 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-23 18:24 | |
Oh, yeah, and the encode benchmark is very instructive, thanks Serhiy :) |
|||
| msg171076 - (view) | Author: Ezio Melotti (ezio.melotti) * (Python committer) | Date: 2012-09-23 18:53 | |
> They are precompiled because for a program processing lots of email,
> they are hot spots.

OK, I didn't know they were hot spots. Note that the regexes are not recompiled every time: they are compiled the first time and then taken from the cache (assuming they don't fall out from the bottom of the cache). This still has a small overhead though.

> Can you explain your changes to the ecre regex (keeping in mind
> that I don't know much about regex syntax).

- (?P<charset>[^?]*?) # non-greedy up to the next ? is the charset
+ (?P<charset>[^?]*) # up to the next ? is the charset
\? # literal ?
(?P<encoding>[qb]) # either a "q" or a "b", case insensitive
\? # literal ?
- (?P<encoded>.*?) # non-greedy up to the next ?= is the encoded string
+ (?P<encoded>[^?]*) # up to the next ?= is the encoded string
\?= # literal ?=

At the beginning, the non-greedy *? is unnecessary because [^?]* already stops at the first ? found. The second change might actually be wrong if <encoded> is allowed to contain lone '?'s. The original regex used '.*?\?=', which means "match everything (including lone '?'s) until the first '?='"; mine means "match everything until the first '?'", which works fine as long as lone '?'s are not allowed.

Serhiy's suggestion is semantically different, but it might still be suitable if having _has_surrogates return True even for surrogates not in the range \udc80-\udcff is OK. |
|||
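Both observations about the pattern fragments can be checked directly with a simplified pattern (this is an illustrative reduction, not the real ecre from email.header):

```python
import re

word = "=?utf-8?q?hello?="

# 1. For the charset, [^?]*? and [^?]* behave identically: the
#    negated class already stops at the first '?', so the non-greedy
#    modifier is redundant.
assert re.match(r'=\?([^?]*?)\?', word).group(1) == "utf-8"
assert re.match(r'=\?([^?]*)\?', word).group(1) == "utf-8"

# 2. For the encoded text, '.*?\?=' and '[^?]*\?=' differ when the
#    payload contains a stray '?': the former skips past it to the
#    real '?=' terminator, the latter fails to match at all.
broken = "=?utf-8?q?what??="
assert re.match(r'=\?[^?]*\?q\?(.*?)\?=', broken).group(1) == "what?"
assert re.match(r'=\?[^?]*\?q\?([^?]*)\?=', broken) is None
```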
| msg171077 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2012-09-23 19:06 | |
Well, "other" surrogates will cause a different error later than with the current _has_surrogates logic, but it won't be any more mysterious than what would happen now, I think. Normally, if I understand correctly, other surrogates should never occur, so I don't think it is a real issue. Yes, lone '?'s should not stop the pattern match in an encoded string. Even though I don't think they are normally supposed to occur, they do occur when encoded words are encoded incorrectly, and we get a better error recovery result if we look for ?= as the end. |
|||
| msg191910 - (view) | Author: Roundup Robot (python-dev) (Python triager) | Date: 2013-06-26 16:06 | |
New changeset 520490c4c388 by R David Murray in branch 'default':
#11454: Reduce email module load time, improve surrogate check efficiency.
http://hg.python.org/cpython/rev/520490c4c388 |
|||
| msg191911 - (view) | Author: R. David Murray (r.david.murray) * (Python committer) | Date: 2013-06-26 16:09 | |
I've checked in the encode version of the method. I'm going to pass on doing the other inlines, given that the improvement isn't that large. I will, however, keep the issue in mind as I make other changes to the code, and there will be a general performance review phase when I get done with the API additions/bug fixing in the email6 project. |
|||
| History | | | |
|---|---|---|---|
| Date | User | Action | Args |
| 2022-04-11 14:57:14 | admin | set | github: 55663 |
| 2013-06-26 16:09:07 | r.david.murray | set | status: open -> closed; resolution: fixed; messages: + msg191911; stage: patch review -> resolved |
| 2013-06-26 16:06:34 | python-dev | set | nosy: + python-dev; messages: + msg191910 |
| 2013-03-14 07:54:25 | ezio.melotti | set | stage: patch review; versions: + Python 3.4, - Python 3.3 |
| 2012-09-23 19:06:14 | r.david.murray | set | messages: + msg171077 |
| 2012-09-23 18:53:18 | ezio.melotti | set | messages: + msg171076 |
| 2012-09-23 18:24:56 | r.david.murray | set | messages: + msg171075 |
| 2012-09-23 18:23:00 | r.david.murray | set | messages: + msg171074 |
| 2012-09-23 18:22:09 | r.david.murray | set | messages: + msg171073 |
| 2012-09-20 21:31:21 | Arfrever | set | nosy: + Arfrever |
| 2012-09-19 21:26:28 | serhiy.storchaka | set | messages: + msg170772 |
| 2012-09-19 20:33:56 | ezio.melotti | set | messages: + msg170770 |
| 2012-09-19 20:28:29 | r.david.murray | set | files: + email_import_speedup.patch; messages: + msg170768 |
| 2012-09-19 20:25:34 | ezio.melotti | set | messages: + msg170767 |
| 2012-09-19 20:19:37 | r.david.murray | set | messages: + msg170765 |
| 2012-09-19 20:09:49 | ezio.melotti | set | messages: + msg170763 |
| 2012-09-19 20:01:37 | r.david.murray | set | messages: + msg170762 |
| 2012-09-19 19:58:01 | ezio.melotti | set | files: + issue11454_benchmarks.py; messages: + msg170761 |
| 2012-09-19 17:13:53 | ezio.melotti | set | files: - issue11454_surr1.py |
| 2012-09-19 17:13:47 | ezio.melotti | set | files: - issue11454_surr1.py |
| 2012-09-19 09:40:45 | serhiy.storchaka | set | messages: + msg170718 |
| 2012-09-19 09:29:38 | serhiy.storchaka | set | messages: + msg170715 |
| 2012-09-19 09:03:20 | serhiy.storchaka | set | messages: + msg170714 |
| 2012-09-19 08:49:56 | ezio.melotti | set | files: + issue11454_surr1.py; messages: + msg170713 |
| 2012-09-19 08:32:57 | serhiy.storchaka | set | nosy: + serhiy.storchaka; messages: + msg170712 |
| 2012-09-19 02:46:49 | ezio.melotti | set | messages: + msg170697 |
| 2012-09-16 13:31:28 | r.david.murray | set | messages: + msg170553 |
| 2012-09-16 05:28:06 | ezio.melotti | set | files: + issue11454_surr1.py; messages: + msg170549 |
| 2012-09-16 04:29:05 | r.david.murray | set | messages: + msg170546 |
| 2012-09-16 04:23:51 | ezio.melotti | set | files: + issue11454.diff; nosy: + ezio.melotti; messages: + msg170545; keywords: + patch |
| 2011-03-11 14:12:19 | rosslagerwall | set | nosy: loewis, barry, orsenthil, nadeem.vawda, r.david.murray, rosslagerwall; title: urllib.request import time -> email.message import time |
| 2011-03-10 21:04:39 | pitrou | set | nosy: + orsenthil |
| 2011-03-10 16:09:26 | nadeem.vawda | set | nosy: + nadeem.vawda |
| 2011-03-10 13:40:17 | rosslagerwall | set | nosy: + barry, r.david.murray, - orsenthil; messages: + msg130505 |
| 2011-03-10 04:08:10 | rosslagerwall | set | nosy: loewis, orsenthil, rosslagerwall; messages: + msg130487 |
| 2011-03-10 02:58:12 | loewis | set | nosy: + loewis; messages: + msg130483 |
| 2011-03-09 20:07:16 | rosslagerwall | create | |