Message 117538
| Author | Brian.Bossé |
| Recipients | Brian.Bossé |
| Date | 2010-09-28 17:31:02 |
| SpamBayes Score | 3.6743444e-09 |
| Marked as misclassified | No |
| Message-id | <1285695065.99.0.439423855227.issue9974@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content |
Executing the following code against a .py file that contains line continuations triggers an assertion:

    import tokenize
    foofile = open(filename, "r")
    tokenize.untokenize(list(tokenize.generate_tokens(foofile.readline)))

(Note: the list() call is important due to issue #8478.)
The assertion triggered is:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Python27\lib\tokenize.py", line 262, in untokenize
        return ut.untokenize(iterable)
      File "C:\Python27\lib\tokenize.py", line 198, in untokenize
        self.add_whitespace(start)
      File "C:\Python27\lib\tokenize.py", line 187, in add_whitespace
        assert row <= self.prev_row
    AssertionError
I have tested this on 2.6.5, 2.7, and 3.1.2. The line numbers differ slightly, but the stack is otherwise identical across these versions.
Example input code:

    foo = \
        3
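For convenience, the failure can be reproduced without touching the filesystem by feeding the example above through io.StringIO instead of a file object. This is my own minimal sketch; the try/except guard is only there so the snippet runs to completion on interpreters where the assertion fires:

```python
import io
import tokenize

# the same triggering input as above: a backslash line continuation
SRC = "foo = \\\n    3\n"

def roundtrip(source):
    """Tokenize SOURCE and feed the full token list back to untokenize."""
    toks = list(tokenize.generate_tokens(io.StringIO(source).readline))
    return tokenize.untokenize(toks)

try:
    result = roundtrip(SRC)
except AssertionError:
    # on affected versions, add_whitespace dies here before producing output
    result = None

ns = {}
if result is not None:
    exec(result, ns)
```

On affected interpreters `result` is None; where the round trip works, executing the regenerated source still assigns `foo = 3`.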
If the assert is removed, the generated code is still incorrect. For example, the input:

    foo = 3
    if foo == 5 or \
       foo == 1:
        pass

becomes:

    foo = 3
    if foo == 5 orfoo == 1:
        pass
which, besides losing the line continuation, is functionally incorrect: no whitespace is reconstructed across the continuation, so `or` and `foo` fuse into the single name `orfoo`.
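As a stopgap rather than a fix: dropping the position information forces untokenize onto its two-tuple compatibility path, which never calls add_whitespace. The original layout is lost (untokenize inserts its own spacing), but the output at least remains valid code. A sketch, with my own variable names:

```python
import io
import tokenize

SRC = "foo = \\\n    3\n"
toks = tokenize.generate_tokens(io.StringIO(SRC).readline)

# keep only (type, string); untokenize then takes its compat path,
# inventing its own spacing instead of reconstructing positions
flat = [(tok_type, tok_string) for tok_type, tok_string, _, _, _ in toks]
out = tokenize.untokenize(flat)

ns = {}
exec(out, ns)  # the regenerated code still binds foo to 3
```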
I'm wrapping my head around the functionality of this module and am willing to do the legwork to get a fix in. Ideas on how to go about it are more than welcome.
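As a starting point for discussion, one direction would be for add_whitespace to pad a forward row difference with escaped newlines instead of asserting, then pad columns from the start of the new row. The helper below is my own standalone sketch of that logic, not the module's actual code:

```python
def add_whitespace_fixed(out_tokens, prev_row, prev_col, start):
    """Sketch: pad the gap between the previous token's end
    (prev_row, prev_col) and START, appending padding to OUT_TOKENS."""
    row, col = start
    row_offset = row - prev_row
    if row_offset:
        # one escaped newline per row the next token has advanced
        out_tokens.append("\\\n" * row_offset)
        prev_col = 0  # column padding now counts from the new row's start
    col_offset = col - prev_col
    if col_offset:
        out_tokens.append(" " * col_offset)

# example: previous token ended at (1, 5), next begins at (2, 4)
pad = []
add_whitespace_fixed(pad, 1, 5, (2, 4))
# pad is now ["\\\n", "    "]
```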
Ironic aside: this bug triggers when tokenize.py itself is used as input.
History

| Date | User | Action | Args |
|---|---|---|---|
| 2010-09-28 17:31:06 | Brian.Bossé | set | recipients: + Brian.Bossé |
| 2010-09-28 17:31:05 | Brian.Bossé | set | messageid: <1285695065.99.0.439423855227.issue9974@psf.upfronthosting.co.za> |
| 2010-09-28 17:31:04 | Brian.Bossé | link | issue9974 messages |
| 2010-09-28 17:31:02 | Brian.Bossé | create | |