Message149491

| Field | Value |
|---|---|
| Author | ncoghlan |
| Recipients | akuchling, docs@python, gpolo, ncoghlan, terry.reedy |
| Date | 2011-12-15 03:16:52 |
| SpamBayes Score | 0.00021434078 |
| Marked as misclassified | No |
| Message-id | <1323919013.83.0.854406512576.issue2134@psf.upfronthosting.co.za> |
| In-reply-to | |
Content:

Sure, but what does that have to do with anything? tokenize isn't a general-purpose tokenizer; it's specifically for tokenizing Python source code.
The *problem* is that it doesn't currently fully tokenize everything, and the module documentation doesn't explicitly say so.
Hence my proposed two-fold fix: document the current behaviour explicitly, and also add a separate "exact_type" attribute for easy access to the detailed tokenization without doing your own string comparisons.
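The `exact_type` attribute proposed above was eventually added to `tokenize.TokenInfo` in Python 3.3. A minimal sketch of the distinction it draws: all operator and delimiter tokens share the generic type `OP`, while `exact_type` identifies the specific operator, so callers no longer need their own string comparisons.

```python
import io
import tokenize

# All operator/delimiter tokens share the generic type tokenize.OP;
# exact_type distinguishes the specific operator (PLUS, STAR, ...).
source = "a + b * 2"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    if tok.type == tokenize.OP:
        # Without exact_type you would have to compare tok.string yourself.
        print(tok.string, tokenize.tok_name[tok.type], tokenize.tok_name[tok.exact_type])
```

Running this prints `+ OP PLUS` and `* OP STAR`, showing the coarse tokenization (`OP`) alongside the detailed one (`PLUS`, `STAR`).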
History

| Date | User | Action | Args |
|---|---|---|---|
| 2011-12-15 03:16:53 | ncoghlan | set | recipients: + ncoghlan, akuchling, terry.reedy, gpolo, docs@python |
| 2011-12-15 03:16:53 | ncoghlan | set | messageid: <1323919013.83.0.854406512576.issue2134@psf.upfronthosting.co.za> |
| 2011-12-15 03:16:53 | ncoghlan | link | issue2134 messages |
| 2011-12-15 03:16:52 | ncoghlan | create | |