Message293786

Author:     vstinner
Recipients: georg.brandl, josephgordon, martin.panter, meador.inge, serhiy.storchaka, vstinner
Date:       2017-05-16 21:02:54
Message-id: <1494968574.68.0.617507108233.issue25324@psf.upfronthosting.co.za>

Content:
> I would fix this by making tokenize.tok_name a copy. It looks like this behaviour dates back to 1997 (see revision 1efc4273fdb7).
token.tok_name is part of the public Python API:
https://docs.python.org/dev/library/token.html#token.tok_name
whereas tokenize.tok_name isn't documented, so I dislike having two disconnected mappings. I prefer to add the tokenize-specific tokens directly in Lib/token.py, and then get COMMENT, NL and ENCODING using tok_name.index().