Message204959

Author: ncoghlan
Recipients: Balthazar.Rouberol, antlong, barry, docs@python, eric.araujo, ezio.melotti, georg.brandl, hhas, jleedev, kousu, loewis, ncoghlan, pitrou, r.david.murray, serhiy.storchaka
Date: 2013-12-01 20:57:46
Message-id: <CADiSq7enVZO-exTcMQ_420TFFJdx=QPPrHTw-mt1j7=PoH8xmA@mail.gmail.com>
In-reply-to: <1385912383.26.0.603339173108.issue10976@psf.upfronthosting.co.za>
Content:
json.bytes would also work for me. It wouldn't need to replicate the full
main module API, just combine the text transform with UTF-8 encoding and
decoding (as well as autodetected UTF-16 and UTF-32 decoding) for the four
main functions (dump[s], load[s]).
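
Roughly the sort of thing I mean (just a sketch; the _detect_encoding
helper name and the signatures here are illustrative, not a worked-out
API; the detection rule is the RFC 4627 one):

    import json

    def _detect_encoding(data):
        # RFC 4627: the first two characters of a JSON text are always
        # ASCII, so the pattern of NUL bytes at the start of the data
        # identifies the encoding (BOM handling omitted for brevity).
        if len(data) >= 4:
            if data[0] == 0 and data[1] == 0:
                return 'utf-32-be'   # 00 00 00 xx
            if data[0] == 0:
                return 'utf-16-be'   # 00 xx 00 xx
            if data[1] == 0 and data[2] == 0:
                return 'utf-32-le'   # xx 00 00 00
            if data[1] == 0:
                return 'utf-16-le'   # xx 00 xx 00
        elif len(data) >= 2:
            if data[0] == 0:
                return 'utf-16-be'
            if data[1] == 0:
                return 'utf-16-le'
        return 'utf-8'

    def dumps(obj, **kwargs):
        # The *en*coding side is UTF-8 only, per the above.
        return json.dumps(obj, **kwargs).encode('utf-8')

    def loads(data, **kwargs):
        return json.loads(data.decode(_detect_encoding(data)), **kwargs)

    # dump() and load() would wrap binary file objects the same way.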
If people want UTF-16 and UTF-32 *en*coding (which seems to be rarely used
in combination with JSON), then they can invoke the text transform version
directly and then do a separate encoding step.
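
For UTF-16, that separate step is just a one-liner on top of the existing
text API (again purely illustrative):

    import json
    payload = json.dumps({'message': 'hello'}).encode('utf-16')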