Message105302
Author: stutzbach
Recipients: gvanrossum, lemburg, loewis, r.david.murray, scoder, stutzbach, vstinner, zooko
Date: 2010-05-08 15:48:38
SpamBayes Score: 0.0005567773
Marked as misclassified: No
Message-id: <m2reae285401005080848r8c64cb57m2b74b3001cbf1f06@mail.gmail.com>
In-reply-to: <4BE58035.9040604@egenix.com>
Content:
On Sat, May 8, 2010 at 10:16 AM, Marc-Andre Lemburg
<report@bugs.python.org> wrote:
> Are you sure this doesn't get optimized away in practice ?

I'm sure it doesn't get optimized away by gcc 4.3, where I tested it. :)

> Sure, though, I don't see how this relates to C code relying
> on these details, e.g. a C extension will probably use different
> conversion code depending on whether UCS2 or UCS4 is compatible
> with some external library, etc.

Can you give an example?

All of the examples I can think of either:
- poke into PyUnicodeObject's internals, or
- call a Python function that exposes Py_UNICODE or PyUnicodeObject.

I'm explicitly trying to protect those two cases. It's quite possible
that I'm missing something, but I can't think of any other unsafe way
for a C extension to convert a Python Unicode object to a byte string.
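
To make those two cases concrete, here is a rough sketch against the
narrow/wide Py_UNICODE C API of this era; the external_lib_consume_*
calls are hypothetical stand-ins for whatever library the extension is
feeding:

#include <Python.h>

/* Case 1: poking into PyUnicodeObject's internals directly. */
static void
inspect_internals(PyObject *obj)
{
    PyUnicodeObject *u = (PyUnicodeObject *)obj;
    /* u->str is a Py_UNICODE buffer whose element size depends on
       whether the interpreter was built with UCS-2 or UCS-4. */
    Py_UNICODE *buf = u->str;
    Py_ssize_t len = u->length;
    /* ... hand (buf, len * Py_UNICODE_SIZE) to an external library ... */
    (void)buf; (void)len;
}

/* Case 2: calling an API function/macro that exposes Py_UNICODE. */
static void
via_api(PyObject *obj)
{
    Py_UNICODE *buf = PyUnicode_AS_UNICODE(obj);
    Py_ssize_t len = PyUnicode_GET_SIZE(obj);
#if defined(Py_UNICODE_WIDE)
    /* UCS-4 build: each code point is 4 bytes. */
    /* external_lib_consume_ucs4(buf, len);   -- hypothetical */
#else
    /* UCS-2 build: each code unit is 2 bytes. */
    /* external_lib_consume_ucs2(buf, len);   -- hypothetical */
#endif
    (void)buf; (void)len;
}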