Message149673
Author: neologix
Recipients: Ramchandra Apte, mark.dickinson, neologix, phillies, pitrou
Date: 2011-12-17.16:21:29
SpamBayes Score: 1.730333e-08
Marked as misclassified: No
Message-id: <1324138890.47.0.844223294241.issue13555@psf.upfronthosting.co.za>
In-reply-to:
Content:
> So it seems unlikely to be the explanation.
Victor reproduced it on IRC, and it's indeed an overflow.
The problematic code is in readline_file:
"""
bigger = self->buf_size << 1;
if (bigger <= 0) { /* overflow */
PyErr_NoMemory();
return -1;
}
newbuf = (char *)realloc(self->buf, bigger);
if (!newbuf) {
PyErr_NoMemory();
return -1;
}
"""
self->buf_size is an int, which overflows pretty easily.
>>> 196 * 240000
47040000
>>> 196 * 240000 * 8 # assuming 8 bytes per float
376320000
>>> 2**31
2147483648
Hmmm... A byte is 8 bits, which gives:
>>> 196 * 240000 * 8 * 8
3010560000L
>>> 196 * 240000 * 8 * 8 > 2**31
True
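To make the limit concrete, here is a small standalone C snippet (illustrative only, not the cPickle code path) showing where doubling an int-sized buffer stops being safe, next to the data size computed above:

"""
/* Illustrative arithmetic only: at what buffer size does `buf_size << 1`
   stop fitting in a 32-bit int? */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Doubling is only safe while buf_size <= INT_MAX / 2 (= 2**30 - 1). */
    printf("INT_MAX     = %d\n", INT_MAX);
    printf("INT_MAX / 2 = %d\n", INT_MAX / 2);
    /* The pickle above needs a buffer of roughly 3e9 bytes, so the
       doubling loop crosses that limit long before the file is read. */
    printf("data size   = %lld\n", 196LL * 240000 * 8 * 8);
    return 0;
}
"""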
Now, if it works on your box, it's probably because the compiler optimizes the check away (signed overflow is undefined behaviour, so it may assume the shift never wraps); and since `bigger` is converted to an unsigned 64-bit size_t when realloc() is called, the call happens to work.
Maybe your distro doesn't build python with -fwrapv.
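To illustrate why -fwrapv matters here, a minimal standalone sketch (hypothetical, not the actual CPython build): the test below is done *after* the shift, so it only catches the overflow if signed overflow actually wraps around; without -fwrapv the compiler may assume the shift never overflows and, when it can also prove size is positive, drop the branch entirely.

"""
/* grow.c -- compare `gcc -O2 grow.c` with `gcc -O2 -fwrapv grow.c`;
   only the -fwrapv build is guaranteed to keep the overflow check. */
#include <stdio.h>

static int grow(int size)
{
    int bigger = size << 1;   /* undefined behaviour once size >= 2**30 */
    if (bigger <= 0)          /* post-facto check, relies on wrap-around */
        return -1;
    return bigger;
}

int main(void)
{
    printf("%d\n", grow(1 << 30));   /* -1 with -fwrapv; undefined otherwise */
    return 0;
}
"""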
So, what do you suggest? Should we fix this (Py_ssize_t, overflow check before computation), as in #11564?
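For the record, here is roughly the kind of pre-computation check I have in mind; the helper name is illustrative and this is only a sketch of the Py_ssize_t idea, not the actual patch from #11564:

"""
/* Hypothetical helper: grow a buffer by doubling, with the overflow check
   done on a Py_ssize_t *before* the shift instead of after it. */
#include "Python.h"
#include <stdlib.h>

static int
grow_buffer(char **buf, Py_ssize_t *buf_size)
{
    Py_ssize_t bigger;
    char *newbuf;

    if (*buf_size > PY_SSIZE_T_MAX / 2) {   /* doubling would overflow */
        PyErr_NoMemory();
        return -1;
    }
    bigger = *buf_size << 1;
    newbuf = (char *)realloc(*buf, (size_t)bigger);
    if (newbuf == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    *buf = newbuf;
    *buf_size = bigger;
    return 0;
}
"""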