Message215857
Author: josh.r
Recipients: garybernhardt, gdr@garethrees.org, josh.r, mark.dickinson, ncoghlan, pitrou, skrah, vstinner
Date: 2014-04-09 23:56:17
Message-id: <1397087778.18.0.671199308204.issue20539@psf.upfronthosting.co.za>
Content:
I just think it's a little odd to make math.factorial uniquely sensitive to the documented definition of OverflowError. Basically every one of the non-masking PyLong_As<CType> interfaces uses OverflowError for values that are too large.
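To illustrate the sort of behavior I mean (a sketch of an interactive session, not verbatim output; the exact messages vary by version and platform, and this assumes a 64-bit build where factorial's argument goes through a C long conversion):

    >>> import math
    >>> math.factorial(2**63)  # argument doesn't fit in the C long the implementation converts to
    Traceback (most recent call last):
      ...
    OverflowError: Python int too large to convert to C long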
In fact, all of the functions that convert to unsigned C types, aside from those that reject negative values for "program logic" reasons (e.g. the shift operators reject a negative shift count because it has no clear meaning, and long_to_bytes doesn't accept a negative length for the output), use OverflowError when a negative value is passed as well.
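A quick sketch of that distinction (again approximate output, using int.to_bytes as the unsigned-conversion case and a negative shift as the "program logic" case):

    >>> (-1).to_bytes(1, 'big')  # negative value fed into an unsigned conversion
    Traceback (most recent call last):
      ...
    OverflowError: can't convert negative int to unsigned
    >>> 1 << -1  # rejected on semantic grounds, not as a conversion failure
    Traceback (most recent call last):
      ...
    ValueError: negative shift count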
The PyArg_ParseTuple* functions raise OverflowError as well, either implicitly (when they call a PyLong_As<CType> function), or explicitly for those functions that impose more restrictive ranges than the PyLong function they wrap.
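For instance (approximate again, and assuming a function like os.close, whose fd argument is parsed as a C int):

    >>> import os
    >>> os.close(2**100)  # raises during argument parsing, before anything is closed
    Traceback (most recent call last):
      ...
    OverflowError: Python int too large to convert to C int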
Long story short: the majority of C extension functions that convert to C-level types raise OverflowError to mean "out of range for the implementation's chosen C type" (whether too large, too small, or negative when only unsigned values are accepted) as a side effect of using Python's documented APIs. Making math.factorial the exception would violate the reasonable expectation one might get from using similar functions.