Message290808
Author: tim.peters
Recipients: Thomas Wouters, gregory.p.smith, serhiy.storchaka, tim.peters, twouters, vstinner
Date: 2017-03-30 05:26:31
Message-id: <1490851591.55.0.193286092508.issue29941@psf.upfronthosting.co.za>
Content:
I think we should certainly support asserts regardless of whether Py_DEBUG is in force (although Py_DEBUG should imply asserts run too).
And I wish you had stuck to just that much ;-) The argument against, e.g., 'assert(!PyErr_Occurred())', seems exceedingly weak. An `assert()` is to catch things that are never supposed to happen. It's an error in the implementation if such a thing ever does happen. But whether that error is in the Python core or an external C extension is a distinction that only matters for assigning blame - it's "an error" all the same. It's nothing but good to catch errors ASAP.
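For concreteness, a minimal sketch of that kind of cheap entry-point assert; `do_some_work` is just a made-up name for illustration:

    #include <assert.h>
    #include "Python.h"

    /* Hypothetical helper: like many C-level entry points, it only makes
       sense to call this while no exception is set.  The assert costs one
       thread-state load plus a pointer test, so it can stay enabled even
       outside Py_DEBUG builds. */
    static PyObject *
    do_some_work(PyObject *arg)
    {
        assert(!PyErr_Occurred());  /* firing here means some caller - core
                                       or extension - forgot to clear or
                                       propagate an exception */
        long value = PyLong_AsLong(arg);
        if (value == -1 && PyErr_Occurred()) {
            return NULL;            /* propagate a genuine new error */
        }
        return PyLong_FromLong(value + 1);
    }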
Where I draw a hard distinction between assertions and Py_DEBUG is along the "expensive?" axis. The more assertions the merrier, but they better be cheap (and `PyErr_Occurred()` is pretty cheap).
Py_DEBUG does all sorts of stuff that's expensive and intrusive - that's for heavy-duty verification.
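A sketch of that split, with the always-on cheap assert next to an expensive check that only a Py_DEBUG build should pay for; `heavy_consistency_check` and `check_state` are made-up names:

    #include <assert.h>
    #include "Python.h"

    #ifdef Py_DEBUG
    /* Hypothetical O(n) walk over a list's items - the kind of heavy
       verification that belongs only in Py_DEBUG builds. */
    static int
    heavy_consistency_check(PyObject *list)
    {
        for (Py_ssize_t i = 0; i < PyList_GET_SIZE(list); i++) {
            if (PyList_GET_ITEM(list, i) == NULL) {
                return 0;
            }
        }
        return 1;
    }
    #endif

    static void
    check_state(PyObject *list)
    {
        /* Cheap: one thread-state load and a pointer test; fine to keep
           in any build where asserts run at all. */
        assert(!PyErr_Occurred());
    #ifdef Py_DEBUG
        /* Expensive: reserved for heavy-duty Py_DEBUG verification. */
        assert(heavy_consistency_check(list));
    #endif
    }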
So, to me, 'assert(!PyErr_Occurred())' is fine - it's cheap and catches an error at a point where catching it is possible. Finding the true cause for why the error is set may be arbitrarily more expensive, so _that_ code belongs under Py_DEBUG. Except there is no general way to do that, so no such code exists ;-)