

Author meador.inge
Recipients benjamin.peterson, brett.cannon, eric.snow, gregory.p.smith, meador.inge, ncoghlan, tzickel
Date 2015-11-01 21:44:28
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1446414268.32.0.533566047206.issue25083@psf.upfronthosting.co.za>
In-reply-to
Content
I started poking at the patch a little and have a few comments.
My understanding of the issue comments is that the read error actually happens when reading in the *source* file and *not* the bytecode file. This happens because 'ferror' is not checked after receiving an EOF and thus we think we just have an empty source file. I can understand how creating a reproducible test case for this error path would be very difficult.
So, checking for errors with 'ferror' definitely seems reasonable, but why do it in the tokenizer code? I already see several places in 'fileobject.c' that do similar checks. For example, in 'get_line' I see:
    while (buf != end && (c = GETC(fp)) != EOF) {
        ...
    }
    if (c == EOF) {
        if (ferror(fp) && errno == EINTR) {
            ...
        }
    }
As such, wouldn't handling this error case directly in 'Py_UniversalNewlineFgets', similarly to the above code, be more appropriate?
History
Date User Action Args
2015-11-01 21:44:28  meador.inge  set  recipients: + meador.inge, brett.cannon, gregory.p.smith, ncoghlan, benjamin.peterson, eric.snow, tzickel
2015-11-01 21:44:28  meador.inge  set  messageid: <1446414268.32.0.533566047206.issue25083@psf.upfronthosting.co.za>
2015-11-01 21:44:28  meador.inge  link  issue25083 messages
2015-11-01 21:44:28  meador.inge  create
