Message 126525
Author:      jnoller
Recipients:  Thorney, bobbyi, gdb, jnoller, mattheww
Date:        2011-01-19 14:22:07
Message-id:  <AANLkTimVnm36dHo2PbmObT7S8+1Mye8_ErsmqvGCewew@mail.gmail.com>
In-reply-to: <1295393003.13.0.986995634168.issue4106@psf.upfronthosting.co.za>
Content:
On Tue, Jan 18, 2011 at 6:23 PM, Brian Thorne <report@bugs.python.org> wrote:
>
> Brian Thorne <hardbyte@gmail.com> added the comment:
>
> With the example script attached I see the exception every time. On Ubuntu 10.10 with Python 2.6
>
> Since the offending line in multiprocessing/queues.py (line 233) is a debug statement, just commenting it out seems to stop this exception.
>
> Looking at the util file shows the logging functions to be all of the form:
>
>     if _logger:
>         _logger.log(...)
>
> Could it be possible that after the check the _logger global (or the debug function) is destroyed by the exit handler? Can we convince them to stick around until such a time that they cannot be called?
>
> Adding a small delay before joining also seems to work, but is ugly. Why should another Process *have* to do a minimum amount of work to avoid throwing an exception?
See http://bugs.python.org/issue9207 - but yes, the problem is that
the VM is nuking our imported modules before all the processes have
shut down.
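A minimal sketch of the check-then-use race described in the thread, plus one defensive variant. The function names `debug_racy` and `debug_safe` are illustrative, not the actual `multiprocessing.util` code; the point is that binding the module global to a local name once means the call site cannot see the global rebound to None between the check and the `.log()` call.

```python
import logging

# Stands in for multiprocessing.util._logger, which an exit handler may
# rebind to None while other processes are still shutting down.
_logger = logging.getLogger("sketch")

def debug_racy(msg):
    # The pattern quoted above: if the global is cleared after this truth
    # check but before the .log() call, the call raises at shutdown.
    if _logger:
        _logger.log(logging.DEBUG, msg)

def debug_safe(msg):
    # Capture the global once; the local reference remains usable even if
    # the module-level name is rebound to None during teardown.
    logger = _logger
    if logger is not None:
        logger.log(logging.DEBUG, msg)
```

Note this only removes the window between the check and the call within one function; it does not stop the interpreter from tearing down the logging machinery itself, which is the broader problem tracked in issue 9207.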