This issue tracker has been migrated to GitHub and is currently read-only. For more information, see the GitHub FAQs in Python's Developer Guide.
Created on 2020-03-27 17:02 by vstinner, last changed 2022-04-11 14:59 by admin. This issue is now closed.
**Pull Requests**

| URL | Status | Linked |
|---|---|---|
| PR 19196 | closed | vstinner, 2020-03-27 17:04 |
| PR 19195 | merged | vstinner, 2020-03-27 18:00 |
| PR 19416 | merged | vstinner, 2020-04-07 21:31 |
**Messages (8)**

**msg365170** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-03-27 17:02

test_threading.ThreadJoinOnShutdown.test_reinit_tls_after_fork() crashes randomly on AIX in logging._after_at_fork_child_reinit_locks(): https://bugs.python.org/issue40068#msg365028

This function ends by releasing a lock:

```python
_releaseLock() # Acquired by os.register_at_fork(before=.
```

But it is not safe to use a lock after fork (see bpo-6721 and bpo-40089). The purpose of _after_at_fork_child_reinit_locks() is already to fix the locks used by logging handlers: see bpo-36533 and commit 64aa6d2000665efb1a2eccae176df9520bf5f5e6. However, the global logging._lock is not reset after fork.

The attached PR fixes the issue.
**msg365172** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-03-27 17:07

See also bpo-40089 "Add _at_fork_reinit() method to locks".
**msg365197** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-03-28 00:09

By the way, I'm not sure about the error handling code path:

```python
for handler in _at_fork_reinit_lock_weakset:
    try:
        handler.createLock()
    except Exception as err:
        # Similar to what PyErr_WriteUnraisable does.
        print("Ignoring exception from logging atfork", instance,
              "._reinit_lock() method:", err, file=sys.stderr)
```

Is it really a good idea to ignore such an error?

By the way, there is an obvious typo: the name "instance" doesn't exist, it should be "handler" :-)
**msg365952** - Author: melwitt - Date: 2020-04-07 22:13

Hi, I've been following the work related to https://bugs.python.org/issue6721 and https://bugs.python.org/issue40089 because I encountered a problem where a standard library lock was held by a parent process at the time that child processes were forked, so the child processes got stuck behind the inherited held locks.

But, if I'm understanding correctly, this issue is fixing something in Python logging specifically and not all standard library locks in general.

My question is: will there be a way to reinit standard library locks in general using _at_fork_reinit()? That is, should we expect a future fix in Python to do this, or is the recommendation to continue to ensure the application reinits locks during process start if we know the process could be a child? Thanks for your advice.
**msg365959** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-04-07 23:12

> because I encountered a problem where a standard library lock was held by a parent process at the time that child processes were forked, so the child processes got stuck behind the inherited held locks.

Which lock from which module? You wrote details at: https://github.com/python/cpython/pull/19195#issuecomment-609583084

According to your comment #28 at https://bugs.launchpad.net/nova/+bug/1844929 the lock involved in the issue comes from _TransactionFactory of oslo.db: https://github.com/openstack/oslo.db/blob/b903d4e1ee07ef2ec454daa5b8418b3039e02774/oslo_db/sqlalchemy/enginefacade.py#L189

So it's a bug in oslo.db, not in Python. Python doesn't provide any machinery to automatically reinitialize all locks created in Python at fork in the child process. os.register_at_fork() must be used explicitly.

> But, if I'm understanding correctly, this issue is fixing something in python logging specifically and not all standard library locks in general.

This issue is specific to logging.

> My question is, will there be a way to reinit standard library locks in general using _at_fork_reinit()? That is, should we expect a future fix in python to do this or is the recommendation to continue to ensure the application reinits locks during process start if we know the process could be a child?

Each module has to set up an os.register_at_fork() callback to reinitialize its locks. This is done by the threading and logging modules, for example. The multiprocessing module has its own util.register_after_fork() machinery (see bpo-40221).
**msg365968** - Author: melwitt - Date: 2020-04-08 01:58

Thank you for explaining those details and pointing me in the right direction about the proper way to solve the problem in oslo.db.

We still have to support Python 3.6, so for now we will still need to do something different (in nova) to handle this (clearing our cache of oslo.db _TransactionContextManager during oslo.service start() is what I have proposed). For those running Python 3.7 and newer, I will try to fix the situation more generally in oslo.db with os.register_at_fork() using the examples you gave.

Thank you again for your help, I have learned a lot.
**msg366338** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-04-13 22:25

New changeset 4c3da783cffb8471303fbae3e09f3d67b31c3d06 by Victor Stinner in branch 'master': bpo-40091: Fix a hang at fork in the logging module (GH-19416) https://github.com/python/cpython/commit/4c3da783cffb8471303fbae3e09f3d67b31c3d06
**msg366339** - Author: STINNER Victor (vstinner) (Python committer) - Date: 2020-04-13 22:29

I merged my fix into master. The PR 19196 fix could have been backported to 3.8, but sadly I had to reject that PR since it introduced new problems :-( That approach (creating a new lock at fork) doesn't work :-(

So I'm not sure how to fix the issue on 3.8 without backporting the new _at_fork_reinit() method, but I would prefer not to backport it since I see it as a new feature.
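The _at_fork_reinit() method mentioned here (added to lock objects in Python 3.9 by bpo-40089) resets a lock to the unlocked state in place, without trying to release it. It is a private CPython API, so the sketch below assumes Python 3.9 or newer and is for illustration only:

```python
import threading

lock = threading.Lock()
lock.acquire()          # simulate a lock held by the parent at fork time

# Private API added in Python 3.9 (bpo-40089): reinitialize the lock so a
# forked child can use it even though the parent held it at fork time.
# Do not rely on this method outside CPython internals.
lock._at_fork_reinit()
```

Because it reinitializes rather than releases, it avoids the undefined behavior of unlocking a lock from a thread that never acquired it, which is why backporting it was the sticking point for the 3.8 fix.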
**History**

| Date | User | Action | Args |
|---|---|---|---|
| 2022-04-11 14:59:28 | admin | set | github: 84272 |
| 2020-04-13 22:29:10 | vstinner | set | status: open -> closed; resolution: fixed; stage: patch review -> resolved |
| 2020-04-13 22:29:04 | vstinner | set | messages: + msg366339 |
| 2020-04-13 22:25:40 | vstinner | set | messages: + msg366338 |
| 2020-04-08 01:58:43 | melwitt | set | messages: + msg365968 |
| 2020-04-07 23:12:37 | vstinner | set | messages: + msg365959 |
| 2020-04-07 22:13:17 | melwitt | set | nosy: + melwitt; messages: + msg365952 |
| 2020-04-07 21:31:10 | vstinner | set | pull_requests: + pull_request18776 |
| 2020-03-28 00:09:08 | vstinner | set | messages: + msg365197 |
| 2020-03-27 18:00:32 | vstinner | set | pull_requests: + pull_request18559 |
| 2020-03-27 17:07:35 | vstinner | set | messages: + msg365172 |
| 2020-03-27 17:05:12 | vstinner | set | nosy: + gregory.p.smith, vinay.sajip |
| 2020-03-27 17:04:23 | vstinner | set | keywords: + patch; stage: patch review; pull_requests: + pull_request18556 |
| 2020-03-27 17:02:43 | vstinner | create | |