I'm trying to ensure a single running instance of my script, and shared memory, being a system-managed unique resource, seemed like a good option. Here's the code:
import multiprocessing.shared_memory as share_mem
import sys

def isRunning(name: str) -> bool:
    try:
        # Attach to an existing segment; success means an instance is running
        shm = share_mem.SharedMemory(name, False)
        shm.close()
        return True
    except FileNotFoundError:
        return False

def runInstance(name: str) -> share_mem.SharedMemory:
    # Create a new 1-byte segment to mark this instance as running
    return share_mem.SharedMemory(name, True, 1)

NAME = 'foo.bar'

if isRunning(NAME):
    print('Already launched, exit')
    sys.exit(1)

print('Launching instance')
shm = runInstance(NAME)
print('Running')
input('Press key')
print('Quitting')
shm.close()
shm.unlink()
On Windows it works perfectly. On Debian 11 a mysterious bug happens:
- In terminal 1 I launch python3 si.py; it launches and waits for input.
- In terminal 2 I launch python3 si.py; it reports an instance is running and exits.
- In terminal 2 I launch python3 si.py one more time, and it launches!
- In terminal 1 I press a key and get "FileNotFoundError: no such file /foo.bar".
It looks like the 2nd instance somehow took ownership and closed the shared memory even for the 1st instance. Moreover, on other tries both instances just run fine at the same time (so the two wrong behaviors alternate with each other, but the correct one never happens).

What am I doing wrong, or is it a bug in Python or the OS? I'm using Python 3.9.2 on Debian 11.
UPDATE
Following @ken's answer, here is the isRunning routine that works:
...
import multiprocessing.resource_tracker

def isRunning(name: str) -> bool:
    try:
        shm = share_mem.SharedMemory(name, False)
        # Due to bug https://github.com/python/cpython/issues/82300, the resource
        # tracker would destroy the whole segment when this instance exits, so
        # remove the object from tracking before closing it
        multiprocessing.resource_tracker.unregister(shm._name, 'shared_memory')
        shm.close()
        return True
    except FileNotFoundError:
        return False
However, when I force-close Python to imitate a crash, the resource tracker prints a message to the console and pauses execution until a key is pressed. This is unacceptable for me, as my scripts run in the background, so I have to give up on shared memory.
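To show what I switched to instead: a minimal sketch of the same single-instance check using an abstract Unix domain socket (Linux-only, since the abstract namespace is not available on Windows; the name here is just illustrative, not my exact code):

import socket
import sys

NAME = 'foo.bar'

# A leading NUL byte places the socket in the Linux abstract namespace:
# no filesystem entry is created, and the kernel releases the name
# automatically when the owning process exits, even after a crash.
lock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    lock.bind('0円' + NAME)
except OSError:
    # bind() fails while another instance still holds the name
    print('Already launched, exit')
    sys.exit(1)

print('Running')
input('Press key')
# No explicit cleanup needed; just keep `lock` referenced for the process lifetime.

Because the kernel drops the abstract name the moment the process dies, crash included, there is no resource tracker and no stale state to worry about.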
- It appears to be a known and unresolved bug. This might help. – ken, Aug 9, 2023 at 11:47
- @ken thanks, that was it! I managed to get the scripts working by unregistering the refs to the shared memory. Adding the working version to the original post. – Antonius Hart, Aug 9, 2023 at 12:20
- However, when I force-close Python, the resource tracker prints a message to the console and pauses execution until a key is pressed. This is unacceptable for my purpose, so I think I'll continue with abstract Unix domain sockets. – Antonius Hart, Aug 9, 2023 at 12:34
- In that case, you might also want to try the third-party library posix-ipc (see the sketch after these comments). – ken, Aug 10, 2023 at 4:02
- @ken thanks again, however this is overkill for my purposes and I want my script to be self-contained. So I used an abstract domain socket and solved the issue. Would you mind posting your first comment as an answer so that I could mark it as accepted? – Antonius Hart, Aug 22, 2023 at 11:27
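For anyone who wants to follow ken's posix-ipc suggestion instead, a minimal sketch of the same single-instance check (assuming the third-party posix-ipc package from PyPI; it does no resource tracking of its own, so the CPython bug above does not apply; the segment name is illustrative):

import sys
import posix_ipc

NAME = '/foo.bar'  # POSIX shared memory names conventionally start with '/'

try:
    # O_CREX = O_CREAT | O_EXCL: creation fails if the segment already
    # exists, so the segment itself marks the running instance
    shm = posix_ipc.SharedMemory(NAME, posix_ipc.O_CREX, size=1)
except posix_ipc.ExistentialError:
    print('Already launched, exit')
    sys.exit(1)

print('Running')
input('Press key')
print('Quitting')
shm.close_fd()
shm.unlink()

One caveat: as with any POSIX shared memory, a crashed process leaves the segment behind until it is unlinked or the machine reboots, so a stale segment can block the next launch. That is part of why the abstract-socket approach above suited my case better.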