
Is there any way to make a SharedMemory object created in Python persist between processes?

If the following code is invoked in an interactive Python session:

>>> from multiprocessing import shared_memory
>>> shm = shared_memory.SharedMemory(name='test_smm', size=1000000, create=True)

it creates a file in /dev/shm/ on a Linux machine.

$ ls /dev/shm/test_smm 
/dev/shm/test_smm

But when the python session ends I get the following:

/usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
 warnings.warn('resource_tracker: There appear to be %d 

and the test_smm is gone:

$ ls /dev/shm/test_smm 
ls: cannot access '/dev/shm/test_smm': No such file or directory

So is there any way to make the shared memory object created in Python persist across process runs?

Running with Python 3.8

asked Nov 19, 2020 at 16:11
  • Can you just dump state to a file when your program exits, and load that state when your other process starts up? Commented Nov 19, 2020 at 16:21
  • I cannot. One process writes continuously while another reads from it, so if the writer crashes or exits I need the memory to persist. I can do this if I use sysv_ipc. Commented Nov 19, 2020 at 16:36
  • If you're using NumPy, you could use memory-mapping. If what you want shared is large, you could just write to a *.npy file and read it with numpy.lib.format.open_memmap(), that way you don't take up precious RAM. Commented Dec 20, 2025 at 18:42
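A minimal sketch of the memory-mapped approach suggested in the last comment (assumes NumPy is installed; the filename shared.npy is illustrative):

```python
import numpy as np

# Writer: persist a large array on disk without holding it all in RAM.
arr = np.lib.format.open_memmap('shared.npy', mode='w+',
                                dtype=np.float64, shape=(1000, 1000))
arr[0, :3] = [1.0, 2.0, 3.0]
arr.flush()          # make sure the changes reach the file
del arr

# Reader (possibly a different process): map the same file read-only.
view = np.lib.format.open_memmap('shared.npy', mode='r')
print(view[0, :3])   # [1. 2. 3.]
```

Unlike /dev/shm, the file survives reboots, though access goes through the page cache rather than pure shared memory.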

1 Answer


You can unregister a shared memory object from the resource cleanup process without unlinking it:

$ python3
Python 3.8.6 (default, Sep 25 2020, 09:36:53) 
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing import shared_memory, resource_tracker
>>> shm = shared_memory.SharedMemory(name='test_smm', size=1000000, create=True)
>>> resource_tracker.unregister(shm._name, 'shared_memory')
>>> 
$ ls /dev/shm/test_smm 
/dev/shm/test_smm

I don't know whether this is portable, and it doesn't look like a supported way of using the multiprocessing module, but it works on Linux at least.
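To make this reusable, here's a sketch of a small helper of my own (it relies on the undocumented `shm._name` attribute and `resource_tracker` internals, so it may break in future Python versions), together with a later reattach:

```python
from multiprocessing import shared_memory, resource_tracker

def create_persistent(name, size):
    # Create the block, then tell the resource tracker to forget it so it
    # survives interpreter shutdown (undocumented internals, use with care).
    shm = shared_memory.SharedMemory(name=name, size=size, create=True)
    resource_tracker.unregister(shm._name, 'shared_memory')
    return shm

# First run / writer:
shm = create_persistent('test_smm', 1_000_000)
shm.buf[:5] = b'hello'
shm.close()          # detach from this process; /dev/shm/test_smm remains

# Later run / reader: attach to the existing block by name.
shm2 = shared_memory.SharedMemory(name='test_smm')
print(bytes(shm2.buf[:5]))   # b'hello'
shm2.close()
shm2.unlink()        # explicitly remove /dev/shm/test_smm when done
```

Note that on Python 3.8–3.12 every process that merely attaches also registers the block with its own resource tracker, so readers may need the same `unregister` call to keep the block alive after they exit.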

answered Nov 19, 2020 at 16:45

4 Comments

You're right this seems to be the only way to do it now but there is an issue tracker for Python to resolve this: bugs.python.org/issue38119
@Diego Veralli how did you discover this module? It seems there is no documentation in docs.python.org for resource_tracker.
@andreihondrari I had a quick look at the source code for the shared_memory module.
From the docs: "When one process no longer needs access to a shared memory block that might still be needed by other processes, the close() method should be called." So maybe add shm.close() for completeness?
