I have a python program where I need to load and de-serialize a 1GB pickle file. It takes a good 20 seconds and I would like to have a mechanism whereby the content of the pickle is readily available for use. I've looked at shared_memory but all the examples of its use seem to involve numpy and my project doesn't use numpy. What is the easiest and cleanest way to achieve this using shared_memory or otherwise?
This is how I'm loading the data now (on every run):
def load_pickle(pickle_name):
    return pickle.load(open(DATA_ROOT + pickle_name, 'rb'))
I would like to be able to edit the simulation code in between runs without having to reload the pickle. I've been messing around with importlib.reload, but it really doesn't seem to work well for a large Python program with many files:
def main():
    data_manager.load_data()
    run_simulation()
    while True:
        try:
            importlib.reload(simulation)
            run_simulation()
        except:
            print(traceback.format_exc())
        print('Press enter to re-run main.py, CTRL-C to exit')
        sys.stdin.readline()
9 Answers
This could be an XY problem, the source of which is the assumption that you must use pickles at all; they're just awful to deal with due to how they manage dependencies, and are fundamentally a poor choice for any long-term data storage because of it.
The source financial data is almost certainly in some tabular form to begin with, so it may be possible to request it in a friendlier format.
A simple middleware to deserialize and reserialize the pickles in the meantime will smooth the transition:
input -> load pickle -> write -> output
Converting your workflow to use Parquet or Feather, which are designed to be efficient to read and write, will almost certainly make a considerable difference to your load speed.
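For instance, a minimal sketch of the one-time conversion, assuming the pickled object is (or can be turned into) a pandas DataFrame; the file names here are placeholders:

import pandas as pd

# one-time migration: read the legacy pickle, write a columnar copy
df = pd.read_pickle('data.pkl')
df.to_parquet('data.parquet')  # requires pyarrow or fastparquet

# subsequent runs read the Parquet file instead, typically much faster
df = pd.read_parquet('data.parquet')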
Further relevant links
- Answer to How to reversibly store and load a Pandas dataframe to/from disk
- What are the pros and cons of parquet format compared to other formats?
You may also be able to achieve this with hickle, which will internally use an HDF5 format, ideally making it significantly faster than pickle while still behaving like it.
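A minimal sketch of the drop-in usage, assuming hickle can serialize your object types (data is your unpickled object; 'data.hkl' is a placeholder name):

import hickle as hkl

# same dump/load shape as pickle, but HDF5 under the hood
hkl.dump(data, 'data.hkl', mode='w')
data = hkl.load('data.hkl')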
An alternative to storing the unpickled data in memory would be to store the pickle in a ramdisk, so long as most of the time overhead comes from disk reads. Example code (to run in a terminal) is below.
sudo mkdir /mnt/pickle
sudo mount -o size=1536M -t tmpfs none /mnt/pickle
cp path/to/pickle.pkl /mnt/pickle/pickle.pkl
Then you can access the pickle at /mnt/pickle/pickle.pkl. Note that you can change the file names and extensions to whatever you want. If disk read is not the biggest bottleneck, you might not see a speed increase. If you run out of memory, you can try turning down the size of the ramdisk (I set it at 1536 MB, or 1.5 GB).
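The loading code itself doesn't change; it just reads from the tmpfs copy:

import pickle

# same load as before, but the file now lives in RAM-backed tmpfs
data = pickle.load(open('/mnt/pickle/pickle.pkl', 'rb'))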
You can use a ShareableList: you will have one Python program running which loads the file and keeps it in memory, and another Python program which can take the data from memory. Whatever your data is, you can load it into a dictionary, dump it as JSON, and then reload the JSON. So:
Program1
import pickle
import json
from multiprocessing.managers import SharedMemoryManager

YOUR_DATA = pickle.load(open(DATA_ROOT + pickle_name, 'rb'))
data_dict = {'DATA': YOUR_DATA}
data_dict_json = json.dumps(data_dict)

smm = SharedMemoryManager()
smm.start()
sl = smm.ShareableList(['alpha', 'beta', data_dict_json])
print(sl)
# smm.shutdown()  # commenting out shutdown for now, but you will need to call it eventually
The output will look like this:
#OUTPUT
>>>ShareableList(['alpha', 'beta', "your data in json format"], name='psm_12abcd')
Now in Program2:
from multiprocessing import shared_memory
load_from_mem=shared_memory.ShareableList(name='psm_12abcd')
load_from_mem[1]
#OUTPUT
'beta'
load_from_mem[2]
#OUTPUT
'your data in JSON format'
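To recover the original structure on the consumer side, parse the JSON string from slot 2 (continuing the names used above):

import json

data_dict = json.loads(load_from_mem[2])
YOUR_DATA = data_dict['DATA']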
You can read more here: https://docs.python.org/3/library/multiprocessing.shared_memory.html
Adding another assumption-challenging answer: it could be where you're reading your files from that makes a big difference.
1 GB is not a great amount of data with today's systems; at 20 seconds to load, that's only 50 MB/s, which is a fraction of what even the slowest disks provide.
You may find you actually have a slow disk or some type of network share as your real bottleneck, and that changing to a faster storage medium or compressing the data (perhaps with gzip) makes a great difference to reading and writing.
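As a quick experiment along those lines, a sketch using the names from the question (data is whatever load_pickle returned):

import gzip
import pickle
import time

# one-time: write a gzip-compressed copy of the pickle
with gzip.open(DATA_ROOT + pickle_name + '.gz', 'wb') as f:
    pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)

# time the compressed load against the plain 20-second read
start = time.perf_counter()
with gzip.open(DATA_ROOT + pickle_name + '.gz', 'rb') as f:
    data = pickle.load(f)
print(f'load took {time.perf_counter() - start:.1f}s')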
Here are my assumptions while writing this answer:
- your financial data is produced after complex operations, and you want the result to persist in memory
- the code that consumes it must be able to access that data fast
- you wish to use shared memory
Here is the code (self-explanatory, I believe).
Data structure
'''
Nested class definitions to simulate complex data
'''

class A:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def get_attr(self):
        return self.name, self.value

    def set_attr(self, n, v):
        self.name = n
        self.value = v


class B(A):
    def __init__(self, name, value, status):
        super(B, self).__init__(name, value)
        self.status = status

    def set_attr(self, n, v, s):
        A.set_attr(self, n, v)
        self.status = s

    def get_attr(self):
        print('\nName : {}\nValue : {}\nStatus : {}'.format(self.name, self.value, self.status))
Producer.py
from multiprocessing import shared_memory as sm
import time
import pickle as pkl
import pickletools as ptool
import sys

from class_defs import B

def main():
    # Data Creation/Processing
    obj1 = B('Sam Reagon', '2703', 'Active')
    # print(sys.getsizeof(obj1))
    obj1.set_attr('Ronald Reagon', '1023', 'INACTIVE')
    obj1.get_attr()

    ###### real deal #########

    # Create pickle string
    byte_str = pkl.dumps(obj=obj1, protocol=pkl.HIGHEST_PROTOCOL, buffer_callback=None)

    # compress the pickle
    # byte_str_opt = ptool.optimize(byte_str)
    byte_str_opt = bytearray(byte_str)

    # place data on shared memory buffer
    shm_a = sm.SharedMemory(name='datashare', create=True, size=len(byte_str_opt))
    buffer = shm_a.buf
    buffer[:] = byte_str_opt[:]
    # print(shm_a.name)  # the string to access the shared memory
    # print(len(shm_a.buf[:]))

    # Just an infinite loop to keep the producer running, like a server;
    # a better approach would be to explore use of a shared memory manager
    while True:
        time.sleep(60)

if __name__ == '__main__':
    main()
Consumer.py
from multiprocessing import shared_memory as sm
import pickle as pkl

from class_defs import B  # we need this so that while unpickling, the object structure is understood

def main():
    shm_b = sm.SharedMemory(name='datashare')
    byte_str = bytes(shm_b.buf[:])  # convert the shared_memory buffer to a bytes array
    obj = pkl.loads(byte_str)  # un-pickle the bytes array (the data argument is positional-only)
    print(obj.name, obj.value, obj.status)  # get the values of the object attributes

if __name__ == '__main__':
    main()
Just run the Producer.py in one terminal and the Consumer.py on another terminal on the same machine.
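One detail the example leaves out is cleanup; as a sketch, when you are done the consumer should close its view and the producer should free the segment:

# Consumer.py, when finished with the data
shm_b.close()    # close this process's view of the buffer

# Producer.py, on shutdown
shm_a.close()
shm_a.unlink()   # free the underlying shared memory segment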
I hope this is what you wanted!
You can take advantage of multiprocessing to run the simulations inside of subprocesses, and leverage the copy-on-write benefits of forking to unpickle/process the data only once at the start:
import multiprocessing
import pickle

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')

# Load data once, in the parent process
data = pickle.load(open(DATA_ROOT + pickle_name, 'rb'))

def _run_simulation(_):
    # Wrapper for `run_simulation` that takes one argument. The function passed
    # into `multiprocessing.Pool.map` must take one argument.
    run_simulation()

with mp.Pool() as pool:
    pool.map(_run_simulation, range(num_simulations))
If you want to parameterize each simulation run, you can do so like so:
import multiprocessing
import pickle

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')

# Load data once, in the parent process
data = pickle.load(open(DATA_ROOT + pickle_name, 'rb'))

with mp.Pool() as pool:
    simulations = ('arg for simulation run', 'arg for another simulation run')
    pool.map(run_simulation, simulations)
This way the run_simulation function will be passed the values from the simulations tuple, which can allow for having each simulation run with different parameters, or even just assigning each run an ID number or name for logging/saving purposes.
This whole approach relies on fork being available. For more information about using fork with Python's built-in multiprocessing library, see the docs about contexts and start methods. You may also want to consider using the forkserver multiprocessing context (by using mp = multiprocessing.get_context('forkserver')) for the reasons described in the docs.
If you don't want to run your simulations in parallel, this approach can be adapted for that. The key thing is that in order to only have to process the data once, you must call run_simulation within the process that processed the data, or one of its child processes.
If, for instance, you wanted to edit what run_simulation does, and then run it again at your command, you could do it with code resembling this:
main.py:
import multiprocessing
from multiprocessing.connection import Connection
import pickle

from data import load_data

# Load/process data in the parent process
load_data()
# Now child processes can access the data nearly instantaneously

# Need to use forking to get copy-on-write benefits!
mp = multiprocessing.get_context('fork')  # Consider using 'forkserver' instead

# This is only ever run in child processes
def load_and_run_simulation(result_pipe: Connection) -> None:
    # Import `run_simulation` here to allow it to change between runs
    from simulation import run_simulation

    # Ensure that simulation has not been imported in the parent process, as if
    # so, it will be available in the child process just like the data!
    try:
        run_simulation()
    except Exception as ex:
        # Send the exception to the parent process
        result_pipe.send(ex)
    else:
        # Send this because the parent is waiting for a response
        result_pipe.send(None)

def run_simulation_in_child_process() -> None:
    result_pipe_output, result_pipe_input = mp.Pipe(duplex=False)
    proc = mp.Process(
        target=load_and_run_simulation,
        args=(result_pipe_input,),
    )
    print('Starting simulation')
    proc.start()
    try:
        # The `recv` below will wait until the child process sends something, or
        # will raise `EOFError` if the child process crashes suddenly without
        # sending an exception (e.g. if a segfault occurs)
        result = result_pipe_output.recv()
        if isinstance(result, Exception):
            raise result  # raise exceptions from the child process
        proc.join()
    except KeyboardInterrupt:
        print("Caught 'KeyboardInterrupt'; terminating simulation")
        proc.terminate()
    print('Simulation finished')

if __name__ == '__main__':
    while True:
        choice = input('\n'.join((
            'What would you like to do?',
            '1) Run simulation',
            '2) Exit\n',
        )))
        if choice.strip() == '1':
            run_simulation_in_child_process()
        elif choice.strip() == '2':
            exit()
        else:
            print(f'Invalid option: {choice!r}')
data.py:
import pickle
from functools import lru_cache

# <obtain 'DATA_ROOT' and 'pickle_name' here>

@lru_cache
def load_data():
    with open(DATA_ROOT + pickle_name, 'rb') as f:
        return pickle.load(f)
simulation.py:
from data import load_data

# This call will complete almost instantaneously if `main.py` has been run
data = load_data()

def run_simulation():
    # Run the simulation using the data, which will already be loaded if this
    # is run from `main.py`.
    # Anything printed here will appear in the output of the parent process.
    # Exceptions raised here will be caught/handled by the parent process.
    ...
The three files detailed above should all be within the same directory, alongside an __init__.py file that can be empty. The main.py file can be renamed to whatever you'd like, and is the primary entry-point for this program. You can run simulation.py directly, but that will result in a long time spent loading/processing the data, which was the problem you ran into initially. While main.py is running, the file simulation.py can be edited, as it is reloaded every time you run the simulation from main.py.
For macOS users: forking on macOS can be a bit buggy, which is why Python defaults to using the spawn method for multiprocessing on macOS, but still supports fork and forkserver for it. If you're running into crashes or multiprocessing-related issues, try adding OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES to your environment. See https://stackoverflow.com/a/52230415/5946921 for more details.
As I understood:
- something needs to be loaded
- it needs to be loaded often, because the file with the code which uses this something is edited often
- you don't want to wait until it is loaded every time
Maybe such a solution will be okay for you.
You can write the script loader file in such a way (tested on Python 3.8):
import importlib.util, traceback, sys, gc

# Example data
import pickle
something = pickle.loads(pickle.dumps([123]))

if __name__ == '__main__':
    try:
        mod_path = sys.argv[1]
    except IndexError:
        print('Usage: python3', sys.argv[0], 'PATH_TO_SCRIPT')
        exit(1)

    modules_before = list(sys.modules.keys())
    argv = sys.argv[1:]
    while True:
        MOD_NAME = '__main__'
        spec = importlib.util.spec_from_file_location(MOD_NAME, mod_path)
        mod = importlib.util.module_from_spec(spec)

        # Change to the needed global name in the target module
        mod.something = something

        sys.modules[MOD_NAME] = mod
        sys.argv = argv
        try:
            spec.loader.exec_module(mod)
        except:
            traceback.print_exc()

        del mod, spec
        modules_after = list(sys.modules.keys())
        for k in modules_after:
            if k not in modules_before:
                del sys.modules[k]
        gc.collect()

        print('Press enter to re-run, CTRL-C to exit')
        sys.stdin.readline()
Example of a module:

# Change 1 to some different number while the loader script is running, then press enter
something[0] += 1
print(something)
Should work. And it should reduce the reload time of the pickle to close to zero 🌝
UPD: added the possibility to accept a script name with command-line arguments.
I implemented the python-preloaded script, which can help you here. It will store the CPython state at an early stage after some modules are loaded, and then when you need it, you can restore from this state and load your normal Python script. Storing currently means that it will stay in memory, and restoring means that it does a fork on it, which is very fast. But these are implementation details of python-preloaded and should not matter to you.
So, to make it work for your use case:
1. Make a new module, data_preloaded.py or so, and in there, just this code:

   preloaded_data = load_pickle(...)

2. Now run py-preloaded-bundle-fork-server.py data_preloaded -o python-data-preloaded.bin. This will create python-data-preloaded.bin, which can be used as a replacement for python.

3. I assume you have started python your_script.py before. So now run ./python-data-preloaded.bin your_script.py. Or also just python-data-preloaded.bin (no args). The first time, this will still be slow, i.e. take about 20 seconds. But now it is in memory.

4. Now run ./python-data-preloaded.bin your_script.py again. Now it should be extremely fast, i.e. a few milliseconds. And you can start it again and again, and it will always be fast, until you restart your computer.
This is not an exact answer to the question, as the question looks like pickle and shared memory are required, but others went off the path, so I am going to share a trick of mine. It might help you. There are some fine solutions here using pickle and shared memory anyway; regarding those, I can offer only more of the same. Same pasta with slight sauce modifications.
Two tricks I employ when dealing with situations like yours are as follows.
First is to use sqlite3 instead of pickle. You can even easily develop a module as a drop-in replacement using sqlite. The nice thing is that data will be inserted and selected using native Python types, and you can define your own, with converter and adapter functions that would use the serialization method of your choice to store complex objects. It can be a pickle or JSON or whatever.
What I do is define a class with data passed in through *args and/or **kwargs of a constructor. It represents whatever object model I need; then I pick up rows from "select * from table;" of my database and let Python unwrap the data during the new object's initialization. Loading a big amount of data with datatype conversions, even custom ones, is surprisingly fast. sqlite will manage buffering and IO for you and do it faster than pickle. The trick is to construct your object to be filled and initiated as fast as possible. I either subclass dict() or use __slots__ to speed up the thing. sqlite3 comes with Python, so that's a bonus too.
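A minimal sketch of the adapter/converter idea (the database, table, and column names are made up for illustration):

import pickle
import sqlite3

# store arbitrary Python dicts pickled into a BLOB column
sqlite3.register_adapter(dict, pickle.dumps)
sqlite3.register_converter('PYDICT', pickle.loads)

conn = sqlite3.connect('data.db', detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute('CREATE TABLE IF NOT EXISTS records (payload PYDICT)')
conn.execute('INSERT INTO records VALUES (?)', ({'ticker': 'ABC', 'price': 1.23},))
conn.commit()

# rows come back as real dicts, unpickled by the converter
rows = conn.execute('SELECT * FROM records').fetchall()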
The other method of mine is to use a ZIP file and the struct module. You construct a ZIP file with multiple files within. E.g. for a pronunciation dictionary with more than 400000 words, I'd like a dict() object. So I use one file, let's say lengths.dat, in which I define the length of a key and the length of a value for each pair in binary format. Then I have one file of words and one file of pronunciations, all one after the other. When I load from file, I read the lengths and use them to construct a dict() of words with their pronunciations from the two other files. Indexing bytes() is fast, so creating such a dictionary is very fast. You can even have it compressed if disk space is a concern, but some speed loss is introduced then.
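A sketch of the load side, assuming the hypothetical file names from the description and fixed-width length records (two unsigned shorts per pair):

import struct
import zipfile

def load_dict(zip_path):
    with zipfile.ZipFile(zip_path) as z:
        lengths = z.read('lengths.dat')  # (key_len, val_len) pairs, '<HH' each
        words = z.read('words.dat')      # all keys concatenated
        prons = z.read('prons.dat')      # all values concatenated
    d = {}
    wi = pi = 0
    for klen, vlen in struct.iter_unpack('<HH', lengths):
        d[words[wi:wi + klen].decode()] = prons[pi:pi + vlen].decode()
        wi += klen
        pi += vlen
    return d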
Both methods will take less space on disk than the pickle would. The second method will require you to read all the data you need into RAM; then you will be constructing the objects, which will take almost double the RAM that the data took; then you can discard the raw data, of course. But altogether it shouldn't require more than the pickle takes. As for RAM, the OS will manage almost anything using virtual memory/swap if needed.
Oh yeah, there is a third trick I use. When I have a ZIP file constructed as mentioned above, or anything else which requires additional deserialization while constructing an object, and the number of such objects is great, then I introduce a lazy load. I.e., let's say we have a big file with serialized objects in it. You make the program load all the data and distribute it per object, which you keep in a list() or dict(). You write your classes in such a way that when an object is first asked for data, it unpacks its raw data, deserializes and what not, removes the raw data from RAM, then returns the result. So you will not be losing loading time until you actually need the data in question, which is much less noticeable to a user than a process taking 20 seconds to start.
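A bare-bones sketch of such a lazily-deserializing wrapper (the class and method names are made up):

import pickle

class LazyRecord:
    # holds raw pickled bytes until the object is first asked for its data
    __slots__ = ('_raw', '_obj')

    def __init__(self, raw_bytes):
        self._raw = raw_bytes
        self._obj = None

    def get(self):
        if self._obj is None:
            self._obj = pickle.loads(self._raw)  # deserialize on first use
            self._raw = None                     # discard the raw data from RAM
        return self._obj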
shared_memory stores information as a buffer of bytes. If you aren't trying to share an array, then you would likely have to reserialize the data again for saving there. You say the project doesn't use numpy, but what does it use? What is this massive data structure you need to save between runs? You're not going to be able to save entire Python objects into some kind of shared memory space; you'd horribly break the interpreter's memory management if you tried. But depending on what your data actually is, you might be able to share something; we just can't know what it will be without knowing something about the data.