
I am building a flexible, lightweight, in-memory database in Python, and discovered a performance problem with the way I was looking up values and using indexes. In an effort to improve this I've tried a few options, trying to balance speed with memory usage. My current implementation uses a dict of dicts to store data by record (object reference) and field (also an object reference). So for example, if I have three records with three fields, where some of the data is missing (i.e. NULL values):

{<Record1>: {<Field1>: 4, <Field2>: 'value', <Field3>: <Other Record>},
 <Record2>: {<Field1>: 4, <Field2>: 'value'},
 <Record3>: {<Field1>: 5}}

I considered a numpy array, but I would still need two dictionaries to map object instances to array indexes, so I can't see that it would perform any better.

Indexes are implemented using a pair of bisected lists, essentially acting as a map from value to record instance. For example, an index on Field1 above:

[[4, 4, 5], [<Record1>, <Record2>, <Record3>]]

I was previously using a simple dict of bins, but this didn't allow range lookups (e.g. all values > 5) (see "Python hash table for fuzzy matching").
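For anyone following along, here is a minimal sketch of how a range lookup against that parallel-list layout might work, using the standard bisect module (the placeholder strings here stand in for record instances):

import bisect

# Hypothetical index on Field1: sorted values alongside the records that hold them.
values = [4, 4, 5]
records = ['<Record1>', '<Record2>', '<Record3>']

def range_lookup(lo=None, hi=None):
    # Return the records whose indexed value falls within [lo, hi].
    start = 0 if lo is None else bisect.bisect_left(values, lo)
    end = len(values) if hi is None else bisect.bisect_right(values, hi)
    return records[start:end]

print(range_lookup(lo=5))        # ['<Record3>']
print(range_lookup(lo=4, hi=4))  # ['<Record1>', '<Record2>']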

My question is this. I am concerned that I have several object references, and multiple copies of the same values in the indexes. Do all these duplicate references actually use more memory, or are references cheap in Python? My alternative is to try to associate a numerical key with each object, which might improve things at least for values up to 256 (since CPython caches small integers), but I don't know enough about how Python handles references to know if this would really be any better.

Does anyone have any suggestions of a better way to manage this?

Reimplementing the critical parts in C is an option I want to keep as a last resort.

For anyone interested, my code is here.

Edit 1:

The question, simply put, is which of the following is more efficient in terms of memory usage, where a is an object instance and i is an integer:

[a] * 1000

Or

[i] * 1000, {a: i}
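As a rough way to sanity-check the container overhead (just an illustrative measurement, with a made-up Record class standing in for my object instances), sys.getsizeof shows that a list's size depends only on its length, since every slot is a single reference; the second option then only adds the cost of the mapping:

import sys

class Record:        # hypothetical stand-in for an object instance
    pass

a = Record()
i = 42

refs = [a] * 1000    # option 1: list of object references
ints = [i] * 1000    # option 2: list of integer keys...
key_map = {a: i}     # ...plus a mapping from object to key

# Both lists are the same size: each slot is one pointer, whatever it points to.
print(sys.getsizeof(refs) == sys.getsizeof(ints))  # True
print(sys.getsizeof(key_map))                      # extra overhead of option 2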

Edit 2:

Because of the large number of comments suggesting I use an existing system, here are my requirements. If anyone can suggest a system which fulfills all of these, that would be great, but so far I have not found anything which does. Otherwise, my original question still relates to memory usage of references in Python:

  • Must be light-weight and in-memory. Definitely not a client/server model.
  • Need to be able to easily alter tables, change fields, change rules, etc, on the fly.
  • Need to easily apply very complex validation rules. SQL doesn't meet this requirement. Although it is sometimes possible to build up very complicated statements, it is far from easy.
  • Need to support joins and associations between tables. Many NoSQL databases don't support joins at all, or at most only simple joins.
  • Need to support a method of loading and storing data to any file format. I am currently implementing this by providing a framework which makes it easy to add new formats as needed.
  • It does not need persistence (beyond storing data as in the previous point), and does not need to handle massive amounts of data, i.e. not more than a couple of million records. Typically, I am dealing with a few thousand.
asked Dec 3, 2012 at 13:24
  • I'm not sure I entirely understand your data structure, but why are you reinventing the wheel? Commented Dec 3, 2012 at 13:31
  • A reference in Python is fundamentally a pointer to a PyObject, so yes, each reference will use a small bit of memory. If you care about that sort of thing, though, you should indeed be looking at writing the critical parts in C. Commented Dec 3, 2012 at 13:32
  • you may want to look at the pandas DataFrame object, it's kinda like an excel sheet in memory, the only downside is that it doesn't play well with mixed data types (because it is a numpy array under the hood), it does however support fuzzy matching! Commented Dec 3, 2012 at 13:40
  • @katrielalex: If it is just a C pointer then that's fine, and answers my question. I was worried it would be something a bit larger, as python C objects tend to be. Commented Dec 3, 2012 at 13:41
  • "flexible, lightweight, in-memory database" - There are already loads of these. Stop wasting your time, and use an existing solution. I guarantee something in version 4 will be faster and more featureful than your own homebrew version. Commented Dec 3, 2012 at 14:26

3 Answers


Each reference is in effect a pointer, and each pointer requires a small amount of memory.

You can use memory_profiler to view memory use on a line-by-line basis. That way you can see what happens when you create a reference.
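A minimal sketch of that approach, assuming the memory_profiler package is installed (build_index is just a made-up function to profile, not from the question's code):

# pip install memory-profiler
from memory_profiler import profile

@profile
def build_index(n=100000):
    # A list of duplicate references and an object-to-int mapping,
    # so the line-by-line report shows the cost of each.
    sentinel = object()
    refs = [sentinel] * n
    mapping = {i: sentinel for i in range(n)}
    return refs, mapping

if __name__ == "__main__":
    build_index()

Running the script prints a line-by-line memory report when build_index is called.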

answered Dec 3, 2012 at 14:25

1 Comment

sys.getsizeof isn't the size of a reference, it's the size of the referenced object (for some implementation-specific and rarely useful definition of size).

Python does not specify a particular implementation for dynamic memory management, but from the semantics of the language one can assume that a reference uses memory comparable to a C pointer.
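A quick way to see the per-reference cost on a particular build (the figure below is typical of 64-bit CPython, but it is implementation-specific):

import sys

empty = sys.getsizeof([])
thousand_refs = sys.getsizeof([None] * 1000)

# On a 64-bit CPython build this prints roughly 8.0:
# each list slot is one machine-word pointer.
print((thousand_refs - empty) / 1000.0)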

answered Dec 3, 2012 at 14:22



FWIW, I ran some tests on a 100x100 structure, testing a sparsely populated dictionary structure, a fully populated dictionary structure, a list, and a numpy array. The latter two had a dictionary mapping object references to indexes. I timed getting every item in the structure by index (returning a sentinel for missing data in the sparse dict), and also reported the total size. My results were somewhat surprising:

Structure     Time      Size
============  ========  =====
full dict     0.0236s    6284
list          0.0426s   13028
sparse dict   0.1079s    1676
array         0.2262s   12608

So the fastest and second smallest was the full dict, presumably because there was no need to run a 'key in dict' check on it.
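For reference, a stripped-down version of that kind of comparison (the structures and timings here are illustrative, not the originals):

import timeit

ROWS, COLS = 100, 100
MISSING = object()  # sentinel returned for NULL values

# Fully populated dict of dicts: every cell present.
full = {r: {c: 1 for c in range(COLS)} for r in range(ROWS)}

# Sparsely populated dict of dicts: only about half the cells present.
sparse = {r: {c: 1 for c in range(COLS) if (r + c) % 2} for r in range(ROWS)}

def read_full():
    for r in range(ROWS):
        row = full[r]
        for c in range(COLS):
            row[c]

def read_sparse():
    for r in range(ROWS):
        row = sparse[r]
        for c in range(COLS):
            row.get(c, MISSING)

print(timeit.timeit(read_full, number=100))
print(timeit.timeit(read_sparse, number=100))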

answered Dec 5, 2012 at 13:54

Comments
