When programming in Python, is it possible to reserve memory for a list that will be populated with a known number of items, so that the list will not be reallocated several times while building it? I've looked through the docs for a Python list type, and have not found anything that seems to do this. However, this type of list building shows up in a few hotspots of my code, so I want to make it as efficient as possible.
Edit: Also, does it even make sense to do something like this in a language like Python? I'm a fairly experienced programmer, but new to Python and still getting a feel for its way of doing things. Does Python internally allocate all objects in separate heap spaces, defeating the purpose of trying to minimize allocations, or are primitives like ints, floats, etc. stored directly in lists?
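For context, a minimal sketch of the pattern in question (the names here are purely illustrative, not from my real code): a list whose final size is known up front, built item by item with append:
def compute(i):
    return i * i  # placeholder for the real per-item work

def build_results(n):
    results = []                    # starts empty and grows as it fills
    for i in range(n):
        results.append(compute(i))
    return results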
@ironfroggy: The point is that this showed up in hotspots. In these places, list building was causing a significant, real-world bottleneck, the kind you should optimize. (dsimcha, Jan 31, 2010)
7 Answers
Here are four variants:
- an incremental list creation
- "pre-allocated" list
- array.array()
- numpy.zeros()
python -mtimeit -s"N=10**6" "a = []; app = a.append;"\
"for i in xrange(N): app(i);"
10 loops, best of 3: 390 msec per loop
python -mtimeit -s"N=10**6" "a = [None]*N; app = a.append;"\
"for i in xrange(N): a[i] = i"
10 loops, best of 3: 245 msec per loop
python -mtimeit -s"from array import array; N=10**6" "a = array('i', [0]*N)"\
"for i in xrange(N):" " a[i] = i"
10 loops, best of 3: 541 msec per loop
python -mtimeit -s"from numpy import zeros; N=10**6" "a = zeros(N,dtype='i')"\
"for i in xrange(N):" " a[i] = i"
10 loops, best of 3: 353 msec per loop
It shows that [None]*N is the fastest and array.array is the slowest in this case.
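For reference, a minimal sketch of the winning pattern, pre-allocate once and fill by index (Python 3 syntax, so range rather than xrange):
def build_prealloc(n):
    out = [None] * n    # one allocation at the final length
    for i in range(n):
        out[i] = i      # overwrite the placeholders in place
    return out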
8 Comments
array.array is used in a suboptimal way here, see my answer. array('i', [0])*n alone is 10 times faster than array('i', [0]*n), though it is still slower than the [0]*n variant if you add the initialization loop. The point of the answer: measure first; the code examples are from other answers at the time. The import is not included in the timed statement, notice -s.
You can create a list of the known length like this:
>>> [None] * known_number
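For example, filling such a pre-sized list by index (known_number here is just an illustrative size):
>>> known_number = 5
>>> lst = [None] * known_number
>>> for i in range(known_number):
...     lst[i] = i * 10
...
>>> lst
[0, 10, 20, 30, 40]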
Take a look at this:
In [7]: %timeit array.array('f', [0.0]*4000*1000)
1 loops, best of 3: 306 ms per loop
In [8]: %timeit array.array('f', [0.0])*4000*1000
100 loops, best of 3: 5.96 ms per loop
In [11]: %timeit np.zeros(4000*1000, dtype='f')
100 loops, best of 3: 6.04 ms per loop
In [9]: %timeit [0.0]*4000*1000
10 loops, best of 3: 32.4 ms per loop
So don't ever use array.array('f', [0.0]*N), use array.array('f', [0.0])*N or numpy.zeros.
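A rough illustration of the difference (not a benchmark; the size just mirrors the timings above): the slow form first builds a 4-million-element Python list and copies it into the array, while the fast form repeats a one-element array entirely inside the array type:
from array import array

N = 4000 * 1000
slow = array('f', [0.0] * N)   # builds a huge Python list first, then copies it
fast = array('f', [0.0]) * N   # repeats a 1-element array; no big list is built
assert len(slow) == len(fast) == N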
4 Comments
Use np.empty in place of np.zeros. With your test, that's three times faster on my computer. Also note that [0.0]*4000*1000 builds a 4000-element list and repeats it 1000 times, rather than repeating a 1-element list 4000000 times like [0.0]*4000000 would; [0.0]*4000000 turns out to be significantly faster in my tests.
If you want to manipulate numbers efficiently in Python then have a look at NumPy (link). It lets you do things extremely fast while still getting to use Python.
To do what you're asking in NumPy, you'd do something like
import numpy as np
myarray = np.zeros(4000)
which would give you an array of floating point numbers initialized to zero. You can then do very cool things like multiplying whole arrays by a single factor or by other arrays (kind of like in Matlab, if you've ever used that), and it's very fast, because most of the actual work happens in the highly optimized C part of the NumPy library.
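As a rough illustration of that vectorized style (the array names are just examples):
import numpy as np

a = np.zeros(4000)       # 4000 floats, all 0.0
b = np.arange(4000.0)    # 0.0, 1.0, 2.0, ...
a = a + 1.0              # add a scalar to every element at once
c = a * b                # element-wise multiply of two whole arrays
print(c[:5])             # [0. 1. 2. 3. 4.]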
If it's not arrays of numbers you're after, then you're probably not going to find a way to do what you want in Python. A Python list of objects is internally a list of pointers to objects (I think so anyway, I'm not an expert on Python internals), so it would still be allocating each of its members as you create them.
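One way to see this (CPython-specific, so treat it as an illustration of the implementation rather than a language guarantee): sys.getsizeof reports only the list object itself, i.e. its internal array of pointers, and that array grows in over-allocated steps as you append:
import sys

lst = []
for i in range(10):
    lst.append(i)
    print(len(lst), sys.getsizeof(lst))   # the size jumps only occasionally, not on every append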
1 Comment
np.empty is preferable unless you really need your array to start out with zeros; it gives triple the speed on my computer.
In most everyday code you won't need such optimization.
However, when list efficiency becomes an issue, the first thing you should do is replace the generic list with a typed one from the array module, which is much more efficient.
Here's how a list of 4 million floating point numbers could be created:
import array
lst = array.array('f', [0.0]*4000*1000)
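As a rough, implementation-dependent comparison of the memory involved (sys.getsizeof counts only the containers themselves, and exact numbers vary by platform and Python version):
import array
import sys

n = 4000 * 1000
arr = array.array('f', [0.0]) * n   # 4-byte C floats stored inline
lst = [0.0] * n                     # one pointer per slot, to Python float objects

print(arr.itemsize)                 # 4 bytes per element
print(sys.getsizeof(arr))           # roughly n * 4 bytes for the array
print(sys.getsizeof(lst))           # roughly n * pointer size for the list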
2 Comments
array.array might require less memory, but a Python list is faster in most (meaning those I've tried) cases.
In Python, all objects are allocated on the heap.
But Python uses a special memory allocator so malloc won't be called every time you need a new object.
There are also some optimizations for small integers (and the like) which are cached; however, which types, and how, is implementation dependent.
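A quick illustration of that caching (CPython-specific; int() is used here just to sidestep constant folding, and other implementations may behave differently):
a = int("100")
b = int("100")
print(a is b)        # True on CPython: small ints come from a cache of singletons

x = int("100000")
y = int("100000")
print(x is y)        # False on CPython: larger ints are fresh heap objects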
For Python 3:
import timeit
from numpy import zeros
from array import array

def func1():
    # incremental append
    N = 10**6
    a = []
    app = a.append
    for i in range(N):
        app(i)

def func2():
    # "pre-allocated" list, filled by index
    N = 10**6
    a = [None]*N
    for i in range(N):
        a[i] = i

def func3():
    # array.array
    N = 10**6
    a = array('i', [0]*N)
    for i in range(N):
        a[i] = i

def func4():
    # numpy.zeros
    N = 10**6
    a = zeros(N, dtype='i')
    for i in range(N):
        a[i] = i

start_time = timeit.default_timer()
func1()
print(timeit.default_timer() - start_time)

start_time = timeit.default_timer()
func2()
print(timeit.default_timer() - start_time)

start_time = timeit.default_timer()
func3()
print(timeit.default_timer() - start_time)

start_time = timeit.default_timer()
func4()
print(timeit.default_timer() - start_time)
result (in the same order as the functions above):
- func1, append(): 0.1655518
- func2, [None]*N: 0.10920069999999998
- func3, array module: 0.1935983
- func4, numpy module: 0.15213890000000002
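As a follow-up sketch (reusing func1 through func4 and the timeit import from the code above): timeit.timeit runs each function several times and averages, which tends to be less noisy than a single default_timer measurement:
for f in (func1, func2, func3, func4):
    print(f.__name__, timeit.timeit(f, number=10) / 10)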