In R, I am using ccf or acf to compute the pair-wise cross-correlation function so that I can find out which shift gives me the maximum value. From the looks of it, R gives me a normalized sequence of values. Is there something similar in Python's scipy, or am I supposed to do it using the fft module? Currently, I am doing it as follows:
import numpy
from numpy.fft import rfft, irfft

# Note: multiplying the FFTs like this gives a circular (not linear) cross-correlation
xcorr = lambda x, y: irfft(rfft(x) * rfft(y[::-1]))
x = numpy.array([0, 0, 1, 1])
y = numpy.array([1, 1, 0, 0])
print(xcorr(x, y))
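The shift with the maximum value can then be read off with numpy.argmax (an illustrative line, not part of the original snippet):

print(numpy.argmax(xcorr(x, y)))  # index of the peak in the circular cross-correlation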
To cross-correlate 1-D arrays, use numpy.correlate.
For 2-D arrays, use scipy.signal.correlate2d.
There is also scipy.stsci.convolve.correlate2d.
There is also matplotlib.pyplot.xcorr, which is based on numpy.correlate.
See this post on the SciPy mailing list for some links to different implementations.
Edit: @user333700 added a link to the SciPy ticket for this issue in a comment.
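A quick sketch of the numpy.correlate route for the arrays in the question. Note that numpy does not normalize for you; the normalization shown here is one common convention, added for illustration:

import numpy as np

x = np.array([0, 0, 1, 1], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

# mode='full' returns the correlation at every possible lag
corr = np.correlate(x, y, mode='full')

# Normalize so that perfectly matching signals score 1 (one common convention)
corr /= np.sqrt(np.dot(x, x) * np.dot(y, y))

# For mode='full', the lags run from -(len(y) - 1) to len(x) - 1
lags = np.arange(-(len(y) - 1), len(x))
print(lags[np.argmax(corr)])  # shift that gives the maximum correlation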
- The implementations linked from that mailing list post use FFTs for sure, maybe they'll help. Also, I don't know how much speed is an issue for the poster. – agf, Aug 9, 2011
- np.correlate does not use fft, and is only faster when the second series/window is small relative to the first. There is also scipy.signal.fftconvolve. See also projects.scipy.org/numpy/ticket/1260 – Josef, Aug 11, 2011
- All mailing list links are broken! ;-( – Andre Araujo, Feb 11, 2018
- @AndreAraujo I updated the link to the SciPy ticket to point to the Internet Archive, since the original page seems to be gone. All of the other links (including the mailing list link) work for me. – agf, Feb 12, 2018
- @eric Please feel free to add additional information! This is >10 years old. – agf, Sep 30, 2022
If you are looking for a rapid, normalized cross-correlation in either one or two dimensions, I would recommend the OpenCV library (see http://opencv.org/). The cross-correlation code maintained by this group is the fastest you will find, and it will be normalized (results between -1 and 1).

While this is a C++ library, the code is maintained with CMake and has Python bindings, so access to the cross-correlation functions is convenient. OpenCV also plays nicely with numpy. If I wanted to compute a 2-D cross-correlation starting from numpy arrays, I could do it as follows.
import numpy
import cv

# Create a random template and place it in a larger image
templateNp = numpy.random.random((100, 100))
image = numpy.random.random((400, 400))
image[:100, :100] = templateNp

# Create a numpy array for storing the result
resultNp = numpy.zeros((301, 301))

# Convert from numpy format to OpenCV format
templateCv = cv.fromarray(numpy.float32(templateNp))
imageCv = cv.fromarray(numpy.float32(image))
resultCv = cv.fromarray(numpy.float32(resultNp))

# Perform normalized cross-correlation (the image comes first, then the template)
cv.MatchTemplate(imageCv, templateCv, resultCv, cv.CV_TM_CCORR_NORMED)

# Convert the result back to a numpy array
resultNp = numpy.asarray(resultCv)
For just a 1-D cross-correlation, create a 2-D array with shape equal to (N, 1). Though there is some extra code involved to convert to an OpenCV format, the speed-up over scipy is quite impressive.
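For reference, the same thing under the modern cv2 bindings might look like this (a sketch assuming a current OpenCV build; the old cv module used above has since been removed):

import numpy
import cv2

templateNp = numpy.random.random((100, 100)).astype(numpy.float32)
image = numpy.random.random((400, 400)).astype(numpy.float32)
image[:100, :100] = templateNp

# cv2.matchTemplate takes the image first, then the template,
# and returns the (301, 301) result directly as a numpy array
resultNp = cv2.matchTemplate(image, templateNp, cv2.TM_CCORR_NORMED)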
- FYI you can also do this with scikit-image if you don't want to use OpenCV. See this example. – Neil Traft, Jan 26, 2014
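The scikit-image route from the comment above might look like the following (a sketch using skimage.feature.match_template, which also returns values normalized to [-1, 1]):

import numpy
from skimage.feature import match_template

template = numpy.random.random((100, 100))
image = numpy.random.random((400, 400))
image[:100, :100] = template

# result has shape (301, 301); the peak marks the best match location
result = match_template(image, template)
print(numpy.unravel_index(numpy.argmax(result), result.shape))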
I just finished writing my own optimised implementation of normalized cross-correlation for N-dimensional arrays. You can get it from here.

It will calculate the cross-correlation either directly, using scipy.ndimage.correlate, or in the frequency domain, using scipy.fftpack.fftn/ifftn, depending on whichever will be quickest.
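The frequency-domain half of that approach boils down to the identity corr(x, y) = ifft(fft(x) · conj(fft(y))). A minimal sketch of that step, without the normalization (the function name and padding choice here are illustrative, not taken from the linked code):

import numpy as np
from scipy.fftpack import fftn, ifftn

def fft_correlate(x, y):
    # Zero-pad both inputs to the full linear-correlation size in every dimension
    shape = tuple(sx + sy - 1 for sx, sy in zip(x.shape, y.shape))
    # Multiplying by the conjugate spectrum correlates rather than convolves;
    # lag 0 lands at index 0 and negative lags wrap to the end of each axis
    return np.real(ifftn(fftn(x, shape) * np.conj(fftn(y, shape))))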
For 1-D arrays, numpy.correlate is faster than scipy.signal.correlate. Across different sizes, I see a consistent 5x performance gain using numpy.correlate. When the two arrays are of similar size (the bright line connecting the diagonal), the performance difference is even more outstanding (50x+).
# A simple benchmark
from datetime import datetime

import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

res = []
for x in range(1, 1000):
    list_x = []
    for y in range(1, 1000):
        # Generate different sizes of series to compare
        l1 = np.random.choice(range(1, 100), size=x)
        l2 = np.random.choice(range(1, 100), size=y)

        time_start = datetime.now()
        np.correlate(a=l1, v=l2)
        t_np = datetime.now() - time_start

        time_start = datetime.now()
        scipy.signal.correlate(in1=l1, in2=l2)
        t_scipy = datetime.now() - time_start

        # Ratio > 1 means numpy was faster for this pair of sizes
        list_x.append(t_scipy / t_np)
    res.append(list_x)

plt.imshow(np.array(res))
plt.show()
By default, scipy.signal.correlate calculates a few extra values by zero-padding (its default mode is 'full', while numpy.correlate defaults to 'valid'), and that might explain the performance difference.
>>> l1 = [1, 2, 3, 2, 1, 2, 3]
>>> l2 = [1, 2, 3]
>>> print(numpy.correlate(a=l1, v=l2))
[14 14 10 10 14]
>>> print(scipy.signal.correlate(in1=l1, in2=l2))
[ 3  8 14 14 10 10 14  8  3]  # the first 3 is [0,0,1] dot [1,2,3]
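Matching the modes supports this explanation: with mode='valid' (numpy's default), scipy returns the same values.

>>> print(scipy.signal.correlate(in1=l1, in2=l2, mode='valid'))
[14 14 10 10 14]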