
I was going through loops and found a significant difference in execution time between two ways of accessing a matrix. I can't understand what causes such a difference between the two cases.

First Example:

Execution Time: 8 seconds

for (int kk = 0; kk < 1000; kk++)
{
    sum = 0;
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 1024; j++)
        {
            sum += matrix[i][j];
        }
}

Second Example:

Execution Time: 23 seconds

for (int kk = 0; kk < 1000; kk++)
{
    sum = 0;
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 1024; j++)
        {
            sum += matrix[j][i];
        }
}

What causes such a large difference in execution time just from changing

matrix[i][j] 

to

matrix[j][i]

?

Kate Gregory
asked Oct 27, 2014 at 8:28
  • Locality of reference (in C++, primitive multidimensional arrays are stored in row-major order). Also, did you optimize the code? 8 seconds (let alone 23) seems a ridiculously long execution time for a billion elements. Commented Oct 27, 2014 at 8:29
  • @Etixpp oops, nope, it's of course a billion. I'm still not sure whether the code in question has been optimized. Commented Oct 27, 2014 at 8:33
  • @Massab You should generally optimize the code before measuring its performance, otherwise you may get meaningless results. Commented Oct 27, 2014 at 10:17
  • Also, to use the word "significant" in benchmarks, please at least average the results of more than one run and include information about the standard deviation. Commented Oct 27, 2014 at 10:24
  • @Massab Lol, "almost" and "bit up or down" really make no difference here. "Significant" is a strong word, especially in a benchmarking context. I'm not saying I doubt your results, just be precise next time :) Commented Oct 27, 2014 at 11:32

8 Answers


It's an issue of the memory cache.

matrix[i][j] gets better cache hit rates than matrix[j][i], since matrix[i][j] accesses memory contiguously.

For example, when we access matrix[i][0], the cache may load a contiguous segment of memory containing matrix[i][0]; thus, accessing matrix[i][1], matrix[i][2], ... will benefit from the cache, since matrix[i][1], matrix[i][2], ... are right next to matrix[i][0].

However, when we access matrix[j][0], it is far from matrix[j - 1][0], may not have been cached, and so cannot benefit from the cache. In particular, since a matrix is normally stored as one big contiguous segment of memory, the prefetcher can predict a sequential access pattern and keep the upcoming memory cached in advance.

That's why matrix[i][j] is faster. This is a typical case of CPU-cache-based performance optimization.
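
A minimal, self-contained sketch of how one might time the two traversal orders (the 1024x1024 size comes from the question; the use of std::vector, std::chrono, and every name here is an illustrative assumption, not code from the post):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 1024;
    std::vector<std::vector<int>> matrix(N, std::vector<int>(N, 1));
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; i++)          // row order: inner loop walks contiguous memory
        for (int j = 0; j < N; j++)
            sum += matrix[i][j];
    auto t1 = std::chrono::steady_clock::now();

    for (int i = 0; i < N; i++)          // column order: each inner step jumps to a different row
        for (int j = 0; j < N; j++)
            sum += matrix[j][i];
    auto t2 = std::chrono::steady_clock::now();

    std::printf("sum=%lld row-major=%lld us col-major=%lld us\n", sum,
                (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count());
    return 0;
}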

OrangeDog
answered Oct 27, 2014 at 8:33

7 Comments

@raison Can you please explain that cache issue a little bit? That's the topic I am actually working on, so I need to understand it completely. Thanks a lot.
@Massab Hope it helps.
An interesting fact I ran into about a year ago is that this is not the case when programming for a GPU in CUDA as discussed in my previous question albeit that is beyond the scope of the current question.
@GodricSeer A GPU has many cores to process work in parallel, and the data must be transferred from memory to the GPU to perform the computation; thus, the focus there is on utilizing more cores with fewer data transfers.
The memory access pattern of this loop is extremely predictable; with prefetching there is no reason it should ever get a cache miss, should the optimizer (or a human) optimize for that. The problem is that the memory hardware is optimized to deliver contiguous memory -- a cache line -- rather than scattered memory, and so we are using memory inefficiently if we can't use an entire cache line at a time.

The difference in performance is caused by the caching strategy of the computer.

The two-dimensional array matrix is represented as one long list of values in memory.

E.g. the array A[3][4] looks like:

1 1 1 1 2 2 2 2 3 3 3 3

In this example every entry of A[0][x] is set to 1, every entry of A[1][x] set to 2, ...

If your first loop is applied to this matrix, the order of access is this:

1 2 3 4 5 6 7 8 9 10 11 12

While the second loop's access order looks like this:

1 5 9 2 6 10 3 7 11 4 8 12

When the program accesses an element of the array, it also loads the subsequent elements into the cache.

E.g. if you access A[0][1], A[0][2] and A[0][3] are loaded too.

Thereby the first loop has to do fewer load operations, as some elements are already in the cache when needed. The second loop loads entries into the cache that are not needed at the time, resulting in more load operations.
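
A small sketch (a hypothetical illustration, not code from the answer) that reproduces the two access orders above by printing the linear position r * COLS + c + 1 of each element visited in a 3x4 row-major array:

#include <cstdio>

int main()
{
    const int ROWS = 3, COLS = 4;

    std::printf("first loop:  ");
    for (int r = 0; r < ROWS; r++)                  // row by row
        for (int c = 0; c < COLS; c++)
            std::printf("%d ", r * COLS + c + 1);   // 1 2 3 ... 12

    std::printf("\nsecond loop: ");
    for (int c = 0; c < COLS; c++)                  // column by column
        for (int r = 0; r < ROWS; r++)
            std::printf("%d ", r * COLS + c + 1);   // 1 5 9 2 6 10 ...

    std::printf("\n");
    return 0;
}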

answered Oct 27, 2014 at 8:58

2 Comments

@Massab You should mark the most helpful reply as a valid answer.
raison is spot on, but this answer explains it the best for someone who doesn't know what the problem is already

Other people have done a good job explaining why one form of your code makes more efficient use of the memory cache than the other. I'd like to add some background information you may not be aware of: you probably don't realize just how expensive main memory accesses are nowadays.

The numbers posted in this question look to be in the right ballpark to me, and I'm going to reproduce them here because they're so important:

Core i7 Xeon 5500 Series Data Source Latency (approximate)
L1 CACHE hit, ~4 cycles
L2 CACHE hit, ~10 cycles
L3 CACHE hit, line unshared ~40 cycles
L3 CACHE hit, shared line in another core ~65 cycles
L3 CACHE hit, modified in another core ~75 cycles
remote L3 CACHE ~100-300 cycles
Local Dram ~60 ns
Remote Dram ~100 ns

Note the change in units for the last two entries. Depending on exactly which model you have, this processor runs at 2.9–3.2 GHz; to make the math simpler, let's just call it 3 GHz, so one cycle is about 0.33 nanoseconds. That puts a local DRAM access at roughly 180 cycles and a remote one at roughly 300 cycles - in the same range as a remote L3 hit.

The point is that the CPU could have executed hundreds of instructions in the time it takes to read one cache line from main memory. This is called the memory wall. Because of it, efficient use of the memory cache is more important than any other factor in overall performance on modern CPUs.
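
A back-of-the-envelope sketch of that conversion (the 3 GHz clock is the answer's own simplification; the code is purely illustrative):

#include <cstdio>

int main()
{
    const double cycles_per_ns  = 3.0;    // 3 GHz clock = 3 cycles per nanosecond
    const double local_dram_ns  = 60.0;   // latencies from the table above
    const double remote_dram_ns = 100.0;

    std::printf("local DRAM  ~%.0f cycles\n", local_dram_ns * cycles_per_ns);    // ~180
    std::printf("remote DRAM ~%.0f cycles\n", remote_dram_ns * cycles_per_ns);   // ~300
    return 0;
}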

answered Oct 27, 2014 at 15:14

4 Comments

And then there's a page fault...
That's great info. Thanks man. +1
While this is all true, it's (IMO) clear that memory latency isn't the only factor here: prefetching can eliminate cache misses, and the loop could be optimized to do so appropriately. In fact, the hardware prefetcher might even be able to figure much of it out without any help. Memory bandwidth is another relevant bottleneck at play here, because you load entire cache lines at a time, but one loop ordering only uses a fraction of the cache line before discarding it, and thus uses bandwidth inefficiently. The actual timings of the unoptimized loop are likely a combination of both.
@Hurkyl All of that is covered, at least by implication, in raison and H4kor's answers.

The answer depends a little bit on exactly how the matrix is defined. In a fully dynamically allocated array, you'd have:

T **matrix;
matrix = new T*[n];
for (int i = 0; i < n; i++)
{
    matrix[i] = new T[m];
}

So, every matrix[j] will require a fresh memory lookup for the row pointer. If you make the j loop the outer loop, the pointer for matrix[j] can be re-used for the whole inner loop.

If the matrix is a simple 2D array:

T matrix[n][m];

then matrix[j] is simply an offset of j * 1024 * sizeof(T) from the start of the array - which the optimised code can compute by adding 1024 * sizeof(T) to a running pointer each time round the loop - so it should be relatively fast either way.

On top of that, we have cache locality factors. Caches hold "lines" of data that are typically 32 to 128 bytes per line. So if your code reads address X, the cache will load up the 32 to 128 bytes of values around X. If the NEXT thing you need is only sizeof(T) forward from the current location, it's highly likely already in the cache [and modern processors also detect that you are going round in a loop reading every memory location, and pre-load the data].

In the case where j is the inner loop, you are reading a new location sizeof(T)*1024 bytes away on each iteration [or possibly a greater distance if the rows are dynamically allocated]. This means that the data being loaded will not be useful for the next iteration, because it's not in the next 32 to 128 bytes.

And finally, it's entirely possible that the first loop is more optimised, thanks to SSE instructions or similar, which allow the calculation to be run even quicker. But this is probably marginal for such a large matrix, as the performance is highly memory bound at this size.
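
A small sketch of the pointer-lookup point above (the use of double and all names are assumptions for illustration): with the fully dynamic layout, hoisting the row pointer out of the inner loop means matrix[i] is looked up only once per row, and the inner loop then runs over contiguous memory.

#include <cstdio>

int main()
{
    const int n = 1024, m = 1024;

    // The "array of pointers to rows" layout described above.
    double** matrix = new double*[n];
    for (int i = 0; i < n; i++)
        matrix[i] = new double[m]();      // zero-initialised row

    double sum = 0.0;
    for (int i = 0; i < n; i++)
    {
        const double* row = matrix[i];    // one pointer lookup per row
        for (int j = 0; j < m; j++)
            sum += row[j];                // contiguous accesses within the row
    }
    std::printf("sum = %f\n", sum);

    for (int i = 0; i < n; i++)
        delete[] matrix[i];
    delete[] matrix;
    return 0;
}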

answered Oct 27, 2014 at 8:55

2 Comments

Thanks for your explanation. It's a bit hard but I understood it somehow. :)
I agree with Massab, the explanation is more thorough but could be written better.

Memory hardware is not optimized to deliver individual addresses: instead it tends to operate on larger chunks of contiguous memory called cache lines. Every time you read one entry of your matrix, the entire cache line it lies in also gets loaded into the cache along with it.

The faster loop ordering is set up to read memory in order; each time you load a cache line, you use all of the entries in that cache line. Each pass through the outer loop, you read each matrix entry only a single time.

The slower loop ordering, however, only uses a single entry from each cache line before moving on. Thus, each cache line has to get loaded multiple times, once for each matrix entry in the line. For example, if a double is 8 bytes and a cache line is 64 bytes long, each line holds eight matrix entries, so it ends up being fetched eight times rather than once.


All that said, if you had turned optimizations on, you would probably see no difference: optimizers understand this phenomenon, and good ones are capable of recognizing that they can swap which loop is the inner loop and which loop is the outer loop for this particular code snippet.

(also, a good optimizer would have only done one pass through the outermost loop, because it recognizes the first 999 times through are irrelevant to the final value of sum)
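
As a sketch of the transformations described above (an illustration, not actual compiler output; the std::vector setup is an assumption), the loop-interchanged version with the redundant kk passes removed would look like:

#include <cstdio>
#include <vector>

int main()
{
    std::vector<std::vector<int>> matrix(1024, std::vector<int>(1024, 1));

    // Interchanged loops: the row index drives the outer loop so the inner
    // loop walks contiguous memory, and only one pass is kept because the
    // first 999 kk iterations never affect the final value of sum.
    long long sum = 0;
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 1024; j++)
            sum += matrix[i][j];

    std::printf("sum = %lld\n", sum);
    return 0;
}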

answered Oct 28, 2014 at 1:15

1 Comment

A bit late to the party, but I think your explanation is very well written, so +1 anyway.

The matrix is stored in memory as a vector. Accessing it the first way accesses the memory sequentially. Accessing it the second way requires jumping around memory locations. See http://en.wikipedia.org/wiki/Row-major_order

answered Oct 27, 2014 at 8:36

6 Comments

Though your explanation is right, I'm not happy about your use of the terms matrix and vector. Vectors are matrices with only one dimension (i.e. the vector (4, 5, 6) can be said to be a 1x3 matrix), the matrix in question clearly has more than one dimension, so it's not really correct to say it is being stored as a vector. You're right about the sequential memory access though, and using references is always a good thing.
@Pharap In memory it has only one dimension.
@Wlerin Memory is one dimension.
@Pharap Yes, exactly.
@Wlerin The point is that the mathematical terminology is wrong. Memory is all just one block of contiguous memory, but the structure being represented (a matrix) is not.

If you access the matrix in i-j order, the data along the j dimension is already in the cache, so the machine does not have to fetch it from memory every time; in j-i order the accesses are not cache-friendly, so you effectively throw the cached data away on every access, which is what causes the difference.

answered Oct 27, 2014 at 8:36

Comments


Based on the principle of locality of reference, a piece of code is very likely to access memory locations that are adjacent to one another. So more values are loaded into the cache than were actually asked for, and this means more cache hits. Your first example satisfies this well, while the code in your second example does not.

answered Oct 30, 2014 at 4:34

Comments
