Out of curiosity I decided to benchmark my own matrix multiplication function versus the BLAS implementation... I was, to say the least, surprised at the result:
Custom Implementation, 10 trials of 1000x1000 matrix multiplication:
Took: 15.76542 seconds.
BLAS Implementation, 10 trials of 1000x1000 matrix multiplication:
Took: 1.32432 seconds.
This is using single precision floating point numbers.
My Implementation:
#include <cstring>      // memset
#include <stdexcept>    // std::runtime_error

template<class ValT>
void mmult(const ValT* A, int ADim1, int ADim2, const ValT* B, int BDim1, int BDim2, ValT* C)
{
    if ( ADim2!=BDim1 )
        throw std::runtime_error("Error sizes off");
    memset((void*)C,0,sizeof(ValT)*ADim1*BDim2);
    int cc2,cc1,cr1;
    for ( cc2=0 ; cc2<BDim2 ; ++cc2 )
        for ( cc1=0 ; cc1<ADim2 ; ++cc1 )
            for ( cr1=0 ; cr1<ADim1 ; ++cr1 )
                C[cc2*ADim2+cr1] += A[cc1*ADim1+cr1]*B[cc2*BDim1+cc1];
}
I have two questions:
- Given that a matrix-matrix multiplication (say nxm * mxn) requires n*n*m multiplications, the case above needs 1000^3, or 1e9, operations. How is it possible for BLAS to do 10*1e9 operations in 1.32 seconds on my 2.6 GHz processor? Even if each multiplication were a single operation and nothing else were being done, it should take ~4 seconds.
- Why is my implementation so much slower?
- BLAS has been optimized up one side and down the other by specialists in the field. I assume it is taking advantage of the SIMD floating point unit on your chip and playing lots of tricks to improve the caching behavior as well... – dmckee --- ex-moderator kitten, Aug 19, 2009
- Still, how do you do 1E10 operations on a 2.63E9 cycles/second processor in 1.3 seconds? – DeusAduro, Aug 19, 2009
- Multiple execution units, pipelining, and SIMD (Single Instruction Multiple Data, which means doing the same operation on more than one pair of operands at the same time). Some compilers can target the SIMD units on common chips, but you just about always have to explicitly turn it on, and it helps to know how it all works (en.wikipedia.org/wiki/SIMD). Insuring against cache misses is almost certainly the hard part. – dmckee --- ex-moderator kitten, Aug 19, 2009
- The assumption is wrong. There are better algorithms known, see Wikipedia. – MSalters, Aug 20, 2009
- @DeusAduro: In my answer for "How to write a matrix matrix product that can compete with Eigen?" I posted a small example of how to implement a cache-efficient matrix-matrix product. – Michael C. Lehn, Mar 1, 2016
9 Answers
A good starting point is the great book The Science of Programming Matrix Computations by Robert A. van de Geijn and Enrique S. Quintana-Ortí. They provide a free download version.
BLAS is divided into three levels:
Level 1 defines a set of linear algebra functions that operate on vectors only. These functions benefit from vectorization (e.g. from using SIMD such as SSE).
Level 2 functions are matrix-vector operations, e.g. some matrix-vector product. These functions could be implemented in terms of Level 1 functions. However, you can boost the performance of these functions if you can provide a dedicated implementation that makes use of some multiprocessor architecture with shared memory.
Level 3 functions are operations like the matrix-matrix product. Again you could implement them in terms of Level 2 functions. But Level 3 functions perform O(N^3) operations on O(N^2) data. So if your platform has a cache hierarchy then you can boost performance if you provide a dedicated implementation that is cache optimized/cache friendly. This is nicely described in the book. The main boost of Level 3 functions comes from cache optimization. This boost significantly exceeds the second boost from parallelism and other hardware optimizations.
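For concreteness, here is how the three levels look through the CBLAS C interface (the C binding shipped with ATLAS and OpenBLAS); the benchmark in the question corresponds to a single Level 3 call. This is only an interface illustration, not a performance trick in itself:

#include <cblas.h>

// Illustration of the three BLAS levels through the CBLAS interface.
// All matrices here are column-major and N x N; the arrays are owned
// and sized by the caller.
void blas_levels_demo(int N, float alpha, float beta,
                      float* x, float* y,
                      float* A, float* B, float* C)
{
    // Level 1: y := alpha*x + y          (vector-vector)
    cblas_saxpy(N, alpha, x, 1, y, 1);

    // Level 2: y := alpha*A*x + beta*y   (matrix-vector)
    cblas_sgemv(CblasColMajor, CblasNoTrans, N, N,
                alpha, A, N, x, 1, beta, y, 1);

    // Level 3: C := alpha*A*B + beta*C   (matrix-matrix; what the
    // question benchmarks, with N = 1000)
    cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, N, N, N,
                alpha, A, N, B, N, beta, C, N);
}

All the performance engineering hides behind these calls; the interface itself is the same whether you link the reference BLAS, ATLAS, OpenBLAS or MKL.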
By the way, most (or even all) of the high performance BLAS implementations are NOT implemented in Fortran. ATLAS is implemented in C. GotoBLAS/OpenBLAS is implemented in C and its performance-critical parts in Assembler. Only the reference implementation of BLAS is implemented in Fortran. However, all these BLAS implementations provide a Fortran interface such that it can be linked against LAPACK (LAPACK gains all its performance from BLAS).
Optimized compilers play a minor role in this respect (and for GotoBLAS/OpenBLAS the compiler does not matter at all).
IMHO no BLAS implementation uses algorithms like the Coppersmith–Winograd algorithm or the Strassen algorithm. The likely reasons are:
- Maybe it's not possible to provide a cache-optimized implementation of these algorithms (i.e. you would lose more than you would win)
- These algorithms are numerically not stable. As BLAS is the computational kernel of LAPACK this is a no-go.
- Although these algorithms have a nice time complexity on paper, the Big O notation hides a large constant, so it only starts to become viable for extremely large matrices.
Edit/Update:
The new and groundbreaking papers for this topic are the BLIS papers. They are exceptionally well written. For my lecture "Software Basics for High Performance Computing" I implemented the matrix-matrix product following their paper. Actually I implemented several variants of the matrix-matrix product. The simplest variant is entirely written in plain C and has less than 450 lines of code. All the other variants merely optimize the loops
// Clear the MR x NR micro-block AB that accumulates the result.
for (l=0; l<MR*NR; ++l) {
    AB[l] = 0;
}
// kc rank-1 updates: in each step, add the outer product of an
// MR-long column of A and an NR-long row of B onto AB.
for (l=0; l<kc; ++l) {
    for (j=0; j<NR; ++j) {
        for (i=0; i<MR; ++i) {
            AB[i+j*MR] += A[i]*B[j];
        }
    }
    A += MR;
    B += NR;
}
The overall performance of the matrix-matrix product only depends on these loops. About 99.9% of the time is spent here. In the other variants I used intrinsics and assembler code to improve the performance. You can see the tutorial going through all the variants here:
ulmBLAS: Tutorial on GEMM (Matrix-Matrix Product)
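Purely for illustration (this is not the actual ulmBLAS/BLIS micro-kernel, just a sketch of the idea for MR = NR = 4 and single precision), the rank-1 update loops above could be vectorized with SSE intrinsics roughly like this:

#include <xmmintrin.h>   // SSE intrinsics

// Sketch of the AB[i+j*MR] += A[i]*B[j] update for MR = NR = 4, single
// precision. AB is assumed to be zeroed before the call; a real
// micro-kernel would keep all of AB in registers for the whole kc loop
// instead of loading and storing it every iteration.
void microkernel_sse(int kc, const float* A, const float* B, float* AB)
{
    for (int l = 0; l < kc; ++l) {
        __m128 a = _mm_loadu_ps(A);                  // A[0..3]
        for (int j = 0; j < 4; ++j) {
            __m128 b  = _mm_set1_ps(B[j]);           // broadcast B[j]
            __m128 ab = _mm_loadu_ps(AB + 4*j);      // column j of AB
            ab = _mm_add_ps(ab, _mm_mul_ps(a, b));   // AB(:,j) += A * B[j]
            _mm_storeu_ps(AB + 4*j, ab);
        }
        A += 4;
        B += 4;
    }
}

A real kernel would additionally keep the whole AB block in registers across the kc loop and operate on packed, aligned buffers for A and B.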
Together with the BLIS papers it becomes fairly easy to understand how libraries like Intel MKL can gain such performance. And why it does not matter whether you use row or column major storage!
The final benchmarks are here (we called our project ulmBLAS):
Benchmarks for ulmBLAS, BLIS, MKL, openBLAS and Eigen
Another Edit/Update:
I also wrote some tutorials on how BLAS is used for numerical linear algebra problems like solving a system of linear equations:
High Performance LU Factorization
(This LU factorization is for example used by Matlab for solving a system of linear equations.)
I originally hoped to find time to extend the tutorial to describe and demonstrate how to realise a highly scalable parallel implementation of the LU factorization like in PLASMA.
Ok, here you go: Coding a Cache Optimized Parallel LU Factorization
P.S.: I also did some experiments on improving the performance of uBLAS. It actually is pretty simple to boost (yeah, play on words :) ) the performance of uBLAS.
There is also a similar project with BLAZE.
So first of all BLAS is just an interface of about 50 functions. There are many competing implementations of the interface.
Firstly I will mention things that are largely unrelated:
- Fortran vs C, makes no difference
- Advanced matrix algorithms such as Strassen: implementations don't use them, as they don't help in practice
Most implementations break each operation into small-dimension matrix or vector operations in the more or less obvious way. For example, a large 1000x1000 matrix multiplication may be broken into a sequence of 50x50 matrix multiplications.
These fixed-size small-dimension operations (called kernels) are hardcoded in CPU-specific assembly code using several CPU features of their target:
- SIMD-style instructions
- Instruction Level Parallelism
- Cache-awareness
Furthermore these kernels can be executed in parallel with respect to each other using multiple threads (CPU cores), in the typical map-reduce design pattern.
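As a rough sketch of that overall structure (this is not any particular library's code: the kernel below is a naive stand-in for the hand-written assembly, and real implementations also pack the blocks into contiguous buffers first), using OpenMP for the thread-level parallelism:

// Block size of the hypothetical kernel; 50 matches the example above.
const int KB = 50;

// Naive stand-in for a hardcoded assembly kernel: multiplies one KB x KB
// tile of A with one KB x KB tile of B and accumulates into a KB x KB
// tile of C. lda is the row stride of the full row-major matrices.
static void kernel_tile(const float* A, const float* B, float* C, int lda)
{
    for (int i = 0; i < KB; ++i)
        for (int k = 0; k < KB; ++k)
            for (int j = 0; j < KB; ++j)
                C[i*lda + j] += A[i*lda + k] * B[k*lda + j];
}

// C += A*B for row-major n x n matrices, n a multiple of KB.
// C is assumed to be initialized (e.g. zeroed) by the caller.
// Each (ib, jb) tile of C is independent, so the tiles can be
// distributed over threads; the kb loop stays serial per tile.
void gemm_tiled(int n, const float* A, const float* B, float* C)
{
    #pragma omp parallel for collapse(2)
    for (int ib = 0; ib < n; ib += KB)
        for (int jb = 0; jb < n; jb += KB)
            for (int kb = 0; kb < n; kb += KB)
                kernel_tile(A + ib*n + kb, B + kb*n + jb, C + ib*n + jb, n);
}

The point is that each KB x KB piece of work touches only a few tens of kilobytes of data, so the kernel operands stay in cache while O(KB^3) arithmetic is done on them.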
Take a look at ATLAS which is the most commonly used open source BLAS implementation. It has many different competing kernels, and during the ATLAS library build process it runs a competition among them (some are even parameterized, so the same kernel can have different settings). It tries different configurations and then selects the best for the particular target system.
(Tip: That is why, if you are using ATLAS, you are better off building and tuning the library by hand for your particular machine than using a prebuilt one.)
First, there are more efficient algorithms for matrix multiplication than the one you're using.
Second, your CPU can do much more than one instruction at a time.
Your CPU executes 3-4 instructions per cycle, and if the SIMD units are used, each instruction processes 4 floats or 2 doubles. (of course this figure isn't accurate either, as the CPU can typically only process one SIMD instruction per cycle)
Third, your code is far from optimal:
- You're using raw pointers, which means that the compiler has to assume they may alias. There are compiler-specific keywords or flags you can use to tell the compiler that they don't alias (see the sketch after this list). Alternatively, you could use types other than raw pointers, which take care of the problem.
- You're thrashing the cache by performing a naive traversal of each row/column of the input matrices. You can use blocking to perform as much work as possible on a smaller block of the matrix, which fits in the CPU cache, before moving on to the next block.
- For purely numerical tasks, Fortran is pretty much unbeatable, and C++ takes a lot of coaxing to get up to a similar speed. It can be done, and there are a few libraries demonstrating it (typically using expression templates), but it's not trivial, and it doesn't just happen.
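Regarding the aliasing point above, a minimal sketch (note that __restrict__ is a GCC/Clang extension, spelled __restrict on MSVC; it is not part of ISO C++):

// Telling the compiler that x and y never overlap lets it hoist loads
// and vectorize the loop without emitting runtime alias checks.
template<class T>
void axpy(int n, T alpha, const T* __restrict__ x, T* __restrict__ y)
{
    for (int i = 0; i < n; ++i)
        y[i] += alpha * x[i];
}

The same qualifier could go on the A, B and C parameters of the mmult template from the question.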
(Comment: in Fortran, restrict (no aliasing) is the default, unlike in C / C++. And unfortunately ISO C++ doesn't have a restrict keyword, so you have to use __restrict__ on compilers that provide it as an extension.)

I don't know specifically about the BLAS implementation, but there are more efficient algorithms for matrix multiplication that have better than O(n^3) complexity. A well known one is the Strassen algorithm.
Most of the arguments to the second question -- assembler, splitting into blocks etc. (but not the sub-N^3 algorithms; they do not help here) -- play a role. But the slowness of your algorithm is caused essentially by the matrix size and the unfortunate arrangement of the three nested loops. Your matrices are so large that they do not fit at once in cache memory. You can rearrange the loops such that as much as possible is done on a row while it sits in cache, dramatically reducing cache reloads (by the way, splitting into small blocks has an analogous effect, best if the loops over the blocks are arranged similarly).

A model implementation for square matrices follows. On my computer its time consumption was about 1:10 compared to the standard implementation (like yours). In other words: never program a matrix multiplication along the "row times column" scheme that we learned in school. After rearranging the loops, more improvements are obtained by unrolling loops, assembler code etc.
void vector(int m, double ** a, double ** b, double ** c) {
    int i, j, k;
    for (i=0; i<m; i++) {
        double * ci = c[i];                       // row i of the result
        for (k=0; k<m; k++) ci[k] = 0.;
        for (j=0; j<m; j++) {
            double aij = a[i][j];                 // reuse a[i][j] for an entire row of b
            double * bj = b[j];
            for (k=0; k<m; k++) ci[k] += aij*bj[k];   // innermost loop runs along contiguous memory
        }
    }
}
One more remark: this implementation is even better on my computer than replacing everything with the BLAS routine cblas_dgemm (try it on your computer!). But calling dgemm_ of the Fortran library directly is much faster still (about 1:4). I think this routine is in fact not Fortran but assembler code (I do not know what is in the library, I don't have the sources). It is totally unclear to me why cblas_dgemm is not as fast, since to my knowledge it is merely a wrapper for dgemm_.
With respect to the original code in the MM multiply, memory references for most operations are the main cause of bad performance. Memory runs 100-1000 times slower than cache.
Most of the speed-up comes from employing loop optimization techniques on this triple-loop MM multiply. Two main loop optimization techniques are used: unrolling and blocking. With respect to unrolling, we unroll the outermost two loops and block it for data reuse in cache. Outer loop unrolling helps optimize data access temporally by reducing the number of memory references to the same data at different times during the entire operation. Blocking the loop index at a specific size helps with keeping the data in cache. You can choose to optimize for the L2 cache or the L3 cache.
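Here is a hedged sketch of what unrolling the two outer loops can look like (2x2 register blocking; the function name and block size are just for illustration, and n is assumed to be even):

// Unroll the two outer loops by 2 so that every loaded element of A and B
// is reused for a 2x2 block of C kept in local variables (registers).
// Row-major n x n matrices, n even; accumulates into C.
void mm_unrolled_2x2(int n, const float* A, const float* B, float* C)
{
    for (int i = 0; i < n; i += 2) {
        for (int j = 0; j < n; j += 2) {
            float c00 = 0, c01 = 0, c10 = 0, c11 = 0;
            for (int k = 0; k < n; ++k) {
                float a0 = A[i*n + k],  a1 = A[(i+1)*n + k];
                float b0 = B[k*n + j],  b1 = B[k*n + j + 1];
                c00 += a0*b0;  c01 += a0*b1;
                c10 += a1*b0;  c11 += a1*b1;
            }
            C[i*n + j]         += c00;
            C[i*n + j + 1]     += c01;
            C[(i+1)*n + j]     += c10;
            C[(i+1)*n + j + 1] += c11;
        }
    }
}

Each iteration of the k loop now loads four values and performs eight floating point operations, so the ratio of arithmetic to memory traffic doubles compared to the naive loop.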
This is a realistic speed-up. For an example of what can be done with SIMD assembler over C++ code, see some example iPhone matrix functions - these were over 8x faster than the C version, and aren't even "optimized" assembly - there's no pipelining yet, and there are unnecessary stack operations.
Also your code is not "restrict correct" - how does the compiler know that when it modifies C, it isn't modifying A and B?
(Comment: see also the compiler flag -fstrict-aliasing. There's also a better explanation of "restrict" here: cellperformance.beyond3d.com/articles/2006/05/…)

I find the other answers somewhat lacking. It doesn't matter that much what BLAS implementations are doing in detail; it matters what OP's code is not doing: taking advantage of SIMD instruction sets like SSE (AVX was not yet available in 2009), taking advantage of the cache, and taking advantage of compiler optimization. Three 1000x1000 single precision matrices fit only in the very largest caches of 2009, and without caching all data has to be loaded from memory. According to cpu-monkey.com, a Core 2 Quad Q9650 can achieve about 17 GB/s memory bandwidth, and 10 1000x1000 matrix multiplications could require up to 10 * 1e9 memory accesses, or 40 GB for single precision floats. Even without any additional time for computation and assuming perfect utilization of RAM bandwidth, the required time would be about 2.3 seconds.
By working on smaller blocks, like BLAS implementations do, you can fit these blocks into cache and make use of the cache's much higher bandwidth (probably at least 32 bytes per clock cycle for level 1 cache, even in 2009). If you do it well enough that memory bandwidth no longer bottlenecks you, you are bottlenecked by computational performance. In 2009 you would be able to use SSE4.1, which should achieve a throughput of 4 single precision FLOPs per clock cycle using mulps and addps. At OP's 2.6 GHz that makes 10.4 GFLOP/s. Executing 20 * 1e9 operations (not 10 * 1e9, because you need both additions and multiplications) in 1.32 seconds means >15 GFLOP/s, so the BLAS implementation uses some trick to get around that limit. That trick is probably to interleave multiplications and additions so that the CPU can schedule them to different execution units (not sure how many a 2009 CPU had, but Sandy Bridge had 6 per core, so 2+ per core seems reasonable) and additions and multiplications execute simultaneously. That raises the limit to 20.8 GFLOP/s, of which the BLAS implementation achieves 73%.
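A schematic illustration of that interleaving idea (a toy dot product rather than the actual GEMM kernel; the function name and layout are made up for this sketch): independent accumulator chains give the out-of-order core enough parallel mulps/addps work to keep both units busy.

#include <xmmintrin.h>   // SSE intrinsics

// Four independent accumulators mean the addps of one chain can overlap
// with the mulps of another. n is assumed to be a multiple of 16.
float sdot_ilp(int n, const float* x, const float* y)
{
    __m128 acc0 = _mm_setzero_ps(), acc1 = _mm_setzero_ps();
    __m128 acc2 = _mm_setzero_ps(), acc3 = _mm_setzero_ps();
    for (int i = 0; i < n; i += 16) {
        acc0 = _mm_add_ps(acc0, _mm_mul_ps(_mm_loadu_ps(x+i),    _mm_loadu_ps(y+i)));
        acc1 = _mm_add_ps(acc1, _mm_mul_ps(_mm_loadu_ps(x+i+4),  _mm_loadu_ps(y+i+4)));
        acc2 = _mm_add_ps(acc2, _mm_mul_ps(_mm_loadu_ps(x+i+8),  _mm_loadu_ps(y+i+8)));
        acc3 = _mm_add_ps(acc3, _mm_mul_ps(_mm_loadu_ps(x+i+12), _mm_loadu_ps(y+i+12)));
    }
    // reduce the four partial sums
    __m128 s = _mm_add_ps(_mm_add_ps(acc0, acc1), _mm_add_ps(acc2, acc3));
    float tmp[4];
    _mm_storeu_ps(tmp, s);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

With a single accumulator, the loop-carried dependency on it limits throughput to one addps per its latency rather than one per cycle.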
It is easy to improve OP's code to come closer to those 73%. Simply compiling with -O3 -march=core2 gives me about 5 seconds on a 3.7 GHz i7-8700K, compared to over 40 seconds without optimization. Slightly reordering the computation and unrolling the loops improves that to 3.5 seconds. This slightly improved code achieves about 33% of the performance of the highly optimized OpenBLAS code on my system, which in turn achieves about 84% of my system's theoretical maximum. OP's original code achieves <10% of a highly optimized implementation, so it probably was compiled without the highest compiler optimization level and without SSE support. Of course compiler optimization has come a long way since 2009, so I would have to rerun the experiment with a 2009 compiler for a more accurate answer.
For many reasons.
First, Fortran compilers are highly optimized, and the language allows them to be. C and C++ are very loose in terms of array handling (e.g. the case of pointers referring to the same memory area). This means that the compiler cannot know in advance what to do, and is forced to create generic code. In Fortran, your cases are more streamlined, and the compiler has better control of what happens, allowing it to optimize more (e.g. using registers).
Another thing is that Fortran stores stuff column-wise, while C stores data row-wise. I haven't checked your code, but be careful of how you perform the product. In C you must scan row-wise: this way you scan your array along contiguous memory, reducing the cache misses. Cache misses are the first source of inefficiency.
Third, it depends on the BLAS implementation you are using. Some implementations might be written in assembler, and optimized for the specific processor you are using. The netlib version is written in Fortran 77.
Also, you are doing a lot of operations, most of them repeated and redundant. All those multiplications to obtain the index are detrimental to performance. I don't really know how this is done in BLAS, but there are a lot of tricks to prevent expensive operations.
For example, you could rework your code this way:
template<class ValT>
void mmult(const ValT* A, int ADim1, int ADim2, const ValT* B, int BDim1, int BDim2, ValT* C)
{
    if ( ADim2!=BDim1 ) throw std::runtime_error("Error sizes off");
    memset((void*)C,0,sizeof(ValT)*ADim1*BDim2);
    int cc2,cc1,cr1, a1,a2,a3;
    for ( cc2=0 ; cc2<BDim2 ; ++cc2 ) {
        a1 = cc2*ADim2;                 // hoist index computations out of the inner loops
        a3 = cc2*BDim1;
        for ( cc1=0 ; cc1<ADim2 ; ++cc1 ) {
            a2 = cc1*ADim1;
            ValT b = B[a3+cc1];         // load the B element once per inner loop
            for ( cr1=0 ; cr1<ADim1 ; ++cr1 ) {
                C[a1+cr1] += A[a2+cr1]*b;
            }
        }
    }
}
Try it, I am sure you will save something.
On your #1 question: the reason is that matrix multiplication scales as O(n^3) if you use a trivial algorithm. There are algorithms that scale much better.