
I'm using the following two functions to calculate the determinant of A.

Would this code be considered to have an asymptotic time complexity of O(n^3)? The recursion is throwing me off, but I think that it would, since it would end up looking something like n^3 + n^3 + ... based on however many cofactors of the original are needed.

void MatrixOps::cofactor(const std::vector<Entries>& A, std::vector<Entries>& C, int32_t n, int32_t k) {
    for (int i = 1; i < n; i++) {        // skip row 0, the row we expand along
        for (int j = 0; j < n; j++) {    // skip column k
            if (j == k) continue;
            C.push_back(A[i * n + j]);   // copy the surviving (n-1)x(n-1) entries
        }
    }
}
Entries MatrixOps::determinant(const std::vector<Entries>& A, const int32_t n) {
    if (n == 1) return A[0];
    if (n == 2) return A[0] * A[3] - A[1] * A[2];

    Entries determinant = 0.0;
    int32_t sign = 1;
    std::vector<Entries> C;
    C.reserve((n - 1) * (n - 1));

    for (int k = 0; k < n; k++) {
        cofactor(A, C, n, k);                    // build the k-th minor
        determinant += sign * A[k] * MatrixOps::determinant(C, n - 1);
        sign = -sign;                            // alternate signs along row 0
        C.clear();
    }

    return determinant;
}
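
For context, a minimal driver showing how these functions might be called. It assumes Entries is double (as the followup comments suggest) and that the two functions are static members of a MatrixOps class shaped like the declaration below, which is a guess at the original header:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using Entries = double;  // assumption, per the followup comments

    // Hypothetical declaration matching the definitions above.
    struct MatrixOps {
        static void cofactor(const std::vector<Entries>& A, std::vector<Entries>& C,
                             int32_t n, int32_t k);
        static Entries determinant(const std::vector<Entries>& A, int32_t n);
    };

    int main() {
        // 3x3 matrix, row-major; det = 2*(3*4 - 2*1) - 0 + 1*(1*1 - 3*1) = 18
        std::vector<Entries> A = {2, 0, 1,
                                  1, 3, 2,
                                  1, 1, 4};
        std::printf("det = %f\n", MatrixOps::determinant(A, 3));
    }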
asked Dec 31, 2022 at 2:40

2 Answers


It's much worse than cubic time.

At every "level" of the recursion, there are n recursive calls, each computing the determinant of a matrix that is one smaller:

T(n) = n * T(n - 1)

I left a bunch of things out there (which if anything means I'm underestimating the cost) to end up with a nicer formula: n * (n - 1) * (n - 2) ... which you probably recognize as n!.
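
Filling in one of the omitted pieces: each of the n iterations also copies an (n-1) x (n-1) cofactor, so a fuller recurrence is roughly

T(n) = n * (T(n - 1) + (n - 1)^2)

and the T(n - 1) term still dominates, unrolling to n * (n - 1) * (n - 2) * ... * 2 * T(1) = O(n!).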

Cofactor expansion, or Laplace expansion, which is what this algorithm is, is rarely used computationally for that reason.

There are other algorithms that compute the determinant that do run in cubic time, for example the Bareiss algorithm (suitable for integers, but be careful with overflow) or LU decomposition followed by taking the product of the entries on the diagonal (not very integer-friendly).
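
For reference, here is a minimal sketch of the LU route: plain Gaussian elimination with partial pivoting, where the determinant is the product of the pivots with one sign flip per row swap. It assumes Entries is double (as the followup comments suggest) and the same row-major storage as the question's code; the three nested loops make it O(n^3).

    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    using Entries = double;  // assumption: matches the OP's Entries type

    // Determinant via in-place Gaussian elimination (A is taken by value
    // so the caller's matrix is left untouched).
    Entries determinant_lu(std::vector<Entries> A, std::size_t n) {
        Entries det = 1.0;
        for (std::size_t col = 0; col < n; ++col) {
            // Partial pivoting: pick the largest remaining entry in this
            // column to keep the elimination numerically stable.
            std::size_t pivot = col;
            for (std::size_t row = col + 1; row < n; ++row)
                if (std::abs(A[row * n + col]) > std::abs(A[pivot * n + col]))
                    pivot = row;
            if (A[pivot * n + col] == 0.0) return 0.0;  // singular matrix

            if (pivot != col) {
                for (std::size_t j = 0; j < n; ++j)
                    std::swap(A[col * n + j], A[pivot * n + j]);
                det = -det;  // each row swap flips the determinant's sign
            }

            det *= A[col * n + col];  // accumulate the pivot product

            // Eliminate everything below the pivot.
            for (std::size_t row = col + 1; row < n; ++row) {
                Entries factor = A[row * n + col] / A[col * n + col];
                for (std::size_t j = col; j < n; ++j)
                    A[row * n + j] -= factor * A[col * n + j];
            }
        }
        return det;
    }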

answered Dec 31, 2022 at 3:33
  • I see. Two followups: 1. For double values, which would you recommend? I may adjust the algorithm. 2. My use case is mostly matrices where N <= 5, and 5^3 = 125 while 5! = 120, so would you say cofactor expansion is better, or at least equivalent, for matrices where N <= 5? Commented Dec 31, 2022 at 7:50
  • @user8393252 You cannot directly compare asymptotic complexities like that, by plugging a specific value into n and turning it into an inequality. If you want to know which is faster for n <= 5, your best option is to implement both and benchmark them. All asymptotic complexity tells you is that one is going to be faster than the other for some n and every larger n; it won't tell you which n that is. Commented Dec 31, 2022 at 8:35
  • @user8393252 LU decomposition is very useful anyway (to solve Ax = b), so you may as well implement it; it's a small step from there to the determinant. Commented Dec 31, 2022 at 14:15
  • @slepic I keep hearing this word "benchmarking"; how is this typically done? Is there software that can track the runtime of a program? Commented Dec 31, 2022 at 20:49
  • @user8393252 There are ways to track how long a program runs, but that's not a good way to benchmark the performance of code units. Measuring whole-program execution time includes interaction with the OS, loading the binary into memory, time when the CPU is busy with other processes, etc. You usually want to track how long it takes from calling a function until it returns, and nothing more before or after. You do that by wrapping the tested function with benchmarking code written in the same language; there are entire benchmarking frameworks/libraries to help with that (a minimal timing sketch follows these comments). Commented Jan 1, 2023 at 13:20
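
For illustration, a minimal std::chrono micro-benchmark along the lines of the last comment. It assumes the determinant_lu sketch from the first answer is in scope (the forward declaration below mirrors it); a real framework such as Google Benchmark would also handle warm-up, statistical repetition, and keeping the optimizer honest.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Assumed available: the LU sketch from the first answer.
    double determinant_lu(std::vector<double> A, std::size_t n);

    int main() {
        std::vector<double> A = {2, 0, 1,
                                 1, 3, 2,
                                 1, 1, 4};   // 3x3 test matrix, row-major
        const int reps = 100000;             // repeat so timer resolution matters less
        double sink = 0.0;                   // consume results so calls aren't optimized away

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < reps; ++i)
            sink += determinant_lu(A, 3);    // time only the function under test
        auto stop = std::chrono::steady_clock::now();

        std::chrono::duration<double, std::nano> per_call = (stop - start) / reps;
        std::printf("%.1f ns/call (sink = %f)\n", per_call.count(), sink);
    }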

The complexity is O(n^n), also called exponential, which is the highest complexity you can have. The exact calculation of that complexity was already given by harold. Now, about the code: the const int32_t n parameter is redundant; it should always be sqrt(A.size()) for determinant and sqrt(C.size()) for cofactor. Also, what should happen if n = 0?
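
A minimal sketch of that suggestion, with a hypothetical helper name; it assumes row-major square storage with double entries, and rejects inputs whose size is not a perfect square:

    #include <cmath>
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Hypothetical helper: derive n from the vector itself instead of
    // passing it separately, and reject non-square input.
    std::size_t side_length(const std::vector<double>& A) {
        auto n = static_cast<std::size_t>(
            std::llround(std::sqrt(static_cast<double>(A.size()))));
        if (n * n != A.size())
            throw std::invalid_argument("matrix is not square");
        return n;  // n == 0 for an empty matrix; the caller decides what that means
    }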

answered Dec 31, 2022 at 13:23
  • Can the downvoter give some explanation? Commented Dec 31, 2022 at 16:47
  • IDK, that wasn't me. But the small o should really be a big O (it's not just arbitrary capitalization; they have different meanings), and it's not the highest complexity either (it's way up there among the worst complexities you'd probably ever actually encounter, but for example O(A(n, n)) is worse, using the Ackermann function). "Rounding up" O(n!) to O(n^n) is fine IMO (it's not uncommon to do that), so I hope that wasn't the main objection. Commented Dec 31, 2022 at 17:05
  • @harold When I was at university, we learned that O(n^n) is the worst complexity, but that was about 15 years ago. Commented Dec 31, 2022 at 17:10
  • @convert I don't see how passing n is in any way redundant; it avoids taking a sqrt(). Commented Dec 31, 2022 at 20:51
  • @user8393252 n should normally contain exactly that sqrt value derived from the vector's size. However, if you pass some different value, there can be a mismatch. Commented Dec 31, 2022 at 20:54
