In an examination we were asked to compute the determinant of a general square matrix. I first wrote the usual program based on the Laplace formula for the determinant, i.e. a recursive solution.
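A minimal sketch of that recursive approach (using `std::vector` for brevity; the exact code I wrote in the exam may differ in details):

#include <vector>

// Laplace (cofactor) expansion along the first row: O(N!) running time.
double laplaceDet(const std::vector<std::vector<double>> &m) {
    int n = static_cast<int>(m.size());
    if (n == 1) {
        return m[0][0];
    }
    double det = 0, sign = 1;
    for (int col = 0; col < n; ++col) {
        // Build the submatrix obtained by deleting row 0 and column col.
        std::vector<std::vector<double>> sub(n - 1, std::vector<double>(n - 1));
        for (int i = 1; i < n; ++i) {
            for (int j = 0, k = 0; j < n; ++j) {
                if (j != col) {
                    sub[i - 1][k++] = m[i][j];
                }
            }
        }
        det += sign * m[0][col] * laplaceDet(sub);
        sign = -sign;
    }
    return det;
}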
Then I thought of reducing the matrix to an upper triangular matrix instead, so that the product of the diagonal elements gives the determinant. The code follows.
#include <iostream>
using namespace std;
const int N = 5;
int main()
{
    double array[N][N], det = 1;
    // initialise
    cout << "Enter " << N << "*" << N << " matrix:\n";
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            cin >> array[i][j];
        }
    }
    for (int i = 0; i < N; ++i)
    {
        bool flag = false; // flag := are all entries below a[i][i] including it zero?
        if (array[i][i] == 0) // If a[i][i] is zero then check rows below and swap
        {
            flag = true;
            for (int j = i; j < N; ++j)
            {
                if (array[j][i] != 0) {
                    det *= -1;
                    for (int k = 0; k < N; ++k) {
                        double t = array[i][k]; // swapping
                        array[i][k] = array[j][k];
                        array[j][k] = t;
                        flag = false;
                    }
                }
            }
        }
        if (flag == true) {
            det = 0;
            break;
        }
        else {
            for (int j = i+1; j < N; ++j)
            {
                double store = array[j][i];
                for (int k = i; k < N; ++k) {
                    array[j][k] -= (array[i][k]*store)/array[i][i];
                }
            }
            det *= array[i][i];
        }
    }
    cout << "Determinant: " << det;
    return 0;
}
The problem with this approach is that the matrix needs to use a floating-point datatype. Worse, if I use `float` for the entries, the result is wrong in the least significant place (the ones digit) even when all entries are integers. To overcome that, we can use `double`.
But I still:
- need tips on improving the performance of this method
- need to know whether this method is better than the simple Laplace formula
1 Answer
Some general remarks:
- Don't use `using namespace std;`, see for example Why is "using namespace std;" considered bad practice?.
- Don't put all the code into `main()`. Computing the determinant in a separate function increases the overall clarity of the program and makes it easier to add test cases. In addition, that gives you a function which can be reused in other programs.
- Swapping two values can be done in C++ simply with `std::swap`.
- `return 0;` at the end of the main program can be omitted.
At some places your code does unnecessary work:
- If the diagonal element `a[i][i]` is zero, then all subsequent rows with `array[j][i] != 0` are swapped with the `i`-th row. It suffices to swap one row with a non-zero leading element.
- When swapping rows `i` and `j`, it suffices to swap the elements starting at column `i`, because the preceding elements are not used anymore.
- Similarly, when adding a multiple of row `i` to row `j`, it suffices to update the elements starting at column `i + 1`, because all values of column `i` will not be read anymore.
As already said in the comments, Gaussian elimination is faster than the Laplace expansion for large matrices (\$ O(N^3) \$ vs \$ O(N!) \$ complexity). However, the "pivoting" (i.e. which rows to swap if a diagonal element is zero) can be improved. A common choice is "partial pivoting":
In partial pivoting, the algorithm selects the entry with largest absolute value from the column of the matrix that is currently being considered as the pivot element. Partial pivoting is generally sufficient to adequately reduce round-off error. ...
Using a (nested) `std::vector` for the matrix storage makes it easier to pass it as an argument to another function. In addition, it allows swapping two rows in constant time with `vector::swap`.
Putting it all together, the `determinant` function could look like this:
#include <vector>
#include <cmath>

double determinant(std::vector<std::vector<double>> &matrix) {
    int N = static_cast<int>(matrix.size());
    double det = 1;
    for (int i = 0; i < N; ++i) {
        // Partial pivoting: pick the entry with the largest absolute
        // value in column i (at or below the diagonal) as the pivot.
        double pivotElement = matrix[i][i];
        int pivotRow = i;
        for (int row = i + 1; row < N; ++row) {
            if (std::abs(matrix[row][i]) > std::abs(pivotElement)) {
                pivotElement = matrix[row][i];
                pivotRow = row;
            }
        }
        if (pivotElement == 0.0) {
            return 0.0; // the whole column is zero, so the matrix is singular
        }
        if (pivotRow != i) {
            matrix[i].swap(matrix[pivotRow]); // constant-time row swap
            det *= -1.0;                      // a row swap flips the sign
        }
        det *= pivotElement;
        // Eliminate column i below the diagonal; entries in columns <= i
        // are never read again, so they need not be updated.
        for (int row = i + 1; row < N; ++row) {
            for (int col = i + 1; col < N; ++col) {
                matrix[row][col] -= matrix[row][i] * matrix[i][col] / pivotElement;
            }
        }
    }
    return det;
}
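For example, a small test driver could look like this (the sample matrix is a hypothetical one of my choosing; note that `determinant` modifies its argument, so pass a copy if you still need the matrix afterwards):

#include <iostream>
#include <vector>

// Assumes the determinant() function from above is declared before main().
int main() {
    std::vector<std::vector<double>> m = {
        {6, 1, 1},
        {4, -2, 5},
        {2, 8, 7}
    };
    std::cout << "Determinant: " << determinant(m) << '\n'; // prints -306
}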
From a comment on the question: the Laplace expansion has `O(n!)` complexity, whereas decomposition methods such as the one you chose have `O(n^3)` complexity and are as such much more efficient for larger matrices. You can avoid rounding problems by using adapted algorithms, such as the Bareiss algorithm, in which all intermediate divisions are exact for integer matrices.
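For illustration, a minimal sketch of the Bareiss idea for integer matrices could look like this (the function name `determinantBareiss` is my own; the divisions leave no remainder, but intermediate products can overflow for large entries):

#include <vector>

// Bareiss' fraction-free elimination: every division by prevPivot is exact,
// so integer matrices yield an exact integer determinant (no rounding).
long long determinantBareiss(std::vector<std::vector<long long>> matrix) {
    int N = static_cast<int>(matrix.size());
    if (N == 0) {
        return 1; // determinant of an empty matrix, by convention
    }
    long long sign = 1, prevPivot = 1;
    for (int i = 0; i < N - 1; ++i) {
        if (matrix[i][i] == 0) { // pivot: find a row with a non-zero leading element
            int pivotRow = i + 1;
            while (pivotRow < N && matrix[pivotRow][i] == 0) {
                ++pivotRow;
            }
            if (pivotRow == N) {
                return 0; // the matrix is singular
            }
            matrix[i].swap(matrix[pivotRow]);
            sign = -sign;
        }
        for (int row = i + 1; row < N; ++row) {
            for (int col = i + 1; col < N; ++col) {
                matrix[row][col] = (matrix[row][col] * matrix[i][i]
                                    - matrix[row][i] * matrix[i][col]) / prevPivot;
            }
            matrix[row][i] = 0;
        }
        prevPivot = matrix[i][i];
    }
    return sign * matrix[N - 1][N - 1];
}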