Don’t Write "Three-Star Code"
This is the major issue. An array of pointers to pointers to pointers to arrays is just a terrible data structure, and you should never use it. It uses memory inefficiently. It requires three dereferences with poor locality, so they’re likely to cause cache misses. It requires a huge number of heap allocations and deallocations that are easy to get wrong.
If the rows and columns are a known constant size, you want a "rectangular" or "box-shaped" array of constant dimension. You can create one with a single heap allocation, or even automatically on the stack. Access to any element takes constant time. The array can even grow along one dimension while keeping all these advantages, allowing you to dynamically add rows to a two-dimensional array or layers to a three-dimensional one.
If your rows are so ragged that it wouldn’t be acceptable to allocate a maximum dimension for them, you have a sparse matrix, and would be better off using a sparse format.
In this instance, you have a very wordy comment describing which indices are defined (excerpted):
// + (1) For all i_ < i, arr[i_] is allocated
// + (2) For all i_ < i, j_ < y, arr[i_][j_] is defined
// + (3) For all j_ < j, arr[i][j_] is defined
where y is defined as some kind of fixed element size. This suggests that you flatten the array into a single linear array whose logical [i][j] indices can be converted into physical offsets and looked up in constant time. Just don’t store any of the missing elements. If it’s worth the effort, that will improve your code more than anything else. You can think of this as taking advantage of the structure of the array to make a perfect hash function.
If calculating and debugging the index expressions is too much math, at least consider whether you can set the first two dimensions of the array to a maximum size.