Explanation: Although merge sort runs in Θ(n lg n) worst-case time and insertion sort runs in Θ(n^2) worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes. This sorting implementation should still be stable.
Recursive Merge sort subroutine method:
private static void recursiveMergeSort(double[] arr, int lowerBound, int upperBound) {
    if (lowerBound < upperBound) {
        // Split the subarray into halves
        int mid = lowerBound + (upperBound - lowerBound) / 2;
        recursiveMergeSort(arr, lowerBound, mid);
        recursiveMergeSort(arr, mid + 1, upperBound);
        merge(arr, lowerBound, mid, upperBound);
    }
}
Merge method (note: I would like to replace the while loops with for and if-else statements; a sketch of that rewrite appears after the method below):
private static void merge(double[] arr, int left, int mid, int right) {
    int i = 0, j = 0, k = left;
    //System.out.println("used merge");
    // Sizes of the temporary arrays to be copied
    int n1 = (mid - left) + 1;
    int n2 = (right - mid);
    // Create temporary arrays and copy data
    double[] leftTemp = Arrays.copyOfRange(arr, left, mid + 1);
    double[] rightTemp = Arrays.copyOfRange(arr, mid + 1, right + 1);
    // Merge the temp arrays back into arr[left...right]
    while (i < n1 && j < n2) {
        if (leftTemp[i] <= rightTemp[j]) {
            arr[k++] = leftTemp[i++];
        } else {
            arr[k++] = rightTemp[j++];
        }
    }
    // Copy remaining elements, if any
    while (i < n1) {
        arr[k++] = leftTemp[i++];
    }
    while (j < n2) {
        arr[k++] = rightTemp[j++];
    }
}
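A sketch of that rewrite (my assumption of how it could look; the name mergeWithFor is just a placeholder, and it reuses the same leftTemp/rightTemp copies as above): the three while loops collapse into a single for loop over arr[left...right], with an if-else deciding which temporary array to take from.

private static void mergeWithFor(double[] arr, int left, int mid, int right) {
    // Copy the two runs, exactly as in the merge method above
    double[] leftTemp = Arrays.copyOfRange(arr, left, mid + 1);
    double[] rightTemp = Arrays.copyOfRange(arr, mid + 1, right + 1);
    int n1 = leftTemp.length;
    int n2 = rightTemp.length;
    int i = 0, j = 0;
    // One pass over every slot of arr[left...right]: take from the left run
    // while it still has elements and its head is <= the right head
    for (int k = left; k <= right; k++) {
        if (i < n1 && (j >= n2 || leftTemp[i] <= rightTemp[j])) {
            arr[k] = leftTemp[i++]; // <= keeps the merge stable
        } else {
            arr[k] = rightTemp[j++];
        }
    }
}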
Insertion sort subroutine method:
private static void insertionSort(double[] arr, int left, int right) {
    for (int i = left + 1; i <= right; i++) {
        double key = arr[i];
        int j = i - 1;
        while (j >= left && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}
Hybrid merge/insertion sorting method:
OPTIMIZED is a cutoff value that is best set somewhere in [25, 100]; a sketch of its declaration follows the method below.
private static void insertionRecursiveMergeSort(double[] arr, int left, int right) {
    // If the subarray size is <= OPTIMIZED, use the insertion sort subroutine
    if (right <= left + OPTIMIZED - 1) {
        insertionSort(arr, left, right);
    } else {
        int mid = left + (right - left) / 2;
        insertionRecursiveMergeSort(arr, left, mid);
        insertionRecursiveMergeSort(arr, mid + 1, right);
        merge(arr, left, mid, right);
    }
}
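For reference, a minimal sketch of how the cutoff and entry point might be declared (the wrapper name sort is an assumption, not part of my posted code):

private static final int OPTIMIZED = 50; // cutoff, best set somewhere in [25, 100]

public static void sort(double[] arr) {
    if (arr != null && arr.length > 1) {
        insertionRecursiveMergeSort(arr, 0, arr.length - 1);
    }
}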
For test runs, I used array sizes 1M, 2M, 3M, and 5M with OPTIMIZED set to 25, 50, 100, and 125.
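Roughly, such a run could be timed along these lines (a sketch only, not my actual test code; the random seed and output format are assumptions):

import java.util.Random;

public static void main(String[] args) {
    int[] sizes = {1_000_000, 2_000_000, 3_000_000, 5_000_000};
    Random random = new Random(42); // fixed seed for repeatable runs
    for (int n : sizes) {
        double[] arr = random.doubles(n).toArray();
        long start = System.nanoTime();
        insertionRecursiveMergeSort(arr, 0, arr.length - 1);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("n = " + n + ": " + elapsedMs + " ms");
    }
}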
1 Answer
Optimization notes.
Allocate the temporary array once, and pass it down to the recursive invocations.
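Something along these lines (a sketch; the method and variable names are placeholders, not from the original code): the scratch buffer is created once in a wrapper and threaded through every recursive call and merge.

private static void mergeSortWithBuffer(double[] arr) {
    double[] buffer = new double[arr.length]; // allocated exactly once
    mergeSortWithBuffer(arr, buffer, 0, arr.length - 1);
}

private static void mergeSortWithBuffer(double[] arr, double[] buffer, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSortWithBuffer(arr, buffer, left, mid);
        mergeSortWithBuffer(arr, buffer, mid + 1, right);
        merge(arr, buffer, left, mid, right);
    }
}

private static void merge(double[] arr, double[] buffer, int left, int mid, int right) {
    // Copy the range into the shared buffer instead of allocating two temp arrays
    System.arraycopy(arr, left, buffer, left, right - left + 1);
    int i = left, j = mid + 1;
    for (int k = left; k <= right; k++) {
        if (i <= mid && (j > right || buffer[i] <= buffer[j])) {
            arr[k] = buffer[i++];
        } else {
            arr[k] = buffer[j++];
        }
    }
}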
(Deleted:) Do not fall back to the insertion sort immediately. Postpone it until all the recursions complete, and insertion sort the entire array once. It feels scary to insertion sort the entire array. Wouldn't it degrade performance to \$O(n^2)\$? The answer is no. The array is already "almost sorted": every element is at most OPTIMIZED away from its final destination. Therefore, the inner loop does at most OPTIMIZED iterations per element, and the overall complexity is \$O(n \cdot \texttt{OPTIMIZED})\$. This observation also hints that OPTIMIZED should be in a ballpark of \$\log n\$. (End of deleted text.)

Edit: I don't know what I was thinking. The above is totally wrong (it is valid for quicksort, but not here), except the observation that OPTIMIZED should be in a ballpark of \$\log n\$.

A little-known trick allows cutting the number of comparisons in the insertion sort in half. Instead of
while (j >= left && arr[j] > key)
consider first comparing key to arr[left]:

if (key < arr[left]) {
    // The key shall land at the left. No need to compare values anymore
    for (j = i; j > left; j--) {
        arr[j] = arr[j - 1];
    }
} else {
    // arr[left] is a natural sentinel. No need to compare indices anymore
    for (j = i; arr[j - 1] > key; j--) {
        arr[j] = arr[j - 1];
    }
}
arr[j] = key;
Now if you opt for a postponed insertion sort, it could be optimized even more: find the min among the first OPTIMIZED elements and swap it with arr[0]. Now there is no need to test for key < arr[left] - it is guaranteed to fail. Go straight into the unguarded loop.
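Put together, an unguarded insertion sort with the minimum swapped to the front could look like this sketch (the method name and the whole-range minimum scan are assumptions; note the initial swap can break stability when the minimum value occurs more than once):

private static void sentinelInsertionSort(double[] arr, int left, int right) {
    // Move the smallest element to arr[left] so it acts as a sentinel
    int minIndex = left;
    for (int i = left + 1; i <= right; i++) {
        if (arr[i] < arr[minIndex]) {
            minIndex = i;
        }
    }
    double tmp = arr[left];
    arr[left] = arr[minIndex];
    arr[minIndex] = tmp;

    // Unguarded insertion sort: no j >= left test in the inner loop,
    // because arr[left] <= key always stops it
    for (int i = left + 1; i <= right; i++) {
        double key = arr[i];
        int j = i;
        while (arr[j - 1] > key) {
            arr[j] = arr[j - 1];
            j--;
        }
        arr[j] = key;
    }
}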