You create too many arrays while sorting. You make a copy, then the recursive call makes another copy (half the size), and so on all the way down to the bottom; then the same happens again for the other half of the array. For large arrays this puts a lot of pressure on the GC.
If you don't mind reading some C++, you could take a look at how to save on memory allocation in my answer to this question:
Implementation of the merge_sort - comparing the timing of an array versus a vector
In that sample the array is sorted in place. Since your approach is to return a new array, you can adapt my solution by making a copy at the start, sorting that copy in place, and returning it. You will also need auxiliary storage for the merge, but all in all three arrays are enough: one for the result and two for aux storage, and the aux storage is reused at every level of the recursion.
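Here is a minimal sketch of that idea in Java (assuming your code is in a GC'd language; adjust for yours). It uses the returned copy plus a single reusable aux buffer, which is a slight simplification of the three-array layout in the linked C++ answer, but the allocation pattern is the same: nothing is allocated inside the recursion.

```java
import java.util.Arrays;

public final class MergeSortSketch {

    /** Returns a sorted copy of the input; allocates only the result array and one aux buffer. */
    public static int[] sorted(int[] input) {
        int[] result = Arrays.copyOf(input, input.length); // the copy we return
        int[] aux = new int[input.length];                 // reused at every recursion level
        sort(result, aux, 0, result.length);
        return result;
    }

    // Sorts a[lo, hi) in place, using aux as scratch space.
    private static void sort(int[] a, int[] aux, int lo, int hi) {
        if (hi - lo < 2) return;
        int mid = (lo + hi) >>> 1;
        sort(a, aux, lo, mid);
        sort(a, aux, mid, hi);
        merge(a, aux, lo, mid, hi);
    }

    // Merges the two sorted halves a[lo, mid) and a[mid, hi) back into a.
    private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo);          // stage into aux, no new allocation
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) a[k++] = (aux[j] < aux[i]) ? aux[j++] : aux[i++];
        while (i < mid) a[k++] = aux[i++];
        while (j < hi)  a[k++] = aux[j++];
    }
}
```

With this layout the total allocation is O(n) and independent of the recursion depth, instead of O(n log n) worth of temporary arrays.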