Your code never even looks at the sorted array. An all too smart compiler could figure out that the whole sorting is useless and optimize it out. You could avoid this, for example, by assigning a randomly picked element of the sorted array to a `volatile` variable. Don't make the whole array `volatile`, however, or you'll lose a lot of useful optimizations.

You are sorting the same permutation over and over again. You'd get more representative results if you used different permutations. This is especially important since the run-time of some sorting algorithms depends on the input. Again, a very smart compiler might figure out that sorting the same data again is pointless. Since the expected run-time of sorting such small arrays is tiny, it might seem reasonable to time a larger number of runs to become less dependent on the clock accuracy. To keep each repetition meaningful, you could sort the same array again using a different comparator: for example, XOR a random value into the values before comparing them, and pick a different value each time you sort.
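Here is a minimal sketch of both tricks under the assumptions above: a fresh XOR mask per repetition so each sort does real work, and a `volatile` sink so the result is observable. All names (`data`, `sink`, the sizes) are made up for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

int main()
{
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<std::uint32_t> dist;
    std::vector<std::uint32_t> data(1000);
    for (auto& x : data) x = dist(rng);

    for (int rep = 0; rep < 10000; ++rep)
    {
        // A different mask each repetition changes the effective order,
        // so the compiler cannot prove a repeated sort redundant.
        const std::uint32_t mask = dist(rng);
        std::sort(data.begin(), data.end(),
                  [mask](std::uint32_t lhs, std::uint32_t rhs) {
                      return (lhs ^ mask) < (rhs ^ mask);
                  });

        // Volatile sink: reading one randomly picked element makes the
        // sorted array observable and defeats dead-code elimination.
        const std::size_t i =
            std::uniform_int_distribution<std::size_t>{0, data.size() - 1}(rng);
        volatile std::uint32_t sink = data[i];
        (void) sink;
    }
}
```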
The second algorithm might have an unfair advantage since the data will already be cached. If you are worried about this, randomize the order in which the algorithms run.
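One cheap way to do that, sketched below, is to shuffle the list of candidates before every measurement round. The two stand-in functions are placeholders for whatever algorithms you are actually comparing:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Stand-ins for the algorithms under test; substitute your own.
void sort_a(std::vector<int>& v) { std::stable_sort(v.begin(), v.end()); }
void sort_b(std::vector<int>& v) { std::sort(v.begin(), v.end()); }

int main()
{
    using sort_fn = void (*)(std::vector<int>&);
    std::vector<sort_fn> algorithms{sort_a, sort_b};
    std::mt19937 rng{std::random_device{}()};

    for (int round = 0; round < 100; ++round)
    {
        // A fresh shuffle per round: no algorithm systematically runs
        // second and inherits a warm cache from its predecessor.
        std::shuffle(algorithms.begin(), algorithms.end(), rng);

        std::vector<int> input(1000);
        std::generate(input.begin(), input.end(),
                      [&rng] { return static_cast<int>(rng()); });

        for (auto run : algorithms)
        {
            auto copy = input;  // each algorithm sorts its own copy
            run(copy);          // wrap this call with your timer
        }
    }
}
```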
It is implementation-defined whether `std::chrono::high_resolution_clock` is a real-time (wall) clock or a steady one. If it is not steady and you have an NTP daemon running in the background, your timing results will be worthless. You could use `std::chrono::steady_clock` instead, or statically select the best available clock at compile time. In your case, it might also be worth considering `std::clock` instead, if you are not interested in system effects.
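Selecting the clock at compile time can be a one-line type alias, for example (the alias name `benchmark_clock` is my own):

```cpp
#include <chrono>
#include <type_traits>

// Prefer high_resolution_clock, but only when it is steady;
// otherwise fall back to steady_clock. Resolved at compile time.
using benchmark_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;
```

You then use `benchmark_clock::now()` everywhere you would otherwise name a concrete clock.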
Instead of only plotting the average values, compute the mean and standard deviation and plot them as data points with error bars. Then overlay a weighted linear regression of α + β n log(n) (or whatever curve you deem appropriate). This will give you the desired "smoother" and more honest curve.
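The fit itself has a closed form once you substitute x = n log(n): it is ordinary weighted least squares for y ≈ α + βx with weights 1/sd². A sketch with made-up measurements (replace them with your benchmark output):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct sample { double n, mean, sd; };  // per input size: mean time and std. dev.

int main()
{
    // Placeholder numbers purely for illustration.
    std::vector<sample> samples{
        {100, 1.2e-6, 0.1e-6}, {1000, 18e-6, 1.5e-6}, {10000, 240e-6, 20e-6}};

    // Weighted least squares for y = alpha + beta * x with x = n*log(n)
    // and weights w = 1/sd^2, so noisier points count for less.
    double sw = 0, swx = 0, swy = 0, swxx = 0, swxy = 0;
    for (const auto& s : samples)
    {
        const double x = s.n * std::log(s.n);
        const double w = 1.0 / (s.sd * s.sd);
        sw   += w;
        swx  += w * x;
        swy  += w * s.mean;
        swxx += w * x * x;
        swxy += w * x * s.mean;
    }
    const double beta  = (sw * swxy - swx * swy) / (sw * swxx - swx * swx);
    const double alpha = (swy - beta * swx) / sw;
    std::printf("t(n) ~ %g + %g * n*log(n)\n", alpha, beta);
}
```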