Sort Algorithms



The La Plata Mountains as seen from above the author's home.


Durango Bill's
Time Trial Analysis of Sorting Algorithms
Includes Source Code



A common task for computers is to sort data as an aid to analyzing it. There are many possible sorting algorithms. We will look at the following (source code included) and examine time trials for each:

Bubble Sort - For some reason it is popular, but you really should pick one of the others.
Insertion Sort - Good by itself and even better since it is a basis for some of the others.
Median-of-three Quicksort - Highly optimized source code included for the famous classic.
Multiple Link List Sort - Perhaps the fastest possible sort but only for special cases.
Shell Sort - If you are only going to learn one sort algorithm, make it Shell Sort.


Source Code

C source code for the above algorithms is on the http://www.durangobill.com/SortAlgorithms/Sort_Source_Code.html page. Permission is granted to use the source code without obligation, but if you are a student, you will be better off in the long run if you learn how the algorithms work rather than just copying the code. The source code will run as is, without modification, if compiled with the lcc-win32 C compiler: http://www.cs.virginia.edu/~lcc-win32/

A brief description of the individual algorithms is given below, but more complete documentation is included in the source code.


Time Trial Considerations

Test data for the time trials was generated using a pseudo random number generator (included in the above source code). This allowed each sort test to work on identical number sets. The elapsed times shown in the tables are averages over 10 different runs. While times are quoted to 1 one-thousandth of a second, true time measurement was only accurate to about 1 one-hundredth of a second. Thus the last digit should only be regarded as a guide.

Time trials were run using a 3 GHz Pentium processor. No other user-initiated program was running during the time trials. However, the Windows XP operating system and whatever background activity it was performing (e.g. system/virus updates) could have influenced the time data.



Bubble Sort

Bubble Sort is easy to code, but it is an "N squared" algorithm - and one of the least efficient of the "N squared" algorithms. The "N squared" phrase means that if you double the number of items to be sorted, the sort time will increase by a factor of four. If you multiply the number of items by 10, the sort time will increase by a factor of 100. Time trial tests produced the following:

        Sort 10,000       Sort 100,000
        Random Numbers    Random Numbers
Time    0.366 sec.        36.702 sec.



Insertion Sort

Insertion Sort is similar to the way you might sort a hand of cards. Let's assume that you are dealt a poker or bridge hand. One way of sorting your hand is to pick up the cards one at a time, look to see what each card is, and then insert it into its proper place in your hand.

While Insertion Sort is another "N squared" algorithm, it has several advantages over Bubble Sort.

1) Its intrinsic efficiency makes it hard to beat for small lists (e.g. 30 elements or less).

2) Insertion Sort is the basis for Shell Sort. Thus once you learn Insertion Sort, Shell Sort is just a minor "add on".

3) Insertion Sort is very fast for arrays that are nearly sorted. It is very efficient for the final pass after Quicksort has done most of its job.

4) Modern computers have an "on board" cache memory that can be accessed much faster than ordinary random access memory. Insertion Sort does most of its "thrashing" within a small section of the array being sorted, and that small section is frequently in the computer's cache memory. This further increases the efficiency of Insertion Sort. This cache effect has a significant influence on the optimal size of "MinGroup" when you are fine tuning Quicksort.

Of interest, compare the time trials for Insertion Sort vs. Bubble Sort:

        Sort 10,000       Sort 100,000
        Random Numbers    Random Numbers
Time    0.055 sec.        5.392 sec.




Multiple Link List Sort

Multiple Link List Sort is only efficient under certain conditions (see source code), but for what it does, there is probably nothing else that can match it for speed. Thus it is probably the fastest sorting algorithm. MLLsort is an address calculation sort: for each number that it is going to sort, it looks at the number and calculates where it should go in the final sorted list.

The process is very similar to the following algorithm for assembling a jig-saw puzzle. Pick up the first piece of the puzzle, compare the small picture on the puzzle piece with the picture of the entire puzzle, and place the piece in its exact location on the table where you are going to assemble the entire puzzle. Then pick up the second piece of the puzzle, and repeat the above process. Each puzzle piece is examined only once, and after you have examined each puzzle piece exactly once, the puzzle is finished.

The quantity of elements and sort times in the table below are not mistakes. Multiple Link List Sort runs in linear time (no log(n) or log(log(n))), and it is as fast as they come.

        Sort 1,000,000    Sort 10,000,000
        Random Numbers    Random Numbers
Time    0.127 sec.        1.230 sec.



Median-of-three Quicksort

Quicksort is the fastest general purpose sorting algorithm. "General purpose" means it can handle any kind of data. It requires only minimal extra memory, while MLLsort needs lots of memory.

The bad news for Quicksort is that it is probably the most difficult to code. Also, a poorly implemented Quicksort may require "N squared" time on some data sets - which frequently include partially sorted data. If Quicksort is coded properly, it will almost always run in "N*(Log(N))" time, and will almost always beat any other N*(Log(N)) algorithm. (e.g. It will normally beat Heapsort, Merge Sort, etc.)

We can use our jig-saw puzzle analogy to see how Quicksort operates. First, dump all the puzzle pieces out on to a table. (Make sure you turn all the puzzle pieces right side up - although "cheating" and looking at the grain on the back side of the pieces is frequently of help.) If you are trying to solve the puzzle, you might separate (partition) the edge pieces into one group and everything else into a second group. The first pass of Quicksort performs a similar operation by separating the smaller numbers into one group and the larger numbers into a second group.

If you are solving a jig-saw puzzle, you then might divide the "non-edge" pieces into two separate groups. (e.g. "Blue sky" in one group and everything else in the other group.) The second pass of Quicksort performs a similar operation. It takes one of the groups from "the first pass" and partitions this subgroup into groups that contain "relatively smaller values" and "relatively larger values".

Both the puzzle algorithm and Quicksort continue the partitioning process until the small groups (subgroups) are reduced to manageable size. If you are working on the jig-saw puzzle, you eventually assemble each of the small groups. If Quicksort is working on a list of numbers to sort, the final assembly process is performed by Insertion Sort.

Details of the code can be seen on the Source Code page. The source code is based on original work by Robert Sedgewick (author of "Algorithms") with a few small additions by the author.

The variable "MinGroup" (within the code) is of interest. If all memory access into the array to be sorted had equal access time, then the optimal value for "MinGroup" would be about 15 to 20. Present-day computers have an "on board" cache memory which greatly aids the final Insertion Sort, and thus alters the optimal size for "MinGroup". The optimal size for "MinGroup" on the author's computer is in the 60 to 70 range. Users may wish to experiment with other computers.
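The overall structure can be sketched as follows. This is a hedged illustration only - the tuned Sedgewick-based code on the Source Code page differs in its details - showing median-of-three pivot selection, a "MinGroup" cutoff that leaves small subgroups unsorted, and a final Insertion Sort cleanup pass:

```c
#include <assert.h>

#define MIN_GROUP 20   /* subgroups smaller than this are left for the
                          final Insertion Sort pass (tune per machine) */

static void swap_ints(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[lo..hi] around a median-of-three pivot, then recurse,
   stopping whenever a subgroup shrinks below MIN_GROUP. */
static void quick_pass(int a[], int lo, int hi)
{
    if (hi - lo < MIN_GROUP)
        return;
    int mid = lo + (hi - lo) / 2;
    /* order a[lo], a[mid], a[hi] so the median lands at a[mid] */
    if (a[mid] < a[lo]) swap_ints(&a[mid], &a[lo]);
    if (a[hi]  < a[lo]) swap_ints(&a[hi],  &a[lo]);
    if (a[hi]  < a[mid]) swap_ints(&a[hi],  &a[mid]);
    swap_ints(&a[mid], &a[hi - 1]);         /* stash pivot at hi-1 */
    int pivot = a[hi - 1];
    int i = lo, j = hi - 1;
    for (;;) {
        while (a[++i] < pivot) ;            /* a[hi-1] stops this scan */
        while (a[--j] > pivot) ;            /* a[lo] stops this scan */
        if (i >= j) break;
        swap_ints(&a[i], &a[j]);
    }
    swap_ints(&a[i], &a[hi - 1]);           /* pivot to its final spot */
    quick_pass(a, lo, i - 1);
    quick_pass(a, i + 1, hi);
}

void quicksort(int a[], int n)
{
    quick_pass(a, 0, n - 1);
    for (int i = 1; i < n; i++) {           /* final Insertion Sort pass */
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}
```

Because every element ends up within MIN_GROUP positions of its final spot after the partitioning passes, the closing Insertion Sort runs in near-linear time - which is why the optimal MIN_GROUP depends on how much of that nearly-sorted data fits in cache.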

<- - - - - - - - Sort 1,000,000 Elements - - - - - - - ->
      MinGroup  MinGroup  MinGroup  MinGroup  MinGroup  MinGroup  MinGroup
      = 20      = 40      = 50      = 60      = 70      = 80      = 100
Time  0.186     0.178     0.176     0.178     0.181     0.177     0.178   sec.


<- - - - - - - - Sort 10,000,000 Elements - - - - - - - ->
      MinGroup  MinGroup  MinGroup  MinGroup  MinGroup  MinGroup  MinGroup
      = 20      = 40      = 50      = 60      = 70      = 80      = 100
Time  2.163     2.112     2.097     2.089     2.097     2.094     2.111   sec.



Shell Sort

If you are only going to learn one sorting algorithm, concentrate on Shell Sort. Shell Sort is much easier to code than Quicksort, and a properly coded Shell Sort is nearly as fast as Quicksort.

Shell Sort is based on Insertion Sort. In fact, if you are working on a very small group of numbers, it is exactly the same as Insertion Sort. If you are sorting a larger group, it is just an expansion of Insertion Sort: Shell Sort breaks a large array down into a series of small sections, each of which it then treats as its own "Insertion Sort".

           -----------------------------------------------------------------
RandNbrs   | A | B | C | D | A | B | C | D | A | B | C | D | A | B | C | D |
           -----------------------------------------------------------------
Index(Pos)   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16

For example, let's assume you are going to sort 16 elements and are using the 1, 4, 13, etc. sequence for "gap" sizes. The algorithm will first use a "gap" size of 4. It will run 4 separate simultaneous "Insertion Sorts". One of these will be an "Insertion Sort" on the numbers in the "A" positions. The other 3 "Insertion Sorts" will use the "B", "C", and "D" groups. When the "gap = 4" process is finished, the array will be roughly sorted, with most of the small values concentrated near the low end of the array and most of the large values concentrated near the high end. The algorithm then continues with a "gap" size of 1. This is essentially an ordinary "Insertion Sort", and "Insertion Sort" is very efficient on arrays that are nearly sorted.
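The gap-pass structure just described can be sketched in C. This is a minimal illustration, not the author's code; the gap generation assumes the closed form 4^k + 3*2^(k-1) + 1, which produces the family 1, 8, 23, 77, 281, ...:

```c
#include <assert.h>

/* Shell Sort sketch: Insertion Sort run over gap-spaced
   subsequences. Each pass is a gap-spaced Insertion Sort; the
   final pass (gap = 1) is an ordinary Insertion Sort on an
   array that earlier passes have left nearly sorted. */
void shell_sort(int a[], int n)
{
    /* build gaps 1, 8, 23, 77, 281, ... up to n
       (assumed closed form: 4^k + 3*2^(k-1) + 1) */
    int gaps[32], ngaps = 0;
    gaps[ngaps++] = 1;
    for (long long p4 = 4, p2 = 1; ; p4 *= 4, p2 *= 2) {
        long long g = p4 + 3 * p2 + 1;
        if (g >= n)
            break;
        gaps[ngaps++] = (int)g;
    }
    /* largest gap first, finishing with gap = 1 */
    for (int gi = ngaps - 1; gi >= 0; gi--) {
        int gap = gaps[gi];
        for (int i = gap; i < n; i++) {       /* gap-spaced insertion */
            int key = a[i], j = i - gap;
            while (j >= 0 && a[j] > key) {
                a[j + gap] = a[j];
                j -= gap;
            }
            a[j + gap] = key;
        }
    }
}
```

Compare the inner loop with plain Insertion Sort: the only change is that neighbors are `gap` positions apart, which is why Shell Sort is such a small "add on" once Insertion Sort is understood.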

While some sorting algorithms can be easily classified as executing in "N squared", "N*Log(N)", etc. time, Shell Sort does not fall into an easily analyzed pattern. On top of that, the sort time is a function of the "gap" sequence that is used, and is somewhat a function of the cache memory that resides in a computer. Time trials run by the author included the following gap sequences:

1) 1, 3, 7, 21, 48, 112, 336, 861, 1968, 4592, 13776, 33936, 86961, 198768, 463792, 1391376, 3402672, 8382192, 21479367, 49095696, 114556624, 343669872

2) 1, 4, 13, 40, 121, 364, 1093, 3280, 9841, 29524, 88573, 265720, 797161, 2391484, 7174453, 21523360, 64570081, 193710244, 581130733, 1743392200

3) 1, 8, 23, 77, 281, 1073, 4193, 16577, 65921, 262913, 1050113, 4197377, 16783361, 67121153, 268460033, 1073790977

(See the source code for more information.)

The third sequence seems to produce the best results, but there is no final word (and probably never will be) regarding a "best sequence".

The tables below show the results as run on the author's computer.

<- - - - - - Sort 1,000,000 Elements - - - - - ->
      Use 1, 3, 7,   Use 1, 4, 13,   Use 1, 8, 23,
      for "Gaps"     for "Gaps"      for "Gaps"
Time  0.337 sec.     0.422 sec.      0.297 sec.

<- - - - - - Sort 10,000,000 Elements - - - - - ->
      Use 1, 3, 7,   Use 1, 4, 13,   Use 1, 8, 23,
      for "Gaps"     for "Gaps"      for "Gaps"
Time  4.389 sec.     6.858 sec.      3.997 sec.





Web page generated via Sea Monkey's Composer HTML editor
within a Linux Cinnamon Mint 18 operating system.
(Goodbye Microsoft)
