However, if you keep the input size constant, you can notice the difference between an efficient algorithm and a slow one. An excellent sorting algorithm is `mergesort`, for instance, and an inefficient algorithm for large inputs is `bubble sort`.
Organizing 1 million elements with merge sort takes 20 seconds, while bubble sort takes 12 days. Ouch!
The amazing thing is that both programs are measured on the same hardware with the same data!
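As a rough, back-of-the-envelope check (an illustrative estimate, not a measurement from the book), you can compare how many basic operations each algorithm performs on one million elements; the ratio alone explains a seconds-versus-days gap:

[source, javascript]
----
// Approximate operation counts for n = 1 million elements.
const n = 1e6;

const mergeSortOps = n * Math.log2(n); // O(n log n) ≈ 2 * 10^7
const bubbleSortOps = n * n;           // O(n^2)     = 10^12

// Bubble sort does roughly 50,000 times more work, which matches the
// order of magnitude of the 20 seconds vs. 12 days gap mentioned above.
console.log(bubbleSortOps / mergeSortOps); // ≈ 50,000
----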
book/chapters/big-o-examples.adoc
3 lines changed: 0 additions & 3 deletions
@@ -46,7 +46,6 @@ As you can see, in both examples (array and linked list) if the input is a colle
== Logarithmic
Represented in Big O notation as *O(log n)*, this running time means that as the size of the input grows, the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is the *binary search*.
-
indexterm:[Runtime, Logarithmic]
[#logarithmic-example]
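To make the binary search example above concrete, here is a minimal iterative sketch (an illustrative version, not necessarily the implementation the chapter uses):

[source, javascript]
----
// Binary search on a sorted array: every iteration halves the remaining
// range, so at most ~log2(n) comparisons are needed.
function binarySearch(sortedArray, target) {
  let low = 0;
  let high = sortedArray.length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) return mid; // found it
    if (sortedArray[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1; // discard the right half
  }

  return -1; // not present
}

binarySearch([1, 3, 5, 7, 9, 11], 9); //=> 4
----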
@@ -128,7 +127,6 @@ How do we obtain the running time of the merge sort algorithm? The mergesort div
== Quadratic
indexterm:[Runtime, Quadratic]
-
Running times that are quadratic, O(n^2^), are the ones to watch out for. They usually don’t scale well when they have a large amount of data to process.
Usually, they have double-nested loops, where each one visits all or most elements in the input. One example of this is a naïve implementation to find duplicate words in an array.
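As an illustration of that double-nested loop pattern (a naïve sketch, not necessarily the chapter's exact code), every word gets compared against every other word:

[source, javascript]
----
// Naïve O(n^2) duplicate detection: for each word, scan the rest of the
// array looking for the same word again.
function hasDuplicates(words) {
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < words.length; j++) {
      if (words[i] === words[j]) return true; // found a duplicated word
    }
  }
  return false;
}

hasDuplicates(['is', 'this', 'duplicated', 'or', 'is', 'it']); //=> true
----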
@@ -223,7 +221,6 @@ A factorial is the multiplication of all the numbers less than itself down to 1.
=== Getting all permutations of a word
One classic example of an _O(n!)_ algorithm is finding all the different words that can be formed with a given set of letters.
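As a sketch of the idea (illustrative only; the chapter's own implementation may differ), fixing one letter at a time and permuting the rest produces all n! orderings, so both the runtime and the output size grow factorially:

[source, javascript]
----
// Generate every permutation of the characters in `word`.
// A word of length n yields n! permutations, hence O(n!) runtime.
function permutations(word, prefix = '') {
  if (word.length === 0) return [prefix];
  const results = [];
  for (let i = 0; i < word.length; i++) {
    const rest = word.slice(0, i) + word.slice(i + 1); // remaining letters
    results.push(...permutations(rest, prefix + word[i]));
  }
  return results;
}

permutations('art'); //=> ['art', 'atr', 'rat', 'rta', 'tar', 'tra']
----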
book/chapters/dynamic-programming--fibonacci.adoc
2 lines changed: 0 additions & 2 deletions
@@ -3,7 +3,6 @@
Let's solve the same Fibonacci problem but this time with dynamic programming.
Recursive functions that do duplicated work are the perfect candidates for a dynamic programming optimization. We can save (or cache) the results of previous operations and speed up future computations.
-
indexterm:[Fibonacci]
.Recursive Fibonacci Implementation using Dynamic Programming
@@ -25,7 +24,6 @@ graph G {
....
This looks pretty linear now. Its runtime is _O(n)_!
-
indexterm:[Runtime, Linear]
TIP: Saving previous results for later use is a technique called "memoization"; it is very commonly used to optimize recursive algorithms with exponential time complexity.
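For reference, here is a minimal memoized Fibonacci sketch (a standalone illustration using a plain `Map` as the cache; the book's listing above may be structured differently):

[source, javascript]
----
// Memoized Fibonacci: each fib(i) is computed once and cached,
// so the recursion does O(n) work instead of O(2^n).
function fib(n, memo = new Map()) {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n); // reuse a previously computed value
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}

fib(50); //=> 12586269025
----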
book/chapters/linked-list.adoc
1 line changed: 0 additions & 1 deletion
@@ -250,7 +250,6 @@ So far, we have seen two linear data structures with different use cases. Here’s
|===
indexterm:[Runtime, Linear]
-
If you compare the singly linked list with the doubly linked list, you will notice that the main difference is deleting elements from the end: for a singly linked list it is *O(n)*, while for a doubly linked list it is *O(1)*.
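The difference comes from how each list reaches the node before the last one. A rough sketch (using hypothetical `first`/`last` list fields and `next`/`previous` node fields, not the book's exact API; both assume the list has at least two nodes):

[source, javascript]
----
// Singly linked list: we must walk from the head to find the node
// *before* the last one, so removing from the end costs O(n).
function removeLastSingly(list) {
  let current = list.first;
  while (current.next !== list.last) {
    current = current.next;
  }
  current.next = null;
  list.last = current;
}

// Doubly linked list: the last node already points back to its
// predecessor, so removing from the end costs O(1).
function removeLastDoubly(list) {
  list.last = list.last.previous;
  list.last.next = null;
}
----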
Comparing an array with a doubly linked list, both have different use cases:
book/chapters/map-hashmap.adoc
1 line changed: 0 additions & 1 deletion
@@ -296,5 +296,4 @@ A Hash Map is very efficient for searching values by key in constant time *O(1)*
{empty}* = Amortized run time. E.g. rehashing might affect run time.
indexterm:[Runtime, Linear]
-
As you can notice, we have amortized times: in the unfortunate case of a rehash, an insertion will take O(n) while the map resizes. After that, it will be *O(1)*.
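A simplified sketch of why the time is amortized (an illustrative toy, not the chapter's HashMap implementation): most inserts touch a single bucket in O(1), but once the load factor is exceeded every entry is re-inserted into a larger bucket array, which is the occasional O(n) step. Averaged over many inserts, that still works out to O(1) per operation.

[source, javascript]
----
// Toy hash map insert with rehashing. Most set() calls are O(1); when the
// load factor threshold is crossed, rehash() re-inserts every entry into a
// bigger bucket array, making that particular call O(n).
class ToyHashMap {
  constructor(capacity = 4) {
    this.buckets = new Array(capacity);
    this.size = 0;
  }

  hash(key) {
    return String(key).length % this.buckets.length; // naive hash, for illustration only
  }

  set(key, value) {
    if (this.size / this.buckets.length > 0.75) this.rehash(); // the occasional O(n) step
    const index = this.hash(key);
    this.buckets[index] = this.buckets[index] || [];
    this.buckets[index].push([key, value]); // duplicate keys are not handled in this toy
    this.size++;
  }

  rehash() {
    const oldBuckets = this.buckets;
    this.buckets = new Array(oldBuckets.length * 2);
    this.size = 0;
    for (const bucket of oldBuckets) {
      for (const [key, value] of bucket || []) this.set(key, value);
    }
  }
}
----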