Determining The Complexity Of An Algorithm (The Basics)
Hi buddy! Many students, and maybe you too, see algorithm analysis as a scary thing, and I agree 100%, because I'm not completely sure about it myself. Algorithm analysis is a big topic that needs solid math knowledge, and the bad news is I hate talking about mathematics :-). Still, through this article I'll share the basics of analyzing algorithms as I understand them, in the hope that it can at least help beginners who are trying to analyze an algorithm. Finally, this is far from perfect; there is plenty of room for improvement and for better articles about this stuff. I state that this is just me sharing! :)
Well, the first thing we should know is: what is an algorithm? If you are a computer science student or something like that, you must already know it. A simple definition: an algorithm is a sequence of steps that solves a problem by transforming an input into an output.
However, in computer science an algorithm isn't judged only by whether it is correct or can solve the problem; it must also be efficient. An algorithm can be called efficient if it requires little space and time to execute its procedures. But the problem is: how can we determine, or at least estimate, the efficiency of an algorithm? To answer that question we need the asymptotic notations that are often used to describe the complexity of an algorithm. Besides that, there is something called the Master Theorem, which is often used to solve recurrences (the running time of recursive algorithms), and there are many more methods for analyzing algorithms. (Oh, that's really boring, I can only hope the scientists stop doing research and finding something new :( )
Big Oh (O)
Big Oh is a mathematical notation that is often used to describe the asymptotic behaviour of a function. In computer science, Big Oh can also be used to describe how the input size affects the performance of an algorithm. (I'm sorry if that's a little confusing.) In algorithm analysis, Big Oh is often used to describe the worst case of an algorithm by taking the highest-order term of a polynomial function and ignoring all the constants, since they aren't very influential for sufficiently large input. So, if an algorithm has a running time like f(n) = 3n + 100, we can simply state that the algorithm has complexity O(n), which means the number of operations it executes grows at most linearly in n. Thus, we can guarantee that the algorithm will never do worse than its worst case. So, what's the idea behind Big Oh? Well, let's look at the definition of Big Oh (I'm sorry if this definition is a little confusing, because I'm also not really sure about what I wrote here and I'm not good at math).
Definition: Let f(n) and g(n) be non-negative functions. We say that f(n) = O(g(n)) (read: f(n) is big oh of g(n)) if and only if there are two positive constants C and N such that f(n) ≤ C·g(n) whenever n ≥ N.
[Figure: Big Oh graph]
Example: prove that f(n) = 5n² + 3n + 2 has Big Oh O(n²)!
From the definition of Big Oh, we can say that f(n) = 5n² + 3n + 2 = O(n²), since for all n ≥ 1: 5n² + 3n + 2 ≤ 5n² + 3n² + 2n² ⇒ 5n² + 3n + 2 ≤ 10n². By assigning the constants C = 10 and N = 1, it is true that f(n) ≤ C·g(n). Thus, f(n) = 5n² + 3n + 2 ∈ O(g(n)) = O(n²).
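If you want to convince yourself numerically, here is a tiny Python sketch of my own (the function names are just mine, and spot-checking a few values is of course not a proof) that checks f(n) ≤ C·g(n) with C = 10 and N = 1:

# Spot-check the bound 5n^2 + 3n + 2 <= 10 * n^2 for a few n >= 1.
# This only samples values; the real proof is the algebra above.
def f(n):
    return 5 * n**2 + 3 * n + 2

def upper_bound(n, C=10):
    return C * n**2

for n in [1, 2, 10, 100, 10**6]:
    assert f(n) <= upper_bound(n), f"bound fails at n = {n}"
print("f(n) <= 10*n^2 held for every sampled n")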
Now, is f(n) = 5n + 10 = O(n²)? [Yes/No]
Big Omega (Ξ©)
Now that you know about Big Oh, what about Big Omega? Well, Big Omega is the opposite of Big Oh: where Big Oh is used to describe the upper bound (worst case) of an asymptotic function, Big Omega is used to describe its lower bound. In algorithm analysis, this notation is usually used to describe the complexity of an algorithm in the best case, which means the algorithm can never do better than its best case.
Definition: Let f(n) and g(n) be non-negative functions. We say that f(n) = Ω(g(n)) (read: f(n) is big omega of g(n)) if and only if there are two positive constants C and N such that C·g(n) ≤ f(n) whenever n ≥ N.
[Figure: Big Omega graph]
Example: prove that f(n) = 8n - 15 has Big Omega Ω(n)!
From the definition of Big Omega, we can say that f(n) = 8n - 15 = Ω(n), since for all n ≥ 15: 7n ≤ 8n - 15 (because 8n - 15 - 7n = n - 15 ≥ 0 once n ≥ 15). By assigning the constants C = 7 and N = 15, it is true that C·g(n) ≤ f(n) whenever n ≥ N. Thus, f(n) = 8n - 15 = Ω(n).
In the same way we can say that f(n) = 3n² + 10n + 6 = Ω(n) = Ω(n²), but f(n) ≠ Ω(n³).
Big Theta (Θ)
When an algorithm has a complexity whose lower bound equals its upper bound, say an algorithm has complexity O(n log n) and Ω(n log n), it actually has the complexity Θ(n log n), which means the running time of that algorithm always falls within n log n in both the best case and the worst case.
[Figure: Big Theta graph]
Definition: Let f(n) and g(n) be non-negative functions. We say that f(n) = Θ(g(n)) (read: f(n) is big theta of g(n)) if and only if there are positive constants C1, C2, and N, with f(n) = O(g(n)) and f(n) = Ω(g(n)), such that C1·g(n) ≤ f(n) ≤ C2·g(n) whenever n ≥ N.
Example: Given the function f(n) = 3n² + 10n + 6, it can be shown that f(n) = Θ(n²). From the definitions of Big Oh and Big Omega, we can conclude that:
f(n) ∉ O(n), f(n) ∉ Ω(n³)
Since Big Theta says that the function f(n) must be in both O(g(n)) and Ω(g(n)), the only choice is f(n) = O(n²) and f(n) = Ω(n²); for example, 3n² ≤ 3n² + 10n + 6 ≤ 19n² for all n ≥ 1, so C1 = 3, C2 = 19, and N = 1 satisfy the definition. Thus, f(n) = Θ(n²).
Determining The Complexity of Algorithm
Besides the notations above, there are actually other notations such as little-oh, little-omega, and so on. However, in algorithm analysis Big Oh is used most often since, as I said, it guarantees an upper bound on the running time of an algorithm.
When we talk about the Big Oh of an algorithm, we are actually talking about the complexity of that algorithm, which describes how the number of operations grows as the input gets larger. The complexity of an algorithm is usually counted based on the statements it uses. Here are the running times of some common kinds of statements.
1. Basic Operations (arithmetic, comparisons, accessing array elements, assignment): The running time is always constant, O(1).
Example :
read(x)                              // O(1)
a = 10;                              // O(1)
a = 1.000.000.000.000.000.000        // O(1)
2. If-then-else statement: Take only the maximum running time among the two (or more) possible branches.
Example:
age = read(x)                               // 1 + 1 = 2
if age < 17 then begin                      // 1
    status = "Not allowed!";                // 1
end
else begin
    status = "Welcome! Please come in";     // 1
    visitors = visitors + 1;                // 1 + 1 = 2
end;
So, the complexity of the above pseudocode is T(n) = 2 + 1 + max(1, 1 + 2) = 6. Thus, its Big Oh is still constant: T(n) = O(1).
3. Looping (for, while, repeat): The running time for this statement is the number of iterations multiplied by the running time of the operations inside the loop.
Example:
total = 0;                       // 1
for i = 1 to n do begin          // (1+1)*n = 2n
    total = total + i;           // (1+1)*n = 2n
end;
writeln(total);                  // 1
So, its complexity is T(n) = 1 + 2n + 2n + 1 = 4n + 2. Thus, T(n) = O(n).
4. Nested Loop (a loop inside a loop): Since there is at least one loop inside the main loop, the running time of this kind of statement is typically O(n²), O(n³), and so on, depending on the depth of nesting.
Example:
for i = 1 to n do begin          // (1+1)*n = 2n
    for j = 1 to n do begin      // (1+1)*n*n = 2n^2
        x = x + 1;               // (1+1)*n*n = 2n^2
        print(x);                // n*n = n^2
    end;
end;
So, its complexity is T(n) = 2n + 2n² + 2n² + n² = 5n² + 2n. Thus, T(n) = O(n²).
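If it helps, here is a rough Python translation of that nested loop (a sketch of my own, not the original pseudocode) that counts how many times the inner body runs; the count grows as n², which is exactly why the whole thing is O(n²):

# Count the work done by the nested loop for a few input sizes.
def nested_loop_operations(n):
    count = 0
    x = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            x = x + 1            # the inner body executes n * n times
            count = count + 1
    return count

for n in [10, 100, 1000]:
    print(n, nested_loop_operations(n))   # 10 100, 100 10000, 1000 1000000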
Common Running Time
There are some common running times when analyzing an algorithm:
1. O(1) – Constant Time
Constant time means the running time is constant; it is not affected by the input size.
2. O(log n) – Logarithmic Time
An algorithm with running time O(log n) is much faster than one with O(n) for large inputs. Commonly, such an algorithm divides the problem into subproblems of the same size at each step. Examples: the binary search algorithm, the binary conversion algorithm.
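As an illustration, here is a minimal Python sketch of binary search (my own code, not taken from the article); each step halves the remaining range, so at most about log₂(n) comparisons are needed:

# Iterative binary search over a sorted list: O(log n) comparisons.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid             # found: return the index
        elif items[mid] < target:
            low = mid + 1          # discard the lower half
        else:
            high = mid - 1         # discard the upper half
    return -1                      # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3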
3. O(n) – Linear Time
When an algorithm accepts an input of size n, it performs on the order of n operations as well.
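For example (again just a small sketch of my own), a linear search may have to look at every one of the n elements in the worst case:

# Linear search: the worst case touches all n elements, O(n).
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 23))   # 4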
4. O(n log n) – Linearithmic Time
This running time is often found in "divide & conquer" algorithms, which divide the problem into subproblems recursively and then merge the results in linear time. Example: the Merge Sort algorithm.
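Here is a minimal merge sort sketch of my own (not a production implementation) so you can see the shape: the list is split in half recursively, about log n levels deep, and each level does O(n) work merging:

# Merge sort: recursive splitting plus a linear-time merge, O(n log n).
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # merge the two sorted halves in linear time
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]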
5. O(n²) – Quadratic Time
Look at the Bubble Sort algorithm!
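Here is what that looks like as a small Python sketch of my own; the two nested passes over the list are exactly where the n² comes from:

# Bubble sort: nested passes give roughly n * n comparisons, O(n^2).
def bubble_sort(items):
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([4, 1, 3, 2]))   # [1, 2, 3, 4]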
6. O(n³) – Cubic Time
It follows the same principle as O(n²), just with one more level of nested looping.
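A classic cubic-time case is naive matrix multiplication; here is a minimal sketch of my own (assuming square matrices stored as lists of lists), with three nested loops over n:

# Naive n x n matrix multiplication: three nested loops, O(n^3).
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]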
7. O(2ⁿ) – Exponential Time
It becomes very slow as the input gets larger: if n = 1.000.000, T(n) would be 2^1.000.000. Brute-force algorithms that try every combination (for example, every subset of the input) have this running time.
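As an illustration, here is a small brute-force sketch of my own (the subset-sum problem is just my example, not one from the article); it tries all 2ⁿ subsets of the input:

# Brute force over all subsets of n numbers: 2^n subsets, O(2^n).
from itertools import combinations

def subset_sum_exists(numbers, target):
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return True
    return False

print(subset_sum_exists([3, 7, 12, 5], 17))   # True (12 + 5)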
8. O(n!) – Factorial Time
THE SLOWEST!!! Example: the Travelling Salesman Problem (TSP) solved by brute force.
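A brute-force TSP solver has to try every ordering of the cities; here is a small Python sketch of my own (the distance matrix is made-up example data), which checks (n-1)! routes and is therefore factorial time:

# Brute-force TSP: try every permutation of the cities, O(n!).
from itertools import permutations

def tsp_brute_force(distances):
    # distances is a square matrix; the tour starts and ends at city 0
    cities = range(1, len(distances))
    best = float("inf")
    for order in permutations(cities):
        route = (0,) + order + (0,)
        cost = sum(distances[a][b] for a, b in zip(route, route[1:]))
        best = min(best, cost)
    return best

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_brute_force(dist))   # 21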
Closing Words
That's all for this simple theory about determining the complexity of an algorithm. I hope it can help you. For further reading, please refer to the textbook by Thomas Cormen (Introduction to Algorithms, 2nd Edition); it was a huge help for me when I was learning about algorithm analysis. Good luck :)