I mean, we could in principle use $\Theta$, $\omega$, $o$ and so on, but it seems that $O$ is the favourite for presenting computational complexities. Why?
3 Answers
The main reason is that knowing worst-case-in-practice behaviour is usually more useful than anything else when actually choosing an algorithm. Hence, if given a choice, upper bounds are more useful than lower bounds.
"Wall time" (i.e. the time that the algorithm would take measured by a clock on the wall) is probably the most important time-based measure in practice, but that's not a measurement that translates well between decades as computers vary in the speed of their various components. What does translate is how it scales.
But that does mean that we end up with galactic algorithms: algorithms which theoretically scale better, but where the speedup is not realisable on any problem smaller than a galaxy in size.
Note that this argument doesn't apply to space usage. A bit doesn't change in size over time; it's always exactly one bit in size no matter when you happen to be living. So measuring space complexity "up to an arbitrary constant factor" isn't that helpful by itself. If an algorithm takes $O(n)$ space, is that a 1% overhead or several times the size of the input?
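To make the hidden constant concrete, here is a minimal Python sketch showing two $\Theta(n)$-space representations of the same data with very different constant factors. The exact numbers are CPython implementation details, not guarantees:

```python
import sys

n = 1_000_000

# Two Theta(n)-space representations of n small integers:
as_bytes = bytes(n) # one byte per element
as_list = [0] * n # one 8-byte pointer per element (in CPython)

compact = sys.getsizeof(as_bytes)
spacious = sys.getsizeof(as_list) # container only; the elements are shared

print(f"bytes: ~{compact / n:.2f} bytes per element") # ~1.00
print(f"list: ~{spacious / n:.2f} bytes per element") # ~8.00
```

Both structures are "$O(n)$ space", yet one uses roughly eight times the memory of the other, which is exactly the information the big-O statement throws away.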
Comment (gnasher729, Nov 5, 2025): As a simple example, take $2n$ vs $n^{1.01}$. You need $n \ge 2^{100}$ before the second is larger than the first. If both are the runtime of an algorithm in nanoseconds, then this is thousands of billions of years.
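The comment's numbers can be sanity-checked with a few lines of Python, comparing in $\log_2$ space to avoid floating-point overflow at $n = 2^{100}$:

```python
# n**1.01 > 2*n iff n**0.01 > 2 iff n > 2**100.
def crossover_check(log2_n: float) -> bool:
 """True when n**1.01 exceeds 2*n, comparing exponents in log2 space:
 log2(n**1.01) = 1.01 * log2(n), log2(2*n) = 1 + log2(n)."""
 return 1.01 * log2_n > 1 + log2_n

print(crossover_check(90)) # False: at n = 2**90, 2n is still larger
print(crossover_check(110)) # True: at n = 2**110, n**1.01 has overtaken

# Wall time for 2*n nanoseconds at the crossover point n = 2**100:
ns = 2 * 2**100
years = ns / 1e9 / 3600 / 24 / 365
print(f"{years:.2e} years") # roughly 8e13 years
```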
An informal way of reading the notations you mentioned is:
- $f = O(g)$: $f$ grows comparably to or more slowly than $g$ asymptotically.
- $f = o(g)$: $f$ grows strictly more slowly than $g$ asymptotically.
- $f = \Theta(g)$: $f$ grows comparably to $g$ asymptotically.
- $f = \Omega(g)$: $f$ grows comparably to or faster than $g$ asymptotically.
- $f = \omega(g)$: $f$ grows strictly faster than $g$ asymptotically.
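One way to build intuition for these readings is to look at the ratio $f(n)/g(n)$ for a large $n$. This is only a suggestive numerical sketch, not a proof — a single sample can't establish a limit — but it matches the definitions: a finite nonzero limit suggests $\Theta$, a limit of 0 suggests $o$, and a limit of infinity suggests $\omega$:

```python
import math

def ratio(f, g, n=10**6):
 """Evaluate f(n)/g(n) at one large n as a hint about the limit."""
 return f(n) / g(n)

# f(n) = 3n + 10 is Theta(n): the ratio tends to the constant 3.
print(ratio(lambda n: 3 * n + 10, lambda n: n)) # 3.00001
# f(n) = n is o(n log n): the ratio shrinks towards 0.
print(ratio(lambda n: n, lambda n: n * math.log(n))) # ~0.07 and falling
# f(n) = n**2 is omega(n): the ratio grows without bound.
print(ratio(lambda n: n**2, lambda n: n)) # 10**6 and growing
```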
I don't think there is anything intrinsically favourite about $O$; it depends on what you are trying to do. If you are trying to upper-bound the resources taken by an algorithm, you're likely to use $O$ or $\Theta$. For example, with Bubble sort the number of comparisons made on an array of $n$ elements varies significantly with the input: if the array is already sorted, you'll do about $n$ comparisons, while in the worst case you could do about $n^2$. So if you are just interested in upper-bounding comparisons in the worst case, you can say the algorithm does $O(n^2)$ comparisons. If you instead use merge sort (which splits into sub-arrays every time), it always takes about $n\log n$ comparisons, so there it's more natural to use $\Theta(n\log n)$.
If, on the other hand, you are interested in lower-bounding the resources, you would use $\omega$ or $\Omega$. For example, $\Omega(n\log{n})$ is a known lower bound for sorting in the comparison-based model (a bit unrelated, but a nice question to look at is "Is it really possible to prove lower bounds?").
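That $\Omega(n\log n)$ bound comes from the decision-tree argument: a comparison sort must distinguish all $n!$ input orderings, and $k$ binary comparisons can distinguish at most $2^k$ of them, so at least $\lceil\log_2 n!\rceil$ comparisons are needed in the worst case. A small sketch of the resulting numbers:

```python
import math

def comparison_lower_bound(n: int) -> int:
 """Decision-tree lower bound for comparison sorting: any such sort
 needs at least ceil(log2(n!)) comparisons in the worst case."""
 return math.ceil(math.log2(math.factorial(n)))

for n in (10, 100, 1000):
 # The bound sits within a constant factor of n*log2(n), as
 # Stirling's approximation predicts.
 print(n, comparison_lower_bound(n), round(n * math.log2(n)))
```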
First, big-O and little-o can be written without Greek letters. Seriously, that is a big advantage.
Second, big-O is often used, and it is often all that is needed. So in many cases, you could use different letters, but people use big-O by default unless it is incorrect.