Are there any problems where there is no smallest time complexity, so there is a big-$\mathcal{O}$ but no big-$\Theta$?
More formally, I am looking for any problem $p$ where
- for any algorithm that solves $p$ in time $f(n)$
- there provably exists another algorithm that solves $p$ in time $g(n)$
- where $g(n) = o(f(n))$.
An example would be a problem for which there exist algorithms of complexity $\mathcal{O}(n^k)$ for all $k > 0$, but for which there is no algorithm of lower complexity, like $\mathcal{O}(1)$ or $\mathcal{O}(\log n)$.
- There is no $<$ relation between $O(f)$ and $O(g)$; you could use the little-$o$ notation and say $g(n) = o(f(n))$. One example for what you required in your last statement could be finding an element in a sorted array. I think this post answers the question in the title. – Ariel, Dec 8, 2017

- @Ariel Thanks, I've never heard of Blum's speedup theorem before. I fixed the niggles in the post as well. – Veedrac, Dec 8, 2017
2 Answers
The existence of such problems follows from the Blum speedup theorem. The theorem shows, for example, that there exists a problem such that for any algorithm solving it in time $T(n),ドル there is another one solving it in time at most $\log T(n)$.
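For concreteness, here is one standard formulation (a paraphrase; Blum's original statement is in terms of abstract complexity measures, not running time on a fixed machine model):

**Theorem (Blum, 1967).** For every total computable function $g$ there is a computable predicate $P$ such that, for every program deciding $P$ in time $T(n),ドル there is another program deciding $P$ in time $T'(n)$ with $$g\bigl(T'(n)\bigr) \le T(n) \quad \text{for all but finitely many } n.$$

Taking $g(m) = 2^m$ gives the version quoted above: any time bound $T(n)$ can be beaten by one of roughly $\log T(n)$.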
- Thanks. Are there any good examples? I don't have time to read the proof right now. – Veedrac, Dec 8, 2017

- Not really. The examples constructed by the theorem are very contrived. – Yuval Filmus, Dec 8, 2017
You might be interested in something similar to what you are describing (in terms of a hierarchy of algorithms with different runtimes that get closer and closer to some limiting runtime). Polynomial-time approximation schemes (PTAS) have runtimes such as $\mathcal{O}(2^{1/\epsilon} n^2)$ or $\mathcal{O}(n^{1/\epsilon})$ for any $\epsilon > 0,ドル and you are free to pick your $\epsilon$ as a trade-off between running time and the accuracy of the approximate solution.
See http://www.win.tue.nl/~mdberg/Onderwijs/AdvAlg_Material/Course%20Notes/lecture7.pdf for an introduction to PTAS and an example showing an $\mathcal{O}(n^3/\epsilon)$ PTAS for Knapsack that returns a solution of value at least $(1-\epsilon)\cdot\mathrm{OPT}$. A minimal sketch of that kind of value-scaling scheme follows.
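To make the runtime/accuracy trade-off concrete, here is a minimal sketch of the classic value-scaling FPTAS for 0/1 Knapsack (my own illustration, not code from the linked notes): profits are rounded down by a factor $K = \epsilon \cdot v_{\max}/n,ドル and a dynamic program over scaled profit values then runs in $\mathcal{O}(n^3/\epsilon)$ time. It assumes every item individually fits in the knapsack.

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Value-scaling FPTAS for 0/1 Knapsack (sketch).

    Returns a value that is at least (1 - eps) * OPT, assuming every
    item individually fits (weights[i] <= capacity). The DP runs in
    O(n * vsum) = O(n^3 / eps) time: smaller eps means a slower but
    more accurate run -- the PTAS trade-off described above.
    """
    n = len(values)
    vmax = max(values)
    K = eps * vmax / n                      # scaling factor
    scaled = [int(v // K) for v in values]  # rounded-down integer profits
    vsum = sum(scaled)

    # dp[p] = minimum total weight needed to reach scaled profit exactly p
    INF = float("inf")
    dp = [0] + [INF] * vsum
    for v, w in zip(scaled, weights):
        for p in range(vsum, v - 1, -1):    # iterate downward: 0/1 items
            if dp[p - v] + w < dp[p]:
                dp[p] = dp[p - v] + w

    # Best scaled profit achievable within the capacity; un-scale it.
    best = max(p for p in range(vsum + 1) if dp[p] <= capacity)
    return best * K


# Example: OPT here is 220 (items 2 and 3); eps = 0.1 guarantees >= 198.
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))
```

The DP minimizes weight per scaled-profit level rather than maximizing profit per weight level, which is what makes the table size depend on $1/\epsilon$ instead of on the (possibly huge) raw profit values.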