4

Inspired by this quote attributed to Alan Perlis:

For every polynomial-time algorithm you have, there is an exponential algorithm that I would rather run.

I interpret this statement to mean that, for realistic problem sizes, some exponential running times should be preferred over polynomial ones. I have never actually seen such a case, though.

Are there any problems for which an exponential algorithm is known to be preferable to the best polynomial one up to a nontrivial problem size?
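To make the crossover concrete, here is a toy back-of-the-envelope check; the cost functions are hypothetical illustrations, not drawn from any particular pair of algorithms. A slowly growing exponential such as 1ドル.1^n$ stays below a polynomial such as $n^{10}$ for all problem sizes up to several hundred:

```python
# Toy crossover check: a "slow" exponential 1.1**n vs. a fast-growing
# polynomial n**10. Both cost functions are hypothetical illustrations.
def exponential_cost(n: int) -> float:
    return 1.1 ** n

def polynomial_cost(n: int) -> float:
    return float(n ** 10)

# Largest n (in a generous search range) where the exponential is still cheaper.
last_n = max(n for n in range(2, 2000) if exponential_cost(n) < polynomial_cost(n))
print(f"1.1^n stays below n^10 up to n = {last_n}")  # 685 with these toy constants
```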

asked Nov 4, 2015 at 10:20
  • 3
    Note that there may be other reasons besides asymptotic running time: real running time on real inputs (average vs. worst case; also, google "galactic algorithms"), implementability, memory usage, ... -- the list goes on and on. Commented Nov 4, 2015 at 11:00
  • 1
    That said, I think Alan is horribly wrong, at least taken literally. I don't think he'd solve search in sorted arrays, sorting, and a myriad of other problems by anything but the standard poly-time algorithms. He's probably talking about a specific domain, though (which you should give as context). Also, similar statements can be made inside P (cf. Quicksort vs. Mergesort). Commented Nov 4, 2015 at 11:01
  • 2
    On a related note, we can use Karatsuba multiplication to form the product of two $n$-bit numbers in time $O(n^{1.58})$ rather than the $O(n^2)$ time it would take naively. You can extend this to algorithms that run in time $O(n^{1+\epsilon})$ for any $\epsilon>0$, but the actual multiplicative constant is so huge that most of these are only practical when $n$ is titanically large. (A sketch of Karatsuba's split appears after this comment list.) Commented Nov 5, 2015 at 2:02
  • 2
    For smallish problems, it could very well be more practical to just check all possibilities (or to use some general technique like branch and bound) instead of carefully researching (and programming, and debugging) a sophisticated "optimal" algorithm. Commented Nov 5, 2015 at 16:37
  • 1
    Interesting to see how many ways a Perlis crypticism can be interpreted. I read it as "there's going to be an exponential algorithm that is so much smaller, easier to read, easier to understand, and whose correctness is easier to be convinced of." In this interpretation, the crucial "run" is an allusion to "someone has to write the code and get it working correctly." Commented May 10, 2022 at 2:00
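To illustrate the Karatsuba comment above, here is a compact sketch of the three-multiplication split; the 2**32 base-case cutoff is an arbitrary choice for illustration, whereas real implementations tune it carefully:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers via Karatsuba's three-multiplication split."""
    if x < 2 ** 32 or y < 2 ** 32:                 # illustrative base-case cutoff
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)            # x = xh * 2**m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)            # y = yh * 2**m + yl
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll    # cross terms from one product
    return (hh << (2 * m)) + (mid << m) + ll       # recombine: 3 multiplies, not 4
```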

2 Answers

13

The simplex method for linear programming has worst-case exponential time complexity, but it is widely used in practice instead of the polynomial-time algorithms that do exist (the ellipsoid method and interior-point methods).
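For concreteness, here is a tiny LP handed to an off-the-shelf solver (a sketch using SciPy's linprog; whether a simplex-type or an interior-point code runs underneath depends on the method argument and the SciPy version):

```python
# maximize x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0,  y >= 0
from scipy.optimize import linprog

res = linprog(c=[-1, -2],                    # linprog minimizes, so negate
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                       # optimum at (0, 4) with value 8
```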

answered Nov 4, 2015 at 10:27
  • Too bad I'm limited to one upvote per post. Commented Nov 5, 2015 at 1:48
  • Good answer, to be fair, although this is mainly true because in practice it does not reach its worst-case complexity. Can you think of a case where the exponential algorithm is practical up to a certain size just because the worst-case growth rate is low? Commented Nov 5, 2015 at 13:07
  • 1
    Algorithms are classified (very roughly!) by worst-case running time, when often the very low to practically nonexistent probability of the worst case makes the average (or even realistic bad cases) much more important. E.g., we use quicksort because it is blindingly fast on average, even though its worst case is very bad (but the worst case has vanishing probability in practice if mild precautions are taken; see the sketch after this list). Commented Nov 5, 2015 at 16:33
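As a sketch of the "mild precautions" the comment above mentions, here is quicksort with a random pivot, which makes the $\Theta(n^2)$ worst case vanishingly unlikely on any fixed input (illustrative, not an in-place production sort):

```python
import random

def quicksort(xs: list) -> list:
    """Quicksort with a random pivot; expected O(n log n) on every input."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)        # random pivot = the "mild precaution"
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))
```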
-1

Another big, widespread example is prime detection and generation for RSA cryptography. A polynomial-time algorithm (AKS) was discovered early this century, but it runs slowly in practice, so the probabilistic Miller-Rabin test is used instead. The test runs slower the more bases are tested, and could be said to run in exponential (or at least superpolynomial) time in the worst case as the certainty of (non)primality is increased. Implementations therefore choose a pragmatic compromise: a fixed number of rounds that keeps the test efficient while the certainty remains very high.
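A minimal sketch of that trade-off: the rounds parameter buys certainty with running time, since each round multiplies the error bound for a composite n by at most 1/4 (illustrative only, not production crypto code):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test; a composite n passes with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):                # quick small-prime screen
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                               # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):                       # more rounds: slower, more certain
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                          # a is a witness: n is composite
    return True
```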


answered Nov 5, 2015 at 19:25
  • 1
    This does not answer the question; it does not make much sense to bring randomized algorithms with expected polynomial running time into play here. Commented Nov 5, 2015 at 20:34
  • The reaction seems to miss the point: probabilistic tests have no expected running time in the typical sense. The question was not defined so specifically or technically as to exclude this out-of-the-box answer, and I don't think lateral thinking should be penalized here, although of course that's often an uphill battle :( Commented Nov 5, 2015 at 21:22
