Say there is a function $f:\mathcal{X} \to \{1, 2, \dots, n\}$, and we want to compute $f(x)$ for a specific input $x$.
We have black-box access to a BPP algorithm that takes time $T$ to answer $\{YES, NO\}$ to the query $LessThan(x,k) := [f(x) < k]$, with error probability 1ドル/3$.
1) How can you compute $f(x)$ in time $O(T\log n \log \log n)$ with error probability 1ドル/3$ using $LessThan$?
2) How can you use $LessThan(x,k)$ to implement a new algorithm $ImprovedLessThan(x, L, H)$ that returns:
HIGH when $(L+H)/2 \le f(x) < H$,
LOW when $L \le f(x) < (L+H)/2$,
OUT OF RANGE when $f(x)$ lies outside $[L,H)$,
has error probability 1ドル/3$, and runs in $O(T)$ time?
3) How can you use $ImprovedLessThan$ to compute $f(x)$ in $O(T \log n)$ time with high probability?
Thoughts so far: For 1) we are using $LessThan$ to do a binary search (or, equivalently, to traverse a binary search tree). We start with $k = n/2$. If $LessThan$ returns $YES$, we update to $k = n/4$; similarly, we update to $k = 3n/4$ on $NO$. I understand we may traverse the tree incorrectly and need to backtrack at times. I don't see how we end up with $O(T\log n \log \log n)$ time or how our error probability stays bounded.
For 2) it's clear to me that given inputs $L$ and $H$, we can just set $k = (L + H)/2$ and call $LessThan(x, k)$. It's also clear that we want to detect incorrect traversals as early as possible to get a better upper bound on time. Not sure where to go from there though.
For 3) I'm not sure how to get high probability guarantees. I just know that if we make $\log n$ correct moves in a row, then we will probably be at the correct answer.
Generally, I'm having a hard time turning my notions about the answer into something rigorous.
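The traversal described in the thoughts above can be made concrete with a short sketch. This is a hypothetical rendering (function names and the oracle interface are our own, not from the problem statement): a plain binary search that trusts every noisy answer, which is exactly the version whose error probability blows up.

```python
def naive_noisy_binary_search(less_than, n):
    """Plain binary search over {1, ..., n}, trusting each answer.
    `less_than(k)` is the noisy oracle for f(x) < k, wrong with
    probability up to 1/3 per call, so for large n this version
    fails with probability close to 1 (see the answer below the
    question on the original page)."""
    lo, hi = 1, n          # invariant (if no answer was wrong): lo <= f(x) <= hi
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if less_than(mid): # believe f(x) < mid
            hi = mid - 1
        else:              # believe f(x) >= mid
            lo = mid
    return lo
```

With a noiseless oracle this is just ordinary binary search; the problem is that a single wrong answer sends the search into the wrong subtree with no way to recover.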
-
Nice question! What are your thoughts? We expect you to make a serious attempt at the problem first before asking it here. (Yuval Filmus, Sep 12, 2019 at 10:03)
-
ok! updating now (zerekabdur, Sep 12, 2019 at 10:05)
-
See Section 3 of Feige, Raghavan, Peleg and Upfal, Computing with noisy information. (Yuval Filmus, Sep 12, 2019 at 10:13)
-
checking now, updated the question (zerekabdur, Sep 12, 2019 at 10:17)
1 Answer
The binary search algorithm that you describe in the question has error probability close to 1 in the worst case. Suppose that $f(x) = 1$. For your algorithm to be correct, it must make no mistake during the entire binary search, which happens with probability only $(2/3)^{\log_2 n} = 1/n^{\log_2(3/2)}$.
In order to fix this, you need to boost the success probability of each step to 1ドル-1/(3\log n)$, which you can do by repeating each comparison $O(\log \log n)$ times and taking a majority vote. Analysis (using Chernoff's bound) left to you.
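The boosted search can be sketched as follows (a sketch under the stated plan, with names of our own choosing; the choice of the repetition count is left as a parameter, to be set to $\Theta(\log \log n)$ by the Chernoff analysis):

```python
def boosted_binary_search(less_than, n, reps):
    """Noisy binary search over {1, ..., n}: boost each comparison by a
    majority vote over `reps` independent calls, then take the usual
    binary-search step. With reps = O(log log n), each boosted comparison
    errs with probability <= 1/(3 log n), so by a union bound all log n
    steps are correct with probability >= 2/3; total time O(T log n log log n).
    `less_than(k)` is the noisy oracle for f(x) < k (error prob <= 1/3).
    `reps` should be odd so the majority is well defined."""
    def majority_less_than(k):
        yes = sum(1 for _ in range(reps) if less_than(k))
        return yes > reps // 2

    lo, hi = 1, n  # invariant (w.h.p.): lo <= f(x) <= hi
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if majority_less_than(mid):  # believe f(x) < mid
            hi = mid - 1
        else:                        # believe f(x) >= mid
            lo = mid
    return lo
```

The only change from naive binary search is the inner majority vote; the Chernoff bound shows that $O(\log \log n)$ repetitions push the per-step error down to $O(1/\log n)$.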
Question 2 is quite simple — you just need to compare your input to $L,\frac{L+H}{2},H$, repeating the comparison $O(1)$ times to boost the success probability.
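A possible sketch of this three-comparison construction (the half-open boundary convention and the specific repetition count are our assumptions, not dictated by the source):

```python
def improved_less_than(less_than, L, H, reps=15):
    """Hypothetical sketch of ImprovedLessThan: classify f(x) relative to
    [L, H) using three boosted calls to the noisy LessThan oracle.
    `less_than(k)` answers f(x) < k with error probability <= 1/3.
    Assumption: with reps = 15, each majority vote errs with probability
    about 0.09 (binomial tail), so a union bound over the three
    comparisons keeps the total error below 1/3, in O(1) * T = O(T) time.
    """
    def maj(k):
        return sum(1 for _ in range(reps) if less_than(k)) > reps // 2

    mid = (L + H) // 2
    if maj(L) or not maj(H):   # believe f(x) < L or f(x) >= H
        return "OUT_OF_RANGE"
    if maj(mid):               # believe L <= f(x) < mid
        return "LOW"
    return "HIGH"              # believe mid <= f(x) < H
```

Since `reps` is a constant independent of $n$, the running time stays $O(T)$.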
The algorithm underlying Question 3 is a bit more involved. It is described in Section 3 of Feige, Raghavan, Peleg and Upfal, Computing with noisy information. Roughly speaking, we perform binary search as usual (without boosting the success probability). Sometimes we notice that we made a mistake at some point in the past (this happens when we get "out of range"); we then backtrack until we no longer get "out of range". In expectation, we make progress, where progress is measured as the length of the common prefix of the correct root-to-leaf path in the decision tree underlying binary search and the path taken by the algorithm. To boost the success probability, we continue the binary search for a few more steps even after locating the correct cell.
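A rough sketch of this backtracking walk, in the spirit of Section 3 of Feige, Raghavan, Peleg and Upfal (the interval conventions and names here are our own; for convenience the search is over $\{0, \dots, n-1\}$, and the precise constant in the step budget is left to the analysis):

```python
def noisy_search(improved, n, steps):
    """Backtracking binary search with a noisy range oracle.
    `improved(L, H)` returns "LOW", "HIGH", or "OUT_OF_RANGE" for f(x)
    relative to the half-open interval [L, H), erring w.p. <= 1/3.
    Walk down the binary-search tree; whenever the oracle reports the
    target outside the current interval, back up one level. Since each
    step makes progress in expectation, running for `steps` = O(log n)
    steps, each costing O(T), gives O(T log n) total; continuing past
    the leaf boosts the success probability."""
    path = [(0, n)]                    # stack of nested half-open intervals
    for _ in range(steps):
        L, H = path[-1]
        verdict = improved(L, H)
        if verdict == "OUT_OF_RANGE":
            if len(path) > 1:
                path.pop()             # undo the last (presumably wrong) move
        elif H - L <= 1:
            pass                       # at a leaf that still looks right: stay
        elif verdict == "LOW":
            path.append((L, (L + H) // 2))
        else:                          # "HIGH"
            path.append(((L + H) // 2, H))
    return path[-1][0]                 # w.h.p. the leaf value equals f(x)
```

Note that no per-step boosting is needed: a wrong move is eventually detected as "out of range" and undone, and the random walk over the tree drifts toward the correct leaf.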