Section 2.3.2, "Analysing divide-and-conquer algorithms," of Introduction to Algorithms, fourth edition, by CLRS, says the following:
A recurrence for the running time of a divide-and-conquer algorithm falls out from the three steps of the basic method. As we did for insertion sort, let $T(n)$ be the worst-case running time on a problem of size $n$. If the problem size is small enough, say $n < n_0$ for some constant $n_0 > 0$, the straightforward solution takes constant time, which we write as $\Theta(1)$. Suppose that the division of the problem yields $a$ subproblems, each with size $n/b$, that is, 1ドル/b$ the size of the original. For merge sort, both $a$ and $b$ are 2ドル$, but we'll see other divide-and-conquer algorithms in which $a \not= b$. It takes $T(n/b)$ time to solve one subproblem of size $n/b$, and so it takes $aT(n/b)$ time to solve all $a$ of them. If it takes $D(n)$ time to divide the problem into subproblems and $C(n)$ time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence $$T(n) = \begin{cases} \Theta(1) & \text{if} \ n < n_0, \\ D(n) + aT(n / b) + C(n) & \text{otherwise} \end{cases}$$
What exactly are $a$ and $b$ here? Based on this description, it seems to me that $b$ might be the branching factor and $a$ might be something like the depth, but I'm not really sure.
Does this answer your question? How to find the runtime out of a recursion formula when using divide and conquer – Nathaniel, Oct 25, 2022 at 21:01
1 Answer
In fact, $a$ is the branching factor (the number of subproblems each problem is divided into), and $b$ is the factor by which the problem size shrinks: each of the $a$ subproblems has size $n/b$.
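For a concrete case where $a = b = 2$, here is a minimal merge-sort sketch (my own illustration, not taken from CLRS) with comments mapping each part of the code to the terms of the recurrence; here $D(n) = \Theta(1)$ and $C(n) = \Theta(n)$, giving $T(n) = 2T(n/2) + \Theta(n)$.

```python
def merge_sort(A):
    # Base case: a problem of size n < n_0 (here n_0 = 2) is solved
    # directly in Theta(1) time.
    if len(A) < 2:
        return A

    # Divide: D(n) -- finding the midpoint costs Theta(1).
    mid = len(A) // 2

    # Conquer: a = 2 recursive calls, each on a subproblem of size n/b = n/2,
    # contributing the a*T(n/b) = 2*T(n/2) term.
    left = merge_sort(A[:mid])
    right = merge_sort(A[mid:])

    # Combine: C(n) -- merging two sorted halves takes Theta(n) time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```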
For an example with $a \neq b$: Strassen's matrix multiplication algorithm satisfies the recurrence $T(n) = 7T(n/2) + \Theta(n^2)$, so each problem branches into $a = 7$ subproblems, each of half the size (so $b = 2$).
This means that at depth $j$ of the recursion tree, there are $a^j$ subproblems, each of size $n/b^j$.
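To make that concrete, here is a small helper (hypothetical, just for illustration) that prints, for given $a$, $b$, and $n$, the number of subproblems and the subproblem size at each depth of the recursion tree:

```python
def recursion_tree_levels(n, a, b, n0=1):
    """Print, for each depth j, the number of subproblems (a^j)
    and the size of each subproblem (n / b^j), stopping once the
    subproblem size drops below the base-case threshold n0."""
    j = 0
    size = n
    while size >= n0:
        print(f"depth {j}: {a**j} subproblems of size {size:.2f}")
        size = size / b
        j += 1

# Example: the Strassen-style recurrence with a = 7, b = 2 on a problem of size 64.
recursion_tree_levels(64, a=7, b=2)
```

Summing the work done at every level of this tree is exactly how the recursion-tree method (and ultimately the master theorem) solves recurrences of this form.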