In my reference, Exercise 0.4(e) of *Algorithms* by Sanjoy Dasgupta, Christos H. Papadimitriou, and Umesh V. Vazirani, it is given that $$ \begin{bmatrix}F_n\\F_{n+1}\end{bmatrix}=\begin{bmatrix}0&1\\1&1\end{bmatrix}^n\begin{bmatrix}F_0\\F_1\end{bmatrix} $$
Prove that the running time of $fib3$ is $\mathcal{O}(M(n)),ドル where $M(n)$ is the running time of an algorithm for multiplying $n$-bit numbers, and assume that $M(n)=\mathcal{O}(n^2)$.
My Attempt
Consider the idea of binary exponentiation: the number $n$ has exactly $\lfloor\log_2 n\rfloor+1$ digits in base 2ドル,ドル so once we know the powers $a^1,a^2,a^4,\cdots,a^{2^{\lfloor\log_2 n\rfloor}},ドル we need at most $\lfloor\log_2 n\rfloor$ further multiplications to assemble $a^n$. Computing those powers by repeated squaring also takes $\lfloor\log_2 n\rfloor$ multiplications, so in total at most 2ドル\lfloor\log_2 n\rfloor=\mathcal{O}(\log n)$ multiplications.
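The scheme above can be sketched as follows (a minimal illustration of binary exponentiation for plain integers; the function name `power` is mine):

```python
def power(a, n):
    """Compute a**n with O(log n) multiplications by binary exponentiation."""
    result = 1
    while n > 0:
        if n & 1:           # current bit of n is set
            result *= a     # fold the stored power a^(2^i) into the result
        a *= a              # square: a^(2^i) -> a^(2^(i+1))
        n >>= 1
    return result
```

Each loop iteration consumes one bit of $n,ドル so there are $\lfloor\log_2 n\rfloor+1$ iterations, each doing at most two multiplications.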
Squaring the matrix $X$ at most doubles the bit length of its entries, and we compute $\lfloor\log_2 n\rfloor$ successive squares of $X$. $$ \text{The bit length of the entries of }X^{2^{\lfloor\log_2 n\rfloor}}\le 2^{\lfloor\log_2 n\rfloor}\le 2^{\log_2 n}=n $$ $\implies$ all intermediate results have length $\mathcal{O}(n)$.
Now, with all these facts in mind, how do we prove the claim?
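For concreteness, here is a minimal sketch of the algorithm being analyzed: $fib3$ computes $F_n$ by raising $X=\begin{bmatrix}0&1\\1&1\end{bmatrix}$ to the $n$-th power via repeated squaring (the helper name `mat_mult` is mine, not the book's):

```python
def mat_mult(A, B):
    """2x2 matrix product: 8 scalar multiplications and 4 additions."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib3(n):
    """Compute F_n via X^n, where X = [[0, 1], [1, 1]]."""
    result = [[1, 0], [0, 1]]   # 2x2 identity
    X = [[0, 1], [1, 1]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, X)   # fold in X^(2^i)
        X = mat_mult(X, X)                 # square to get X^(2^(i+1))
        n >>= 1
    return result[0][1]   # entry (0,1) of X^n equals F_n
```

Every matrix multiplication is 8 scalar multiplications on big integers, which is exactly what the analysis below charges for.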
Answer
To prove that the running time is $O(n^2),ドル you need to observe that not every multiplication step takes $O(n^2)$ time; for example, the first matrix multiplication can be done in $O(1)$ time. For computing all the powers $X^{2^i}$ for $i \in \{1,\dotsc,\lfloor \log n \rfloor \},ドル if you carefully sum the costs over all matrix multiplications, the running time is:
$$8 \cdot \bigl(M(1) + M(2) + M(4) + M(8) + \dotsb + M(2^{\log n})\bigr)$$
where 8ドル$ is the number of scalar multiplications performed in one 2ドル \times 2$ matrix multiplication.
Furthermore, to compute $X^i$ for any $i \in \{1,\dotsc,n\}$ using binary exponentiation, the cost can again be bounded by $$8 \cdot \bigl(M(1) + M(2) + M(4) + M(8) + \dotsb + M(2^{\log n})\bigr),$$ since at each step we multiply a matrix $X^{2^{i}}$ by an accumulated product $Y$ whose entries have at most 2ドル^i$ bits, and multiplying such matrices takes $O(M(2^i))$ time.
Thus, the total complexity is: $$16 \cdot \sum_{i = 0}^{\log n} M(2^i).$$ For $M(n) = O(n^2),ドル this forms a geometric series: $$O(1) \cdot \sum_{i = 0}^{\log n} 2^{2i} = O(1) \cdot \sum_{i = 0}^{\log n} 4^i = O(1) \cdot \sum_{i = 0}^{\log n} \frac{4^{\log n}}{4^{i}} = O(1) \cdot \sum_{i = 0}^{\log n} \frac{n^2}{4^{i}} = O(n^2) \cdot \sum_{i = 0}^{\log n} \frac{1}{4^{i}} = O(n^2).$$
Thus the cost due to all the multiplication operations is $O(n^2)$. Note that the addition operations take only $O(n \log n)$ time over all matrix multiplications, so the total running time is $O(n^2)$. This type of analysis, where the costs of all steps are summed rather than each step being charged the worst case, is known as amortized analysis.
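As a quick numeric sanity check of the geometric bound (assuming $M(m)=m^2$ exactly, which is my simplification of $M(m)=O(m^2)$), the sum $\sum_{i=0}^{\log_2 n} M(2^i)$ stays below $\frac{4}{3}n^2$:

```python
# Sum M(2^i) = (2^i)^2 = 4^i for i = 0 .. log2(n), and compare against (4/3) n^2.
n = 1 << 20                                    # n = 2^20, a power of two for clean bit counts
total = sum(4**i for i in range(n.bit_length()))   # i ranges over 0 .. log2(n)
assert total <= (4 / 3) * n**2                 # geometric series bound from the answer
```

The closed form is $\sum_{i=0}^{k} 4^i = \frac{4^{k+1}-1}{3} < \frac{4}{3}\cdot 4^{k},ドル which is the $O(n^2)$ bound derived above.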
$\begingroup$ I think the number of multiplications accounting for multiplying the matrix powers is rather 8ドル\big[M(2)+M(2^2)+\cdots+M(2^{\lfloor\log n\rfloor})\big]=8\big[M(1)+M(2)+\cdots+M(2^{\lfloor\log n\rfloor})\big]-8M(1),ドル right? Because the first multiplication involves matrices with entries of bit length 1ドル$ and 2ドル,ドル and for them the cost is $< 8\,M(2)$. $\endgroup$ – Sooraj Soman, Jul 13, 2023 at 16:22
$\begingroup$ @SoorajS Right. I just mentioned an upper bound, so that seems fine. $\endgroup$ – Inuyasha Yagami, Jul 13, 2023 at 16:27