A very basic question has bugged me for a while. We know that random-access machines (RAMs) are polynomially equivalent to Turing machines. I assume a RAM model with unit-cost addition, subtraction, indirect addressing, etc. (but no multiplication), as in the paper "Time bounded random access machines".
In this model I can always add two given integers in $O(1)$ steps. On a Turing machine (TM), however, adding two numbers represented in binary takes time that scales with the input length. The input to the Turing machine is $a_1a_2\cdots a_i\# b_1\cdots b_j$ (where $a_k,b_k \in \{0,1\}$).
Is it that, when comparing a RAM against a TM, we need to make sure both see the same input length (as done in the paper)? That is, I would have to give the RAM the input as, say, $a_1,a_2,\ldots, a_i, 2, b_1,\ldots, b_j$ (where 2ドル$ is used to represent $\#$). Only then can I compare the runtimes (and in that case I do see that the polynomial equivalence holds). If they don't see the same input length, then it's comparing apples to oranges; am I right?
---
Actually, in the cited paper addition costs $O(l(n))$ (Table I), where $n$ is the input number and $l(n)$ is the length of the number (effectively the log). — ratchet freak, Aug 27 at 12:14
---
@ratchetfreak The paper considers both cost models: unit cost, where $l(n)=1$ for every integer $n$, and logarithmic cost, where $l(n)$ is logarithmic in $n$. Theorem 2 states polynomial equivalence for both cost models. — advocateofnone, Aug 27 at 12:37
1 Answer
Yes: in any cross-model comparison you fix a single encoding of inputs and measure time as a function of the same size parameter, up to linear distortion. In the Cook–Reckhow RAM, the input string $w=\alpha_{i_1}\cdots \alpha_{i_n}$ is presented as the sequence of integers $i_1,i_2,\ldots,i_n,0$, and each READ incurs a cost that depends on the chosen cost function $\ell$ for operand size. This is stated where they define recognition and the input format, and in the instruction timings of Table I, where addition has time $\ell(X_j)+\ell(X_k)$ (page 356).
With that convention your TM input $a_1\cdots a_i \# b_1\cdots b_j$ maps to the RAM input $(a_1,\ldots,a_i,2,b_1,\ldots,b_j,0)$, where 2ドル$ plays the role of $\#$ and 0ドル$ is the terminator used in the paper. (Strictly, the data symbols should get positive codes so they cannot collide with the terminator 0ドル$; e.g., map the symbols 0,1,ドル\#$ to the codes 1,2,3ドル$.) The size parameters then differ by a constant, $$ n_{\mathrm{TM}}=i+j+1 \quad\text{and}\quad n_{\mathrm{RAM}}=i+j+2, $$ so $n_{\mathrm{RAM}}=n_{\mathrm{TM}}+1=\Theta(n_{\mathrm{TM}})$. A fixed linear-time decoding in either direction preserves the asymptotic input length, so no apples-to-oranges comparison arises.
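To make the encoding concrete, here is a minimal Python sketch of this mapping (the symbol codes 1ドル,2,3$ follow the suggestion above; the function and variable names are mine, not the paper's):

```python
# Positive codes for the data symbols; 0 is reserved as the terminator.
SYMBOL_CODE = {"0": 1, "1": 2, "#": 3}

def tm_tape_to_ram_input(tape: str) -> list[int]:
    """Map a TM input string such as '101#11' to the Cook-Reckhow RAM
    input convention: one integer per symbol, followed by the terminator 0."""
    return [SYMBOL_CODE[s] for s in tape] + [0]

tape = "101#11"                          # a_1...a_i # b_1...b_j with i = 3, j = 2
ram_input = tm_tape_to_ram_input(tape)
assert ram_input == [2, 1, 2, 3, 2, 2, 0]
assert len(ram_input) == len(tape) + 1   # n_RAM = n_TM + 1
```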
Your concern about $O(1)$ addition on a RAM versus length-dependent addition on a TM disappears once you use the RAM cost function the paper was designed around. Cook and Reckhow charge each instruction time according to $\ell(\cdot)$, where the natural choice is logarithmic $\ell(n)=\Theta(\log\max\{2,|n|\})$. Then the addition instruction takes time $\ell(X_j)+\ell(X_k)=\Theta(\text{bit length}(X_j)+\text{bit length}(X_k))$, which matches the linear-in-bit-length behavior of a TM for binary addition (see the discussion that introduces the logarithmic cost and the timings in Table I on page 356).
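As a quick sanity check, here is a minimal Python sketch of that charging scheme (the function names are mine, and taking the bit length is one standard realization of the paper's $\ell$):

```python
def ell(n: int) -> int:
    """Logarithmic cost of the integer n: its bit length, at least 1.
    One standard realization of the paper's l(n)."""
    return max(1, abs(n).bit_length())

def add_cost(x: int, y: int) -> int:
    """Charged time for the RAM addition X_i <- X_j + X_k under
    logarithmic cost: ell(X_j) + ell(X_k), as in Table I."""
    return ell(x) + ell(y)

# Adding two short numbers is charged by their bit lengths, matching the
# linear-in-bit-length time of binary addition on a TM:
assert add_cost(0b1011, 0b110) == 4 + 3  # 4-bit + 3-bit operands
```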
The formal simulation theorems in the paper make the polynomial equivalence precise once a common size measure is fixed. If a RAM with logarithmic cost runs in time $T(n)$, then a multitape TM recognizes the same language in time $O(T(n)^2)$. If a RAM uses constant unit cost for numbers of arbitrary size, then the simulation is $O(T(n)^3)$. Conversely, a TM running in time $T(n)\ge n$ is simulated by a RAM in time $O(T(n)\cdot \ell(T(n)))$, yielding $O(T(n)\log T(n))$ under the logarithmic cost. These statements are Theorem 2 parts (a) and (b) on pages 361–363, and the proof sketch explains why the log-cost case gives a quadratic blowup and the unit-cost case gives a cubic blowup.
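For concreteness, here is one instantiation of those bounds (my own numbers, not an example from the paper): if the machine being simulated runs in time $T(n)=n^2$, the three simulations cost $$ \underbrace{O(n^4)}_{\substack{\text{TM simulating a}\\ \text{log-cost RAM}}},\qquad \underbrace{O(n^6)}_{\substack{\text{TM simulating a}\\ \text{unit-cost RAM}}},\qquad \underbrace{O(n^2\log n)}_{\substack{\text{log-cost RAM}\\ \text{simulating a TM}}}, $$ all polynomial in $n$, which is exactly the sense in which the models are polynomially equivalent.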
So the right way to compare is: fix an encoding $E$ with $|E(x)|=\Theta(|x|)$, measure time as $T(|E(x)|)$, and then apply the simulation bounds above. Under this framework, your idea of feeding the RAM the sequence $a_1,\ldots,a_i,2,b_1,\ldots,b_j,0$ is perfectly valid and gives the same asymptotic input length as the TM tape. In the logarithmic cost model, each arithmetic step scales with bit length; in either cost model, the Cook–Reckhow simulations guarantee polynomial equivalence.
---
Thanks a lot, makes sense. — advocateofnone, Aug 28 at 6:48