Your code works correctly – as far as I can see – for strictly positive integers \$ a, b, c, d \$. It fails if the integers are allowed to be zero or negative. For example:
solve(-1, -1, -2, -3)
returns False
although \$(-1, -1)\$ can be transformed to \$(-2, -3)\$.
solve(0, 0, 1, 1)
fails with a "maximum recursion depth exceeded" runtime error because it calls itself repeatedly with the same arguments.
The remainder of this review is based on the assumption that \$ a, b, c, d > 0\$.
The number of recursive calls increases quickly with larger input values because in most cases you have two possible branches at
return solve(a+b, b, c, d) or solve(a, a+b, c, d)
As an example, solve() is called
- \29ドル\$ times to compute that \$(4, 6)\$ is not reachable from \$(1, 1)\$, and
- \8189ドル\$ times to compute that \$(127, 99)\$ is reachable from \$(1, 1)\$.
The problem is that one does not know if \$(a, b)\$ should become \$(a+b, b)\$ or \$(a, a+b)\$ in order to reach \$(c, d)\$.
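For illustration, the forward recursion under review presumably has roughly the following shape (a reconstruction based on the quoted return statement, not necessarily your exact code); adding a simple call counter makes it easy to watch how quickly the number of calls grows:

calls = 0 # number of calls, for measurement only

def solve_forward(a, b, c, d):
 global calls
 calls += 1
 if a == c and b == d:
 return True
 if a > c or b > d:
 return False
 # Two possible branches: grow the first or the second component.
 return solve_forward(a + b, b, c, d) or solve_forward(a, a + b, c, d)

print(solve_forward(1, 1, 127, 99), calls)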
The "trick" to do the transformations backwards. If \$(c, d)\$ is reached eventually, then the previous pair must be either \$(c-d, d)\$ or \$(c, d-c)\$. But only one of these is possible, depending on the sign of \$ c-d \$.
That leads to the following implementation:
def solve(a, b, c, d):
 if a == c and b == d:
 return True
 elif a > c or b > d:
 return False
 elif c >= d:
 return solve(a, b, c - d, d)
 else:
 return solve(a, b, c, d - c)
which is considerably faster: now solve() is called
- \4ドル\$ times to compute that \$(4, 6)\$ is not reachable from \$(1, 1)\$, and
- \14ドル\$ times to compute that \$(127, 99)\$ is reachable from \$(1, 1)\$.
The method can still fail with a "maximum recursion depth exceeded" runtime error. But with this changed algorithm it is easy to replace the recursion by an iteration:
def solve(a, b, c, d):
 while c >= a and d >= b:
 if c == a and d == b:
 return True
 (c, d) = (c - d, d) if c >= d else (c, d - c)
 return False
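A few quick checks based on the examples used above:

assert solve(1, 1, 1, 1)
assert not solve(1, 1, 4, 6) # (4, 6) is not reachable from (1, 1)
assert solve(1, 1, 127, 99) # (127, 99) is reachable from (1, 1)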
The algorithm itself can be improved further by replacing multiple transformations $$ (c, d) \to (c-d, d) \to (c-2d, d) \to \ldots $$ by a single transformation $$ (c, d) \to (c - kd, d) $$ with a suitable integer \$k\$. How large can \$k\$ be chosen? If that reminds you of the Euclidean algorithm for computing the greatest common divisor then you are on the right track to implement an optimal method.
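To sketch where that hint could lead (this is only one possible variant, still assuming \$a, b, c, d > 0\$; the checks after the loop cover the case where a modulo step jumps past \$(a, b)\$):

def solve(a, b, c, d):
 # Walk backwards from (c, d), compressing a run of identical
 # subtraction steps into a single modulo operation.
 while c > a and d > b:
 if c > d:
 c %= d
 else:
 d %= c
 if c == a and d == b:
 return True
 if c == a:
 # Going forward, the second component grows from b to d in steps of a.
 return d > b and (d - b) % a == 0
 if d == b:
 # Going forward, the first component grows from a to c in steps of b.
 return c > a and (c - a) % b == 0
 return False

With the modulo steps the loop needs only \$O(\log \max(c, d))\$ iterations, just like the Euclidean algorithm.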