Project Euler 14 asks: Which starting number, under one million, produces the longest Collatz sequence?
I've been trying to optimize this solution to execute in less than 1 s — that's my goal for the first 50 Euler problems. Currently it executes in about 1.2 s for me, so I need it roughly 20% faster. Any tips or other methods?
from time import time

def rec_col(n, cache):
    if n in cache:
        return cache[n]
    x = (rec_col(n//2, cache) if n % 2 == 0 else rec_col(3*n+1, cache)) + 1
    cache[n] = x
    return x

def main():
    start = time()
    highest = 0
    ans = 1
    cache = {1: 1}
    for i in range(1, 10**6):
        temp = rec_col(i, cache)
        if temp > highest:
            ans = i
            highest = temp
    print("{0}\nTime {1:.4f} s".format(ans, time() - start))

if __name__ == '__main__':
    main()
This has the same performance:
def rec_col(n, cache):
    cache[n] = (rec_col(n//2, cache) if n % 2 == 0 else rec_col(3*n+1, cache)) + 1 \
        if n not in cache else cache[n]
    return cache[n]
\$\begingroup\$ You're doing it recursively. Wouldn't it be better, performance-wise, if you ran it inlined? \$\endgroup\$ – Olivier Grégoire, Nov 25, 2016 at 10:25
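A minimal iterative sketch of what the comment suggests — no recursion and no cache, just a straight loop (function names are my own, not from the question):

```python
def collatz_len(n):
    # iterative sequence length, counting n itself and the final 1
    steps = 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def longest_under(limit):
    # brute-force scan over all start values below limit; slower than the
    # memoised version overall, but avoids recursion entirely
    return max(range(1, limit), key=collatz_len)
```

Without memoisation this recomputes every chain from scratch, so on its own it is unlikely to beat the cached recursive version for the full 10**6 range.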
2 Answers
Using a 3.5 interpreter, things yielding "negative speed-up" included:

- `@functools.lru_cache(None)` (mainly due to explicit base-case handling?)
- substituting `cache.get()` and `is not None` for `in` and `[]`
- `divmod(n, 2)` instead of `% 2` and `// 2`
- "inlining the even case"

Negligible speed-up:

- binary operators instead of `% 2` and `// 2`
- a global `cache` instead of an explicit parameter

Slight speed-up:

- `rec_col((3*n+1) >> 1) + 2`
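The "slight speed-up" item works because for odd `n`, `3*n + 1` is always even, so the next halving step can be folded in and two steps counted at once. A sketch of the question's helper with that shortcut applied (assuming the same cache convention, seeded with `{1: 1}`):

```python
def rec_col(n, cache):
    # memoised Collatz sequence length, counting n itself and the final 1
    if n in cache:
        return cache[n]
    if n % 2 == 0:
        x = rec_col(n // 2, cache) + 1
    else:
        # for odd n, 3*n + 1 is always even: fold the halving in, add 2
        x = rec_col((3 * n + 1) >> 1, cache) + 2
    cache[n] = x
    return x
```

Note this stores fewer intermediate values (the even numbers `3*n + 1` never get cache entries of their own), which also trims the cache-size figure below.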
Of the 1,588,206 cache entries, almost half are never looked up at all; fewer than one eighth are looked up twice, and none more often than that.
There is a conceptual leap that will yield a major speedup: we can eliminate every starting number below 500,000, since each such number n has a counterpart 2n, still under one million, whose sequence is one step longer. This gives roughly a 2x speedup.
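A sketch of that idea applied to the question's code — the helper is unchanged, only the scan now starts at 500,000 (the `longest_start` name is mine):

```python
def rec_col(n, cache):
    # the question's memoised helper, unchanged
    if n in cache:
        return cache[n]
    x = (rec_col(n // 2, cache) if n % 2 == 0 else rec_col(3 * n + 1, cache)) + 1
    cache[n] = x
    return x

def longest_start():
    cache = {1: 1}
    highest, ans = 0, 1
    # any i < 500_000 is dominated by 2*i, whose sequence is one step
    # longer and which is still below 10**6, so skip the lower half
    for i in range(500_000, 10**6):
        temp = rec_col(i, cache)
        if temp > highest:
            ans, highest = i, temp
    return ans
```

The loop does half as many iterations, though with memoisation the actual saving is smaller, as the comment below points out.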
\$\begingroup\$ *This will be a 2x speedup* – without memoisation or similar, it will still be less than that: not only are the sequences starting from even numbers from 5E5 to 1E6 one step longer, there are about the same number of odd start values. With functools.lru_cache, I got about one fifth less run time: well spotted. (Almost no decrease in cache use.) \$\endgroup\$ – greybeard, Feb 6, 2018 at 0:06
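For reference, the `functools.lru_cache` variant the comment measured could look like the sketch below. The recursion-limit bump is my addition: a fresh chain can recurse a few hundred levels deep, and the cache wrapper's C-level calls also count toward CPython's recursion depth.

```python
import functools
import sys

# deep fresh chains plus the lru_cache wrapper can exceed the default
# recursion limit of 1000, so raise it before the first deep call
sys.setrecursionlimit(4000)

@functools.lru_cache(maxsize=None)
def col_len(n):
    # Collatz sequence length, counting both n and the final 1
    if n == 1:
        return 1
    return col_len(n // 2 if n % 2 == 0 else 3 * n + 1) + 1
```

With `maxsize=None` the cache never evicts, so this behaves like the explicit dict, minus the hand-written base-case check that the answer above blamed for the slowdown.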