# Runtime Complexity:
I wrote up a testing function to automate the timing, with the help of Criterium for benchmarking (required as c):

(require '[criterium.core :as c])

(defn test-times [start-n end-n step-factor]
  (->> (iterate #(* step-factor %) start-n)
       (take-while #(< % end-n))
       (mapv (fn [n] [n (->> (c/quick-benchmark* (fn [] (sieve-primes n)) nil)
                             :mean
                             first)]))))

This runs sieve-primes for a range of values of n (multiplying n by step-factor each time), takes the mean run time reported by Criterium, and packages the results in a vector of [n time-taken] pairs. Here are the results (times in seconds):
(test-times 1 1e6 2)
=>
[[1 2.2600213396974157E-7]
[2 2.3112686316939942E-7]
[4 1.3991660811257498E-6]
[8 5.320524852029437E-6]
[16 1.2513049818951138E-5]
[32 3.5472120941558446E-5]
[64 6.905003618649965E-5]
[128 1.4226329763560502E-4]
[256 2.9170654349951127E-4]
[512 6.21253591919192E-4]
[1024 0.0012675748080168776]
[2048 0.002575853764957265]
[4096 0.0052047926]
[8192 0.010514279366666667]
[16384 0.023774149333333335]
[32768 0.05310468508333334]
[65536 0.11205571983333333]
[131072 0.23198671283333336]
[262144 0.4894090738333333]
[524288 1.166101614]]
Plotting it out, it appears to be roughly O(n): the time taken roughly doubles every time n is doubled.
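That doubling claim can be sanity-checked straight from the table, without re-running the benchmark: dividing each mean time by the previous one gives the growth factor per doubling of n. A small sketch using the last few timings from the 1e6 run above:

```clojure
;; Mean times copied from the (test-times 1 1e6 2) results above.
(def timings-1e6
  {65536  0.11205571983333333
   131072 0.23198671283333336
   262144 0.4894090738333333
   524288 1.166101614})

;; Ratio of each time to the previous one; a factor of ~2.0 per
;; doubling of n means time grows roughly linearly in n.
(defn growth-factors [m]
  (let [ts (map val (sort-by key m))]
    (mapv (fn [a b] (/ b a)) ts (rest ts))))

(growth-factors timings-1e6)
```

The factors come out a little above 2 (the last one is about 2.38), so even at this range the growth is already creeping past linear.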
Running it again with a higher cutoff:

(test-times 1 1e7 2)
=>
[[1 2.3196602948749615E-7]
[2 2.3105128053865377E-7]
[4 1.42932799280724E-6]
[8 5.63559279071997E-6]
[16 1.2984918224299065E-5]
[32 3.289705735278571E-5]
[64 7.087895492662475E-5]
[128 1.4285673446327686E-4]
[256 2.923177758112095E-4]
[512 6.273025793650794E-4]
[1024 0.0012981272479166666]
[2048 0.0025652252416666667]
[4096 0.005141740791666667]
[8192 0.01068772265]
[16384 0.022486903166666666]
[32768 0.05367695991666667]
[65536 0.11212592816666668]
[131072 0.23180363016666666]
[262144 0.48056071016666674]
[524288 1.1464995601666668]
[1048576 2.8823068596666666]
[2097152 6.4231157065]
[4194304 15.808042165333335]
[8388608 56.98961213533334]]

Plotting it out, the picture changes: the time taken roughly doubles every time n is doubled (consistent with O(n)) up to n = 524288, then it explodes; the last doubling, from n = 4194304 to n = 8388608, takes about 3.6 times as long. A custom Wolfram regression calculator gave the best fit with a cubic curve, so over this range it looks closer to O(n^3).
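The blow-up can be quantified the same way: if the time scales as n^k over a doubling step, then t(2n)/t(n) = 2^k, so k is log2 of the ratio of successive times. A sketch over the four largest timings from the 1e7 run above:

```clojure
;; Mean times for the largest n, copied from the (test-times 1 1e7 2)
;; results above.
(def timings-1e7
  [[1048576 2.8823068596666666]
   [2097152 6.4231157065]
   [4194304 15.808042165333335]
   [8388608 56.98961213533334]])

;; Local exponent estimate: if t ~ n^k over a doubling step,
;; then k = log2(t(2n) / t(n)).
(defn local-exponents [pairs]
  (let [ts (map second pairs)]
    (mapv (fn [a b] (/ (Math/log (/ b a)) (Math/log 2.0)))
          ts (rest ts))))

(local-exponents timings-1e7)
```

The local exponent climbs from about 1.16 to about 1.85 on the final step, which matches the visual impression that the curve bends sharply upward at the end of the measured range.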