You asked for a stable data formulation with comparable speed. My suggestion:
    import Data.Array;
    part(n)=let{a=zipWith(*)[1,3..][1,4..];c=zipWith(*)[1..][5,11..];
    b=zipWith(*)[1,3..][2,5..];d=zipWith(*)[1..][7,13..];
    y=array(0,n)((0,1):[(i,s(a)+s(b)-s(c)-s(d))|i<-[1..n],
    let s=foldr(+)0.map((y!).(i-)).takeWhile(<=i)])}in(y!n);
    main=print(part(60000));
– when compiled with ghc -O, which keeps it from overflowing the stack – computes P(60k) in ~12 s on an 800 MHz machine, an O(n²) runtime practically equal to that of your expressly stateful computation!
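If you want a quick sanity check before the long run (my addition, not part of the timing claim above): save the snippet as, say, Part.hs, compile it with ghc -O, and compare small inputs against known values of the partition function, P(10) = 42 and P(100) = 190569292, e.g. via

    -- quick sanity check (my addition); `part` is the function defined above
    checkPart :: Bool
    checkPart = part 10 == 42 && part 100 == 190569292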
I agree that an O(1)-accessible array of boxed big integers is unbeatable in cost, given sufficient memory, when the computation of P(60k) makes index jumps of up to 200 cells.
I hid my computation state behind the i<-[1..n]-sequenced recursion on y, though.
And lest I recompute the products used for array indexing over and over, I memoized them in lazy, infinite named sequences (indexed by h = 0,1,2,…)
- a := <[1,3..]*[1,4..]> = (2h+1)(3h+1) = (2h+1)(6h+3-1)/2 = k(3k-1)/2 for odd k = 2h+1
- b := <[1,3..]*[2,5..]> = (2h+1)(3h+2) = (2h+1)(6h+3+1)/2 = k(3k+1)/2 for odd k = 2h+1
- c := <[1..]*[5,11..]> = (h+1)(6h+5) = (2h+2)(6h+6-1)/2 = k(3k-1)/2 for even k = 2h+2
- d := <[1..]*[7,13..]> = (h+1)(6h+7) = (2h+2)(6h+6+1)/2 = k(3k+1)/2 for even k = 2h+2
and feel rewarded with the resulting brevity of the code.
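The four streams are simply the generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2, split by the parity of k. As a small consistency check (my addition, for illustration only), merging them in ascending order should reproduce 1, 2, 5, 7, 12, 15, 22, 26, …:

    -- sketch: merge the four memoized streams and compare them against the
    -- generalized pentagonal numbers computed directly from the formula
    pentagonalsMatch :: Bool
    pentagonalsMatch = take 20 merged == take 20 direct
      where
        a = zipWith (*) [1,3..] [1,4..]
        b = zipWith (*) [1,3..] [2,5..]
        c = zipWith (*) [1..]   [5,11..]
        d = zipWith (*) [1..]   [7,13..]
        merged = foldr1 mergeAsc [a, b, c, d]
        direct = concat [ [k*(3*k-1) `div` 2, k*(3*k+1) `div` 2] | k <- [1..] ]
        -- merge two infinite, ascending lists
        mergeAsc xs@(x:xt) ys@(y:yt)
          | x <= y    = x : mergeAsc xt ys
          | otherwise = y : mergeAsc xs yt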
P(60k), by contrast, stack overflows under my debian/stable ghc -O when computed with @BenceKodaj's O(log n) IntMap structure :(
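And for readers who find the compressed formulation dense, here is a spaced-out rendering of the same recurrence with comments (my sketch; apart from layout and the name partitions, it is meant to behave exactly like the code above):

    import Data.Array

    -- partitions n = P(n) via Euler's pentagonal-number recurrence,
    -- memoized in a lazily filled boxed array of big integers
    partitions :: Int -> Integer
    partitions n = y ! n
      where
        a = zipWith (*) [1,3..] [1,4..]   -- k(3k-1)/2 for odd k
        b = zipWith (*) [1,3..] [2,5..]   -- k(3k+1)/2 for odd k
        c = zipWith (*) [1..]   [5,11..]  -- k(3k-1)/2 for even k
        d = zipWith (*) [1..]   [7,13..]  -- k(3k+1)/2 for even k
        y :: Array Int Integer
        y = array (0, n) $
              (0, 1) :
              [ (i, s a + s b - s c - s d)   -- P(i) = sum of +/- P(i - pentagonal)
              | i <- [1..n]
              , let s = sum . map ((y !) . (i -)) . takeWhile (<= i) ]

    main :: IO ()
    main = print (partitions 60000)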