Message 228779
Author:     gladman
Recipients: brg@gladman.plus.com, gladman, mark.dickinson, mrabarnett, scoder, steven.daprano, terry.reedy, vstinner, wolma
Date:       2014-10-08 08:14:03
Message-id: <1412756044.45.0.729779727095.issue22477@psf.upfronthosting.co.za>
Content:
You may be right that it is not worth adding the ability to handle a variable number of arguments to the new gcd. But that depends on whether this really would add a significant burden to the implementation, and I am not sure that it would.
But consider pure Python implementations of gcd and gcdm:

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def gcdm(a, *r):
    for b in r:
        while b:
            a, b = b, a % b
    return a
Using the second of these alone, compared with the first combined with reduce, on a sequence of 10000 random numbers in the range 0 <= r < 10 ** 12, gives a speed improvement of over 25%.
And since we are looking for speed improvements, a further possible 25% is a worthwhile gain for a common pattern of gcd use, which is why I believe the question is worth asking.
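The comparison described above can be sketched as follows. This is a minimal, self-contained reconstruction of the benchmark, assuming functools.reduce and the timeit module for the timing; the sample size (10000) and the range bound (10 ** 12) are taken from the figures quoted in the message, and the variable names are illustrative.

```python
from functools import reduce
import random
import timeit

def gcd(a, b):
    # Euclid's algorithm for two integers.
    while b:
        a, b = b, a % b
    return a

def gcdm(a, *r):
    # Variadic gcd: fold each remaining argument into the
    # running result with the same inner loop as gcd above.
    for b in r:
        while b:
            a, b = b, a % b
    return a

# 10000 random numbers in 0 <= r < 10 ** 12, as in the message.
nums = [random.randrange(10 ** 12) for _ in range(10000)]

# Both forms must agree on the result before timing them.
assert gcdm(*nums) == reduce(gcd, nums)

t_reduce = timeit.timeit(lambda: reduce(gcd, nums), number=10)
t_gcdm = timeit.timeit(lambda: gcdm(*nums), number=10)
print(f"reduce(gcd, ...): {t_reduce:.3f}s  gcdm(*...): {t_gcdm:.3f}s")
```

The gain comes from avoiding one Python-level function call per element: reduce calls gcd 9999 times, while gcdm runs the whole fold inside a single frame.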