
This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Decimal.__int__ overflows for large values
Type: Stage:
Components: Library (Lib) Versions:
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: facundobatista Nosy List: ajaksu2, aryx, facundobatista, georg.brandl, mark.dickinson
Priority: normal Keywords:

Created on 2007-08-08 20:43 by aryx, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Messages (10)
msg32602 - Author: Jason G (aryx) Date: 2007-08-08 20:43
This also affects Decimal.__hash__, since it [indirectly] calls Decimal.__int__.
>>> from decimal import Decimal as D
>>> e = D("1e1234567890987654321")
>>> int(e)
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
 File "/usr/lib/python2.5/decimal.py", line 1501, in __int__
 s = ''.join(map(str, self._int)) + '0'*self._exp
OverflowError: cannot fit 'long' into an index-sized integer
>>> e = D("1e1234567890")
>>> int(e)
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
 File "/usr/lib/python2.5/decimal.py", line 1501, in __int__
 s = ''.join(map(str, self._int)) + '0'*self._exp
MemoryError
Also, for values that do work, the conversion is incredibly slow when they are still fairly large.
msg32603 - Author: Daniel Diniz (ajaksu2) * (Python triager) Date: 2007-08-09 08:09
Hi Jason,
The OverflowError is related to "index-sized ints" as in "ints that are valid indexes for sequences", try:
>>> e = "0" * 1234567890
So it seems that this error prevents the creation of a string of length 1234567890, which is a good thing (sorta) :)
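[Editor's note: the "index-sized integer" in the traceback is the platform's Py_ssize_t, and sequence-repetition counts must fit in it. A minimal modern-Python reproduction (the exact limit is platform-dependent):]

```python
# Repeating a string more than sys.maxsize times fails up front,
# before any allocation is attempted -- the same OverflowError the
# original report hit via '0'*self._exp.
import sys

try:
    s = '0' * (sys.maxsize + 1)
except OverflowError as exc:
    print(type(exc).__name__)  # OverflowError
```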
A while ago I tried to implement a dec2long function based on numbers instead of strings; see if it helps (it's VERY slow and naive, but IIRC it was a bit faster than the original version and correct): http://groups.google.com/group/comp.lang.python/msg/aba7264ab38eb25e
Now, do you really need all that precision for such huge numbers? I know I didn't ;)
Daniel
msg32604 - Author: Jason G (aryx) Date: 2007-08-09 15:09
Hey Daniel,
The bigger issue for us is mostly the fact that Decimal.__hash__ is calling Decimal.__int__, not that we want an integer/long version of a very large Decimal. We do not actually cast the decimal to an int/long explicitly. I wouldn't have any issues if Decimal.__int__ remained as it is, but I think it would be a good idea for Decimal.__hash__ to do something different. Probably something fast and simple, such as hash(self.as_tuple()), self being a Decimal of course.
Our project is a CAS and we use Decimal for our real number class. I happened to run into this issue when I was implementing approximation of log(x) for extremely large/small values of x. I just started keyboard bashing numbers and behold, Decimal crashed on me :)
msg32605 - Author: Daniel Diniz (ajaksu2) * (Python triager) Date: 2007-08-09 17:21
I see. Inheriting from Decimal and overloading __hash__ is a way to solve your problem, but it's IMHO a shallow bug and worth reporting.
I just tried hash(D.as_tuple()) and it is blazing fast. I think that unless the official line is "don't touch decimal.py until X", this change to hashing would be useful and (AFAICT) harmless enough to fit in e.g. 2.5.2. To avoid incompatibilities, __hash__ could catch the OverflowError and only use .as_tuple() for values beyond the previous maximum (keeping __hash__ slow, unfortunately, for values below it).
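[Editor's note: the subclass-and-fallback idea described here can be sketched as follows in modern Python. The class name and the exact fallback are illustrative only; and, as Facundo points out later in this thread, the tuple hash breaks the hash(n) == hash(Decimal(n)) invariant for any value that takes the fallback path, which is why the real fix went a different way.]

```python
from decimal import Decimal

class HashableDecimal(Decimal):
    """Sketch: hash via int() normally, fall back for huge values."""

    def __hash__(self):
        try:
            # normal path preserves hash(n) == hash(Decimal(n))
            # for integral values
            return hash(int(self))
        except (OverflowError, MemoryError):
            # last resort for values too large to convert;
            # NOT hash-compatible with the equivalent int
            return hash(self.as_tuple())

print(hash(HashableDecimal(42)) == hash(42))  # True
```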
Could the current status of Decimal be made a bit more clear? Are bug reports/patches welcome? Is bugging Facundo or RHettinger welcome? :)
If making __int__ a bit faster and able to convert without huge strings is desired, I've updated that old function (see below); AFAIK it could replace Decimal.__int__ and Decimal.__long__ in Lib/decimal.py. It gets about ten times faster in the best cases and is about as slow in the worst (it could be much worse if "long(rint_part + rdec_part)/exponent" is a STUPID thing to do, but that seems easy to avoid). As the original __int__ optimizes str(Decimal._int) and doesn't split/check for substrings, using the same path should speed this up more. I can run the tests and benchmark it (next month...) if there's interest.
def dec2long(number):
    """Convert decimal.Decimal to long (abridged, non-checking version)."""
    decimal_string = str(number)
    if "e" in decimal_string:
        radix, exponent = decimal_string.split("e")
    elif "E" in decimal_string:
        radix, exponent = decimal_string.split("E")
    else:
        radix, exponent = (decimal_string, 0)
    if exponent:
        exponent = int(exponent)
        if "." in radix:
            rint, rdec = radix.split(".")
            radix_decimal_part_len = long(len(rdec))
            if radix_decimal_part_len <= exponent:
                radix_as_long = long(rint + rdec)
                corrected_exponent = exponent - radix_decimal_part_len
                result = radix_as_long * 10L ** corrected_exponent
            else:
                # drop the fractional digits the exponent does not cover
                result = long(rint + rdec) // 10L ** (radix_decimal_part_len - exponent)
        else:
            radix_as_long = long(radix)
            result = radix_as_long * 10L ** exponent
    else:
        if "." in radix:
            radix_integer_part = long(radix.split(".")[0])
        else:
            radix_integer_part = long(radix)
        result = radix_integer_part
    return result
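[Editor's note: a Python 3 rendition of the same number-based idea, working from as_tuple() instead of the decimal string. The helper name is hypothetical and, like the original, it skips checks for special values such as NaN and Infinity; note that 10**exp is still a huge integer for huge exponents.]

```python
from decimal import Decimal

def dec_to_int(d):
    """Truncate a (finite) Decimal to an int without building '0'*exp strings."""
    sign, digits, exp = d.as_tuple()
    c = int(''.join(map(str, digits)))  # the coefficient as an int
    if exp >= 0:
        r = c * 10 ** exp
    else:
        r = c // 10 ** -exp  # truncation is safe: c is non-negative here
    return -r if sign else r

print(dec_to_int(Decimal("1.23e5")))  # 123000
print(dec_to_int(Decimal("-7.9")))    # -7 (truncates toward zero, like int())
```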
As a comparison, here's __int__ (abridged) from decimal.py:
 def __int__(number):
     """Converts self to an int, truncating if necessary."""
     if number._exp >= 0:
         s = ''.join(map(str, number._int)) + '0' * number._exp
     else:
         s = ''.join(map(str, number._int))[:number._exp]
         if s == '':
             s = '0'
     sign = '-' * number._sign
     return int(sign + s)
msg32606 - Author: Georg Brandl (georg.brandl) * (Python committer) Date: 2007-08-09 17:37
Assigning to Facundo, he's actively working on decimal ATM.
msg32607 - Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2007-08-12 17:57
Doesn't using hash(D.as_tuple()) break the principle that if two objects compare equal then they should have equal hashes?
An alternative to converting to long before hashing is to use the fact that, for the current hash implementation for long, we have hash(n) == hash(n % (2**32-1)) (except when n is a multiple of 2**32-1). For a Decimal d that's integral, one should be able to compute d % (2**32-1) very quickly: if d = c*10**e then just use (c * pow(10, e, 2**32-1)) % (2**32-1), which should be acceptably fast even when d = 123456789E999999999999999.
The only tricky bit is that on a 64-bit system all those 2**32-1 need to be replaced by 2**64-1. Though now I come to think of it, since 2**32-1 is a factor of 2**64-1, it would be enough just to do everything modulo 2**64-1 even on a 32-bit system.
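[Editor's note: the modular trick itself can be sketched as below. The specific hash identity quoted above is retracted in the next message, but the residue computation remains valid and fast: for integral d = c*10**e, d % M is computed without ever materializing d. M = 2**64 - 1 follows the message; CPython's eventual hash modulus differs, so this illustrates the technique, not the final code.]

```python
# d = c * 10**e; compute d % M without building the huge integer d.
M = 2 ** 64 - 1

def residue(c, e):
    # three-argument pow is modular exponentiation, which is fast
    # even for astronomically large exponents e
    return (c * pow(10, e, M)) % M

# cross-check against the direct computation on a modest exponent
print(residue(123456789, 12) == (123456789 * 10 ** 12) % M)  # True

# the huge exponent from the message is no problem either
print(residue(123456789, 999999999999999))
```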
msg32608 - Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2007-08-12 19:43
Mark Dickinson (marketdickinson) stupidly claimed that:
hash(n) == hash(n % (2**32-1))
It's not true. Sorry for the noise.
msg32609 - Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2007-08-13 04:50
See patch #1772851 
msg32610 - Author: Facundo Batista (facundobatista) * (Python committer) Date: 2007-08-13 15:10
hash will NOT be changed as is requested in this bug:
>>> import decimal
>>> d = decimal.Decimal(1)
>>> hash(d)
1
>>> hash(1)
1
>>> hash(d.as_tuple())
-790594979
>>> 
See this requirement from PEP 327:
 hash(n) == hash(Decimal(n)) # Only if n is int, long, or Decimal
Regarding the overflow, this must be fixed... I'll take a look at marketdickinson's patch...
msg57797 - Author: Facundo Batista (facundobatista) * (Python committer) Date: 2007-11-24 01:01
This was fixed at the same time as issue 1772851.
int(D("1e1234567890987654321")) still takes too long, but that is the fault
of computing 10**1234567890987654321 to convert it to an int.
Note that hash(D("1e1234567890987654321")) is fast now.
History
Date                 User            Action  Args
2022-04-11 14:56:25  admin           set     github: 45291
2007-11-24 01:01:52  facundobatista  set     status: open -> closed; resolution: fixed; messages: + msg57797
2007-08-08 20:43:22  aryx            create
