There is already a prior question dealing with why certain bit-widths were chosen (although I find it somewhat insufficient, but that's another topic); what strikes me as unusual is how the bits are distributed rather than anything else. If we divide a 32-bit single-precision floating-point number into bytes, we see that the exponent is split across bytes 3 and 4, with one bit in byte 3 and seven in byte 4. There are various "homebrew" floating-point libraries around online targeting chips without FPUs that consolidate the exponent bits into one byte; one such example is Zeda's Z80 FP routines library.
Is there a concrete reason why the sign bit "pushes out" one exponent bit into a byte of the mantissa? Is this perhaps just irrelevant, and am I focusing too much on byte-level alignment? Wouldn't putting the sign bit with the mantissa make more sense, and if not: why?
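For reference, here is a minimal C sketch of the layout being described (the value -6.25f is just an arbitrary example); it extracts the three fields and then prints the top two bytes, showing the eighth exponent bit spilling into the mantissa's byte:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -6.25f;  /* arbitrary example value */
    uint32_t u;
    memcpy(&u, &f, sizeof u);  /* reinterpret the bits without UB */

    /* IEEE-754 single precision, most significant bit first:
       1 sign bit | 8 exponent bits | 23 mantissa (fraction) bits */
    uint32_t sign     = u >> 31;
    uint32_t exponent = (u >> 23) & 0xFF;
    uint32_t mantissa = u & 0x7FFFFF;
    printf("sign=%u exponent=0x%02X mantissa=0x%06X\n",
           sign, exponent, mantissa);

    /* Viewed as big-endian bytes: the top byte holds the sign plus the
       high seven exponent bits; the eighth exponent bit lands in the
       next byte alongside the top seven mantissa bits. */
    printf("byte 4 (top) = 0x%02X, byte 3 = 0x%02X\n",
           (u >> 24) & 0xFF, (u >> 16) & 0xFF);
    return 0;
}
```

For -6.25f this prints byte 4 as 0xC0 (sign plus high exponent bits) and byte 3 as 0xC8 (low exponent bit plus high mantissa bits), which is exactly the straddling the question asks about.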
-
For native floating-point processors there is no particular reason to align with byte boundaries, since the circuit is designed specifically around the sizes of the bit fields. The Z80 library, by contrast, has to adapt to an 8-bit processor. – JacquesB, Feb 27 at 23:35
-
I don't think it is feasible to have an FPU without at least 32-bit registers, and it would be trivial for the execution unit(s) to shuffle bits around. I do not see the bus or memory system making any difference: it just moves bits around and doesn't care what they represent. All the bits need to be in the register before doing anything anyway. And if you can afford an FPU, it makes little sense to couple it to an 8-bit integer unit. – JonasH, Feb 28 at 9:06
-
This question was closed for being "opinion-based", but I don't think that was right; the last paragraph has three questions answerable with factual answers, even if those answers are "no; irrelevant; no, because FPUs both at the time and ever since have never needed byte-level alignment". – petroleus, Mar 3 at 0:30
-
@petroleus you say the question isn't opinion-based because the answers can just be "no", but I wrote an answer exactly like that and you complained that you found it unconvincing, so clearly "no" wasn't an adequate answer after all. – Useless, Mar 27 at 19:17
-
The other interesting facet of the IEEE-754 bit ordering is that it makes the values monotonically ordered when interpreted as raw bit patterns. The exponent is more significant than the significand, and the sign bit is most significant of all. If you take two floating-point values, as long as they're both positive, and compare their bit patterns as unsigned integers rather than as floating-point values, you'll still get the right answer. Zero is less than the subnormals are less than the normals are less than Infinity is less than the NaNs. – Steve Summit, Apr 7 at 12:26
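A minimal C sketch of the property Steve Summit describes, comparing the raw bit patterns of two positive floats as unsigned integers:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret the bits of a float as an unsigned 32-bit integer.
   memcpy avoids the undefined behaviour of a pointer cast. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void) {
    float a = 1.5f, b = 2.75f;

    /* Because the exponent sits above the significand in the bit
       pattern, unsigned comparison of the raw bits agrees with
       floating-point comparison for two positive values. */
    printf("float compare:   %d\n", a < b);                          /* 1 */
    printf("integer compare: %d\n", float_bits(a) < float_bits(b));  /* 1 */
    return 0;
}
```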
2 Answers
... it feels like it'd still be simpler for early circuitry to have byte-aligned fields
is just an argument from personal disbelief.
If you're laying out paths for 32 bits into the arithmetic units, there's nothing at all that's automatically grouped into eights. You want to avoid lines crossing, but that's all.
In a general-purpose 32-bit processor you have some instructions that operate on octets, some that operate on 16-bit halfwords, and some that operate on full 32-bit registers. It makes sense here to align everything on 8-bit boundaries, but none of these considerations apply to a floating-point unit.
Is there a concrete reason why the sign bit "pushes out" one exponent bit into a byte of the mantissa?
There doesn't need to be a concrete reason if bytes are not a natural unit in the floating-point unit, which they aren't.
Is this perhaps just irrelevant and I'm focusing too much on byte-level alignment?
Yes. As I said, byte-level alignment is natural in circuits which operate on bytes. It isn't somehow innate to circuits or to binary numbers.
Wouldn't putting the sign bit with the mantissa make more sense
No. Your sense of what "makes sense" seems to have been formed by what "makes sense" for CPUs. Floating point units as originally conceived were separate co-processors with no byte-level operations at all.
-
It is indeed rooted in personal disbelief, which is why I asked whether there was a justification for picking the order that was picked (hence supporting the argument), or whether that alignment simply didn't matter at the time (hence dispelling the possible argument). Do you mean to say that there's no real answer and that it's just so? If that's the case, maybe editing the answer a bit to clarify would make it a good answer for the question. – petroleus, Mar 3 at 0:27
-
I did exactly describe why that belief may be relevant for general-purpose CPU registers + integer ALUs, and is not relevant for FPUs. Is the issue that you didn't understand the answer, or that you don't understand the difference between an integer ALU and the original floating-point co-processors, or what? – Useless, Mar 3 at 21:30
-
I understand the answer, but I feel it leaves the trio of actual questions in my OP incompletely answered. – petroleus, Mar 5 at 0:37
-
I've spelled it out, but everything I just wrote comes directly from the original answer. – Useless, Mar 5 at 8:58
You have a sign bit, exponent bits, an implicit highest mantissa bit (not stored), and explicit mantissa bits, with 32 or 64 bits available in total. IEEE 754 uses 8 exponent bits and 23 explicit mantissa bits. This allows numbers in the range 2^±127 ≈ 10^±38 with 23–24 bits of mantissa.
If you wanted the sign and exponent to fit into 8 bits, you'd only have a range of about 10^±19, with one additional mantissa bit; that's quite a small dynamic range. For 64 bits you'd trade a dynamic range of 10^±308 for 10^±19, or get a ridiculous 10^±4900 with four bits less precision.
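These magnitudes all follow from the exponent width alone: an e-bit biased exponent reaches roughly 2^(2^(e-1) − 1). A small C sketch that reproduces them (decimal_range is a purely illustrative helper, not anything standard):

```c
#include <math.h>
#include <stdio.h>

/* Approximate decimal dynamic range of a biased binary exponent:
   an e-bit exponent reaches about 2^(2^(e-1) - 1) = 10^(result). */
static double decimal_range(int exponent_bits) {
    double max_exp = pow(2.0, exponent_bits - 1) - 1.0;
    return max_exp * log10(2.0);
}

int main(void) {
    printf(" 7-bit exponent: ~10^+/-%.0f\n", decimal_range(7));   /* ~19   */
    printf(" 8-bit exponent: ~10^+/-%.0f\n", decimal_range(8));   /* ~38   */
    printf("11-bit exponent: ~10^+/-%.0f\n", decimal_range(11));  /* ~308  */
    printf("15-bit exponent: ~10^+/-%.0f\n", decimal_range(15));  /* ~4932 */
    return 0;
}
```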
The given sizes were a compromise; there will always be a trade-off between exponent range and mantissa precision, and nobody is complaining about the chosen ranges. Changing the field widths to multiples of eight bits has exactly zero advantage, because no 8-bit hardware is used in the implementation.
-
-1: I'm not asking about the sizes of the fields in numbers of bits, just their actual order. The size of the single is not relevant to the question at all; I'm asking why we fit the sign bit and the seven high bits of the exponent into one byte. Reordering them so that all eight bits of the exponent are in one byte, with the sign moved into one of the mantissa bytes, doesn't require changing anything about either the precision or the range of the single. – petroleus, Feb 28 at 14:26
-
@petroleus your question about bit ordering to match byte boundaries is only relevant with certain numbers of bits, so the number of bits is very much relevant. Second, we give answers to help everyone, not as a personal answering service for you. – gnasher729, Feb 28 at 18:45
-
@petroleus, I think the meat of gnasher's answer is in the last paragraph. The first two paragraphs just provide a mathematical basis for the conclusion reached there: aligning to 8 bits has no advantage, nobody is complaining about it, and it would only matter on 8-bit hardware that nobody uses in the real world. Assuming I understood the answer correctly, maybe an edit to clarify or highlight this conclusion would help? – Greg Burghardt, Feb 28 at 20:36
-
If you had an exponent field with nine bits, eight-bit alignment would mean six mantissa bits before the exponent field and sixteen after it (1 sign + 6 + 9 + 16 = 32 bits), which would be utterly stupid. – gnasher729, Sep 2 at 18:19