When I execute these two lines:
printf("%.5d\n", 3); // use of precision field
printf("%05d\n", 3); // use of 0 flag to prepend zeros
I get the following output:
00003
00003
The same result. So I wonder: what is the meaning of the precision field for integer types?
1 Answer
For %d, the precision is the minimum number of digits to print.
From the man page:
The precision
An optional precision, in the form of a period ('.') followed by an optional decimal digit string. Instead of a decimal digit string one may write "*" or "*m$" (for some decimal integer m) to specify that the precision is given in the next argument, or in the m-th argument, respectively, which must be of type int. If the precision is given as just '.', or the precision is negative, the precision is taken to be zero. This gives the minimum number of digits to appear for d, i, o, u, x, and X conversions, the number of digits to appear after the radix character for a, A, e, E, f, and F conversions, the maximum number of significant digits for g and G conversions, or the maximum number of characters to be printed from a string for s and S conversions.
For positive values, this works out to be the same as giving the same value for the field width and a 0 flag. If the value is negative, the negative sign will take up one character in the width of %05d.
printf("%.5d\n", -3); // use of precision field
printf("%05d\n", -3); // use of 0 flag to prepend zeros
Output:
-00003
-0003