10

When dynamically allocating chars, I've always done it like this:

char *pCh = malloc(NUM_CHARS * sizeof(char));

I've recently been told, however, that using sizeof(char) is redundant and unnecessary because, "by definition, the size of a char is one byte," so I should/could write the above line like this:

char *pCh = malloc(NUM_CHARS);

My understanding is that the size of a char depends on the native character set being used on the target computer. For example, if the native character set is ASCII, a char is one byte (8 bits), and if the native character set is UNICODE, a char will necessarily require more bytes (> 8 bits).

To provide maximum portability, wouldn't it be necessary to use sizeof(char), as malloc simply allocates 8-bit bytes? Am I misunderstanding malloc and sizeof(char)?

asked Dec 19, 2013 at 14:19
  • +1 for leaving out the unnecessary (char*) cast on the right hand side Commented Dec 19, 2013 at 14:23
  • 3
    I'd do char * pCh = malloc(NUM_CHARS * sizeof(*pCh)); and turn to other issues. Commented Dec 19, 2013 at 14:24
  • s/right/left/, right @Bathsheba? Commented Dec 19, 2013 at 14:36
  • 1
    "malloc simply allocates 8-bit bytes" No. While it's true that malloc allocates bytes, C defines a byte to be however big a char is. So malloc always allocates in units of sizeof(char) which is always 1, however many bits that is. malloc(N) will allocate N*CHAR_BIT bits. Commented Dec 19, 2013 at 14:54
  • @nos Good comment... should be an answer. :-D Commented Dec 19, 2013 at 15:18
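To illustrate the point in the last comment, a minimal sketch (the count of 100 chars is arbitrary and not taken from the question):

/* Sketch only: malloc measures its argument in chars (size 1 by
   definition); CHAR_BIT from <limits.h> says how many bits each of
   those chars occupies on this implementation. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 100;                /* arbitrary character count */
    char *pCh = malloc(n);         /* space for n chars */
    if (pCh == NULL)
        return 1;
    printf("%zu chars = %zu bits\n", n, n * (size_t)CHAR_BIT);
    free(pCh);
    return 0;
}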

6 Answers

14

Yes, it is redundant, since the language standard specifies that sizeof (char) is 1. That is because char is the unit in which things are measured, so of course the size of the unit itself must be 1.

Life becomes strange with units defined in terms of themselves; that simply doesn't make any sense. Many people seem to want to assume that "there are 8-bit bytes, and sizeof tells me how many of those there are in a particular value". That is wrong; that's simply not how it works. It's true that there can be platforms with characters larger than 8 bits, which is why we have CHAR_BIT.

Typically you always "know" when you're allocating characters anyway, but if you really want to include sizeof, consider making it use the pointer instead:

char *pCh = malloc(NUM_CHARS * sizeof *pCh);

This "locks" the unit size of the thing being allocated the pointer that is used to store the result of the allocation. These two types should match, if you ever see code like this:

int *numbers = malloc(42 * sizeof (float));

that is a huge warning signal. By using the pointer from the left-hand side in the sizeof, you make that kind of error impossible, which I consider a big win:

int *numbers = malloc(42 * sizeof *numbers);

Also, it's likely that if you change the name of the pointer, the malloc() will no longer compile, which it would if you had the name of the (wrong) basic type in there. There is a slight risk that if you forget the asterisk (and write sizeof numbers instead of sizeof *numbers) you won't get what you want. In practice this never seems to happen for me, since the asterisk is pretty well established as part of this pattern.

Also, this usage relies on (and emphasizes) the fact that sizeof is not a function, since no ()s are needed around the pointer de-referencing expression. This is a nice bonus, since many people seem to want to deny this. :)
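For instance, both of the following are valid; the parentheses are only required when the operand is a type name (pCh here is the pointer from the snippet above):

size_t n1 = sizeof *pCh;    /* operand is an expression: no parentheses needed */
size_t n2 = sizeof(char);   /* operand is a type name: parentheses are required */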

I find this pattern highly satisfying and recommend it to everyone.

answered Dec 19, 2013 at 14:35

2 Comments

You should have answered earlier, I would have given you the correct answer.
@BitFiddlingCodeMonkey Aawww. Thanks. :) I do believe you can move the accepted-status, if you like. See this meta question.
5

The C99 draft standard section 6.5.3.4 The sizeof operator paragraph 3 states:

When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1. [...]

In the C11 draft standard it is paragraph 4 but the wording is the same. So NUM_CHARS * sizeof(char) should be equivalent to NUM_CHARS.
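A quick way to see (and document) that guarantee in code is a C11 static assertion, for example:

/* can never fire on a conforming implementation */
_Static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");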

We can see from the definition of byte in 3.6 that it is a:

addressable unit of data storage large enough to hold any member of the basic character set of the execution environment

and Note 2 says:

A byte is composed of a contiguous sequence of bits, the number of which is implementation defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
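That implementation-defined number of bits is exposed as CHAR_BIT in <limits.h>; a minimal sketch that prints it:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is at least 8, but may be larger on exotic hardware */
    printf("bits per byte: %d\n", CHAR_BIT);
    return 0;
}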

answered Dec 19, 2013 at 14:21

Comments

4

The C specification states that sizeof(char) is 1, so as long as you are dealing with conforming implementations of C it is redundant.

The size unit used by malloc is the same: malloc(120) allocates space for 120 chars.

A char must be at least 8 bits, but may be larger.
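If a piece of code genuinely depends on 8-bit bytes, one way to make that assumption explicit is a compile-time guard along these lines (a sketch, not something this answer requires):

#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif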

answered Dec 19, 2013 at 14:20

6 Comments

So on systems that have 16-bit chars, allocating memory in multiples of 8-bits is not possible?
@BitFiddlingCodeMonkey: exactly. The point is that a char (= byte) is defined as the smallest addressable data type (as long as it is at least 8 bits), so having finer granularity doesn't make sense.
@BitFiddlingCodeMonkey: You cannot ask for 24 bits on such a system. malloc(1) will usually allocate 4 bytes anyway due to memory alignment, so I don't see the problem.
@CodeMonkey On systems with limited memory I wouldn't expect 16 bit chars.
Also, optimizing your code for the unlikely chance of a port to hypothetical platforms with little memory but big byte sizes is something that takes premature optimization to new peaks of madness =). Write correct, portable code and worry about platform-specific optimizations just for platforms where it is actually likely to run.
3

sizeof(char) will always return 1, so it doesn't matter whether you use it or not; it will not change anything. You may be confusing this with UNICODE wide characters, which typically occupy two or four bytes, but those have a different type, wchar_t, so you should use sizeof in that case.

If you are working on a system where a byte is defined to have 16 bits, then sizeof(char) would still return 1, because that is the smallest unit the underlying architecture can allocate: one byte of 16 bits.
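For wide characters the multiplier genuinely matters; a sketch (the length of 100 is arbitrary):

#include <stdlib.h>
#include <wchar.h>

int main(void)
{
    /* sizeof(wchar_t) is implementation-defined (commonly 2 or 4),
       so the sizeof in this allocation is not redundant */
    wchar_t *pWch = malloc(100 * sizeof *pWch);
    free(pWch);
    return 0;
}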

answered Dec 19, 2013 at 14:21

2 Comments

So if 1 byte is 16 bits on a system, does malloc always allocate in multiples of 16 bits, that is, you cannot dynamically allocate a multiple of 8 bits?
Yes, if that is the specification of the machine, it is so. The compiler just reflects that design. On such a machine you cannot address less than 16 bits. So if you do malloc(2) you would get a pointer pointing to two bytes, but consisting of 32 bits.
3

Allocation sizes are always measured in units of char, which has size 1 by definition. If you are on a 9-bit machine, malloc understands its argument as a number of 9-bit bytes.

answered Dec 19, 2013 at 14:24

2 Comments

Are you using 9-bits as a hypothetical example? I've never heard of such a thing.
@BitFiddlingCodeMonkey: IIRC some mainframes used 9-bit bytes - probably due to their 36-bit words. Nowadays the bizarre bit sizes are normally found in DSPs, which tend to have 12 to 16 bits per byte. See here for some real world examples.
2

sizeof(char) is always 1, but not because a char is always an octet (it needn't be); rather, the sizeof operator returns the object/type size in units of char.
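For example, every value sizeof produces is a count of chars, whatever their width:

#include <stdio.h>

int main(void)
{
    /* each result is expressed in units of char, not in octets */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}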

answered Dec 19, 2013 at 14:21

1 Comment

char typically is the "platform byte" (= the smallest addressable data type); the point is that bytes are not octets on all platforms.
