Why do people use a variable to specify a pin number when the pin is unlikely to change throughout the execution of the code?
Many times I see an `int` being used for a pin definition,

```cpp
int led = 13;
```

when the use of a `const int`,

```cpp
const int led = 13;
```

or a `#define`,

```cpp
#define LED 13
```

makes much more sense.
It is even in tutorials on the Arduino site, for example, the first tutorial that most people run, Blink.
I read somewhere that `const int` is preferred over `#define`. Why isn't this encouraged right from the beginning, rather than allowing people to develop bad habits from the outset? I noticed it a while back, but recently it has started to irritate me, hence the question.
Memory- and processing-wise, is a `const int`, an `enum`, or for that matter a `#define`, better than a plain `int`? That is, does it occupy less memory, get stored in different memory (flash, EEPROM, SRAM), execute faster, or compile quicker?
This may appear to be a duplicate of Is it better to use #define or const int for constants?, but I am addressing the question of why people use variables, and how performance improves when they don't, rather than which type of constant is better.
- Because terrible begets terrible. Most hobbyists aren't seasoned programmers and so teach other hobbyists bad habits. – Ignacio Vazquez-Abrams, Aug 14, 2015 at 7:27
- With pins in particular, the simplistic form of the basic Arduino API functions like `digitalWrite` doesn't encourage proper embedded design, i.e. using masks and a single memory address for the entire port. – crasic, Aug 21, 2015 at 1:52
4 Answers
```cpp
const int led = 13;
```

That is the correct method. Or even:

```cpp
const byte led = 13;
```
How many pins do you have?
Some of the tutorials did not quite go through as much quality control as they might have.
Performance will be better using `const byte` compared to `int`; however, the compiler may be smart enough to realize what you are doing either way.
What you can do is gently encourage people to use more efficient techniques by using them in your own code.
Responses to comments
A commenter has suggested that `byte` is not standard C. This is correct; however, this is an Arduino StackExchange site, and I believe using standard types supplied by the Arduino IDE is acceptable. In Arduino.h there is this line:

```cpp
typedef uint8_t byte;
```

Note that this is not exactly the same as `unsigned char`. See uint8_t vs unsigned char and When is uint8_t ≠ unsigned char?.

Another commenter has suggested that using `byte` will not necessarily improve performance, because numbers smaller than `int` will be promoted to `int` (see Integer Promotion Rules if you want more on this). However, in the context of a const identifier, the compiler will generate efficient code in any case. For example, disassembling "blink" gives this in the original form:
```
00000086 <loop>:
  86: 8d e0    ldi   r24, 0x0D  ; 13
  88: 61 e0    ldi   r22, 0x01  ; 1
  8a: 1b d1    rcall .+566      ; 0x2c2 <digitalWrite>
```
In fact it generates the same code whether the `13`:

- Is a literal
- Is a `#define`
- Is a `const int`
- Is a `const byte`
The compiler knows when it can fit a number into one register and when it can't. However, it is good practice to use coding that indicates your intent. Making it `const` makes it clear that the number won't change, and making it `byte` (or `uint8_t`) makes it clear that you are expecting a small number.
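To illustrate the point about constant folding (my own sketch, not part of the original answer): a `const byte` with a constant initialiser is a true compile-time constant, so the compiler can use it anywhere the language demands a constant expression, such as an array bound:

```cpp
#include <stdint.h>

const uint8_t led = 13;   // typed compile-time constant; no RAM needed

// Because led is a const with a constant initialiser, it is usable
// where a constant expression is required, e.g. as an array bound:
int blinkLog[led];        // array of 13 entries

static_assert(sizeof(blinkLog) / sizeof(blinkLog[0]) == 13,
              "led was folded at compile time");
```

A plain (non-const) `int led = 13;` would not compile here, which is one concrete way the compiler enforces the intent you declared.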
Confusing error messages
Another major reason to avoid `#define` is the error messages you get if you make a mistake. Consider this "blink" sketch which has an error:
```cpp
#define LED = 13;

void setup() {
  pinMode(LED, OUTPUT);      // <---- line with error
}

void loop() {
  digitalWrite(LED, HIGH);   // <---- line with error
  delay(1000);
  digitalWrite(LED, LOW);    // <---- line with error
  delay(1000);
}
```
On the surface it looks OK, but it generates these error messages:
```
Blink.ino: In function ‘void setup()’:
Blink:4: error: expected primary-expression before ‘=’ token
Blink:4: error: expected primary-expression before ‘,’ token
Blink:4: error: expected `;' before ‘)’ token
Blink.ino: In function ‘void loop()’:
Blink:8: error: expected primary-expression before ‘=’ token
Blink:8: error: expected primary-expression before ‘,’ token
Blink:8: error: expected `;' before ‘)’ token
Blink:10: error: expected primary-expression before ‘=’ token
Blink:10: error: expected primary-expression before ‘,’ token
Blink:10: error: expected `;' before ‘)’ token
```
You look at the first highlighted line (line 4) and don't even see an "=" symbol. Plus, the line looks fine. Now it's fairly obvious what the problem is here (`= 13;` is being substituted for `LED`), but when the line is 400 lines further down in the code, it isn't obvious that the problem is with the way `LED` was defined.
I've seen people fall for this many times (including myself).
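For reference (my addition, not part of the original answer), the fix is to drop the `=` and the trailing semicolon, since a `#define` substitutes its replacement text verbatim:

```cpp
#include <stdint.h>

#define LED 13            // correct: no '=' and no ';' -- LED expands to just 13

const uint8_t led = 13;   // the typed alternative (Arduino's byte is uint8_t)
```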
- *How many pins do you have?* is a very good point, Nick, as most boards have pin counts only in the range of tens, not hundreds (i.e. greater than 255), so an `int` is overkill... that is, until Arduino finally comes out with the Tera board... :-) – Greenonline, Aug 14, 2015 at 10:14
- C doesn't have a `byte` type. You mean `unsigned char`. – Kevin, Aug 14, 2015 at 13:47
- Performance won't necessarily be better with `byte` instead of `int`, since in most contexts, integer values with types smaller than `int` are promoted to `int`. – Pete Becker, Aug 14, 2015 at 17:45
- *C doesn't have a byte type. You mean unsigned char.* – My answer was in the Arduino context, which has this: `typedef uint8_t byte;`. So for an Arduino, using `byte` is OK. – Aug 14, 2015 at 21:21
- *Performance won't necessarily be better with byte instead of int* – see amended post. – Aug 14, 2015 at 21:37
As Ignacio rightly states, it's basically because they don't know better. And they don't know better because the people who taught them (or the resources they used when learning) didn't know better.
Much of the Arduino code and many of the tutorials are written by people who have never had any training in programming and are very much "self taught" from resources by people who are themselves self taught with no proper training in programming.
Many of the snippets of tutorial code I see around the place (and especially those that are only available within YouTube videos... urgh) would get a fail mark if I were marking them in an exam.
Yes, a `const` is preferred over a non-const, and even over a `#define`, because:

- A `const` (like a `#define`, unlike a non-const) does not allocate any RAM
- A `const` (like a non-const, but unlike a `#define`) gives the value an explicit type
The second point there is of particular interest. Unless specifically told otherwise with an embedded type-cast (`(long)3`), a type suffix (`3L`), or the presence of a decimal point (`3.0`), a `#define` of a number will always be an integer, and all mathematics performed on that value will be as if it were an integer. Most of the time that's not a problem, but you can run into interesting scenarios when you try to `#define` a value that is larger than an integer can store, such as `#define COUNT 70000`, and then perform a mathematical operation with other `int` values on it. By using a `const` you get to tell the compiler "This value is to be treated as this variable type" - so you would instead use `const long count = 70000;` and all would work as expected.
It also has the knock-on effect that the type is checked when passing the value around the place. Try passing a `const long` to a function that expects an `int` and the compiler can complain about narrowing the value's range (or even fail to compile, depending on the scenario). Do that with a `#define` and it would just silently carry on, giving you the wrong results and leaving you scratching your head for hours.
- It's worth noting that a `const` variable may require RAM, depending on the context, e.g. if it's initialised using the return value from a non-constexpr function. – Peter Bloomfield, Aug 14, 2015 at 10:21
- Similarly, `const int foo = 13; bar(&foo);` will definitely require the compiler to allocate actual memory for `foo`. – Ilmari Karonen, Aug 14, 2015 at 10:59
- If you define a macro that expands to a value that won't fit in an `int`, the compiler treats the value as having the smallest type in which it will fit (modulo rules about signed vs. unsigned). If you're on a system where `int` is 16 bits, `#define count 70000` will result in `count` looking like a `long`, just as if it had been defined as `const long count = 70000;`. Further, if you pass either of those versions of `count` to a function expecting `int`, any sane compiler will treat them the same. – Pete Becker, Aug 14, 2015 at 17:44
- I agree with @PeteBecker - a construct like `#define COUNT 70000` does not truncate into an int; the compiler treats it as a type large enough to hold that number. It is true that it might not be obvious when you use `COUNT` that it isn't an int, but you could say the same thing about a `const long` anyway. – Aug 15, 2015 at 4:29
- "a #define will always be an integer" That is not true. You are taking the rules of integer literals and applying them to preprocessor macros. It's like comparing apples and pop music. The expression `COUNT` in your example is replaced before compilation with the expression `70000`, which has a type defined by the rules of literals, just like `2` or `13L` or `4.0` are defined by the rules of literals. The fact that you use `#define` to alias those expressions is irrelevant. You can use `#define` to alias arbitrary chunks of C code, if you like. – Lightness Races in Orbit, Aug 15, 2015 at 17:12
As a 2-week newbie to Arduino, I'd pick up on the general idea of Arduino being occupied by non-programmers. Most sketches I have examined, including those on the Arduino site, show a total lack of order, with sketches that do not work and barely a coherent comment in sight. Flow charts are non-existent, and the "Libraries" are an un-moderated jumble.
My answer is... they do it because it works. I'm having a hard time not asking a question in my answer such as "why does it have to be 'wrong'?"
- One hallmark of a good programmer is that the code always reflects their intentions. – Ignacio Vazquez-Abrams, Aug 20, 2015 at 23:38
- We're still talking about Arduinos, right? ;) – linhartr22, Aug 21, 2015 at 0:10
- Arduino already has a bad rep in the larger EE community because of the mediocre-to-terrible hardware designs put out by the community. Shouldn't we try to give a sh*t about something? – Ignacio Vazquez-Abrams, Aug 21, 2015 at 0:12
- "Most projects aren't going to involve risk of life or finances..." No surprise there. Who would want to involve Arduino where there's any chance of risk after looking at the community at large. – Ignacio Vazquez-Abrams, Aug 21, 2015 at 1:06
- It's 'wrong' not because it doesn't work in one particular situation but because, compared to doing it 'right', there are more situations in which it doesn't work. This makes the code fragile; changes to the code can cause mysterious failures that eat up debugging time. The compiler's type checking and error messages are there to help you catch those sorts of errors earlier, rather than later. – cjs, Apr 2, 2017 at 6:38