How does `delay()` differ from using `millis()` in terms of processor efficiency and responsiveness?
In many Arduino sketches, especially beginner ones, `delay()` is commonly used to pause between actions. However, I've seen more advanced projects use `millis()` instead to handle timing.
From a theoretical perspective, how exactly do these two approaches differ in terms of:
- Processor efficiency
- Multitasking capability (e.g., running multiple sensors or functions)
- Interrupt handling or responsiveness
- Energy consumption (especially in low-power applications)
Are there use cases where `delay()` is preferable despite its drawbacks, or should it be completely avoided in well-structured code?
Processor efficiency
A `delay()` is essentially wasted time: the MCU sits in a busy loop doing nothing useful. With `millis()`-based timing you can do other work between checks, although the MCU still never actually stops running.
Multitasking capability (e.g., running multiple sensors or functions)
It's a bit easier with `millis()` (or with an RTOS-based Arduino core).
Interrupt handling
Unless you block interrupts, they work as usual with both approaches. An interrupt request might get delayed slightly if it arrives while the millisecond timer ISR is already running (`uint32_t` arithmetic is relatively expensive on 8-bit AVRs).
or responsiveness
This is definitely easier by checking `millis()` than by using `delay()`, although it depends on how the timing is used. You'll probably end up with state machines or something similar.
Energy consumption (especially in low-power applications)
Arduino cores with an RTOS underneath may put the MCU into a sleep mode in the "idle task" while `delay()` runs. In that case `delay()` can actually be much more power efficient. But there aren't many such boards (Giga, Nano 33 BLE, etc.).