In general, the idea is sound. But there are problems with some of the details and tuning.
The most important detail: use the correct data type for storing and computing with time variables (such as time, checkPointMillis and checkPointRTC). The correct data type is unsigned long (or, equivalently, unsigned long int or uint32_t).
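For instance (a minimal illustration with placeholder names), unsigned long keeps elapsed-time arithmetic correct even across the roughly 49.7-day rollover of millis():

unsigned long start = millis();
// ... do some work ...
unsigned long elapsed = millis() - start;   // unsigned subtraction stays correct even if millis() rolled over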
A second detail: rather than storing your calibration correction in two variables that you always use as a difference (in effect, checkPointRTC - checkPointMillis), just compute and store the difference. For example:
...
unsigned long clockCorrection;   // RTC time minus millis(), refreshed at each checkpoint
...
unsigned long time = millis() + clockCorrection;
...
clockCorrection = getRTCTime() - millis();
This will save a few bytes of RAM and a few cycles of computation.
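As a rough usage sketch (assuming getRTCTime() returns the RTC time in milliseconds as an unsigned long, and logReadings() stands in for whatever recording you do), the pieces might fit together like this:

void loop() {
  static unsigned long lastCheckpoint = 0;

  unsigned long time = millis() + clockCorrection;   // corrected timestamp
  logReadings(time);                                 // record the sensor set with that time

  if (millis() - lastCheckpoint >= 1000UL) {         // once-per-second checkpoint
    clockCorrection = getRTCTime() - millis();       // refresh the correction
    lastCheckpoint = millis();
  }
  delay(50);                                         // about 20 readings per second
}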
By tuning, I refer to the number of measurement intervals to wait between recomputations of the clock correction. Under reasonable assumptions, the total number of compute cycles is so nearly the same with a one-second interval as with a ten-minute interval that one probably should use only a one-second (or, at the outside, a one-minute) interval between clock-correction computes.
For example, suppose the total number of cycles awake per second is given by the following equation:
a = 20*s + 1000*t + k*r
where s is the number of cycles used per sensor-set reading and recording; t is the number of cycles used per clock interrupt; k is the number of clock correction computes per second; and r is the number of cycles used per clock correction compute.
For example, if s is 2000, t is 100, and r is 200, the equation becomes
a = 20*2000 + 1000*100 + k*200 = 140000 + k*200
Now consider three cases, k equal to 20, 1, or 1/600, corresponding to a clock correction compute 20 times per second, once per second, or once every 10 minutes:

   k        a
  20      144000
   1      140200
 1/600    140000.33
As you can see, under the assumptions s is 2000, t is 100, and r is 200, there is no compelling reason to prefer 1/600 corrections per second to one correction per second.
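If you want to play with other assumptions, the arithmetic above amounts to a one-line function (the constants are just the example's assumed s = 2000, t = 100, r = 200):

// Awake cycles per second for k clock-correction computes per second,
// under the example's assumptions.
float awakeCyclesPerSecond(float k) {
  const float s = 2000, t = 100, r = 200;
  return 20*s + 1000*t + k*r;            // = 140000 + 200*k
}
// awakeCyclesPerSecond(20)      -> 144000
// awakeCyclesPerSecond(1)       -> 140200
// awakeCyclesPerSecond(1.0/600) -> 140000.33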
If your RTC can be read reliably and quickly, reading it either at every sensor cycle (i.e., 20 times per second) or once per second has other advantages: you compensate more quickly for MCU clock drift (every second, rather than every 10 minutes) and greatly reduce the risk of out-of-order times.
For example, if your MCU clock drifts 2 seconds fast per 10 minutes, readings taken during the first two seconds of each new 10-minute interval would show smaller times than those taken during the last two seconds of the previous interval. With once-per-second corrections, no such non-monotonic readings will occur.
Here is a slightly more correct analysis of the 2 seconds fast per 10 minutes case: 2 seconds error in 600 seconds is 3.33 milliseconds per second. With sensor readings 50 milliseconds apart, and corrected clock readings not more than 3.33 milliseconds out of whack, non-monotonicity won't occur. However, this does not meet the "logged time needs to be exact down to ms" criterion. To meet that, drift of more than a half millisecond must be prevented. That requires drift correction at least 6.67 times per second. You could accomplish that by making a clock correction compute at every third sensor cycle.
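A sketch of that schedule, assuming the 20-per-second sensor loop from the example (readAndRecordSensors() is a placeholder for your sensor code):

byte cycleCount = 0;

void sensorCycle() {                      // called 20 times per second
  readAndRecordSensors(millis() + clockCorrection);
  if (++cycleCount >= 3) {                // every third cycle: about 6.7 corrections per second
    cycleCount = 0;
    clockCorrection = getRTCTime() - millis();
  }
}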
It should be clear from the example calculations of awake cycles that the major contributor to the count is ISR cycles, here taken as 100*1000, or 100000, cycles per second. You could set up timer 1 to interrupt 20 times per second and turn timer 0 off (which would disable millis() and require a different time = ... formula). If each timer 1 interrupt took 1000 cycles, that would contribute 20000 cycles per second instead of 100000.
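One way to do that, assuming an ATmega328-class board at 16 MHz (a sketch of the idea, not a tested implementation for your hardware), is timer 1 in CTC mode with a 1024 prescaler:

void setupTimer1At20Hz() {
  noInterrupts();
  TIMSK0 &= ~_BV(TOIE0);                          // stop timer 0 overflow interrupts; millis() stops advancing
  TCCR1A = 0;
  TCCR1B = _BV(WGM12) | _BV(CS12) | _BV(CS10);    // CTC mode, prescaler 1024
  OCR1A = 780;                                    // 16 MHz / 1024 / (780+1) is about 20 Hz
  TIMSK1 = _BV(OCIE1A);                           // enable compare-match A interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {
  // 20-per-second tick: read sensors and maintain your own time count here
}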