Edit 4: As I noted in a comment to Edgar Bonet's answer, one can change the TOP value used for timer interrupts to control the interrupt rate with better resolution than can be obtained by merely subtracting base values.

According to Gerben's comment, "`getRTCTime` only has 1 sec resolution". As I haven't seen any code for your `getRTCTime()`, I don't know whether that is true; but if so, here is another approach for time-rate correction:

• At some beginning point coincident with an RTC seconds-change, record `millis()` and an RTC reading, in (e.g.) `millibase` and `RTCbase`.

• At some interval (e.g., 10 seconds or a minute, in each case coincident with an RTC seconds-change), measure `millis()` and an RTC reading, in (e.g.) `millinow` and `RTCnow`. Compute `RTCdeltaK` as the number of milliseconds from `RTCbase` to `RTCnow`. Compute `millidelta` as `millinow - millibase`.

• Whenever elapsed time in milliseconds is needed, compute `elapsedms = ((millis() - millibase) * RTCdeltaK) / millidelta`.

The above is an algorithm and may need some adjustment before use as an implementation. First, the `elapsedms` calculation can be arranged to avoid division, using a scaled multiplicative factor based on `millidelta`, `RTCdeltaK`, and some powers of 2. Secondly, relatively constant clock drift is assumed. If that assumption is incorrect, an average ratio with exponential decay could be used instead of just a factor equivalent to `RTCdeltaK / millidelta`.
A third detail: If, outside the ISR, you will need to access any of the values computed in the ISR, declare their variables `volatile`. Example:

volatile unsigned long time;

Fourth: If `getRTCTime()` or any of your sensor-reading code uses interrupts, you will need to move such code out of the "timer interrupt at 20Hz" section, into (for example) `loop()`, triggered by a `volatile` variable going true. [That's the model under which the balance of this answer was conceived; I overlooked, at first, the "timer interrupt at 20Hz" label.] Note that `millis()` itself does not use interrupts. But if your 20 Hz ISR runs longer than a millisecond, the `millis()` total will drop one millisecond per millisecond of additional ISR time.
By tuning, I refer to the number of measurement intervals to wait between recomputations of the clock correction. Under reasonable assumptions, the number of compute cycles is so nearly the same with a one-second interval as with a ten-minute interval that one probably should use only a one-second (or at the outside, one-minute) interval between clock-correction computations.
For example, suppose the total number of cycles awake per second is given by the following equation:
a = 20s + 1000t + k * r
where s is the number of cycles used per sensor-set reading and recording; t is the number of cycles used per clock interrupt; k is the number of clock correction computes per second; and r is the number of cycles used per clock correction compute.
For example, if s is 2000, t is 100, and r is 200, the equation becomes
a = 20 * 2000 + 1000 * 100 + k * 200 = 140000 + k * 200
Now consider three cases: k equal to 20, or 1, or 1/600, corresponding to a clock correction compute 20 times per second, or once a second, or every 10 minutes:
k        a
20       144000
1        140200
1/600    140000.33
As you can see, under the assumptions s is 2000, t is 100, and r is 200, there is no compelling reason to prefer 1/600 corrections per second to one correction per second.
If your RTC can be read reliably and quickly, reading it either every cycle (i.e., 20 times per second) or every second has other advantages: you compensate more quickly for MCU clock drift (i.e., every second rather than every 10 minutes) and strongly decrease the risk of out-of-order times.

For example, if your MCU clock drifts 2 seconds fast per 10 minutes, readings taken during the first two seconds of each new 10-minute interval would show smaller times than those taken during the last two seconds of the previous interval. With once-per-second corrections, no such non-monotonic readings will occur.
Here is a slightly more correct analysis of the 2 seconds fast per 10 minutes case: 2 seconds error in 600 seconds is 3.33 milliseconds per second. With sensor readings 50 milliseconds apart, and corrected clock readings not more than 3.33 milliseconds out of whack, non-monotonicity won't occur. However, this does not meet the "logged time needs to be exact down to ms" criterion. To meet that, drift of more than a half millisecond must be prevented. That requires drift correction at least 6.67 times per second. You could accomplish that by making a clock correction compute at every third sensor cycle.
It should be clear from the example calculations of awake cycles that the major contributor to the count is ISR cycles, here taken as 1000 * 100, or 100000, cycles per second. You could set up Timer 1 to interrupt 20 times per second and turn Timer 0 off (which would disable `millis()` and require a different `time = ...` formula). If each Timer 1 interrupt took 1000 cycles, that would contribute 20000 cycles per second instead of 100000.