I have been working on a data logger project that requires very accurate timestamps and intervals for reading data. I have been able to generate a timestamp by syncing a timer with the pulse-per-second (PPS) output of the GPS module.
I have a second timer triggering an interrupt every 100 ms to read sensor values. Currently these values drift by around 100 ms per day (according to the GPS-synced timestamp). How would I keep this interrupt in sync with GPS time without the possibility of missing any interrupts?
I correct the timestamp timer by simply resetting it to the nearest second whenever a pulse from the GPS module is detected. This appears to work, but a similar approach for the sensor-reading timer could result in missing a sensor reading or reading twice.
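For concreteness, the reset looks roughly like this. This is only a sketch, assuming a generic MCU with a free-running timestamp timer; `TIMESTAMP_TIMER`, its register layout, and `pps_isr()` are hypothetical placeholders, not from any specific part:

```c
#include <stdint.h>

/* Hypothetical timer peripheral; the address and layout are placeholders. */
typedef struct { volatile uint32_t COUNT; } TimerRegs;
#define TIMESTAMP_TIMER ((TimerRegs *)0x40001000u)

volatile uint32_t whole_seconds;  /* counted directly from the PPS edges */

/* Fires on the rising edge of the GPS pulse-per-second (PPS) signal. */
void pps_isr(void)
{
    TIMESTAMP_TIMER->COUNT = 0;  /* snap the sub-second count back to zero */
    whole_seconds++;             /* the integer seconds come from the GPS itself */
}
```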
Comment: The first thing you should mention here is what oscillator you use for the MCU clock. Internal RC? External quartz? External TCXO? – Lundin, Aug 24, 2021 at 7:49
2 Answers
I'm not sure I fully understood how your interrupts are set up, but your best bet would be to hook an interrupt to the GPS PPS line, take your first sample there, and then enable your 100 ms interrupt service at that point. On the 9th subsequent interrupt (all of which will now be generated by your internal clock), disable it and wait for the GPS to start the cycle again.
That way, your timing error will not accumulate; instead, it is reset every second.
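A minimal sketch of that scheme, assuming hypothetical HAL-style helpers (`timer_start()`, `timer_stop()`, `read_sensors()`) in place of your MCU's actual API:

```c
#include <stdint.h>

/* Hypothetical helpers standing in for the real MCU/HAL API. */
extern void read_sensors(void);  /* take one sample */
extern void timer_start(void);   /* start the periodic 100 ms interrupt */
extern void timer_stop(void);    /* stop it */

static volatile uint8_t sample_index;

/* GPS PPS edge: take sample 0, aligned to GPS time, and arm the timer. */
void pps_isr(void)
{
    read_sensors();
    sample_index = 1;
    timer_start();
}

/* 100 ms timer: samples 1..9, driven by the internal clock. */
void timer_100ms_isr(void)
{
    read_sensors();
    if (++sample_index >= 10)
        timer_stop();  /* done for this second; the next PPS restarts the cycle */
}
```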
Yiannis's answer has the advantage of simplicity, but suffers from a potentially large amount of jitter on that tenth interval of each second, between the last 100 ms interrupt and the next GPS interrupt.
I once had a similar requirement. I ended up creating a "software PLL" in which I let the 100 ms interrupt¹ run continuously and sampled the state of that counter using the GPS interrupt. I then computed a correction factor for the numerical period of the 100 ms timer to drive it into phase alignment with the GPS interrupt. I even "dithered" the settings on each 100 ms interrupt for minimum instantaneous error.
This is considerably more complex to implement, but it makes all of your 100 ms interrupts fall within a clock cycle or two of their "ideal" locations, even as your CPU clock drifts with temperature, etc.
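A rough sketch of such a loop, with the same caveat that all the names are hypothetical: the 100 ms timer free-runs, a capture register latches its count on each PPS edge, and the reload (period) value gets a small proportional nudge. The gain is illustrative; a real implementation would tune it and likely add an integral term, since a proportional-only loop settles with a small static phase offset under constant clock drift:

```c
#include <stdint.h>

#define NOMINAL_RELOAD 100000  /* timer counts per 100 ms at the nominal CPU clock */

/* Hypothetical HAL calls: a capture register latched by the PPS edge,
   and a writable period (reload) register for the 100 ms timer. */
extern uint32_t timer_capture_on_pps(void);
extern void timer_set_reload(uint32_t counts);

static int32_t reload = NOMINAL_RELOAD;

void pps_isr(void)
{
    /* Phase error: the counter value at the PPS edge, wrapped so it is
       centered on zero. Zero means a 100 ms interrupt landed exactly
       on the PPS. */
    int32_t phase = (int32_t)timer_capture_on_pps();
    if (phase > reload / 2)
        phase -= reload;

    /* A positive phase means the interrupt fired early (clock fast), so
       stretch the period slightly; negative means late, so shrink it.
       The /64 gain trades lock speed against jitter. */
    reload += phase / 64;
    timer_set_reload((uint32_t)reload);
}
```

The "dithering" mentioned above would spread any fractional remainder of the correction across the ten interrupts of each second, instead of applying it all at once.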
¹ Actually, in my case, it was 100 μs. We had some tasks that had to run at a 10 kHz rate.