I was struggling with timer interrupts in my project and couldn't get them to work properly, so I decided to write a simple test program, and I ran into a very interesting case.
#define F_CPU 16000000UL   // 16 MHz clock, defined before any includes that might use it
#include <avr/io.h>
#include <avr/interrupt.h>

ISR(TIMER1_COMPA_vect)
{
    // Empty handler: OCIE1A is enabled below, so a handler must exist.
    // Without one, avr-gcc's default vector jumps back to the reset vector.
}

ISR(TIMER1_COMPB_vect)
{
    PORTB ^= (1 << PORTB5);  // toggle PB5 (the on-board LED on an Arduino Uno)
}

int main(void)
{
    cli();                   // disable global interrupts while configuring
    TCCR1A = 0;              // set entire TCCR1A register to 0
    TCCR1B = 0;              // same for TCCR1B
    OCR1A = 10000;           // compare value A
    OCR1B = 100;             // compare value B
    TCCR1B |= (1 << WGM12);  // CTC mode: count up to OCR1A, then restart
    TCCR1B |= (1 << CS10);   // CS12 + CS10 = prescaler 1024
    TCCR1B |= (1 << CS12);
    TIMSK1 |= (1 << OCIE1A); // enable compare-match-A interrupt
    TIMSK1 |= (1 << OCIE1B); // enable compare-match-B interrupt
    DDRB = 0xFF;             // all of port B as output
    sei();                   // enable global interrupts

    while (1)
    {
    }
}
Here is my code. When I change the OCR1B value, nothing happens, but if I change the OCR1A value, the blinking gets faster. Is there a logical explanation for this?
@Gerben No. OCR1A is 2 bytes; OCR1AH and OCR1AL are the single-byte halves. – Zgrkpnr__, May 4, 2015 at 18:33
1 Answer
By setting WGM12, you put the timer into CTC (Clear Timer on Compare match) mode. In this mode the timer restarts when it reaches the value set in OCR1A, so the lower that value, the faster it restarts and the higher the frequency. With your values (16 MHz clock, prescaler 1024, OCR1A = 10000), the timer restarts about 16000000 / 1024 / 10001 ≈ 1.56 times per second.
The OCR1B value only changes where within that cycle the compare-match-B interrupt occurs. It is still triggered exactly once per timer cycle, so changing it has no effect on the blink rate.
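A quick way to see this on hardware is to handle both compare interrupts and toggle two different pins. Below is a minimal sketch, assuming an ATmega328P at 16 MHz (e.g. an Arduino Uno); using PB4 as a second indicator pin is my choice for illustration, not from the question. Both pins toggle at the same rate; OCR1B only shifts the point within the cycle at which PB4 toggles.

// Minimal sketch (assumed target: ATmega328P at 16 MHz, Arduino Uno):
// both compare interrupts fire once per CTC cycle, so PB5 and PB4
// blink at the same rate; OCR1B only phase-shifts PB4 within the cycle.
#define F_CPU 16000000UL
#include <avr/io.h>
#include <avr/interrupt.h>

ISR(TIMER1_COMPA_vect) { PORTB ^= (1 << PORTB5); } // toggles at the cycle rate
ISR(TIMER1_COMPB_vect) { PORTB ^= (1 << PORTB4); } // same rate, shifted by OCR1B

int main(void)
{
    cli();
    DDRB   = (1 << DDB5) | (1 << DDB4);             // PB5 and PB4 as outputs
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS12) | (1 << CS10); // CTC mode, prescaler 1024
    OCR1A  = 10000;  // sets the cycle length, and therefore the blink rate
    OCR1B  = 100;    // only sets where in the cycle COMPB fires (keep <= OCR1A)
    TIMSK1 = (1 << OCIE1A) | (1 << OCIE1B);         // enable both compare interrupts
    sei();
    while (1) { }
}

If you want OCR1B itself to set an independent blink rate, this timer mode cannot do it; you would need a second timer, or derive the slower rate in software by counting interrupts.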