
When an interrupt occurs, the current program context is pushed onto the stack and the interrupt starts running. After the interrupt finishes its execution, the hardware knows how many addresses to pop to get back the context & PC of the previous program. As it knows how many addresses to POP, the stack pointer will eventually point to the right address for the previous program context.

In that case, why is there a need to explicitly save the stack pointer on the stack while context switching for interrupts?

In the case of an RTOS, where there are multiple tasks with different stack memory regions, storing the stack pointer makes sense, but my question above is strictly about the bare-metal case.

asked Jan 9 at 17:01
  • You are mixing up hardware and software roles. Reality is a lot more nuanced than you seem to imagine. But let's get straight to your idea of saving "the stack pointer on the stack while switching for interrupts." That's what's required for context switches, which is a matter left to the operating system and not the MCU (except in the case of protected mode on the x86, where the hardware manages tasks as an internal concept). You say your question is about "bare metal", but in general it's not (except in certain cases like protected mode on the x86). Commented Jan 9 at 23:45
  • Also, it's not necessary to save the SP during a context switch. One can conceive of less desirable ways, without that. It's just vastly easier and more bullet-proof, and no sane person would do otherwise unless there was no option (such as with some Microchip PIC MCUs, where the stack pointers are kept in inaccessible hardware). Commented Jan 9 at 23:52
  • Which hardware are you talking about? Some specific architecture or CPU? Commented Jan 10 at 5:28

3 Answers


This is going to be very compiler-specific. Some compilers save all of the necessary registers, run the interrupt routine, and then restore all the registers. Others may not. Sometimes the canned interrupt handler is too slow (or, more specifically, has too long a latency before it gets around to actually running the interrupt routine), and people need to write streamlined versions for specific situations, where much of the saving and restoring can be left out.
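As a hedged illustration of that compiler support (not tied to the question or any particular product): many toolchains let you tag a function as an interrupt handler so that the generated prologue/epilogue does the saving and restoring for you. The attribute spelling and required signature vary by compiler and target; the sketch below assumes a GCC-style toolchain for a target such as RISC-V or MSP430, where the interrupt attribute can be applied to a plain void-returning, argument-free function.

```c
#include <stdint.h>

volatile uint32_t tick_count;   /* shared with main-line code */

/* "Canned" handler: the interrupt attribute makes the compiler emit a
 * prologue that saves every register this function clobbers, an epilogue
 * that restores them, and the proper return-from-interrupt instruction.
 * (Attribute spelling and signature requirements differ between targets.) */
void __attribute__((interrupt)) timer_isr(void)
{
    tick_count++;               /* runs on whatever stack was already current */
}

/* A streamlined low-latency handler would instead be written by hand,
 * usually in assembly (or as a "naked" function on targets that allow it),
 * saving only the one or two registers it actually touches.                */
```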

answered Jan 9 at 17:40
  • In addition, it's also very hardware-dependent. There are ISAs where you get two stack pointers, and an ISR can just switch between them; some even have larger sets of independent register files. Commented Jan 9 at 18:32

Let's start with your last point, which sets the context you want for the answer:

my question above is strictly for bare-metal case.

Then let's look at the central question:

why is there a need to explicitly save the stack pointer on the stack while context switching for interrupts?

Within your context, bare-metal, the answer is that there isn't such an explicit need, in part (but not the only part) because there isn't a strict need for a context switch when handling interrupts.

I generally think you've got the basic idea.

But you are asking a bit of a loaded question, because a context switch isn't usually a bare-metal concept (I do discuss the only exception I know about: the protected-mode x86). Usually, that concept lies at the operating system level.

If a context switch is required (and it most definitely is not always required), then the stack pointer must be saved. But it isn't always required, and at the end I will mention XINU, which illustrates such a case.

When an interrupt occurs, the current program context is pushed onto the stack and the interrupt starts running.

There are some processors that don't even have interrupts in the traditional sense. (The RS08 MCU won't interrupt the normal flow of instructions; it only wakes itself up from wait and stop modes.) Others, such as the DEC Alpha, simply suspend the pipeline and allow the pipeline to be saved and restored (memory serving, as this was the early-to-mid 1990s and I only skimmed the docs before considering writing operating system code and deciding against the idea).

But in many simpler cases, with processors like the PDP-11 or even the later real-mode 8088, you can really only be assured that the hardware pushes a "return address" and some kind of "status word" onto the stack.
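Other cores push more than that in hardware. As one concrete example (assumed here purely for illustration), an ARM Cortex-M stacks a fixed eight-word frame on whatever stack is current before the handler gets control; a sketch of that layout:

```c
#include <stdint.h>

/* Sketch of the basic exception frame an ARM Cortex-M core pushes in
 * hardware onto the current stack before the handler runs, lowest address
 * (the new stack pointer) first.  Simpler machines such as the PDP-11 or
 * real-mode 8088 stack only a return address and a status word.           */
typedef struct {
    uint32_t r0;
    uint32_t r1;
    uint32_t r2;
    uint32_t r3;
    uint32_t r12;
    uint32_t lr;           /* link register of the interrupted code         */
    uint32_t return_addr;  /* where execution resumes after the handler     */
    uint32_t xpsr;         /* program status word of the interrupted code   */
} hw_stacked_frame_t;
```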

A few processors, certain Microchip ones come immediately to mind, put their return addresses into a small internal memory that is not directly accessible. Interrupt handling works just fine, and there isn't really a need, or much ability, to save the stack pointer anywhere, let alone on a memory stack somewhere.

Some processors do store more. The x86 running in protected mode treats pretty much everything as a task switch, including but not limited to interrupting events. So there is at least one case (if not more) where the hardware does a lot of work.

After the interrupt finishes its execution, the hardware knows how many addresses to pop to get back the context & PC of the previous program. As it knows how many addresses to POP, the stack pointer will eventually point to the right address for the previous program context.

The idea of interrupts has both a hardware and a software context. But the idea of a task switch is almost always an operating system and/or software context and isn't at all about bare-metal, except in the case where the MCU supports a full task switch (the only case I know about from personal experience being the protected mode x86.)

Let's return to your last point:

my question above is strictly for bare-metal case.

Hardware varies quite a bit. But in general and for simpler cases, you do not need to save the stack pointer.

So I think you are right and the answer is that you don't need to do that.

You may just do the following:

  • In cases where paging is used, ensure that the appropriate interrupt handler has been placed in memory before allowing the interrupt event to proceed. (How this is handled is a matter for the operating system and associated hardware.)
  • In simpler cases (but not in the simplest cases), only take note of the fact that the appropriate program counter and status register have already been pushed/saved onto the current stack before the interrupt code starts. (Sometimes, the interrupt system itself is also turned off by the hardware. Sometimes, it is left enabled. And in more complex cases such as the protected-mode x86, the entire task state is preserved. In the simplest of cases you may be required to explicitly save the status word.)
  • The interrupt code's prologue-section executes and saves all remaining necessary state of the processor to the current stack. What qualifies here depends upon what state may be modified by the interrupt support code.
  • The main part of the appropriate interrupt handler executes.
  • The interrupt code's epilogue-section executes and restores the state saved by the prologue-section.
  • A return from interrupt instruction is executed that restores any remaining state (that was hardware-supported before initiating the interrupt code) and returns to the next instruction to execute in the interrupted code.

Note that the stack pointer isn't mentioned, above, because the stack isn't changed. The interrupt code runs using the same stack used by the running thread/process/task.
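To make that sequence concrete, here is a minimal bare-metal sketch, assuming a Cortex-M-style part where the vector table is just an array of addresses and the hardware plus the compiler handle the prologue and epilogue. The names (SysTick_Handler, Reset_Handler, _stack_top, the .vectors section) are illustrative only:

```c
#include <stdint.h>

volatile uint32_t system_ticks;

/* The handler is an ordinary C function: the hardware stacks the
 * caller-saved registers, the compiler saves whatever else it clobbers,
 * and all of it lands on whichever stack was in use when the interrupt
 * fired.  No stack pointer is saved or switched anywhere.                */
void SysTick_Handler(void)
{
    system_ticks++;        /* locals and saved registers use the current stack */
}

void Reset_Handler(void);     /* startup code, assumed to exist elsewhere      */
extern uint32_t _stack_top;   /* symbol assumed to come from the linker script */

/* Heavily trimmed vector table sketch: the initial stack pointer value
 * first, then handler addresses.  A real table has every exception and
 * IRQ slot filled in.                                                    */
typedef void (*vector_t)(void);

__attribute__((section(".vectors")))
const vector_t vector_table[] = {
    (vector_t)&_stack_top,         /* entry 0: initial main stack pointer */
    Reset_Handler,                 /* entry 1: reset                      */
    /* ... remaining system exception slots elided ...                    */
    SysTick_Handler,
};
```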

I think the case where the stack pointer must be saved arises only when a rescheduling event can be triggered by the interrupt code. For example, a timer interrupt may cause the current thread/process/task to exhaust its time slice and therefore permit the next available-to-run thread/process/task to begin execution. Or, for another example, perhaps a buffered packet of data has been fully received and made available for a higher-priority task waiting on that event, which must now be started to handle it.
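A hedged sketch of what that rescheduling case adds, in C. The actual stack-pointer save and restore are a few target-specific assembly instructions run at interrupt exit, so they appear below only as assumed primitives (arch_read_task_sp and arch_write_task_sp are made-up names, not any real API):

```c
#include <stdint.h>

/* Minimal task control block: the saved stack pointer is the one piece of
 * state the scheduler itself must remember.  Everything else was already
 * pushed onto that task's own stack by the interrupt entry and prologue,
 * so saving the SP is what lets us find that state again later.           */
typedef struct tcb {
    uint32_t   *saved_sp;   /* SP as it was when this task was paused      */
    struct tcb *next;       /* simple round-robin link                     */
} tcb_t;

static tcb_t *current_task;

/* Assumed, hypothetical primitives: on a real port these (and this whole
 * function) collapse into one short assembly routine at interrupt exit.   */
extern uint32_t *arch_read_task_sp(void);
extern void      arch_write_task_sp(uint32_t *sp);

/* Called from, say, the timer interrupt when the running task's time slice
 * is exhausted.  Only now does the stack pointer itself need storing.      */
void schedule_from_isr(void)
{
    current_task->saved_sp = arch_read_task_sp();  /* park the outgoing task */
    current_task = current_task->next;             /* pick the next runnable */
    arch_write_task_sp(current_task->saved_sp);    /* resume on its stack    */
}
```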

Those things are usually at the level of an operating system environment. So, as you say, for things like an RTOS (or O/S, more generally.)

But even in the case of an operating system, it's not always necessary to save the stack pointer. For example, in the first released edition of the XINU operating system, developed by Douglas Comer as part of an operating systems course he taught, interrupts could be serviced within the context of the currently running process. (See page 119 of the 1st edition, 1984, "Operating System Design: The XINU Approach" for more details.)

There are nuances to your question and context matters. But I think you have the broad strokes right.

answered Jan 10 at 4:10

In that case, why is there a need to explicitly save the stack pointer on the stack while context switching for interrupts?

The premises of the question don't make any sense. There is no such need on a bare-metal system, since context switching is an RTOS concept.

An RTOS will keep a separate stack per process/thread, hence the need to switch between them by changing the stack pointer.

Furthermore, "save the stack pointer on the stack" doesn't make much sense; you probably mean the program counter? A context-switching RTOS needs to know where to resume when switching between processes, so the program counter of each process needs to be stored. A bare-metal interrupt also (usually) stores the program counter as the return address on the stack, which is done by hardware.
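To connect those two pieces (a private stack per thread, and a stored program counter), here is a deliberately simplified sketch of thread creation: the RTOS gives the new thread its own stack and writes a fake saved "PC" onto it, so that restoring that stack pointer later lands in the thread's entry function. The frame layout and names are invented for illustration and assume a 32-bit target; a real port lays down exactly the frame its restore code expects:

```c
#include <stdint.h>

#define STACK_WORDS 256

typedef struct {
    uint32_t *saved_sp;               /* where this thread's SP is parked  */
    uint32_t  stack[STACK_WORDS];     /* the thread's private stack        */
} thread_t;

/* Create a thread by faking the frame a context switch would have left on
 * its stack.  Only a saved "program counter" is shown here; a real kernel
 * also seeds initial register values and status bits.                      */
void thread_create(thread_t *t, void (*entry)(void))
{
    uint32_t *sp = &t->stack[STACK_WORDS];   /* full-descending: start at the top */
    *--sp = (uint32_t)(uintptr_t)entry;      /* fake saved "PC" (32-bit target)   */
    t->saved_sp = sp;                        /* restoring this SP resumes entry() */
}
```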

answered Jan 10 at 7:47