I understand that if a system consists of multiple hardware threads, the scheduler assigns software threads to hardware threads.
However, hypothetically, let's imagine a system that consists of only a single hardware thread. Is the execution of multiple software threads forbidden, or does the program execute sequentially?
-
Pre-emptive multi-tasking was common long before hardware multi-processing was. E.g. Windows 95 introduced such a scheduler, and it has been the standard approach on Unix since forever. A pre-emptive scheduler occasionally suspends the running task and lets another thread execute for the next timeslice. This just requires a hardware timer or interrupt that causes the scheduler to run when the timeslice is over. — amon, Feb 1, 2021
-
en.wikipedia.org/wiki/Preemption_(computing)#Time_slice — Erik Eidt, Feb 1, 2021
-
A system does not consist of hardware or software threads; that whole statement is misleading. An (operating) system may provide multiple threads to a process, and there is usually hardware support such as interrupts, even if the CPU has only a single core. So please clarify what you mean by "single hardware thread": only single-core CPUs? A CPU with no interrupts? Something else? — Doc Brown, Feb 1, 2021
3 Answers
This is far from hypothetical:
- Before multi-threaded CPUs, running multiple threads and processes on a single-threaded CPU was common practice, supported by many OSes.
- There are still a lot of microcontrollers around that work with a single single-threaded core. So it's still a relevant question.
The way multiple threads are run on a single CPU execution thread depends a lot on the OS / execution environment / library that you are using and the underlying threading principle:
- Preemptive multithreading works similarly to multiprocessing, but is much lighter and faster: the threads are executed in small time slices, each run sequentially. The frequent switching creates the illusion of concurrency, at some cost in performance.
- Cooperative multithreading lets each thread decide when to switch to another thread. In the worst case, two threads may simply be executed one after the other, sequentially. The impression of concurrency is less convincing, but performance is better (less switching overhead).
- Usually, I/O operations involve some kind of waiting. In both models, I/O calls therefore often lead to a potential thread switch. Since I/O waiting time (milliseconds) is orders of magnitude longer than a thread switch (on the order of microseconds), this kind of switch has little impact on perceived performance but significantly increases system throughput in I/O-intensive applications.
More information:
- This article about realtime scheduling provides some more explanation and has some nice figures about how threads are scheduled for sequential execution (including how priorities can be managed).
- An introduction to preemptive threading in Windows.
- The C++ standard thread library is available on all C++ implementations starting from C++11. Note std::thread::hardware_concurrency, which provides a hint about the hardware threading limit. It is not guaranteed to provide meaningful information, but it does on MSVC, if you're looking for some experimentation.
-
I can buy an expensive computer supporting 16 hardware threads. Then I start 17 software threads, and we have basically the same situation as with a single hardware thread. — gnasher729, Sep 12, 2024
-
@gnasher729 indeed! It'll generally go via the OS, which abstracts the hardware layers anyway. However, the overhead of running 17 threads on a 16-thread CPU is not the same as 17 threads on a single-threaded CPU ;-) Moreover, if half of the threads are waiting, the OS may not necessarily load them onto an active CPU thread. If the number of physical threads matters for dynamically optimising the set-up, the C++ standard offers std::thread::hardware_concurrency as a hint. — Christophe, Sep 12, 2024
First consider that if a single-core system allowed only one thread, then by the same logic a processor with 16 cores would allow only sixteen threads; my Mac runs a few hundred right now.
What happens is that you can have as many threads as you want (within reason). Typically each core starts running one thread, and when that thread either needs to wait for something or has used up its time slice, it is paused and another thread starts running.
-
To slightly extend this to what the OP presumably wants to know: it's the machine's (not the application's) prerogative to decide which thread to run at which time on which core. So the machine might interleave your two threads on its single core, or it could choose to run one after the other. — Flater, Feb 1, 2021
-
Worth mentioning that threads can be set with different levels of priority. No need to wait for a thread to stop: threads with higher priority will be scheduled first, which could break any "feeling" of sequentiality. — Laiv, Feb 1, 2021
-
@Flater Actually it is the operating system's prerogative. (Those are not the same thing) — Stack Exchange Broke The Law, Feb 2, 2021
-
@user253751: From the perspective we're discussing, "machine" is sufficiently descriptive (and commonly used). You're taking it too literally. It's not referring to the actual physical hardware; it's referring to whatever environment the application will run on. Whether that's a physical hardware device, which OS it is, whether it's a VM, ... is irrelevant for this topic. — Flater, Feb 2, 2021
Let me now try to explain:
So far as I am aware (and I've been around these parts a long time), there is no such thing as a "hardware" thread, nor a "software" thread. "Threading" is a purely software concept, agnostic to whatever underlying hardware runs it.
The software concept is that "the computer's workload" consists of one-or-more independent "processes," each of which owns such resources as memory-segments and file-handles. Then, within each process, we have one-or-more independent "threads," all of which share the owning process's resources.
The hardware concept, then, is that "the operating system must now find a way to run it, on whatever hardware it finds that it has." If there is only one CPU/core, then quite necessarily only one [thread of one ...] process can execute at a time; otherwise they will indeed be "physically simultaneous."
The programmer concept therefore must be that "it is entirely unpredictable." You must never write software that depends on an operating system's scheduling decisions: "indeed, exactly the opposite."