Parallel Computing
Parallel computing is the execution of a computer program using multiple processors (CPUs) concurrently instead of a single processor. Let T(n,1) be the run-time of the fastest known sequential algorithm and let T(n,p) be the run-time of the parallel algorithm executed on p processors, where n is the size of the input.
The speedup is then defined as

S(p) = T(n,1)/T(n,p),
i.e., the ratio of the sequential execution time to the parallel execution time. Ideally, one would like S(p)=p, which is called perfect speedup, although in practice this is rarely achieved. (There are some situations where superlinear speedup is achieved due to memory hierarchy effects.)
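As a rough illustration, speedup can be estimated empirically by timing the same computation sequentially and in parallel. The following Python sketch uses the standard multiprocessing module on a CPU-bound task; the task (count_primes), the problem size, and the processor count are illustrative assumptions, not part of the original entry, and the sequential timing only approximates T(n,1), which is formally the run-time of the fastest known sequential algorithm.

```python
# Minimal sketch: estimate S(p) by timing one CPU-bound task
# sequentially and then split across p worker processes.
import time
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    n = 200_000   # illustrative problem size
    p = 4         # illustrative processor count

    # Approximate T(n,1): run the whole problem on one processor.
    t0 = time.perf_counter()
    count_primes((0, n))
    t_seq = time.perf_counter() - t0

    # T(n,p): the same input partitioned across p worker processes.
    chunks = [(i * n // p, (i + 1) * n // p) for i in range(p)]
    t0 = time.perf_counter()
    with Pool(p) as pool:
        pool.map(count_primes, chunks)
    t_par = time.perf_counter() - t0

    # S(p) = T(n,1) / T(n,p)
    print(f"S({p}) = {t_seq / t_par:.2f}")
```

In practice the measured speedup falls short of p because of process start-up, communication, and load imbalance between chunks.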
Another metric to measure the performance of a parallel algorithm is efficiency, E(p), defined as

E(p) = S(p)/p = T(n,1)/(p T(n,p)).
One can use speedup and efficiency to analyze algorithms either theoretically, using asymptotic run-time complexity, or in practice, by measuring program execution time. When p is fixed, speedup and efficiency are equivalent measures, differing only by the constant factor p.
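For instance, with some hypothetical timings (the numbers below are made up purely for illustration), the two measures track each other exactly up to the factor p:

```python
# Toy illustration of the definitions above; all timings are invented.
t_seq = 8.0                            # hypothetical T(n,1), in seconds
timings = {2: 4.4, 4: 2.5, 8: 1.6}     # hypothetical T(n,p) for several p

for p, t_par in timings.items():
    s = t_seq / t_par   # S(p) = T(n,1) / T(n,p)
    e = s / p           # E(p) = S(p) / p
    print(f"p={p}: S(p)={s:.2f}, E(p)={e:.2f}")
```

Perfect speedup corresponds to E(p) = 1; the declining efficiencies printed here reflect the overheads typical of real parallel programs.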
See also
Queuing Theory

This entry contributed by Jonathan Bentz