Sunday, September 16, 2012

Understanding Parallel Computing

Some time ago I posted a blog post named "The future of computing is parallelism", in which I discussed the future of computing and the large role parallelism will play in future computing and system design. Below you can find two videos on understanding parallel computing and Amdahl's law.


Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup when using multiple processors. Amdahl presented the argument at the AFIPS Spring Joint Computer Conference in 1967.
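
In its usual form, Amdahl's law bounds the achievable speedup: if P is the fraction of the work that can be parallelized and N is the number of processors, then

S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}

As N grows without bound, the speedup approaches 1/(1 - P): the serial fraction alone determines the ceiling.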

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours on a single processor core, and a particular 1-hour portion cannot be parallelized while the remaining 19 hours (95%) can be, then regardless of how many processors we devote to the parallelized execution, the minimum execution time cannot fall below that critical 1 hour. Hence the speedup is limited to at most 20×.
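
As a quick check of that arithmetic, here is a small Python sketch (my own illustration; the function name amdahl_speedup is just a convenient label) that evaluates the bound for the 95% example:

def amdahl_speedup(parallel_fraction, n_processors):
    """Upper bound on speedup under Amdahl's law."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# The 20-hour example: 95% of the work parallelizes, 1 hour is strictly serial.
for n in (2, 4, 64, 1024, 1_000_000):
    print(f"{n:>9} processors -> {amdahl_speedup(0.95, n):5.2f}x speedup")

# However large n gets, the result approaches but never reaches
# 1 / (1 - 0.95) = 20x.

Even at a million processors the speedup is about 19.9996×, which makes the point: past a certain processor count, the serial hour dominates and adding hardware buys almost nothing.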

Source: Wikipedia
