Two fundamental issues in multiprocessing. Technical report
A general-purpose multiprocessor should be scalable, i.e., show higher performance when more hardware resources are added to the machine. Architects of such multiprocessors must address the loss in processor efficiency due to two fundamental issues: long memory latencies and waits due to synchronization events. It is argued that a well-designed processor can overcome these losses provided there is sufficient parallelism in the program being executed. The detrimental effect of long latency can be reduced by instruction pipelining; however, the restriction to a single thread of computation in von Neumann processors severely limits their ability to keep more than a few instructions in the pipeline. Furthermore, techniques to reduce memory latency tend to increase the cost of task switching. The cost of synchronization events in von Neumann machines makes decomposing a program into very small tasks counterproductive. Dataflow machines, on the other hand, treat each instruction as a task, and by paying a small synchronization cost for each instruction executed, offer the ultimate flexibility in scheduling instructions to reduce processor idle time.
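The firing rule the abstract alludes to can be illustrated with a minimal sketch (not from the report; all names here are illustrative): each instruction is its own task that becomes ready only when all of its input tokens have arrived, so scheduling pays a small per-instruction synchronization cost rather than a per-task-switch cost.

```python
# Toy dataflow interpreter illustrating instruction-level firing.
# All class and function names are hypothetical, for illustration only.
from collections import deque

class Instr:
    def __init__(self, name, op, n_inputs, dests):
        self.name = name          # label, for tracing
        self.op = op              # function applied to the operands
        self.n_inputs = n_inputs  # tokens required before firing
        self.dests = dests        # (instr_name, port) pairs for results
        self.waiting = {}         # port -> token (waiting-matching store)

def run(instrs, initial_tokens):
    """Deliver tokens; fire any instruction whose operand set is
    complete; enqueue its result tokens for the successor instructions."""
    ready = deque(initial_tokens)   # tokens: (instr_name, port, value)
    results = {}
    while ready:
        name, port, value = ready.popleft()
        instr = instrs[name]
        instr.waiting[port] = value
        if len(instr.waiting) == instr.n_inputs:   # firing rule
            args = tuple(instr.waiting[p] for p in sorted(instr.waiting))
            instr.waiting = {}
            out = instr.op(*args)
            results[name] = out
            for dest, dport in instr.dests:
                ready.append((dest, dport, out))
    return results

# (a + b) * (a - b): the add and sub may fire in either order, so a
# stalled operand delays only that one instruction, not a whole thread.
graph = {
    "add": Instr("add", lambda x, y: x + y, 2, [("mul", 0)]),
    "sub": Instr("sub", lambda x, y: x - y, 2, [("mul", 1)]),
    "mul": Instr("mul", lambda x, y: x * y, 2, []),
}
tokens = [("add", 0, 5), ("add", 1, 3), ("sub", 0, 5), ("sub", 1, 3)]
print(run(graph, tokens)["mul"])   # (5+3)*(5-3) = 16
```

The point of the sketch is that synchronization is implicit in token arrival: no instruction ever busy-waits, so any long-latency operand delays only its consumers.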
- Research Organization:
- Massachusetts Inst. of Tech., Cambridge (USA). Lab. for Computer Science
- OSTI ID:
- 7043380
- Report Number(s):
- AD-A-191029/8/XAB; MIT/LCS/TM-330
- Resource Relation:
- Other Information: Supersedes AD-A134 239
- Country of Publication:
- United States
- Language:
- English
Similar Records
Critique of multiprocessing Von Neumann style
Assessing the benefits of fine-grain parallelism in dataflow programs