U.S. Department of Energy
Office of Scientific and Technical Information

Two fundamental issues in multiprocessing. Technical report


A general-purpose multiprocessor should be scalable, i.e., show higher performance when more hardware resources are added to the machine. Architects of such multiprocessors must address the loss in processor efficiency due to two fundamental issues: long memory latencies and waits due to synchronization events. It is argued that a well-designed processor can overcome these losses provided there is sufficient parallelism in the program being executed. The detrimental effect of long latency can be reduced by instruction pipelining; however, the restriction to a single thread of computation in von Neumann processors severely limits their ability to keep more than a few instructions in the pipeline. Furthermore, techniques that reduce memory latency tend to increase the cost of task switching. The cost of synchronization events in von Neumann machines makes decomposing a program into very small tasks counter-productive. Dataflow machines, on the other hand, treat each instruction as a task and, by paying a small synchronization cost for every instruction executed, offer the ultimate flexibility in scheduling instructions to reduce processor idle time.
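The latency-tolerance argument above can be illustrated with a simple analytical model (a sketch for intuition, not a model from the report): a processor runs a task for some average run length R cycles before issuing a remote memory reference with latency L, and pays a switch cost C to change tasks. With few resident tasks the processor idles while references are outstanding; with enough tasks, utilization is bounded only by the per-switch overhead. The parameter names R, L, C, and N here are illustrative assumptions.

```python
def utilization(n_tasks: int, run_length: float, latency: float, switch_cost: float) -> float:
    """Fraction of cycles spent on useful work under interleaved multitasking.

    Below saturation, each resident task contributes one run of useful work
    per outstanding-latency period, so utilization grows linearly with the
    number of tasks. Beyond saturation, latency is fully hidden and only the
    per-switch overhead limits utilization.
    """
    # Saturated regime: enough tasks to cover the full memory latency.
    saturated = run_length / (run_length + switch_cost)
    # Linear regime: utilization proportional to the number of resident tasks.
    linear = n_tasks * run_length / (run_length + latency + switch_cost)
    return min(saturated, linear)


# Example: R = 10, L = 100, C = 2 (all in cycles).
# A single task leaves the processor idle most of the time,
# while many tasks approach the switch-cost bound R / (R + C).
print(utilization(1, 10.0, 100.0, 2.0))   # low: 10/112
print(utilization(16, 10.0, 100.0, 2.0))  # bounded: 10/12
```

The model also shows the trade-off stated in the abstract: driving the switch cost C toward zero (as dataflow machines do by synchronizing at instruction granularity) pushes the saturation bound toward full utilization, at the price of paying that small cost on every instruction.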

Research Organization:
Massachusetts Inst. of Tech., Cambridge (USA). Lab. for Computer Science
OSTI ID:
7043380
Report Number(s):
AD-A-191029/8/XAB; MIT/LCS/TM-330
Country of Publication:
United States
Language:
English

Similar Records

Critique of multiprocessing von Neumann style
Conference · 1983 · OSTI ID:5257916

Eps'88: Combining the best features of von Neumann and dataflow computing
Technical Report · 1989 · OSTI ID:6432716

Assessing the benefits of fine-grain parallelism in dataflow programs
Journal Article · 1988 · Int. J. Supercomput. Appl. (United States) · OSTI ID:6524930