U.S. Department of Energy
Office of Scientific and Technical Information

Why TCP will not scale for the next-generation internet.

Conference ·
OSTI ID:975146

Until recently, the average desktop computer was powerful enough to saturate any available network technology, but this situation is rapidly changing with the advent of Gigabit Ethernet (GigE) and similar technologies. While CPU speeds have improved by over 50% per year since the mid-1980s (roughly doubling every 1.6 years) [3], network speeds have improved by nearly 100% per year, from 10-Mb/s Ethernet in 1988 to 6.4-Gb/s HiPPI-6400/GSN [10] in 1998. Network speeds have finally surpassed the ability of the computer to fill the network pipe, and this situation will get dramatically worse due to the above trends as well as slowly increasing I/O bus speeds. While the average I/O bandwidth of a PC is expected to increase from 1.056 Gb/s (32-bit, 33-MHz PCI bus) to 4.224 Gb/s (64-bit, 66-MHz PCI bus) over the next 12-18 months, the widespread availability of HiPPI-6400/GSN (6.4 Gb/s) [10] this year and 10GigE (10 Gb/s) [6] next year will far outstrip the ability of a computer to fill the network. Our experiments already demonstrate that a PC can no longer fully utilize available network bandwidth. With the default Red Hat Linux 6.2 OS running on dual-processor 400-MHz PCs with Alteon AceNIC GigE cards on a 32-bit, 33-MHz PCI bus, the peak bandwidth achieved by TCP is only 335 Mb/s. With an 83% increase in CPU speed to 733 MHz, the peak bandwidth increases by only 25%, to 420 Mb/s. These bandwidth numbers can be improved by about 5% by increasing the default send/receive buffer sizes from 64 KB to 512 KB, by another 10% by using interrupt coalescing, and by slightly more with further enhancements to the system set-up as described below. Unfortunately, even in the best case, TCP still utilizes only half the available bandwidth between two machines. This implies that file or web servers wishing to fully utilize their available gigabit-per-second bandwidth will have trouble doing so.
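The I/O-bus figures quoted above follow directly from bus width times clock rate (theoretical peak, ignoring arbitration and protocol overhead). A quick sanity check, written here purely as an illustration:

```python
def pci_peak_gbps(width_bits: int, clock_mhz: int) -> float:
    """Theoretical peak PCI bus bandwidth in Gb/s:
    bus width (bits) x clock rate (Hz), with no overhead."""
    return width_bits * clock_mhz * 1e6 / 1e9

# 32-bit, 33-MHz PCI bus -> 1.056 Gb/s
print(round(pci_peak_gbps(32, 33), 3))
# 64-bit, 66-MHz PCI bus -> 4.224 Gb/s
print(round(pci_peak_gbps(64, 66), 3))
```

Both results match the 1.056-Gb/s and 4.224-Gb/s figures cited in the text; real sustained PCI throughput is lower once bus arbitration and DMA overhead are accounted for.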
Furthermore, remote visualization of large data sets will be bound by TCP/IP stack performance, as described.
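The buffer-size tuning mentioned above (raising the default 64-KB send/receive buffers to 512 KB) is done through the standard Berkeley sockets API via SO_SNDBUF and SO_RCVBUF. A minimal sketch in Python, assuming a Linux-like system; the exact defaults and the caps the kernel applies (e.g. net.core.wmem_max / net.core.rmem_max on modern Linux) vary by OS and version:

```python
import socket

BUF_SIZE = 512 * 1024  # 512 KB, up from the 64-KB default cited in the text

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger kernel send/receive buffers for this socket.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Read back what the kernel actually granted. Linux typically reports
# double the requested value (to account for bookkeeping overhead),
# and may silently cap the request at a system-wide maximum.
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print("granted send buffer:", snd, "bytes")
print("granted recv buffer:", rcv, "bytes")
```

Larger buffers matter because TCP throughput is bounded by window size divided by round-trip time; a 64-KB window simply cannot keep a gigabit-per-second pipe full at typical RTTs.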

Research Organization:
Los Alamos National Laboratory
Sponsoring Organization:
DOE
Report Number(s):
LA-UR-01-1039
Country of Publication:
United States
Language:
English

Similar Records

HIPPI-6400 -- Designing for speed
Conference · 1998 · OSTI ID:650333

Message Passing for Linux Clusters with Gigabit Ethernet Mesh Connections
Conference · 2005 · OSTI ID:840531

Super-speed computer interfaces and networks
Technical Report · 1997 · OSTI ID:534509