
IBM System Networking Solutions

10 Gigabit Ethernet is Ready for HPC

With costs going up, power at a premium, and manageability critical, it's no surprise that converging and consolidating have become central themes in networking. Converged voice and data networks are becoming pervasive. And virtualisation is fast becoming a critical technology to support server and storage consolidation in the data centre.

When it comes to reducing capital and operating expenses, one infrastructure is simply better than two or more, and the HPC environment is no exception. Even high-performance computing clusters built on an InfiniBand interconnect still need Ethernet: for user and storage connectivity, and for the management network that orchestrates the cluster. Relying on 10 Gigabit Ethernet to create a single, all-inclusive infrastructure cuts hardware and power costs and simplifies manageability. And that infrastructure combines high performance, low power needs, and latency low enough for many HPC applications, making it an excellent fit for both technical and budget requirements.
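
To make the consolidation concrete, here is a minimal sketch of carving one physical 10GbE link into separate user, storage, and management networks with VLANs. It assumes a Linux host whose 10GbE interface is named eth0 and drives the standard iproute2 command line; the VLAN IDs and addresses are illustrative only, not taken from the source.

    import subprocess

    PARENT = "eth0"  # assumed name of the physical 10GbE interface

    # One logical network per role, all sharing the same physical link.
    # VLAN IDs and addresses are illustrative assumptions.
    VLANS = {
        "user":    (100, "10.1.100.10/24"),
        "storage": (200, "10.1.200.10/24"),
        "mgmt":    (250, "10.1.250.10/24"),
    }

    def run(cmd):
        """Run an iproute2 command, echoing it first for visibility."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for role, (vlan_id, addr) in VLANS.items():
        dev = f"{PARENT}.{vlan_id}"  # e.g. eth0.100
        run(["ip", "link", "add", "link", PARENT, "name", dev,
             "type", "vlan", "id", str(vlan_id)])
        run(["ip", "addr", "add", addr, "dev", dev])
        run(["ip", "link", "set", dev, "up"])

Each role gets its own subnet and switch-enforced isolation, yet the cluster buys, cables, powers, and manages only one fabric.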

Prices are Plummeting

As with many technologies, 10GbE was not initially cost-effective for widespread use; in fact, at one point a 10GbE connection cost more than the server. But that ship has sailed. 10GbE is now so cost-effective that server vendors are starting to include the technology as a built-in standard feature, and switch prices are falling too, with list prices of less than $500 per port.
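
As a rough illustration of what those prices mean at cluster scale, the sketch below multiplies the per-port figure out across a hypothetical cluster. Only the sub-$500-per-port switch figure comes from the text; the node count and adapter price are assumptions made for the sake of the arithmetic.

    # Back-of-the-envelope list-price maths for a single 10GbE fabric.
    # The sub-$500/port switch figure comes from the text above; the
    # adapter price and node count are illustrative assumptions.
    NODES = 128              # assumed cluster size
    SWITCH_PORT_USD = 500    # per-port switch list price cited above
    NIC_USD = 400            # assumed 10GbE adapter list price

    per_node = SWITCH_PORT_USD + NIC_USD
    print(f"{NODES} nodes x ${per_node}/node = ${NODES * per_node:,} list")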

Stable Network Interface

Some early adopters of 10GbE were discouraged by problems with network interface cards (NICs). These problems were related to immature hardware and software drivers and have since been corrected. NIC vendors that could not adapt have dropped out of the market, and 10GbE now has a stable network interface environment.

Physical Layer Selected

Many users expected that 10GBase-T would provide a simple, cost-effective solution, but were disappointed by its high cost, high power requirements, and 2.6 µs of latency per cable hop. Multiple optics standards also led to some customer confusion: XENPAK, X2, XFP, and now SFP+. It took a while for 10GbE to converge on a single type of attachment, but many users today believe that SFP+ Direct Attach Cable (also known as twinax) is the right solution. SFP+ Direct Attach Cable is a low-cost, low-latency, interoperable option that uses existing SFP+ sockets and addresses most 10GbE challenges for distances up to around ten meters.
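
Because per-hop latency compounds across a switched path, the difference matters more than it first appears. The sketch below adds up PHY latency over a hypothetical three-hop path; the 2.6 µs per-hop figure is from the text, while the SFP+ Direct Attach figure is an assumed ballpark for comparison, not a measured value.

    # Cumulative PHY latency over a multi-hop path. The 2.6 µs/hop
    # 10GBase-T figure comes from the text; the SFP+ DAC figure is an
    # assumed ballpark for comparison.
    HOPS = 3  # e.g. node -> leaf switch -> spine switch -> leaf switch

    PHY_LATENCY_US = {
        "10GBase-T": 2.6,   # per cable hop, from the text
        "SFP+ DAC":  0.3,   # assumed per-hop figure
    }

    for phy, per_hop in PHY_LATENCY_US.items():
        print(f"{phy}: {HOPS} hops -> {HOPS * per_hop:.1f} µs of PHY latency")

Over three hops the assumed gap grows to several microseconds, which is significant for latency-sensitive HPC applications.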

Broad Vendor Support

Every networking vendor supports Ethernet, and that support is extending to 40GbE now and 100GbE in the future.

It's Time

New products and advancing technologies have overcome the last hurdles that prevented 10GbE from addressing HPC needs.

It's time to bring the benefits of ubiquitous 10GbE to the HPC community. For most clusters and most applications, Ethernet brings the advantages of better pricing, higher reliability, plenty of performance, and lower operating costs. A holistic approach with a single infrastructure will also contribute to reduced costs, while widespread Ethernet expertise will reduce management headaches and support a more efficient environment.

Video: Is 10G ready for HPC?

Contact Lenovo to order these System Networking products for x86 systems

Lenovo acquires IBM x86 systems

These IBM x86 products are now products of Lenovo in Australia and other countries. IBM will host x86-related content on ibm.com until it is migrated to Lenovo. During the transition, please interpret references to IBM in the context of transitioned products as references to Lenovo.
