
Ethernet on POWER: Physical, Shared, Virtual



Document Author:

Steven Knudson


Document ID:

PRS5353


Doc. Organization:

Advanced Technical Sales


Document Revised:

02/13/2018


Product(s) covered:

Power; Power5; POWER6; POWER6 570; POWER7; Power 750; Power 770; Power 780; Power 795; POWER8; Power Systems I/O; PowerVM; pSeries; pSeries 520; pSeries 550; pSeries 570; pSeries 590; RS/6000; VIOS







Abstract: New CPU consumption and throughput results for SR-IOV Virtual Function (VF) versus SR-IOV vNIC in POWER servers. Also a continuing focus on Linux on POWER tuning for the Shared Ethernet Adapter (SEA).


The updated attached PDF contains our tests of CPU consumption and throughput for SR-IOV Virtual Function (VF) versus vNIC on POWER, pp. 73-80.

Broadly speaking, you will save CPU if you can use a VF and dispense with Live Partition Mobility (LPM). If you must have LPM, use SR-IOV vNIC; its CPU consumption is similar to that of virtual Ethernet with SEA.

Also, tuning of Linux for Shared Ethernet Adapter remains an area of interest, pp. 81-83.

The full "cheatsheet" for PowerVM, AIX and SLES 11 virtual Ethernet performance, over SEA and thru the hypervisor, from slides 72-74 in the attached slide deck:
1) Before SEA is configured, put dcbflush_local=yes on the trunked virtual adapters. If SEA is already configured, you may skip this:
$ chdev -dev entX -attr dcbflush_local=yes
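
To confirm the setting afterward, you can list the attribute on the VIOS (entX here stands in for your trunked virtual adapter):
$ lsdev -dev entX -attr dcbflush_local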


2) Configure SEA. largesend is on by default on the SEA; put large_receive on also:
$ chdev -dev entY -attr large_receive=yes
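
If the SEA does not exist yet, a minimal creation sketch on the VIOS (ent0 as the physical adapter, ent2 as the trunked virtual adapter, and default PVID 1 are illustrative assumptions; substitute your own devices), followed by a check of the attribute:
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
$ lsdev -dev entY -attr large_receive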


3) In AIX client LPARs, before IP is configured, put dcbflush_local on the virtual Ethernet adapters. If IP is already configured, you may skip this:
# chdev -l ent0 -a dcbflush_local=yes


4) Also in AIX, put thread and mtu_bypass on the interface en0:
# chdev -l en0 -a thread=on
# chdev -l en0 -a mtu_bypass=on
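
To verify the AIX settings took effect (ent0/en0 as in the examples above):
# lsattr -El ent0 -a dcbflush_local
# lsattr -El en0 -a thread -a mtu_bypass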


5) For client partitions running SLES 11, start with SP4 (uname -r reports 3.0.101-68-ppc64), then update to at least kernel 3.0.101-77, and reboot. Current testing is at 3.0.101-80-ppc64.
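
A quick way to check the running kernel and apply the update (zypper patch is one way to pull updates; your repository setup may differ):
# uname -r
3.0.101-68-ppc64
# zypper patch
# shutdown -r now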


6) On the SLES partition console:
# rmmod ibmveth
# modprobe ibmveth old_large_send=1
# ethtool -K eth0 tso on  (Do this for every virtual Ethernet adapter in the partition)
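
To confirm the driver reloaded with the option (this assumes ibmveth exports old_large_send as a readable parameter in sysfs):
# cat /sys/module/ibmveth/parameters/old_large_send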


7) SLES - verify TSO is on:
# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on


8) Ensure you have enough CPU in the sending client partition, the sending VIOS, the receiving VIOS, and the receiving partition; see the lparstat sketch below.
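
One way to watch processor entitlement consumption during a run, on AIX (and on the VIOS after oem_setup_env), is a few timed lparstat samples:
# lparstat 2 5    (five samples at 2-second intervals)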


9) SLES 11 changes so the network configuration persists through a reboot:


# echo "options ibmveth old_large_send=1" >> /etc/modprobe.d/50-ibmveth.conf
# echo "ETHTOOL_OPTIONS_tso='-K iface tso on' " >> /etc/sysconfig/network/ifcfg-eth0.cfg
# echo "ETHTOOL_OPTIONS_tso='-K iface tso on' " >> /etc/sysconfig/network/ifcfg-eth1.cfg


10) What we observed:

SLES 11   ---> hypervisor  ---> SLES 11   23.0 Gb/sec (iperf, 8 TCP connections, 30 sec)
SLES 11   <--- hypervisor  <--- SLES 11   23.0 Gb/sec

SLES 11   ---> SEA -- 10Gb net -- SEA --->  SLES 11   8.95 Gb/sec
SLES 11   <--- SEA -- 10Gb net -- SEA <---  SLES 11   8.95 Gb/sec
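
For reference, a representative iperf invocation matching these runs (iperf 2 syntax; the receiver hostname is a placeholder):
# iperf -s                              (on the receiving partition)
# iperf -c receiver-lpar -P 8 -t 30     (on the sending partition)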


network_performance_sea_components_20170714.pdf



Classification:

Hardware; Software

Category:

Performance




Platform(s):

IBM Power Systems; IBM System p Family



O/S:

AIX; Linux

Keywords:

SEA virtual Ethernet POWER SR-IOV vNIC VF VIO Shared
