testing network transfer speed and hardware limitations
1
vote
1
answer
1301
views
This is somewhat of a computer engineering and network engineering question, but it is Linux-based, since *InfiniBand* is largely utilized via the Linux operating system, and I am interested in understanding the speed limits I would run into while using the Linux operating system on my hardware.
*InfiniBand* (Mellanox) has had many data rate releases: QDR, FDR, EDR, HDR.
Currently HDR is at either 100 Gbps or 200 Gbps (see Mellanox advertising as of 2021).
Using `iperf` under RHEL 7.9 via **IPoIB**, iperf reported a maximum transfer speed of 24.0 Gbps (over TCP, I believe). On a 100 Gbps link that is 24%, versus a reported 942 Mbps on a 1 Gbps network, which is 94%.
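For context, a minimal sketch of the kind of iperf run I mean, assuming `iperf3` is installed on both nodes; the IPoIB address and the stream count are placeholders, and the `-P` parallel-stream variant is only there to check whether the ~24 Gbps ceiling is a per-stream TCP limit:

```
# on the server node:
iperf3 -s

# on the client node: plain single-stream TCP test over the IPoIB interface
iperf3 -c 10.0.0.1 -t 30

# same, but with 8 parallel TCP streams to see if the aggregate goes higher
iperf3 -c 10.0.0.1 -P 8 -t 30
```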
My question is: as InfiniBand goes from HDR to a future NDR and XDR, whatever those data rates may be, can someone report on or calculate where the choke point could possibly be? For example, I did an SSH/scp test and the SSH cipher choice mattered, resulting in anywhere from 1.6 Gbps up to a maximum of 5.6 Gbps (200 MB/s or 700 MB/s), which I believe was the limit of the CPU's ability to process the chosen encryption scheme.
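The cipher comparison I ran looked roughly like the following sketch (the host name, file name, and cipher list are placeholders); scp's progress meter reports the MB/s figures quoted above, and 200 MB/s × 8 = 1.6 Gbps, 700 MB/s × 8 = 5.6 Gbps:

```
# copy the same large file with different SSH ciphers and compare the reported rates
for c in aes128-ctr aes256-ctr aes128-gcm@openssh.com chacha20-poly1305@openssh.com; do
    echo "cipher: $c"
    scp -c "$c" bigfile testhost:/dev/null
done
```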
What are the numbers regarding CPU / DDR4 RAM / chipset / PCIe lane limitations in relation to a 100 Gbps InfiniBand network speed?
How fast can the current Intel or AMD [chipsets?] with LGA1200 or LGA3647 and with PCIe 3.0 sling bits to a Mellanox InfiniBand card?
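For the PCIe part at least, my own back-of-the-envelope arithmetic (an estimate, not a vendor figure) is that PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so an x16 slot sits just above 100 Gbps before any protocol overhead:

```
# rough PCIe 3.0 x16 payload bandwidth vs. a 100 Gbps link
awk 'BEGIN {
    per_lane = 8 * 128 / 130          # ~7.88 Gbit/s usable per lane
    x16      = per_lane * 16          # ~126 Gbit/s for an x16 slot
    printf "PCIe 3.0 x16: about %.1f Gbit/s\n", x16
}'
```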
And how, in Linux [RHEL 7.9], can I test this sort of thing? Is iperf the *best* way?
Asked by ron
(8647 rep)
Mar 10, 2021, 09:41 PM
Last activity: Mar 26, 2021, 12:21 AM