[dpdk-users] RDMA over DPDK

Xueming(Steven) Li xuemingl at mellanox.com
Sun Mar 1 12:33:04 CET 2020


With a quick hack on the mlx5 PMD, it's possible to send RDMA operations with only a few changes. Performance results between two back-to-back connected 25Gb NICs:

    - Continuous 1MB RDMA writes to 256 different memory targets on the remote peer: line rate, 2.6Mpps, MTU 1024
    - Continuous 8B RDMA writes to the remote peer: line rate, 29.4Mpps, RoCEv2 (74B headers + 8B payload)
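
For reference, the 74B of per-packet overhead is standard RoCEv2 RDMA WRITE framing (my breakdown below, not part of the measurement):

    Ethernet 14B + IPv4 20B + UDP 8B + BTH 12B + RETH 16B + ICRC 4B = 74B
    74B headers + 8B payload = 82B per packet on the wire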

Currently, DPDK usage focuses on networking scenarios: OVS, firewalls, load balancing...
With HW acceleration, RDMA is an application-level API with more capability than sockets: up to 2GB per transfer, lower latency, and atomic operations. It would enable DPDK to bypass the stack all the way to application servers - another huge market, I believe.

Why RDMA over DPDK:

- performance: DPDK-style batch/burst transfers, fewer i-cache misses
- easy to prefetch - no linked lists
- reuses the mbuf data structure with some modifications
- able to send RDMA requests carrying eth mbuf data (see the sketch after this list)
- virtualization support: with rte_flow, HW encap/decap can be applied to VF RDMA traffic (see the rte_flow sketch below)
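
To make the mbuf-reuse idea concrete, here is a minimal sketch of how an RDMA write could be posted through the normal burst path. The rdma_write_req struct and the private-area convention are hypothetical - the hack lives in the mlx5 PMD and no public API exists yet; rte_mbuf_to_priv() and rte_eth_tx_burst() are standard DPDK:

    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    /* Hypothetical per-request metadata kept in the mbuf private area.
     * The mempool must be created with priv_size >= sizeof(this). */
    struct rdma_write_req {
        uint64_t remote_va; /* target virtual address on the peer */
        uint32_t rkey;      /* remote memory key covering that address */
        uint32_t qp_id;     /* RoCEv2 queue pair to carry the request */
    };

    static inline uint16_t
    post_rdma_write(uint16_t port, uint16_t queue, struct rte_mbuf *m,
                    uint64_t remote_va, uint32_t rkey, uint32_t qp_id)
    {
        /* The payload sits in the mbuf data room exactly as for an eth
         * frame; only the metadata tells the PMD to emit BTH+RETH. */
        struct rdma_write_req *req = rte_mbuf_to_priv(m);

        req->remote_va = remote_va;
        req->rkey = rkey;
        req->qp_id = qp_id;
        /* The hacked PMD would translate this into a RoCEv2 RDMA WRITE
         * instead of a plain Ethernet send. */
        return rte_eth_tx_burst(port, queue, &m, 1);
    }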
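
And for the virtualization point, a hedged rte_flow sketch: steer RoCEv2 traffic (UDP dst port 4791) from a VF into a HW raw encap action. The pattern/action structures are the real rte_flow API; the caller-built outer header and the egress placement are my assumptions about how the offload would be wired up:

    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static struct rte_flow *
    offload_vf_roce_encap(uint16_t port_id, uint8_t *outer_hdr,
                          size_t outer_hdr_len)
    {
        struct rte_flow_attr attr = { .egress = 1 };
        struct rte_flow_item_udp udp_spec = {
            .hdr.dst_port = RTE_BE16(4791), /* RoCEv2 well-known port */
        };
        struct rte_flow_item_udp udp_mask = {
            .hdr.dst_port = RTE_BE16(0xffff),
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP,
              .spec = &udp_spec, .mask = &udp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Prepend a pre-built outer Eth/IPv4/UDP/VXLAN header. */
        struct rte_flow_action_raw_encap encap = {
            .data = outer_hdr,
            .size = outer_hdr_len,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        return rte_flow_create(port_id, &attr, pattern, actions, &err);
    }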
 
Potential applications:

- rdma <-> rdma applications in DC/HPC
- eth <-> rdma applications
- device power saving. If PCs/mobiles supported RDMA, most networking transfers while playing video or downloading files would happen with little CPU involvement.

Interested?

Xueming Li

