[dpdk-dev] [PATCH v5 0/3] Support TCP/IPv4 GRO in DPDK

Jiayu Hu jiayu.hu at intel.com
Mon Jun 19 07:12:44 CEST 2017


Hi Jianfeng,

Sorry for the typos. I have corrected them below.

On Mon, Jun 19, 2017 at 11:07:34AM +0800, Jiayu Hu wrote:
> On Mon, Jun 19, 2017 at 09:39:11AM +0800, Tan, Jianfeng wrote:
> > Hi Jiayu,
> > 
> > You need to update the document:
> > - Release note file: release_17_08.rst.
> > - A howto doc is welcomed.
> 
> Thanks. I will update them in the next patch.
> 
> > 
> > 
> > On 6/18/2017 3:21 PM, Jiayu Hu wrote:
> > > Generic Receive Offload (GRO) is a widely used SW-based offloading
> > > technique to reduce per-packet processing overhead. It gains performance
> > > by reassembling small packets into large ones. Therefore, we propose to
> > > support GRO in DPDK.
> > > 
> > > To give applications more flexibility, DPDK GRO is implemented as
> > > a user library. Applications explicitly use the GRO library to merge
> > > small packets into large ones. DPDK GRO provides two reassembly modes.
> > > One is called lightweight mode, the other is called heavyweight mode.
> > > If applications want to merge packets in a simple way, they can use
> > > the lightweight mode. If applications need more fine-grained control,
> > > they can choose the heavyweight mode.
> > 
> > So what's the real difference between the two modes? Maybe an example is a
> > good way to clarify.
> 
> The heavyweight mode merges packets in a burst mode. Applications just need

Sorry for the typo; it should be 'lightweight mode'.

> to give N packets to the heavyweight mode API, rte_gro_reassemble_burst.

Sorry for the typo; it should be 'lightweight mode'.

> After rte_gro_reassemble_burst returns, the packets are merged. For
> applications, using the heavyweight mode is very simple, and they don't

Sorry for the typo; it should be 'lightweight mode'.

> need to allocate any GRO table beforehand. The lightweight mode gives more

Sorry for the typo; it should be 'heavyweight mode'.

> flexibility to applications. Applications need to create a GRO table before
> invoking the lightweight mode API, rte_gro_reassemble, to merge packets.

Sorry for the typo; it should be 'heavyweight mode'.

> Besides, rte_gro_reassemble processes just one packet at a time. Whether or
> not the packet is merged successfully, it is stored in the GRO table.
> When applications want these processed packets, they need to manually flush
> them from the GRO table. You can see more details in the next patch
> 'add Generic Receive Offload API framework'.
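
To make the difference a bit more concrete, the two modes roughly look like
the sketch below from the application's point of view. Only
rte_gro_reassemble_burst and rte_gro_reassemble are named in this thread; the
table/flush helper, the parameter argument and all prototypes here are just
placeholders, the real definitions are in rte_gro.h in the first patch.

	#include <stdint.h>
	#include <rte_mbuf.h>
	#include <rte_gro.h>	/* added by the first patch */

	/*
	 * Lightweight mode: hand a burst of packets to the library and get
	 * the merged burst back; no GRO table has to be allocated first.
	 * 'param' is a placeholder for whatever reassembly parameters the
	 * real API takes.
	 */
	static uint16_t
	gro_lightweight(struct rte_mbuf **pkts, uint16_t nb_pkts, void *param)
	{
		/* returns the number of packets left after merging */
		return rte_gro_reassemble_burst(pkts, nb_pkts, param);
	}

	/*
	 * Heavyweight mode: create a GRO table once, feed packets into it
	 * one at a time, and flush the processed packets out whenever the
	 * application wants them back. gro_tbl_flush() is a placeholder
	 * name for the flush API.
	 */
	static uint16_t
	gro_heavyweight(void *gro_tbl, struct rte_mbuf **pkts,
			uint16_t nb_pkts, struct rte_mbuf **out, uint16_t out_num)
	{
		uint16_t i;

		for (i = 0; i < nb_pkts; i++)
			/* merged or not, the packet is kept in the table */
			rte_gro_reassemble(pkts[i], gro_tbl);

		/* hand the processed packets back to the application */
		return gro_tbl_flush(gro_tbl, out, out_num);
	}

In the lightweight mode everything happens inside one call, while in the
heavyweight mode the application decides when merged packets leave the table
(e.g. flushing them every N polls or on a timeout).
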
> 
> > 
> > > 
> > > This patchset is to support TCP/IPv4 GRO in DPDK. The first patch is to
> > > provide a GRO API framework. The second patch is to support TCP/IPv4 GRO.
> > > The last patch demonstrates how to use GRO library in app/testpmd.
> > 
> > In which mode?
> 
> Testpmd just demonstrates the usage of the lightweight mode.
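
For reference, in the iperf setup described further down, testpmd can be
launched roughly like the line below (core list, memory channel count and
vhost-user socket path are only examples, not taken from the patches). GRO is
then switched on for the physical port from the testpmd prompt, using the
command added by the third patch:

	./testpmd -l 1-2 -n 4 --vdev 'net_vhost0,iface=/tmp/vhost-user0' \
		-- -i --forward-mode=io
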
> 
> > 
> > > 
> > > We perform two iperf tests (with DPDK GRO and without DPDK GRO) to see
> > > the performance gains from DPDK GRO. Specifically, the experiment
> > > environment is:
> > > a. Two 10Gbps physical ports (p0 and p1) on one host are linked together;
> > > b. p0 is in network namespace ns1 and its IP is 1.1.2.3. The iperf
> > > client runs on p0 and sends TCP/IPv4 packets;
> > > c. testpmd runs on p1. Besides, testpmd has a vdev which connects to a
> > > VM via vhost-user and virtio-net. The VM runs the iperf server, whose
> > > IP is 1.1.2.4, and its OS is Ubuntu 14.04;
> > > d. p0 has TSO enabled; the VM has kernel GRO disabled; testpmd runs in iofwd mode.
> > > iperf client and server use the following commands:
> > > 	- client: ip netns exec ns1 iperf -c 1.1.2.4 -i2 -t 60 -f g -m
> > > 	- server: iperf -s -f g
> > > Two test cases are:
> > > a. w/o DPDK GRO: run testpmd without GRO
> > > b. w/ DPDK GRO: testpmd enables GRO on p1
> > > Result:
> > > With GRO, the throughput improvement is around 40%.
> > 
> > Have you tried running several pairs of iperf-s and iperf-c tests (on 40Gb
> > NICs)? That can prove not only the performance, but also the functional
> > correctness.
> 
> Besides the one-pair scenario, I have only tried two pairs of iperf-s and
> iperf-c. Thanks for your advice, and I will do more tests in the next patch.
> 
> 
> Thanks,
> Jiayu
> 
> > 
> > Thanks,
> > Jianfeng
> > 
> > > 
> > > Change log
> > > ==========
> > > v5:
> > > - fix some bugs
> > > - fix coding style issues
> > > v4:
> > > - implement DPDK GRO as a library used by applications
> > > - introduce lightweight and heavyweight working modes to enable
> > > 	fine-grained control for applications
> > > - replace cuckoo hash tables with simpler table structure
> > > v3:
> > > - fix compilation issues.
> > > v2:
> > > - provide generic reassembly function;
> > > - implement GRO as a device capability:
> > > add APIs for devices to support GRO;
> > > add APIs for applications to enable/disable GRO;
> > > - update testpmd example.
> > > 
> > > Jiayu Hu (3):
> > >    lib: add Generic Receive Offload API framework
> > >    lib/gro: add TCP/IPv4 GRO support
> > >    app/testpmd: enable TCP/IPv4 GRO
> > > 
> > >   app/test-pmd/cmdline.c       |  45 ++++
> > >   app/test-pmd/config.c        |  29 +++
> > >   app/test-pmd/iofwd.c         |   6 +
> > >   app/test-pmd/testpmd.c       |   3 +
> > >   app/test-pmd/testpmd.h       |  11 +
> > >   config/common_base           |   5 +
> > >   lib/Makefile                 |   1 +
> > >   lib/librte_gro/Makefile      |  51 +++++
> > >   lib/librte_gro/rte_gro.c     | 248 ++++++++++++++++++++
> > >   lib/librte_gro/rte_gro.h     | 217 ++++++++++++++++++
> > >   lib/librte_gro/rte_gro_tcp.c | 527 +++++++++++++++++++++++++++++++++++++++++++
> > >   lib/librte_gro/rte_gro_tcp.h | 210 +++++++++++++++++
> > >   mk/rte.app.mk                |   1 +
> > >   13 files changed, 1354 insertions(+)
> > >   create mode 100644 lib/librte_gro/Makefile
> > >   create mode 100644 lib/librte_gro/rte_gro.c
> > >   create mode 100644 lib/librte_gro/rte_gro.h
> > >   create mode 100644 lib/librte_gro/rte_gro_tcp.c
> > >   create mode 100644 lib/librte_gro/rte_gro_tcp.h
> > > 

