[dpdk-users] Unable to merge packets using GRO feature

Wisam Monther wisamm at mellanox.com
Thu Sep 7 09:19:13 CEST 2017


Hi Jiayu,

First of all, I appreciate your efforts with me :) 
The thing is that the packets were not dropped when I used a packet of 1500 bytes in total, sent as three segments of 500 bytes.
So even with GRO enabled, testpmd forwards the received segments as they are.
"So from this behavior we conclude that the segment limit was not reached, and also that GRO did not work."

Could there be some constraints in the MLX driver or MLX NICs regarding GRO?

Best regards,
Wisam Jaddo

-----Original Message-----
From: Hu, Jiayu [mailto:jiayu.hu at intel.com] 
Sent: Thursday, September 7, 2017 4:15 AM
To: Wisam Monther
Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
Subject: RE: Unable to merge packets using GRO feature

Hi Wisam,

When we performed a similar test on i40e, we found an important thing:
there is a maximum limit for mbuf->nb_segs. When the number of mbuf segments of a packet is larger than this limit, the packet will be dropped.

The GROed packet has multiple mbuf segments. vhost-dpdk has no limit on the number of segments, but that is not the same for all NIC drivers. So I suggest you check whether the MLX driver has this limit too.
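
For illustration, here is a minimal sketch (an editor's addition, not part of the original mail) of how an application could count the mbuf segments of a GROed packet before transmitting it; the limit value is a hypothetical placeholder that should be replaced by the real constraint of the PMD in use:

    #include <rte_mbuf.h>

    /* Hypothetical per-driver TX segment limit; check the PMD documentation
     * (or rte_eth_dev_info_get()) for the real value. */
    #define ASSUMED_MAX_TX_SEGS 8

    /* Walk the mbuf chain, count its segments, and report whether the
     * packet is likely to be accepted by the driver for TX. The first
     * segment's nb_segs field should already hold the same count. */
    static int
    pkt_fits_tx_seg_limit(const struct rte_mbuf *m)
    {
        uint16_t segs = 0;

        while (m != NULL) {
            segs++;
            m = m->next;
        }
        return segs <= ASSUMED_MAX_TX_SEGS;
    }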

Thanks,
Jiayu

> -----Original Message-----
> From: Wisam Monther [mailto:wisamm at mellanox.com]
> Sent: Wednesday, September 6, 2017 7:18 PM
> To: Hu, Jiayu <jiayu.hu at intel.com>
> Cc: users at dpdk.org; Raslan Darawsheh <rasland at mellanox.com>; Shahaf 
> Shuler <shahafs at mellanox.com>
> Subject: RE: Unable to merge packets using GRO feature
> 
> Hi Jiayu,
> 
> Any comment regarding what I described?
> 
> BRs,
> Wisam Jaddo
> 
> -----Original Message-----
> From: Wisam Monther
> Sent: Tuesday, August 29, 2017 10:49 AM
> To: 'Hu, Jiayu'
> Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
> Subject: RE: Unable to merge packets using GRO feature
> 
> Hi Jiayu,
> 
> Let me re-describe my topology in a better way; see the attached image.
> It is connected physically as described, and I'm using it with MTU
> size=1500 in order to make sure every port can handle it.
> 
> And this is the exact connection between the interfaces, so I'm sure
> the packets received on NIC B of machine B are coming from the testpmd
> with GRO enabled.
> 
> But even so, I intentionally made the packet size 1500, to be divided
> into 3 fragments of 500 bytes using TSO and then merged back into a
> 1500-byte packet by GRO, so that every machine can manage this MTU size.
> 
> The thing is that I need to test GRO with each setup type: "VMs,
> bare metal, passthrough, VFs, etc.".
> So I need to see the feature work at least once; it would be nice to
> send IPv4/TCP fragmented packets using "scapy, iperf"
> and check the forwarded packets from GRO directly.
> 
> BRs,
> Wisam Jaddo
> 
> -----Original Message-----
> From: Hu, Jiayu [mailto:jiayu.hu at intel.com]
> Sent: Tuesday, August 29, 2017 4:51 AM
> To: Wisam Monther
> Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
> Subject: RE: Unable to merge packets using GRO feature
> 
> Hi Wisam,
> 
> In the picture of your experiment topology, I guess NIC B in
> machine B is physically connected to another interface. If so, the
> packets that tcpdump captures are from the physical link. However,
> GROed packets are large multi-segment packets, which are larger
> than the MTU. I am not sure whether you have made other configurations
> to enable these large packets to pass over the physical link, and
> different NICs have different jumbo frame size limits. So would you
> please check this? BTW, you can use a VM to test the GRO feature, since large packets can always be passed to a VM.
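> 
> For illustration, a minimal sketch (an editor's addition, not from the original mail) of enabling jumbo frame reception on a DPDK port so that frames larger than 1500 bytes can pass; the 9000-byte size and the port/queue parameters are assumed example values:
> 
>     #include <rte_ethdev.h>
> 
>     /* Configure a port to accept frames larger than the standard MTU.
>      * 9000 is an assumed jumbo size; use what the NIC and link support. */
>     static int
>     enable_jumbo(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
>     {
>         struct rte_eth_conf conf = { 0 };
> 
>         conf.rxmode.jumbo_frame = 1;
>         conf.rxmode.max_rx_pkt_len = 9000;
>         return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
>     }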
> 
> Thanks,
> Jiayu
> 
> > -----Original Message-----
> > From: Wisam Monther [mailto:wisamm at mellanox.com]
> > Sent: Monday, August 28, 2017 3:37 PM
> > To: Hu, Jiayu <jiayu.hu at intel.com>
> > Cc: users at dpdk.org; Raslan Darawsheh <rasland at mellanox.com>; Shahaf 
> > Shuler <shahafs at mellanox.com>
> > Subject: RE: Unable to merge packets using GRO feature
> >
> > Hi Jiayu,
> >
> > I'm sorry for bothering you, but could you confirm that the feature
> > is working properly? Because whatever I did, I couldn't get the
> > merged packets.
> >
> > BRs,
> > Wisam Jaddo
> >
> > -----Original Message-----
> > From: Wisam Monther
> > Sent: Thursday, August 24, 2017 9:15 AM
> > To: 'Hu, Jiayu'
> > Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
> > Subject: RE: Unable to merge packets using GRO feature
> >
> > Hi,
> >
> > I'm using Mellanox NICs, and they support parsing packet types.
> >
> > Best regards,
> > Wisam Jaddo
> >
> > -----Original Message-----
> > From: Hu, Jiayu [mailto:jiayu.hu at intel.com]
> > Sent: Thursday, August 24, 2017 8:47 AM
> > To: Wisam Monther
> > Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
> > Subject: RE: Unable to merge packets using GRO feature
> >
> > Hi,
> >
> > Can you tell me what's the NIC type of the GRO-enabled port?
> >
> > Since the GRO library uses mbuf->packet_type to parse packet headers,
> > applications need to fill this value before calling the GRO reassembly APIs.
> > Otherwise, GRO can't work correctly.
> >
> > In the csum forwarding engine of testpmd, packet_type is filled by the NIC driver.
> > The csum forwarding engine won't set this value. So if your NIC
> > doesn't support parsing packet types, the value of packet_type is 0
> > and GRO can't work correctly.
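> >
> > For illustration, a minimal sketch (an editor's addition, not part of the original mail) of filling packet_type in software with rte_net_get_ptype() when the PMD leaves it at 0:
> >
> >     #include <rte_mbuf.h>
> >     #include <rte_net.h>
> >
> >     /* Classify a received mbuf in SW so the GRO library can parse its
> >      * headers even if the driver did not set packet_type. */
> >     static void
> >     fill_ptype_sw(struct rte_mbuf *m)
> >     {
> >         struct rte_net_hdr_lens hdr_lens;
> >
> >         if (m->packet_type == 0)
> >             m->packet_type = rte_net_get_ptype(m, &hdr_lens,
> >                                                RTE_PTYPE_ALL_MASK);
> >     }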
> >
> > BRs,
> > Jiayu
> >
> > > -----Original Message-----
> > > From: Wisam Monther [mailto:wisamm at mellanox.com]
> > > Sent: Tuesday, August 22, 2017 9:25 PM
> > > To: Hu, Jiayu <jiayu.hu at intel.com>
> > > Cc: users at dpdk.org; Raslan Darawsheh <rasland at mellanox.com>; Shahaf
> > > Shuler <shahafs at mellanox.com>
> > > Subject: RE: Unable to merge packets using GRO feature
> > >
> > > Yes it is.
> > > The fragmented packets come from port 1 / NIC B of machine A to
> > > port 1 of NIC A on machine B, so they are received on port '1',
> > > which has GRO enabled on it.
> > >
> > > Best regards,
> > > Wisam Jaddo
> > >
> > > -----Original Message-----
> > > From: Hu, Jiayu [mailto:jiayu.hu at intel.com]
> > > Sent: Tuesday, August 22, 2017 4:21 PM
> > > To: Wisam Monther
> > > Cc: users at dpdk.org; Raslan Darawsheh; Shahaf Shuler
> > > Subject: RE: Unable to merge packets using GRO feature
> > >
> > > Hi,
> > >
> > > > -----Original Message-----
> > > > From: Wisam Monther [mailto:wisamm at mellanox.com]
> > > > Sent: Tuesday, August 22, 2017 7:07 PM
> > > > To: Hu, Jiayu <jiayu.hu at intel.com>
> > > > Cc: users at dpdk.org; Raslan Darawsheh <rasland at mellanox.com>; Shahaf
> > > > Shuler <shahafs at mellanox.com>
> > > > Subject: RE: Unable to merge packets using GRO feature
> > > >
> > > > Hey Jiayu,
> > > >
> > > > Thank you for your reply.
> > > > I tried what you said with csum as the fwd mode.
> > > > Even so, GRO didn't work.
> > > >
> > > > I even tested with a new methodology:
> > > > two machines, with two different NICs in each.
> > > > The methodology that I used to test it is described in the attached file.
> > > >
> > > > What I did from gro side:
> > > > """
> > > > testpmd>gro on 1
> > >
> > > Is the port number of NIC A in machine B '1'? When you enable
> > > GRO for port '1', testpmd only tries to merge packets received
> > > from port '1'.
> > >
> > > BRs,
> > > Jiayu
> > >
> > > > Testpmd>set fwd csum
> > > > Testpmd>start
> > > > """
> > > > And the packet with correct dst mac.
> > > >
> > > > Best regards,
> > > > Wisam Jaddo
> > > > -----Original Message-----
> > > > From: Jiayu Hu [mailto:jiayu.hu at intel.com]
> > > > Sent: Monday, August 21, 2017 12:14 PM
> > > > To: Wisam Monther
> > > > Cc: Thomas Monjalon; users at dpdk.org; Raslan Darawsheh; Shahaf 
> > > > Shuler
> > > > Subject: Re: Unable to merge packets using GRO feature
> > > >
> > > > Hi,
> > > >
> > > > On Mon, Aug 21, 2017 at 07:25:23AM +0000, Wisam Monther wrote:
> > > > > Hello Guys,
> > > > >
> > > > >
> > > > >
> > > > > I hope this finds you well, I’m trying to test the GRO feature.
> > > > > But I’m stuck with this scenario.
> > > > >
> > > > > As you know, GRO only supports TCP/IPv4 packets for now.
> > > > >
> > > > > So I’m trying to test the basic functionality of the feature, as follows:
> > > > >
> > > > > Start testpmd:
> > > > >
> > > > > “””
> > > > >
> > > > > ./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -n 4 \
> > > > >   -w 00:0a.0 -w 00:09.0 -- --burst=64 --mbcache=512 --portmask 0xf \
> > > > >   -i --txd=512 --rxd=512 --nb-cores=9 --rxq=2 --txq=2 --txqflags=0
> > > > >
> > > > > “””
> > > > >
> > > > >
> > > > >
> > > > > Then enable GRO at the two ports:
> > > > >
> > > > > “””
> > > > >
> > > > > Testpmd>gro on 0
> > > > >
> > > > > Testpmd>gro on 1
> > > >
> > > > When using GRO in testpmd, there are the following things to notice:
> > > >
> > > > 1. In testpmd, GRO is supported by the csum forwarding engine.
> > > > Therefore, please use 'set fwd csum' to switch the forwarding engine.
> > > >
> > > > 2. By default, the csum forwarding engine always changes the
> > > > Ethernet addresses. So please make sure that the MAC addresses are correct.
> > > >
> > > > 3. When you enable GRO for port0, the csum forwarding engine will merge
> > > > packets received from port0. If there are no packets from port1
> > > > to port0, you don't need to enable GRO for port1.
> > > >
> > > > 4. The GRO library doesn't re-calculate checksums for merged packets.
> > > > If you want merged packets to have correct checksums, please select
> > > > HW IP and HW TCP checksum calculation for the port to which the
> > > > merged packets are transmitted in the csum forwarding engine.
> > > > This is because the merged packets are multi-segment mbufs, but
> > > > the csum forwarding engine doesn't support calculating checksums
> > > > for multi-segment mbufs in SW. So we need to select HW checksum offloading.
> > > >
> > > > e.g. If the data flow is "packets -> port0 -> port1", the commands used
> > > > in testpmd are:
> > > > 	gro on 0
> > > > 	set fwd csum
> > > > 	csum set ip hw 1
> > > > 	csum set tcp hw 1
> > > >
> > > >
> > > > Besides, you need to make sure that your PMD doesn't use a vector
> > > > TX function, since vector functions don't support checksum offloading.
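> > > >
> > > > For illustration, here is a rough sketch (an editor's addition; testpmd's csum engine does the equivalent internally) of calling the GRO library's lightweight API directly, assuming the rte_gro API of DPDK 17.08; the flow/item sizes are arbitrary example values:
> > > >
> > > >     #include <rte_gro.h>
> > > >
> > > >     /* Merge a burst of received TCP/IPv4 packets in place and return
> > > >      * the number of packets left after merging. */
> > > >     static uint16_t
> > > >     gro_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
> > > >     {
> > > >         struct rte_gro_param param = {
> > > >             .gro_types = RTE_GRO_TCP_IPV4,
> > > >             .max_flow_num = 4,
> > > >             .max_item_per_flow = 32,
> > > >         };
> > > >
> > > >         return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
> > > >     }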
> > > >
> > > > >
> > > > > “””
> > > > >
> > > > >
> > > > >
> > > > > And trying to send a TCP/IPv4 fragmented packet (“a packet with
> > > > > length 1500 fragmented into three packets of 500”):
> > > > >
> > > > > “””
> > > > >
> > > > > from scapy.all import Ether, IP, TCP, get_if_hwaddr, fragment, sendp
> > > > >
> > > > > p = Ether(src=get_if_hwaddr('ens10'), dst='24:8A:07:88:26:6B')/IP()/TCP()
> > > > > p.add_payload('F' * (1500 - len(p)))
> > > > >
> > > > > # Note: fragment() produces IP fragments; the loop variable is renamed
> > > > > # so it does not shadow the fragment() helper.
> > > > > frags = fragment(p, fragsize=500)
> > > > > for frag in frags:
> > > > >     sendp(frag, iface='ens10')
> > > > >
> > > > > “””
> > > > >
> > > > >
> > > > >
> > > > > But testpmd forwards the packets as they are (“doesn’t do any merge”).
> > > > >
> > > > >
> > > > >
> > > > > Tcpdump at the TG side.
> > > > >
> > > > > Sending the fragments using ens10:
> > > > >
> > > > > #tcpdump -i ens10 -vvven
> > > > >
> > > > > 15:45:29.083514 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 538: (tos 0x0, ttl 64, id 1, offset 0, 
> > > > > flags [+], proto Options (0), length 524)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1:  ip-proto-0 504
> > > > >
> > > > > 15:45:29.115266 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 538: (tos 0x0, ttl 64, id 1, offset 504, 
> > > > > flags [+], proto Options (0), length 524)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1: ip-proto-0
> > > > >
> > > > > 15:45:29.147258 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 492: (tos 0x0, ttl 64, id 1, offset 
> > > > > 1008, flags [none], proto Options (0), length 478)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1: ip-proto-0
> > > > >
> > > > >
> > > > >
> > > > > #tcpdump -i ens9 -vvven  /// here the packets forwarded by
> > > > > testpmd will be received:
> > > > >
> > > > > 15:45:29.083996 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 538: (tos 0x0, ttl 64, id 1, offset 0, 
> > > > > flags [+], proto Options (0), length 524)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1:  ip-proto-0 504
> > > > >
> > > > > 15:45:29.115425 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 538: (tos 0x0, ttl 64, id 1, offset 504, 
> > > > > flags [+], proto Options (0), length 524)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1: ip-proto-0
> > > > >
> > > > > 15:45:29.147492 24:8a:07:88:26:5b > 24:8a:07:88:26:6b, 
> > > > > ethertype
> > > > > IPv4 (0x0800), length 492: (tos 0x0, ttl 64, id 1, offset 
> > > > > 1008, flags [none], proto Options (0), length 478)
> > > > >
> > > > >     127.0.0.1 > 127.0.0.1: ip-proto-0
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Am I doing something wrong?! Or is it a bug?
> > > > >
> > > > > => As you see, the tcpdump shows the offset of each fragment,
> > > > > and testpmd prints L4_FRAG, so both recognize that this is a
> > > > > fragmented packet.
> > > >
> > > > The GRO library merges TSOed/GSOed packets, whose IP IDs and TCP
> > > > sequence numbers are both consecutive. If the input packets have the
> > > > same IP IDs, no packets will be merged.
> > > >
> > > > BTW, you can use iperf to test the GRO feature.
> > > >
> > > > Best Regards,
> > > > Jiayu
> > > >
> > > > >
> > > > >
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Wisam Jaddo
> > > > >
> > > > >
> > > > >

