[dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710 NICs for some RX mbuf sizes

Take Ceara dumitru.ceara at gmail.com
Tue Jul 19 16:58:54 CEST 2016


Hi Beilei,

On Tue, Jul 19, 2016 at 11:31 AM, Xing, Beilei <beilei.xing at intel.com> wrote:
> Hi Ceara,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Take Ceara
>> Sent: Tuesday, July 19, 2016 12:14 AM
>> To: Zhang, Helin <helin.zhang at intel.com>
>> Cc: Wu, Jingjing <jingjing.wu at intel.com>; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [dpdk-users] RSS Hash not working for XL710/X710
>> NICs for some RX mbuf sizes
>>
>> Hi Helin,
>>
>> On Mon, Jul 18, 2016 at 5:15 PM, Zhang, Helin <helin.zhang at intel.com>
>> wrote:
>> > Hi Ceara
>> >
>> > Could you help to let me know your firmware version?
>>
>> # ethtool -i p7p1 | grep firmware
>> firmware-version: f4.40.35115 a1.4 n4.53 e2021
>>
>> > And could you help to try with the standard DPDK example application,
>> > such as testpmd, to see if there is the same issue?
>> > Basically we always set the same size for both the RX and TX buffers, like
>> > the default of 2048 used by a lot of applications.
>>
>> I'm a bit lost in the testpmd CLI. I enabled RSS, configured 2 RX queues per
>> port and started sending traffic with single segment packets of size 2K, but I
>> didn't figure out how to actually verify that the RSS hash is correctly set.
>> Please let me know if I should do it in a different way.
>>
>> testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 2048 -i [...]
>>
>> testpmd> port stop all
>> Stopping ports...
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex
>> Port 1 Link Up - speed 40000 Mbps - full-duplex
>> Done
>>
>> testpmd> port config all txq 2
>>
>> testpmd> port config all rss all
>>
>> testpmd> port config all max-pkt-len 2048
>> testpmd> port start all
>> Configuring Port 0 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=0).
>> Port 0: 3C:FD:FE:9D:BE:F0
>> Configuring Port 1 (socket 0)
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_set_tx_function_flag(): Vector tx can be enabled on this txq.
>> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
>> satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
>> PMD: i40e_set_tx_function(): Vector tx finally be used.
>> PMD: i40e_set_rx_function(): Using Vector Scattered Rx callback (port=1).
>> Port 1: 3C:FD:FE:9D:BF:30
>> Checking link statuses...
>> Port 0 Link Up - speed 40000 Mbps - full-duplex
>> Port 1 Link Up - speed 40000 Mbps - full-duplex
>> Done
>>
>> testpmd> set txpkts 2048
>> testpmd> show config txpkts
>> Number of segments: 1
>> Segment sizes: 2048
>> Split packet: off
>>
>>
>> testpmd> start tx_first
>>   io packet forwarding - CRC stripping disabled - packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=2
>>   RX queues=1 - RX desc=128 - RX free threshold=32
>
> In testpmd, when RX queues=1, RSS will be disabled, so could you re-configure the number of RX queues (>1) and try again with testpmd?

I changed the way I run testpmd to:

testpmd -c 0x331 -w 0000:82:00.0 -w 0000:83:00.0 -- --mbuf-size 1152
--rss-ip --rxq=2 --txpkts 1024 -i

As far as I understand, this will allocate mbufs of the same size I was
using in my test (--mbuf-size seems to include the mbuf headroom, so
1152 = 1024 + 128 bytes of headroom).

testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 18817613   RX-missed: 5          RX-bytes:  19269115888
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 18818064   TX-errors: 0          TX-bytes:  19269567464
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 18818392   RX-missed: 5          RX-bytes:  19269903360
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 18817979   TX-errors: 0          TX-bytes:  19269479424
  ############################################################################

Traffic is sent/received. However, I couldn't find any way to verify
that the incoming mbufs actually have the mbuf->hash.rss field set,
except by starting test-pmd under gdb and setting a breakpoint in the
io fwd engine. After doing that I noticed that none of the incoming
packets has the PKT_RX_RSS_HASH flag set in ol_flags... I guess for
some reason test-pmd doesn't actually configure RSS in this case, but I
fail to see where.
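
The kind of check I was trying to do on the received mbufs is roughly
the following (an illustrative sketch, not the actual testpmd io
engine; the helper name is made up):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Poll one RX queue and print the RSS hash of each received packet,
     * if the PMD reported one via PKT_RX_RSS_HASH. */
    static void
    dump_rss_hash(uint8_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[32];
            uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
            uint16_t i;

            for (i = 0; i < nb_rx; i++) {
                    if (pkts[i]->ol_flags & PKT_RX_RSS_HASH)
                            printf("port %u queue %u: rss=0x%08x\n",
                                   port_id, queue_id,
                                   (unsigned)pkts[i]->hash.rss);
                    else
                            printf("port %u queue %u: no RSS hash\n",
                                   port_id, queue_id);
                    rte_pktmbuf_free(pkts[i]);
            }
    }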

Thanks,
Dumitru

>
> Regards,
> Beilei
>
>>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>>   TX queues=2 - TX desc=512 - TX free threshold=32
>>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>>   TX RS bit threshold=32 - TXQ flags=0xf01
>> testpmd> stop
>> Telling cores to stop...
>> Waiting for lcores to finish...
>>
>>   ---------------------- Forward statistics for port 0  ----------------------
>>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   ---------------------- Forward statistics for port 1  ----------------------
>>   RX-packets: 32             RX-dropped: 0             RX-total: 32
>>   TX-packets: 32             TX-dropped: 0             TX-total: 32
>>   ----------------------------------------------------------------------------
>>
>>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>>   RX-packets: 64             RX-dropped: 0             RX-total: 64
>>   TX-packets: 64             TX-dropped: 0             TX-total: 64
>>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> Done.
>> testpmd>
>>
>>
>> >
>> > Definitely we will try to reproduce that issue with testpmd, using 2K
>> > mbufs. Hopefully we can find the root cause, or tell you that it's not an issue.
>> >
>>
>> I forgot to mention that in my test code the TX/RX_MBUF_SIZE macros also
>> include the mbuf headroom and the size of the mbuf structure.
>> Therefore testing with 2K mbufs in my scenario actually creates mempools of
>> objects of size 2K + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM.
>>
>> > Thank you very much for your reporting!
>> >
>> > BTW, dev at dpdk.org is the right list (rather than users at dpdk.org)
>> > for sending questions/issues like this.
>>
>> Thanks, I'll keep that in mind.
>>
>> >
>> > Regards,
>> > Helin
>>
>> Regards,
>> Dumitru
>>



-- 
Dumitru Ceara

