[dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

Zhang, Helin helin.zhang at intel.com
Mon Oct 26 02:57:44 CET 2015


Hi Arnon

Sorry for any inconvenience!
Yes, we are aware of the statistics issues there, and a fix is now in progress. Hopefully we will have some progress to share soon.
Thank you very much for reporting it!

Regards,
Helin

From: Arnon Warshavsky [mailto:arnon at qwilt.com]
Sent: Monday, October 26, 2015 2:51 AM
To: Zhang, Helin
Cc: Martin Weiser; dev at dpdk.org; Eimear Morrissey
Subject: Re: [dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

Hi Helin
I would like to add my input for this as well.
I encountered the same issue, and as you suggested I updated to the latest firmware and changed the rx and tx ring sizes to 1024.
The drop counters still do not increment as they should.
I inject 10 Mpps into an X710 NIC (a 4-port card, 10 Mpps on each port) read by a simple rx-only DPDK app.
I read 10 Mpps from the in-packets counter and no drop counter increments, while my application counts only 8 Mpps per port actually arriving at the app.
Running the same test on an X520, I get 8 Mpps from the in-packets counter and 2 Mpps from the drop counter, as it should be.
Something seems to be broken in the error/discard accounting.
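
For reference, the receive side of the test app is essentially the following (a simplified sketch; names such as rx_count are mine):

    /* rx-only loop, one lcore per port; rx_count[port] is later compared
     * against the ipackets value reported by rte_eth_stats_get(). */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static uint64_t rx_count[RTE_MAX_ETHPORTS];

    static void
    rx_loop(uint8_t port_id)
    {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb, i;

            for (;;) {
                    nb = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
                    rx_count[port_id] += nb;
                    for (i = 0; i < nb; i++)
                            rte_pktmbuf_free(bufs[i]);
            }
    }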

/Arnon


On Fri, Oct 23, 2015 at 3:42 AM, Zhang, Helin <helin.zhang at intel.com> wrote:
Hi Martin

Could you try bigger RX/TX ring sizes rather than the default ones?
For example, could you try 1024 for the RX ring size, and 512 or 1024 for the TX ring size?
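
For instance, assuming one queue per port, the setup would look something like this (a sketch; the NULL conf arguments just take the driver defaults):

    #define RX_RING_SIZE 1024
    #define TX_RING_SIZE 1024    /* or 512 */

    ret = rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE,
                    rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
    if (ret < 0)
            rte_exit(EXIT_FAILURE, "rx queue setup failed\n");
    ret = rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
                    rte_eth_dev_socket_id(port_id), NULL);
    if (ret < 0)
            rte_exit(EXIT_FAILURE, "tx queue setup failed\n");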

In addition, please make sure you are using the latest version of NIC firmware.

Regards,
Helin

> -----Original Message-----
> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
> Sent: Thursday, October 22, 2015 3:59 PM
> To: Zhang, Helin
> Cc: dev at dpdk.org
> Subject: Re: i40e: problem with rx packet drops not accounted in statistics
>
> Hi Helin,
>
> good to know that there is work being done on that issue.
> By performance problem I mean that these packet discards start to appear at
> low bandwidths where I would not expect any packets to be dropped. On the
> same system we can reach higher bandwidths using ixgbe NICs without losing a
> single packet, so seeing packets being lost at only ~5 GBit/s and ~1.5 Mpps on a
> 40Gb adapter worries me a bit.
>
> Best regards,
> Martin
>
>
> On 22.10.15 02:16, Zhang, Helin wrote:
> > Hi Martin
> >
> > Yes, we have a developer working on it now, and hopefully he will have
> > something on this fix soon.
> > But what do you mean by the performance problem? Do you mean the
> > performance numbers are not as good as expected, or something else?
> >
> > Regards,
> > Helin
> >
> >> -----Original Message-----
> >> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
> >> Sent: Wednesday, October 21, 2015 4:44 PM
> >> To: Zhang, Helin
> >> Cc: dev at dpdk.org
> >> Subject: Re: i40e: problem with rx packet drops not accounted in
> >> statistics
> >>
> >> Hi Helin,
> >>
> >> any news on this issue? By the way, this is not just a problem with
> >> statistics for us but also a performance problem, since these packet
> >> discards start appearing at a relatively low bandwidth (~5 GBit/s and
> >> ~1.5 Mpps).
> >>
> >> Best regards,
> >> Martin
> >>
> >> On 10.09.15 03:09, Zhang, Helin wrote:
> >>> Hi Martin
> >>>
> >>> Yes, the statistics issue has been reported several times recently.
> >>> We will check the issue and try to fix it or get a workaround soon.
> >>> Thank you very much!
> >>> Regards,
> >>> Helin
> >>>
> >>>> -----Original Message-----
> >>>> From: Martin Weiser [mailto:martin.weiser at allegro-packets.com]
> >>>> Sent: Wednesday, September 9, 2015 7:58 PM
> >>>> To: Zhang, Helin
> >>>> Cc: dev at dpdk.org
> >>>> Subject: i40e: problem with rx packet drops not accounted in
> >>>> statistics
> >>>>
> >>>> Hi Helin,
> >>>>
> >>>> in one of our test setups involving i40e adapters we are
> >>>> experiencing packet drops which are not reflected in the
> >>>> interface statistics.
> >>>> The call to rte_eth_stats_get suggests that all packets were
> >>>> properly received but the total number of packets received through
> >>>> rte_eth_rx_burst is less than the ipackets counter.
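> >>>>
> >>>> (A simplified sketch of the check we do; rx_burst_total is our own
> >>>> running sum over all rte_eth_rx_burst() calls on this port, the
> >>>> name is ours:)
> >>>>
> >>>>     /* needs <stdio.h>, <inttypes.h> and <rte_ethdev.h> */
> >>>>     struct rte_eth_stats stats;
> >>>>
> >>>>     rte_eth_stats_get(port_id, &stats);
> >>>>     if (stats.ipackets > rx_burst_total)
> >>>>             printf("port %u: %" PRIu64 " packets counted as received "
> >>>>                    "but never delivered to the app\n",
> >>>>                    (unsigned)port_id, stats.ipackets - rx_burst_total);
> >>>>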
> >>>> When running, for example, the l2fwd application (l2fwd -c 0xfe -n 4
> >>>> -- -p 0x3) with driver debug messages enabled, the following output
> >>>> is generated for the interface in question:
> >>>>
> >>>> ...
> >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats start
> >>>> *******************
> >>>> PMD: i40e_update_vsi_stats(): rx_bytes:            242624340000
> >>>> PMD: i40e_update_vsi_stats(): rx_unicast:          167790000
> >>>> PMD: i40e_update_vsi_stats(): rx_multicast:        0
> >>>> PMD: i40e_update_vsi_stats(): rx_broadcast:        0
> >>>> PMD: i40e_update_vsi_stats(): rx_discards:         1192557
> >>>> PMD: i40e_update_vsi_stats(): rx_unknown_protocol: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_bytes:            0
> >>>> PMD: i40e_update_vsi_stats(): tx_unicast:          0
> >>>> PMD: i40e_update_vsi_stats(): tx_multicast:        0
> >>>> PMD: i40e_update_vsi_stats(): tx_broadcast:        0
> >>>> PMD: i40e_update_vsi_stats(): tx_discards:         0
> >>>> PMD: i40e_update_vsi_stats(): tx_errors:           0
> >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats end
> >>>> *******************
> >>>> PMD: i40e_dev_stats_get(): ***************** PF stats start
> >>>> *******************
> >>>> PMD: i40e_dev_stats_get(): rx_bytes:            242624340000
> >>>> PMD: i40e_dev_stats_get(): rx_unicast:          167790000
> >>>> PMD: i40e_dev_stats_get(): rx_multicast:        0
> >>>> PMD: i40e_dev_stats_get(): rx_broadcast:        0
> >>>> PMD: i40e_dev_stats_get(): rx_discards:         0
> >>>> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 167790000
> >>>> PMD: i40e_dev_stats_get(): tx_bytes:            0
> >>>> PMD: i40e_dev_stats_get(): tx_unicast:          0
> >>>> PMD: i40e_dev_stats_get(): tx_multicast:        0
> >>>> PMD: i40e_dev_stats_get(): tx_broadcast:        0
> >>>> PMD: i40e_dev_stats_get(): tx_discards:         0
> >>>> PMD: i40e_dev_stats_get(): tx_errors:           0
> >>>> PMD: i40e_dev_stats_get(): tx_dropped_link_down:     0
> >>>> PMD: i40e_dev_stats_get(): crc_errors:               0
> >>>> PMD: i40e_dev_stats_get(): illegal_bytes:            0
> >>>> PMD: i40e_dev_stats_get(): error_bytes:              0
> >>>> PMD: i40e_dev_stats_get(): mac_local_faults:         1
> >>>> PMD: i40e_dev_stats_get(): mac_remote_faults:        1
> >>>> PMD: i40e_dev_stats_get(): rx_length_errors:         0
> >>>> PMD: i40e_dev_stats_get(): link_xon_rx:              0
> >>>> PMD: i40e_dev_stats_get(): link_xoff_rx:             0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[0]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[0]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[1]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[1]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[2]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[2]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[3]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[3]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[4]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[4]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[5]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[5]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[6]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[6]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[7]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[7]:     0
> >>>> PMD: i40e_dev_stats_get(): link_xon_tx:              0
> >>>> PMD: i40e_dev_stats_get(): link_xoff_tx:             0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[0]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[0]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[1]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[1]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[2]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[2]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[3]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[3]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[4]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[4]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[5]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[5]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[6]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[6]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[7]:      0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[7]:     0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
> >>>> PMD: i40e_dev_stats_get(): rx_size_64:               0
> >>>> PMD: i40e_dev_stats_get(): rx_size_127:              0
> >>>> PMD: i40e_dev_stats_get(): rx_size_255:              0
> >>>> PMD: i40e_dev_stats_get(): rx_size_511:              0
> >>>> PMD: i40e_dev_stats_get(): rx_size_1023:             0
> >>>> PMD: i40e_dev_stats_get(): rx_size_1522:             167790000
> >>>> PMD: i40e_dev_stats_get(): rx_size_big:              0
> >>>> PMD: i40e_dev_stats_get(): rx_undersize:             0
> >>>> PMD: i40e_dev_stats_get(): rx_fragments:             0
> >>>> PMD: i40e_dev_stats_get(): rx_oversize:              0
> >>>> PMD: i40e_dev_stats_get(): rx_jabber:                0
> >>>> PMD: i40e_dev_stats_get(): tx_size_64:               0
> >>>> PMD: i40e_dev_stats_get(): tx_size_127:              0
> >>>> PMD: i40e_dev_stats_get(): tx_size_255:              0
> >>>> PMD: i40e_dev_stats_get(): tx_size_511:              0
> >>>> PMD: i40e_dev_stats_get(): tx_size_1023:             0
> >>>> PMD: i40e_dev_stats_get(): tx_size_1522:             0
> >>>> PMD: i40e_dev_stats_get(): tx_size_big:              0
> >>>> PMD: i40e_dev_stats_get(): mac_short_packet_dropped: 0
> >>>> PMD: i40e_dev_stats_get(): checksum_error:           0
> >>>> PMD: i40e_dev_stats_get(): fdir_match:               0
> >>>> PMD: i40e_dev_stats_get(): ***************** PF stats end
> >>>> ********************
> >>>> ...
> >>>>
> >>>> The count for rx_unicast is exactly the number of packets we would
> >>>> have expected, and the count for rx_discards in the VSI stats is
> >>>> exactly the number of packets we are missing.
> >>>> The question is why this number shows up only in the VSI stats and
> >>>> not in the PF stats, and of course why packets which were obviously
> >>>> discarded are still counted in the rx_unicast stats.
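> >>>>
> >>>> (From the application side we would expect such hardware discards to
> >>>> show up in the generic struct rte_eth_stats as well, e.g. in the
> >>>> imissed or ierrors fields, so that a driver-independent check like
> >>>> the following sketch would see them; at the moment both stay at
> >>>> zero:)
> >>>>
> >>>>     struct rte_eth_stats stats;
> >>>>
> >>>>     rte_eth_stats_get(port_id, &stats);
> >>>>     /* hardware-level rx drops we would like to see accounted here */
> >>>>     uint64_t hw_drops = stats.imissed + stats.ierrors;
> >>>>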
> >>>> This test was performed using DPDK 2.1 and the firmware of the
> >>>> XL710 is the latest one (FW 4.40 API 1.4 NVM 04.05.03).
> >>>> Do you have an idea what might be going on?
> >>>>
> >>>> Best regards,
> >>>> Martin
> >>>>
> >>>>
>



--

Arnon Warshavsky
Qwilt | work: +972-72-2221634 | mobile: +972-50-8583058 | arnon at qwilt.com

