[dpdk-users] how can i know the reason why the rx_nombuf counter increases?

Stephen Hemminger stephen at networkplumber.org
Tue Sep 8 17:43:48 CEST 2020


On Tue, 8 Sep 2020 07:40:58 +0000
"平岡 冠二" <hiraoka737 at oki.com> wrote:

> Hi All,
> 
> This is my first post here. I have an issue and would like your help.
> 
> HW Environment:
>  HP DL360 Gen10 (Xeon Silver 4208 @ 2.1GHz)
>  HPE Ethernet 10Gb 2Port NIC 562T (Intel X550T, driver=5.1.0-k-rh7.6, firmware=10.51.3)
> 
> SW Environment:
>  DPDK 20.08
>  Red Hat Enterprise Linux Server release 7.6 (Maipo)
>  Number of mbufs is 1024 * 1024 (plenty!)
> 
> I built a packet capture application with DPDK 20.08 that receives 1.2 Gbps (1 Mfps) of
> packets on the server described above, but rarely (about once per 100 hours) it
> encounters packet loss.
> 
> To find the reason, I used rte_eth_stats_get() and found that the imissed and rx_nombuf
> counters increased when the packet loss happened.
> 
> Does this mean that the CPU power was insufficient? Or that my application occasionally
> slows down? Or neither? Does anyone know how I can determine that?

The counter imissed means the application cannot keep up with the packet rate.
The counter rx_nombuf means the mbuf pool was not big enough (or your application is
leaking mbufs).

> on a side note...
> 
> When I wrote the captured packets to an SSD drive, packet loss rarely happened, but
> when I wrote them to a SAS HDD drive (which is slower than the SSD), no packet loss
> occurred (for at least one week). It was very mysterious...

Not at all surprising; packet capture is limited by the speed of writing to the file.

