[dpdk-users] scheduler issue

Alex Kiselev alex at therouter.net
Wed Nov 25 16:04:48 CET 2020


On 2020-11-24 16:34, Alex Kiselev wrote:
> Hello,
> 
> I am facing a problem with the scheduler library in DPDK 18.11.10 with
> the default scheduler settings (RED is off).
> It seems like some of the pipes (last time it was 4 out of 600 pipes)
> start incorrectly dropping most of the traffic after a couple of days
> of normal operation.
> 
> So far I've checked that there are no mbuf leaks or any
> other errors in my code, and I am sure that traffic enters the
> problematic pipes.
> Also, switching the traffic at runtime to pipes of another port
> restores the traffic flow.
> 
> How do I approach debugging this issue?
> 
> I've tried using rte_sched_queue_read_stats(), but it doesn't give
> me counters that accumulate values (packet drops, for example);
> it gives me some kind of current values, and after a couple of seconds
> those values are reset to zero, so I can't conclude anything from that API.
> 
> I would appreciate any ideas and help.
> Thanks.
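
A note on the stats question quoted above: in 18.11 the queue read call is
read-and-clear, so accumulation has to be done by the caller. A minimal,
untested sketch, assuming RTE_SCHED_COLLECT_STATS is compiled in and that the
application already knows the index of the pipe's first queue (the helper name
below is made up; only the rte_sched_* names are DPDK API):

   #include <stdint.h>
   #include <rte_sched.h>

   struct pipe_counters {
           uint64_t pkts;
           uint64_t pkts_dropped;
   };

   /* first_queue_id: index of the pipe's first queue within the port */
   static int
   accumulate_pipe_stats(struct rte_sched_port *port, uint32_t first_queue_id,
                         struct pipe_counters *acc)
   {
           struct rte_sched_queue_stats qs;
           uint16_t qlen;
           uint32_t i;

           for (i = 0; i < RTE_SCHED_QUEUES_PER_PIPE; i++) {
                   if (rte_sched_queue_read_stats(port, first_queue_id + i,
                                                  &qs, &qlen) != 0)
                           return -1;
                   /* the read clears the library counters, so sum them here */
                   acc->pkts += qs.n_pkts;
                   acc->pkts_dropped += qs.n_pkts_dropped;
           }
           return 0;
   }

Calling this periodically from a slow-path thread and keeping the totals per
pipe should give drop counters that actually accumulate.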

The problematic pipes had a very low bandwidth limit (1 Mbit/s). Also,
subport 0 of port 13, to which those pipes belong, is configured with
oversubscription, and CONFIG_RTE_SCHED_SUBPORT_TC_OV is disabled.

Could congestion at that subport be the reason for the problem?
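
One way to look into that could be the subport counters, which would show
drops happening at the subport level rather than at the pipes. A rough,
untested sketch using only the documented read call (which also clears the
counters on read):

   #include <stdio.h>
   #include <rte_sched.h>

   static void
   print_subport_drops(struct rte_sched_port *port, uint32_t subport_id)
   {
           struct rte_sched_subport_stats ss;
           uint32_t tc_ov, tc;

           if (rte_sched_subport_read_stats(port, subport_id, &ss, &tc_ov) != 0)
                   return;

           for (tc = 0; tc < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; tc++)
                   printf("subport %u tc %u: pkts %u dropped %u\n",
                          subport_id, tc, ss.n_pkts_tc[tc],
                          ss.n_pkts_tc_dropped[tc]);
   }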

How much overhead and performance degradation would enabling the
CONFIG_RTE_SCHED_SUBPORT_TC_OV feature add?
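
For reference, in 18.11 this is a compile-time option, so trying it would mean
rebuilding DPDK with the flag switched on in the build config (assuming the
make-based build):

   # config/common_base, then rebuild DPDK and the application
   CONFIG_RTE_SCHED_SUBPORT_TC_OV=y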

Configuration:

   #
   # QoS Scheduler Profiles
   #
   hqos add profile  1 rate    8 K size 1000000 tc period 40
   hqos add profile  2 rate  400 K size 1000000 tc period 40
   hqos add profile  3 rate  600 K size 1000000 tc period 40
   hqos add profile  4 rate  800 K size 1000000 tc period 40
   hqos add profile  5 rate    1 M size 1000000 tc period 40
   hqos add profile  6 rate 1500 K size 1000000 tc period 40
   hqos add profile  7 rate    2 M size 1000000 tc period 40
   hqos add profile  8 rate    3 M size 1000000 tc period 40
   hqos add profile  9 rate    4 M size 1000000 tc period 40
   hqos add profile 10 rate    5 M size 1000000 tc period 40
   hqos add profile 11 rate    6 M size 1000000 tc period 40
   hqos add profile 12 rate    8 M size 1000000 tc period 40
   hqos add profile 13 rate   10 M size 1000000 tc period 40
   hqos add profile 14 rate   12 M size 1000000 tc period 40
   hqos add profile 15 rate   15 M size 1000000 tc period 40
   hqos add profile 16 rate   16 M size 1000000 tc period 40
   hqos add profile 17 rate   20 M size 1000000 tc period 40
   hqos add profile 18 rate   30 M size 1000000 tc period 40
   hqos add profile 19 rate   32 M size 1000000 tc period 40
   hqos add profile 20 rate   40 M size 1000000 tc period 40
   hqos add profile 21 rate   50 M size 1000000 tc period 40
   hqos add profile 22 rate   60 M size 1000000 tc period 40
   hqos add profile 23 rate  100 M size 1000000 tc period 40
   hqos add profile 24 rate   25 M size 1000000 tc period 40
   hqos add profile 25 rate   50 M size 1000000 tc period 40

   #
   # Port 13
   #
   hqos add port 13 rate 40 G mtu 1522 frame overhead 24 queue sizes 64 64 64 64
   hqos add port 13 subport 0 rate 1500 M size 1000000 tc period 10
   hqos add port 13 subport 0 pipes 3000 profile 2
   hqos add port 13 subport 0 pipes 3000 profile 5
   hqos add port 13 subport 0 pipes 3000 profile 6
   hqos add port 13 subport 0 pipes 3000 profile 7
   hqos add port 13 subport 0 pipes 3000 profile 9
   hqos add port 13 subport 0 pipes 3000 profile 11
   hqos set port 13 lcore 5


