[dpdk-users] Problem With Multi Queue

Freynet, Marc (Nokia - FR) marc.freynet at nokia.com
Wed Jan 27 13:32:59 CET 2016

Normally, you should be able to configure your 10G NIC interface with several RX queues,
with each core reading one queue.

The NIC should offer several hashing functions to ensure that the same "flow" is sent to the same queue and therefore processed by the same core, to avoid reordering PDUs within the same "flow".
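In DPDK this is typically set up when the port is configured: mq_mode selects RSS and rss_hf picks the header fields to hash on. A minimal sketch, assuming the DPDK 2.x-era API; the 4-queue count, ring sizes, and mempool are illustrative placeholders, not taken from your code:

```c
#include <rte_ethdev.h>

/* Sketch: configure one port with 4 RX queues and RSS so that
 * each flow consistently lands on the same queue. */
static int setup_rss_port(uint8_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL,  /* use the driver's default key */
			/* hash on IP addresses and TCP/UDP ports */
			.rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
		},
	};
	const uint16_t nb_rxq = 4, nb_txq = 1;
	uint16_t q;

	if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf) < 0)
		return -1;

	for (q = 0; q < nb_rxq; q++)
		if (rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id),
				NULL, mbuf_pool) < 0)
			return -1;

	if (rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL) < 0)
		return -1;

	return rte_eth_dev_start(port_id);
}
```

If rss_hf is left at 0, the NIC computes no hash and every packet falls into queue 0, which would produce exactly the per-queue counters shown further down.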

"A bowl of rice will raise a benefactor; a bucket of rice will raise an enemy." (Chinese proverb)

Alcatel-Lucent France
Centre de Villarceaux
Route de Villejust
91620 NOZAY France

Tel:  +33 (0)1 6040 1960
Intranet: 2103 1960

marc.freynet at nokia.com

-----Original Message-----
From: users [mailto:users-bounces at dpdk.org] On Behalf Of EXT Hamed Zaghaghi
Sent: mercredi 27 janvier 2016 12:05
To: users at dpdk.org
Subject: [dpdk-users] Problem With Multi Queue


I'm implementing offline packet feature extraction using DPDK. The NIC is described below:

0000:0b:00.0 '82599EB 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe

10-Gigabit traffic (about 2.5 million packets per second) is being received, and my application extracts features from each packet. Because of the time-consuming nature of feature extraction, this traffic can't be handled by one core.

So the problem arises when I distribute the traffic between 4 queues and assign one core to each queue to handle the traffic. The results show that multi-queue does not help:

Case 1:
Number of cores and queues: 1
Traffic: 10-Gigabit (2.2M packets)
Dropped packets: almost 25% of traffic

Case 2:
Number of cores and queues: 4
Traffic: 10-Gigabit (2.2M packets)
Dropped packets: almost 25% of traffic
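For context, each core runs a plain receive loop on its own queue, roughly like the sketch below; the function and variable names here are mine for illustration, not the actual application code:

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

void extract_features(struct rte_mbuf *pkt);  /* hypothetical app function */

/* Sketch of the per-core worker: each lcore polls exactly one RX
 * queue of port 0, so packets of a flow are processed in order. */
static int rx_worker(void *arg)
{
	uint16_t queue_id = *(uint16_t *)arg;  /* one queue per core */
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t n, i;

	for (;;) {
		n = rte_eth_rx_burst(0 /* port */, queue_id,
				     bufs, BURST_SIZE);
		for (i = 0; i < n; i++) {
			extract_features(bufs[i]);
			rte_pktmbuf_free(bufs[i]);
		}
	}
	return 0;
}
```

Each worker is launched on its own lcore (e.g. with rte_eal_remote_launch) with a distinct queue_id.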

I'm using ETH_MQ_RX_RSS for the mq_mode of rxmode, and each core receives packets and processes them, but the output of rte_eth_xstats_get shows something weird.

Total packets: 10,000,000
 - rx_good_packets, 7,831,965
 - rx_good_bytes, 3,999,612,189
 - rx_errors, 2,168,035
 - rx_mbuf_allocation_errors, 0
 - rx_q0_packets, 7,831,965
 - rx_q0_bytes, 3,999,612,189
 - rx_q0_errors, 0
 - rx_q1_packets, 0
 - rx_q1_bytes, 0
 - rx_q1_errors, 0
 - rx_q2_packets, 0
 - rx_q2_bytes, 0
 - rx_q2_errors, 0
 - rx_q3_packets, 0
 - rx_q3_bytes, 0
 - rx_q3_errors, 0
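The counters above can be dumped with something like the sketch below, assuming the rte_eth_xstats_get() signature of DPDK releases from that period (where struct rte_eth_xstats carries both a name and a value, and a call with n too small returns the required count):

```c
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Sketch: print all extended statistics for one port. */
static void dump_xstats(uint8_t port_id)
{
	struct rte_eth_xstats *xstats;
	int n, i;

	n = rte_eth_xstats_get(port_id, NULL, 0);  /* query entry count */
	if (n <= 0)
		return;

	xstats = malloc(n * sizeof(*xstats));
	if (xstats == NULL)
		return;

	n = rte_eth_xstats_get(port_id, xstats, n);
	for (i = 0; i < n; i++)
		printf(" - %s, %" PRIu64 "\n", xstats[i].name, xstats[i].value);

	free(xstats);
}
```

All traffic landing in rx_q0 while rx_q1..rx_q3 stay at zero suggests the RSS hash fields were not enabled, so the NIC never spread packets across queues.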

Is this behaviour normal? Did I configure the ports incorrectly?

Thanks in advance for your attention
Best regards,
-- Hamed Zaghaghi
