[dpdk-dev] rte_kni_rx_burst issues

Olivier Deme odeme at druidsoftware.com
Tue Feb 24 13:59:59 CET 2015


All,

I know an issue was already raised with regard to the efficiency of
rte_kni_rx_burst, but I think there is more to it than was previously
discussed.

As previously pointed out, rte_kni_rx_burst invokes kni_allocate_mbufs
on every single call.
In turn, kni_allocate_mbufs allocates 32 mbufs (MAX_MBUF_BURST_NUM) and
attempts to enqueue them to the alloc_q.
If the alloc_q is full (1024 buffers: KNI_FIFO_COUNT_MAX),
kni_allocate_mbufs frees all the buffers that couldn't be enqueued.
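
For reference, the logic is roughly the following (paraphrased from
lib/librte_kni/rte_kni.c and simplified, so take the details with a
grain of salt):

static void
kni_allocate_mbufs(struct rte_kni *kni)
{
	int i, ret;
	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];

	/* Always tries to allocate a full burst of 32 mbufs */
	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
		if (unlikely(pkts[i] == NULL)) {
			/* This is the error I am hitting */
			RTE_LOG(ERR, KNI, "Out of memory\n");
			break;
		}
	}

	/* Enqueues to alloc_q regardless of how full it already is */
	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);

	/* Whatever didn't fit in the alloc_q is freed straight away */
	if (ret < i) {
		int j;

		for (j = ret; j < i; j++)
			rte_pktmbuf_free(pkts[j]);
	}
}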

Beyond being very inefficient, this means that invoking
rte_kni_rx_burst in a loop is guaranteed to fill the alloc_q to its
maximum capacity (1024) unless the kernel consumes mbufs from the
alloc_q (i.e. sends packets towards userspace) faster than
rte_kni_rx_burst replenishes it.

In my application, I hit the "Out of memory" error in kni_allocate_mbufs
almost straight away, because there is very little egress traffic from
the kernel and my memory pool wasn't big enough to cater for the KNI
thread and the other DPDK queues.

I would think kni_allocate_mbufs should take a "buffer_count" parameter
giving the number of mbufs to allocate and add to the alloc_q.
With this, rte_kni_rx_burst could request exactly as many mbufs as were
dequeued from the tx_q, so that the total number of buffers across the
alloc_q and tx_q remains constant; see the sketch below.
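
Something along these lines (a hypothetical, untested sketch; the
"buffer_count" parameter and the early return on an empty burst are my
additions):

static void
kni_allocate_mbufs(struct rte_kni *kni, unsigned buffer_count)
{
	int i, ret;
	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];

	/* "buffer_count" is the proposed new parameter */
	if (buffer_count > MAX_MBUF_BURST_NUM)
		buffer_count = MAX_MBUF_BURST_NUM;

	/* Allocate only what the caller asked for */
	for (i = 0; i < (int)buffer_count; i++) {
		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
		if (unlikely(pkts[i] == NULL)) {
			RTE_LOG(ERR, KNI, "Out of memory\n");
			break;
		}
	}

	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
	if (ret < i) {
		int j;

		for (j = ret; j < i; j++)
			rte_pktmbuf_free(pkts[j]);
	}
}

unsigned
rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
{
	unsigned ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

	/* Replenish exactly what was dequeued from the tx_q so the
	 * total across alloc_q and tx_q stays constant. */
	if (ret > 0)
		kni_allocate_mbufs(kni, ret);

	return ret;
}

This would also naturally avoid touching the mempool at all when
nothing was dequeued, which addresses the inefficiency mentioned above.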

I also noticed that none of these functions is declared inline.
That is not great, as the thread forwarding packets between a NIC and
the kernel may be the same thread that forwards packets between two
NICs, so it would be better to keep the function-call overhead on the
NIC-to-kernel path to a minimum.

Kind Regards,
Olivier.
