[dpdk-users] users Digest, Vol 155, Issue 7

Wajeeha Javed wajeeha.javed123 at gmail.com
Tue Oct 16 06:42:08 CEST 2018


Hi,

Thanks, everyone, for your replies. Please find my comments below.

> I've failed to find explicit limitations at first glance.
> The NB_MBUF define is typically internal to examples/apps.
> The question I'd like to double-check is whether the host has enough
> RAM and hugepages allocated: 5 million mbufs already require about 10G.

Total RAM = 128 GB
Available memory = 23 GB free
Total hugepages = 80
Free hugepages = 38
Hugepage size = 1 GB
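
As a rough check, assuming the default mbuf layout (2048 B data room + 128 B
headroom, plus roughly 128 B for struct rte_mbuf and the mempool object
header):

    ~2.3 KB per mbuf  x  5,000,000 mbufs  ~=  11-12 GB

which is consistent with the "about 10G" quoted above and fits into the 38
free 1 GB hugepages.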

> The mempool uses uint32_t for most sizes and the number of mempool items
> is uint32_t, so the number of entries can be ~4G as stated. But make sure
> you have enough memory, as the overhead for mbufs is not just the header
> + the packet size.

Right. Currently there are a total of 80 huge pages, 40 for each NUMA node
(node 0 and node 1). I observed that my application was using only 16 huge
pages, while another 16 huge pages were used by a different DPDK application.
By running only my DPDK application on NUMA node 0, I was able to increase
the mempool size to 14M, which uses all the huge pages of NUMA node 0.
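
For reference, a minimal sketch of how such a pool can be pinned to NUMA
node 0 (the pool name, cache size, and helper name below are illustrative,
not the exact values from my application):

<Code>

#include <rte_mbuf.h>

#define DELAY_POOL_SIZE (14 * 1000 * 1000)   /* ~14M mbufs */

/* Create the delay pool on NUMA node 0 so all of its mbufs are backed by
 * node-0 hugepages. At roughly 2.3-2.4 KB per element this is on the order
 * of 32-34 GB, i.e. most of the forty 1 GB pages on node 0. */
static struct rte_mempool *
create_delay_pool(void)
{
        return rte_pktmbuf_pool_create("delay_pool_node0",
                                       DELAY_POOL_SIZE,
                                       512,  /* per-lcore cache size */
                                       0,    /* private data size */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       0);   /* socket_id = NUMA node 0 */
}

</Code>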

> My question is why are you copying the mbuf and not just linking the mbufs
> into a linked list? Maybe I do not understand the reason. I would try to
> make sure you do not do a copy of the data and just link the mbufs together
> using the next pointer in the mbuf header, unless you have chained mbufs
> already.

The reason for copying the mbuf is a NIC limitation: I cannot have more than
16384 Rx descriptors, whereas I want to hold back all the packets arriving at
a line rate of 10 Gbit/s on each port. I created a circular queue running on
a FIFO basis. Initially, I thought of keeping the rte_mbuf* packet burst for
a delay of 2 secs, but at line rate we receive 14 million packets/s, so the
descriptors fill up and I have no option left other than copying the mbufs
into the circular queue instead of just storing rte_mbuf* pointers. I know I
have to compromise on performance to achieve the delay. So to copy an mbuf, I
allocate memory from a mempool, copy the received mbuf into it, and then free
the original. Please find the code snippet below.

How can we chain different mbufs together? According to my understanding,
chained mbufs in the API are used for storing the segments of fragmented
packets that are larger than the MTU. Even if we chain the mbufs together
using the next pointer, we still need to free the received mbufs; otherwise
we will not get free Rx descriptors back at a line rate of 10 Gbit/s, and
eventually all the Rx descriptors will be filled and the NIC will not receive
any more packets.

<Code>

for (j = 0; j < nb_rx; j++) {
        m = pkts_burst[j];
        /* copy the received mbuf into the delay queue's mempool */
        struct rte_mbuf *copy_mbuf = pktmbuf_copy(m, pktmbuf_pool[sockid]);
        ....
        /* free the original so its buffer goes back to the Rx mempool */
        rte_pktmbuf_free(m);
}

</Code>
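
pktmbuf_copy() above is not a DPDK API as far as I know but a local helper;
roughly, and assuming single-segment packets that fit into one mbuf of the
target pool, such a helper would do something like this:

<Code>

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Sketch of a copy helper along the lines of pktmbuf_copy() above
 * (assumes a single-segment packet). */
static struct rte_mbuf *
pktmbuf_copy(struct rte_mbuf *m, struct rte_mempool *mp)
{
        struct rte_mbuf *c = rte_pktmbuf_alloc(mp);

        if (c == NULL)
                return NULL;            /* target pool exhausted */

        /* copy the metadata we care about and the packet contents */
        c->data_len = m->data_len;
        c->pkt_len  = m->pkt_len;
        c->port     = m->port;
        rte_memcpy(rte_pktmbuf_mtod(c, void *),
                   rte_pktmbuf_mtod(m, const void *),
                   m->data_len);
        return c;
}

</Code>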

> The other question is: can you drop any packets? If not, then you only have
> the linking option IMO. If you can drop packets, then you can just start
> dropping them when the ring is getting full. Holding onto 28M packets for
> two seconds can cause other protocol-related problems; TCP could be sending
> retransmitted packets, and now you have caused a bunch of work on the RX
> side at the end point.
I would like my DPDK application to have zero packet loss; it only delays all
the received packets for 2 secs and then transmits them as they are, without
any change or processing. Moreover, the DPDK application receives tap traffic
(monitoring traffic) rather than live traffic, so there will not be any TCP
or other protocol-related problems.
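
If holding on to the rte_mbuf* pointers does turn out to be feasible with a
large enough mempool, the delay queue could store pointers in an rte_ring
instead of copies. A minimal sketch of that idea (ring name, size, and the
enqueue helper are illustrative):

<Code>

#include <rte_ring.h>
#include <rte_mbuf.h>

struct rte_ring *delay_ring;

static int
create_delay_ring(void)
{
        /* 2 s at ~14 Mpps needs ~28M slots; ring sizes are powers of two */
        delay_ring = rte_ring_create("delay_ring", 1u << 25, 0,
                                     RING_F_SP_ENQ | RING_F_SC_DEQ);
        return (delay_ring == NULL) ? -1 : 0;
}

static void
enqueue_burst(struct rte_mbuf **pkts_burst, uint16_t nb_rx)
{
        uint16_t j;

        for (j = 0; j < nb_rx; j++) {
                /* keep the pointer; the mbuf is freed only after it is
                 * transmitted ~2 s later. The free below is a safety
                 * fallback for a full ring, which should not happen if
                 * the ring is sized for the full 2 s of traffic. */
                if (rte_ring_enqueue(delay_ring, pkts_burst[j]) != 0)
                        rte_pktmbuf_free(pkts_burst[j]);
        }
}

</Code>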

Looking forward to your reply.


Best Regards,

Wajeeha Javed

