<div dir="ltr">That's showing 0 of your 1024 hugepages free. Maybe they weren't passed through to the VM properly?</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 1, 2022 at 7:50 PM Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com">Ed.Lombardo@netscout.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="gmail-m_9183261180078617717WordSection1">
<p class="MsoNormal">[root@vSTREAM_632 ~]# cat /proc/meminfo<u></u><u></u></p>
<p class="MsoNormal">MemTotal: 32778372 kB<u></u><u></u></p>
<p class="MsoNormal">MemFree: 15724124 kB<u></u><u></u></p>
<p class="MsoNormal">MemAvailable: 15897392 kB<u></u><u></u></p>
<p class="MsoNormal">Buffers: 18384 kB<u></u><u></u></p>
<p class="MsoNormal">Cached: 526768 kB<u></u><u></u></p>
<p class="MsoNormal">SwapCached: 0 kB<u></u><u></u></p>
<p class="MsoNormal">Active: 355140 kB<u></u><u></u></p>
<p class="MsoNormal">Inactive: 173360 kB<u></u><u></u></p>
<p class="MsoNormal">Active(anon): 62472 kB<u></u><u></u></p>
<p class="MsoNormal">Inactive(anon): 12484 kB<u></u><u></u></p>
<p class="MsoNormal">Active(file): 292668 kB<u></u><u></u></p>
<p class="MsoNormal">Inactive(file): 160876 kB<u></u><u></u></p>
<p class="MsoNormal">Unevictable: 13998696 kB<u></u><u></u></p>
<p class="MsoNormal">Mlocked: 13998696 kB<u></u><u></u></p>
<p class="MsoNormal">SwapTotal: 3906556 kB<u></u><u></u></p>
<p class="MsoNormal">SwapFree: 3906556 kB<u></u><u></u></p>
<p class="MsoNormal">Dirty: 76 kB<u></u><u></u></p>
<p class="MsoNormal">Writeback: 0 kB<u></u><u></u></p>
<p class="MsoNormal">AnonPages: 13986156 kB<u></u><u></u></p>
<p class="MsoNormal">Mapped: 95500 kB<u></u><u></u></p>
<p class="MsoNormal">Shmem: 16864 kB<u></u><u></u></p>
<p class="MsoNormal">Slab: 121952 kB<u></u><u></u></p>
<p class="MsoNormal">SReclaimable: 71128 kB<u></u><u></u></p>
<p class="MsoNormal">SUnreclaim: 50824 kB<u></u><u></u></p>
<p class="MsoNormal">KernelStack: 4608 kB<u></u><u></u></p>
<p class="MsoNormal">PageTables: 31524 kB<u></u><u></u></p>
<p class="MsoNormal">NFS_Unstable: 0 kB<u></u><u></u></p>
<p class="MsoNormal">Bounce: 0 kB<u></u><u></u></p>
<p class="MsoNormal">WritebackTmp: 0 kB<u></u><u></u></p>
<p class="MsoNormal">CommitLimit: 19247164 kB<u></u><u></u></p>
<p class="MsoNormal">Committed_AS: 14170424 kB<u></u><u></u></p>
<p class="MsoNormal">VmallocTotal: 34359738367 kB<u></u><u></u></p>
<p class="MsoNormal">VmallocUsed: 212012 kB<u></u><u></u></p>
<p class="MsoNormal">VmallocChunk: 34342301692 kB<u></u><u></u></p>
<p class="MsoNormal">Percpu: 2816 kB<u></u><u></u></p>
<p class="MsoNormal">HardwareCorrupted: 0 kB<u></u><u></u></p>
<p class="MsoNormal">AnonHugePages: 13228032 kB<u></u><u></u></p>
<p class="MsoNormal">CmaTotal: 0 kB<u></u><u></u></p>
<p class="MsoNormal">CmaFree: 0 kB<u></u><u></u></p>
<p class="MsoNormal">HugePages_Total: 1024<u></u><u></u></p>
<p class="MsoNormal">HugePages_Free: 0<u></u><u></u></p>
<p class="MsoNormal">HugePages_Rsvd: 0<u></u><u></u></p>
<p class="MsoNormal">HugePages_Surp: 0<u></u><u></u></p>
<p class="MsoNormal">Hugepagesize: 2048 kB<u></u><u></u></p>
<p class="MsoNormal">DirectMap4k: 104320 kB<u></u><u></u></p>
<p class="MsoNormal">DirectMap2M: 33449984 kB<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(225,225,225);padding:3pt 0in 0in">
<p class="MsoNormal"><b>From:</b> Cliff Burdick <<a href="mailto:shaklee3@gmail.com" target="_blank">shaklee3@gmail.com</a>> <br>
<b>Sent:</b> Tuesday, March 1, 2022 10:45 PM<br>
<b>To:</b> Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>><br>
<b>Cc:</b> Stephen Hemminger <<a href="mailto:stephen@networkplumber.org" target="_blank">stephen@networkplumber.org</a>>; <a href="mailto:users@dpdk.org" target="_blank">users@dpdk.org</a><br>
<b>Subject:</b> Re: How to increase mbuf size in dpdk version 17.11<u></u><u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<table border="0" cellspacing="0" cellpadding="0" width="100%" style="width:100%;background:rgb(0,96,104)">
<tbody>
<tr>
<td style="padding:3.75pt">
<p class="MsoNormal" align="center" style="text-align:center">
<strong><span style="font-size:12pt;font-family:Arial,sans-serif;color:white">External Email:</span></strong><span style="font-size:12pt;font-family:Arial,sans-serif;color:white"> This message originated outside of NETSCOUT. Do not click links or open
attachments unless you recognize the sender and know the content is safe.</span><span style="font-size:12pt;color:white"><u></u><u></u></span></p>
</td>
</tr>
</tbody>
</table>
</div>
<div>
<p class="MsoNormal"><span style="color:black">Can you paste the output of "cat /proc/meminfo"?</span><u></u><u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>> wrote:<u></u><u></u></p>
</div>
<blockquote style="border-top:none;border-right:none;border-bottom:none;border-left:1pt solid rgb(204,204,204);padding:0in 0in 0in 6pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal" style="margin-bottom:12pt">Here is the output from rte_mempool_dump() after creating the mbuf pool with "mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0)":<br>
nb_mbuf_per_pool = 32768<br>
mb_size = 16640<br>
16512 * 32768 = 541,065,216<br>
<br>
mempool <mbuf_pool_socket_0>@0x17f811400<br>
flags=10<br>
pool=0x17f791180<br>
iova=0x80fe11400<br>
nb_mem_chunks=1<br>
size=32768<br>
populated_size=32768<br>
header_size=64<br>
elt_size=16640<br>
trailer_size=0<br>
total_obj_size=16704<br>
private_data_size=64<br>
avg bytes/object=16704.000000<br>
internal cache infos:<br>
cache_size=250<br>
cache_count[0]=0<br>
...<br>
cache_count[126]=0<br>
cache_count[127]=0<br>
total_cache_count=0<br>
common_pool_count=32768<br>
no statistics available<br>
<br>
-----Original Message-----<br>
From: Stephen Hemminger <<a href="mailto:stephen@networkplumber.org" target="_blank">stephen@networkplumber.org</a>>
<br>
Sent: Tuesday, March 1, 2022 5:46 PM<br>
To: Cliff Burdick <<a href="mailto:shaklee3@gmail.com" target="_blank">shaklee3@gmail.com</a>><br>
Cc: Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>>;
<a href="mailto:users@dpdk.org" target="_blank">users@dpdk.org</a><br>
Subject: Re: How to increase mbuf size in dpdk version 17.11<br>
<br>
<br>
On Tue, 1 Mar 2022 13:37:07 -0800<br>
Cliff Burdick <<a href="mailto:shaklee3@gmail.com" target="_blank">shaklee3@gmail.com</a>> wrote:<br>
<br>
> Can you verify how many buffers you're allocating? I don't see how <br>
> many you're allocating in this thread.<br>
> <br>
> On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>><br>
> wrote:<br>
> <br>
> > Hi Stephen,<br>
> > The VM is configured to have 32 GB of memory.<br>
> > Will dpdk consume the 2GB of hugepage memory for the mbufs?<br>
> > I don't mind having less mbufs with mbuf size of 16K vs original <br>
> > mbuf size of 2K.<br>
> ><br>
> > Thanks,<br>
> > Ed<br>
> ><br>
> > -----Original Message-----<br>
> > From: Stephen Hemminger <<a href="mailto:stephen@networkplumber.org" target="_blank">stephen@networkplumber.org</a>><br>
> > Sent: Tuesday, March 1, 2022 2:57 PM<br>
> > To: Lombardo, Ed <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>><br>
> > Cc: <a href="mailto:users@dpdk.org" target="_blank">users@dpdk.org</a><br>
> > Subject: Re: How to increase mbuf size in dpdk version 17.11<br>
> ><br>
> ><br>
> > On Tue, 1 Mar 2022 18:34:22 +0000<br>
> > "Lombardo, Ed" <<a href="mailto:Ed.Lombardo@netscout.com" target="_blank">Ed.Lombardo@netscout.com</a>> wrote:<br>
> > <br>
> > > Hi,<br>
> > > I have an application built with dpdk 17.11.<br>
> > > During initialization I want to change the mbuf size from 2K to 16K.<br>
> > > I want to receive packet sizes of 8K or more in one mbuf.<br>
> > ><br>
> > > The VM running the application is configured to have 2G hugepages.<br>
> > ><br>
> > > I tried many things and I get an error when a packet arrives.<br>
> > ><br>
> > > I read online that there is #define DEFAULT_MBUF_DATA_SIZE that I<br>
> > changed from 2176 to ((2048*8)+128), where 128 is for headroom. <br>
> > > The call to rte_pktmbuf_pool_create() returns success with my changes.<br>
> > > From the rte_mempool_dump() - "rx_nombuf" - Total number of Rx <br>
> > > mbuf<br>
> > allocation failures. This value increments each time a packet arrives. <br>
> > ><br>
> > > Is there any reference document explaining what causes this error?<br>
> > > Is there a user guide I should follow to make the mbuf size <br>
> > > change,<br>
> > starting with the hugepage value? <br>
> > ><br>
> > > Thanks,<br>
> > > Ed<br>
> ><br>
> > Did you check that you have enough memory in the system for the <br>
> > larger footprint?<br>
> > Using 16K per mbuf is going to cause lots of memory to be consumed.<br>
<br>
A little maths you can fill in your own values.<br>
<br>
Assuming you want 16K of data.<br>
<br>
You need at a minimum [1]<br>
num_rxq := total number of receive queues<br>
num_rxd := number of receive descriptors per receive queue<br>
num_txq := total number of transmit queues (assume all can be full)<br>
num_txd := number of transmit descriptors<br>
num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size<br>
<br>
Assuming you are using code copy/pasted from some example like l3fwd.<br>
With 4 Rxq<br>
<br>
num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320<br>
<br>
Each mbuf element requires [2]<br>
elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size<br>
= 128 + 128 + 16K = 16640<br>
<br>
obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)<br>
= 16832<br>
<br>
So total pool is<br>
num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes (~134 MiB)<br>
<br>
<br>
[1] Some devices, like bnxt, need multiple buffers per packet.<br>
[2] Often applications want additional space per mbuf for meta-data.<br>
<br>
<br>
<u></u><u></u></p>
</blockquote>
</div>
</div>
</div>
</blockquote></div>