[dpdk-dev] rte_mbuf size for jumbo frame
macintyrelp at ornl.gov
Tue Jan 26 15:23:31 CET 2016
Raising the mbuf size will make the packet handling for large packets
slightly more efficient, but it will use much more memory unless the
great majority of the packets you are handling are of the jumbo size.
Using more memory has its own costs. In order to evaluate this design
choice, it is necessary to understand the behavior of the memory
subsystem, which is VERY complicated.
Before you go down this path, at least benchmark your application using
the regular sized mbufs and the large ones and see what the effect is.
This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
> Jumbo frames are generally handled by linked lists of mbufs (called chained mbufs).
> Enabling jumbo frames on the device driver should enable the part of the driver that handles these chains.
> Don't make the mbufs huge.
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Masaru OKI
> Sent: Monday, January 25, 2016 2:41 PM
> To: Saurabh Mishra; users at dpdk.org; dev at dpdk.org
> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
> 1. Take care of unit size of mempool for mbuf.
> 2. Call rte_eth_dev_set_mtu() for each interface.
> Note that some PMDs do not support changing the MTU.
> On 2016/01/26 6:02, Saurabh Mishra wrote:
>> We wanted to use an rte_mbuf size of 10400 bytes to enable jumbo frames.
>> Do you guys see any problem with that? Would all the drivers like
>> ixgbe, i40e, vmxnet3, virtio and bnx2x work with larger rte_mbuf size?
>> We would want to avoid dealing with chained mbufs.
Lawrence MacIntyre macintyrelp at ornl.gov Oak Ridge National Laboratory
865.574.7401 Cyber Space and Information Intelligence Research Group