[dpdk-dev] Question about jumbo frame support on ixgbe

Zhao1, Wei wei.zhao1 at intel.com
Mon Nov 5 10:47:59 CET 2018


Hi, Hideyuki Yamashita

> -----Original Message-----
> From: Hideyuki Yamashita [mailto:yamashita.hideyuki at po.ntt-tx.co.jp]
> Sent: Friday, November 2, 2018 9:38 AM
> To: Zhao1, Wei <wei.zhao1 at intel.com>
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Question about jumbo frame support on ixgbe
> 
> Hi
> 
> Thanks for your answering to my question.
> Please see inline.
> > Hi, Hideyuki Yamashita
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Hideyuki
> > > Yamashita
> > > Sent: Wednesday, October 31, 2018 4:22 PM
> > > To: dev at dpdk.org
> > > Subject: [dpdk-dev] Question about jumbo frame support on ixgbe
> > >
> > > Hi,
> > >
> > > I have a very basic question about jumbo frame support for ixgbe.
> > >
> > > I understand that some drivers support jumbo frames: if a driver
> > > receives a jumbo packet (greater than 1500 bytes), it creates an mbuf
> > > chain and passes it to the DPDK application through e.g. rte_eth_rx_burst.
> > >
> > > However, it looks like the ixgbe driver does not support jumbo frames.
> > >
> > > Q1. Is my understanding above correct?
> > > Q2. If the answer to Q1 is yes, are there any future plans to support
> > > jumbo frames on ixgbe?
> >
> > Your understanding above is correct, but the 82599 and x550 do support
> > jumbo frame receive by now!
> > In order to use this feature on ixgbe, you need to do the following steps:
> >
> > 1. You must set dev_conf.rxmode.max_rx_pkt_len to a big number, e.g.
> > 9500, before starting the port with rte_eth_dev_start().
> > ixgbe_dev_rx_init() will choose a scatter receive function if
> > max_rx_pkt_len is larger than the mbuf size; you do not need to set the
> > DEV_RX_OFFLOAD_SCATTER bit in dev_conf.rxmode.offloads, as the PMD does
> > that itself when it detects that jumbo frames need to be supported.
> Thanks for your info.
> 
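
To make step (1) concrete, here is a minimal sketch of the port
configuration. It is not a complete program: the 9000-byte size, the queue
counts and the function name are placeholders for illustration, and it
assumes the DPDK 18.x offload API:

#include <rte_ethdev.h>

/* Step 1 sketch: ask for jumbo-sized receives before rte_eth_dev_start().
 * If max_rx_pkt_len exceeds the mbuf data size, ixgbe_dev_rx_init() will
 * select a scatter rx function on its own. */
static int
configure_jumbo_rx(uint16_t port_id)
{
        struct rte_eth_conf conf = {
                .rxmode = {
                        .max_rx_pkt_len = 9000,  /* placeholder size */
                        .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
                },
        };

        /* one rx queue and one tx queue, just for the example */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}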
> > 2. Set the DEV_TX_OFFLOAD_MULTI_SEGS bit of dev_conf.txmode.offloads to
> > 1 when doing rte_eth_tx_queue_setup(); this is very important!!
> > If you do not do this, you may receive jumbo frames but fail to forward
> > them out, because, as you say, the received packets may be mbuf
> > chains (depending on the relationship between the mbuf size and
> > max_rx_pkt_len).
> > But the tx function selection in the ixgbe PMD, in
> > ixgbe_set_tx_function(), is confusing; take care with it, as it is
> > based on the queue offloads bits!
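
A rough sketch of step (2); the descriptor count, queue id and function
name are placeholders, and it assumes the DPDK 18.x per-queue offload API:

#include <rte_ethdev.h>

/* Step 2 sketch: request multi-segment tx on a queue so that chained
 * mbufs (one jumbo frame split across segments) can be sent as one
 * packet. The same bit should also be set in dev_conf.txmode.offloads. */
static int
setup_jumbo_txq(uint16_t port_id, uint16_t queue_id)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        txconf = dev_info.default_txconf;
        txconf.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

        /* 512 descriptors on the port's NUMA socket; placeholder values */
        return rte_eth_tx_queue_setup(port_id, queue_id, 512,
                        rte_eth_dev_socket_id(port_id), &txconf);
}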
> What will happen if DEV_TX_OFFLOAD_MULTI_SEGS is set to 1?
> Are packets sent fragmented, or re-built as a jumbo frame and sent to the
> network?
> (My guess is the former: packets will be sent fragmented.)

Ixgbe will store a jumbo frame in several mbuf segments when receiving it from the network.
It is not rebuilt into one jumbo packet in the PMD, so ixgbe does that work when transmitting it.
The PMD needs the flag to know that a packet is jumbo and needs this specific
work; DEV_TX_OFFLOAD_MULTI_SEGS tells the PMD to choose a tx function suited for it.
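
To illustrate, a received jumbo frame arrives as one packet spread over a
chain of segments. A small sketch for inspecting it (the function name is
just for illustration):

#include <stdio.h>
#include <rte_mbuf.h>

/* Sketch: walk the segment chain of one received jumbo packet.
 * pkt_len covers the whole frame; each data_len is one segment's part. */
static void
dump_segments(const struct rte_mbuf *pkt)
{
        const struct rte_mbuf *seg;

        printf("pkt_len=%u nb_segs=%u\n",
               pkt->pkt_len, (unsigned)pkt->nb_segs);
        for (seg = pkt; seg != NULL; seg = seg->next)
                printf("  seg data_len=%u\n", (unsigned)seg->data_len);
}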
 
> 
> > 3. Enable it using the CLI command "port config mtu <port_id> <value>" if
> > you are using testpmd, or using the API rte_eth_dev_set_mtu() in your own app.
> > The MTU is the number you need to update to a large value.
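
For example, in testpmd the sequence is roughly the following (port 0 and
the 9000-byte value are placeholders; stop the port before changing the MTU):

testpmd> port stop 0
testpmd> port config mtu 0 9000
testpmd> port start 0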
> I want to know the relationship between 1, 2 and 3.
> Do I have to do all 3 steps to send/receive jumbo frames?
> Or, when I realize jumbo frame support programmatically, do I execute 1 and
> 2, and when I do not modify the program and just change settings via the
> CLI, execute 3?

If you are using the testpmd app, you can execute (3). If not, you can set the DEV_RX_OFFLOAD_JUMBO_FRAME bit of dev_conf.rxmode.offloads
to 1 when you start the ixgbe PMD. Steps 1-3 together do all the register configuration needed to enable jumbo frames.
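Putting 1-2-3 together for your own app, the rough order is as below. This
is a sketch only, not a complete program: the sizes, queue counts, function
name and the 18-byte Ethernet header + CRC subtraction are placeholder
assumptions on the DPDK 18.x API:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch of the whole flow: (1) jumbo rx config, (2) multi-segment tx,
 * (3) MTU update, then port start. Error handling shortened. */
static int
bring_up_jumbo_port(uint16_t port_id, struct rte_mempool *mp)
{
        struct rte_eth_conf conf = {
                .rxmode = {
                        .max_rx_pkt_len = 9000,                /* step 1 */
                        .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
                },
                .txmode = {
                        .offloads = DEV_TX_OFFLOAD_MULTI_SEGS, /* step 2 */
                },
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret < 0)
                return ret;
        ret = rte_eth_rx_queue_setup(port_id, 0, 512,
                        rte_eth_dev_socket_id(port_id), NULL, mp);
        if (ret < 0)
                return ret;
        ret = rte_eth_tx_queue_setup(port_id, 0, 512,
                        rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
                return ret;

        /* step 3: MTU is the frame size minus 14 B header + 4 B CRC */
        ret = rte_eth_dev_set_mtu(port_id, 9000 - 18);
        if (ret < 0)
                return ret;

        return rte_eth_dev_start(port_id);
}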


> 
> Thanks and BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > And all my discussion is based on a PF port; if you are using a VF, we
> > can have a further discussion.
> > Please feel free to contact me if necessary.
> >
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> >
> 
> 


