[dpdk-users] jumbo frame support..(more than 1500 bytes)

Chris Paquin cpaquin at redhat.com
Fri Sep 1 17:41:29 CEST 2017


First off, I apologize, as I am very new to DPDK.

I have a very similar issue; however, I have found that the packet drops
(RX-nombuf) I am seeing when setting the MTU above 1500 are related to the
size of the mbuf.

For example, the larger the mbuf, the larger the packet I can receive.

However, I cannot set my mbuf size larger than 6016 (bytes, I assume):

Sep  1 11:25:56 rhel7 testpmd[3016]: USER1: create a new mbuf pool
<mbuf_pool_socket_0>: n=171456, size=6016, socket=0

Anything larger and I cannot launch testpmd:

Sep  1 11:25:30 rhel7 testpmd[3008]: USER1: create a new mbuf pool
<mbuf_pool_socket_0>: n=171456, size=6017, socket=0
Sep  1 11:25:30 rhel7 testpmd[3008]: RING: Cannot reserve memory for tailq
Sep  1 11:25:30 rhel7 testpmd[3008]: EAL: Error - exiting with code: 1
  Cause:
Sep  1 11:25:30 rhel7 testpmd[3008]: Creation of mbuf pool for socket 0
failed: Cannot allocate memory

So, in short, I can receive larger packets without dropping them by
increasing my mbuf size; you might be able to try the same. However, I
cannot get close to the desired MTU of 9000. I would love to know if you get
it working.
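
(For what it's worth, here is a rough, untested sketch of the direction I
mean. The "Cannot allocate memory" failure above suggests the pool no longer
fits in the reserved hugepage memory once all ~171k mbufs grow past 6016
bytes, so the usual workaround is to keep the per-mbuf data room just large
enough for a 9000-byte frame and reduce the mbuf count instead. The pool
name, mbuf count and cache size below are illustrative assumptions, not
values taken from this thread.)

/* Sketch only: size the data room for a full 9000-byte MTU frame
 * (headroom + Ethernet header + CRC + MTU) rather than growing the count. */
#include <rte_ether.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define JUMBO_MTU       9000
#define JUMBO_MBUF_SIZE (RTE_PKTMBUF_HEADROOM + ETHER_HDR_LEN + \
                         ETHER_CRC_LEN + JUMBO_MTU)

static struct rte_mempool *
create_jumbo_pool(void)
{
        return rte_pktmbuf_pool_create("mbuf_pool_jumbo", /* name: illustrative */
                                       16383,             /* fewer, larger mbufs */
                                       256,               /* per-lcore cache */
                                       0,                 /* no private area */
                                       JUMBO_MBUF_SIZE,   /* data room per mbuf */
                                       rte_socket_id());
}

(testpmd has an --mbuf-size option that should do the same thing from the
command line, though I have not re-checked its limits on this setup.)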


CHRISTOPHER PAQUIN

SENIOR CLOUD CONSULTANT, RHCE, RHCSA-OSP

Red Hat  <https://www.redhat.com/>

M: 770-906-7646
<https://red.ht/sig>

On Fri, Sep 1, 2017 at 8:55 AM, Kyle Larose <klarose at sandvine.com> wrote:

> How is it failing? Is it dropping with a frame too long counter? Are you
> sure it's not dropping before your device? Have you made sure the max frame
> size of every hop in between is large enough?
>
> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Dharmesh Mehta
> Sent: Friday, September 01, 2017 2:10 AM
> To: users at dpdk.org
> Subject: [dpdk-users] jumbo frame support..(more than 1500 bytes)
>
> Sorry for the resubmission.
> I am still stuck: I cannot receive any packet larger than 1500 bytes. Is
> this related to the driver? I can send packets larger than 1500 bytes, so I
> am not suspecting anything wrong with my mbuf initialization.
>
> In my application I am using the following code:
>
> #define MBUF_CACHE_SIZE      128
> #define MBUF_DATA_SIZE       RTE_MBUF_DEFAULT_BUF_SIZE
> #define JUMBO_FRAME_MAX_SIZE 0x2600   /* 9728 bytes */
>
> .rxmode = {
>     .mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
>     .split_hdr_size = 0,
>     .header_split   = 0,   /**< Header Split disabled */
>     .hw_ip_checksum = 0,   /**< IP checksum offload disabled */
>     .hw_vlan_filter = 0,   /**< VLAN filtering disabled */
>     .hw_vlan_strip  = 1,   /**< VLAN strip enabled. */
>     .jumbo_frame    = 1,   /**< Jumbo Frame Support enabled */
>     .hw_strip_crc   = 1,   /**< CRC stripped by hardware */
>     .enable_scatter = 1,   /* required for jumbo frames > 1500 */
>     .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE, /* ETHER_MAX_LEN */
> },
>
> create_mbuf_pool(valid_num_ports, rte_lcore_count() - 1, MBUF_DATA_SIZE,
>                  MAX_QUEUES, RTE_TEST_RX_DESC_DEFAULT, MBUF_CACHE_SIZE);
>
> I am also calling rte_eth_dev_set_mtu to set the MTU to 9000, and I have
> verified the value with rte_eth_dev_get_mtu.
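>
> (As a rough sketch only, not code from this application: with these
> 17.05-era flags I would expect the call order below, i.e. configure the
> port with jumbo_frame/enable_scatter/max_rx_pkt_len, attach an mbuf pool
> whose data room covers the MTU to each RX queue, set the MTU, and only then
> start the port. Port 0, single queues, the 1024 descriptors and the
> "jumbo_pool" argument are illustrative assumptions.)
>
> #include <rte_ethdev.h>
>
> static const struct rte_eth_conf jumbo_port_conf = {
>     .rxmode = {
>         .jumbo_frame    = 1,       /* accept frames longer than 1518 bytes */
>         .enable_scatter = 1,       /* chain mbufs if a frame exceeds the data room */
>         .max_rx_pkt_len = 0x2600,  /* 9728 bytes, as in the config above */
>     },
> };
>
> static int
> setup_jumbo_port(uint8_t port, struct rte_mempool *jumbo_pool)
> {
>     int ret = rte_eth_dev_configure(port, 1, 1, &jumbo_port_conf);
>
>     if (ret == 0)
>         ret = rte_eth_rx_queue_setup(port, 0, 1024,
>                                      rte_eth_dev_socket_id(port),
>                                      NULL, jumbo_pool);
>     if (ret == 0)
>         ret = rte_eth_tx_queue_setup(port, 0, 1024,
>                                      rte_eth_dev_socket_id(port),
>                                      NULL);
>     if (ret == 0)
>         ret = rte_eth_dev_set_mtu(port, 9000); /* before rte_eth_dev_start() */
>     if (ret == 0)
>         ret = rte_eth_dev_start(port);
>     return ret;
> }
>
> (Whether the igb/I350 PMD honours enable_scatter on RX, or instead needs
> the mbuf data room itself to cover the whole frame, is something I have not
> verified here.)
>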
> Below is my system info / logs from dpdk (17.05.1).
> Your help is really appreciated.
>
> Thanks,
> DM.
>
> uname -a
> Linux 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64
> x86_64 x86_64 GNU/Linux
>
> modinfo uio_pci_generic
> filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko
> description:    Generic UIO driver for PCI 2.3 devices
> author:         Michael S. Tsirkin <mst at redhat.com>
> license:        GPL v2
> version:        0.01.0
> rhelversion:    7.3
> srcversion:     10714380C2025655D980132
> depends:        uio
> intree:         Y
> vermagic:       3.10.0-514.10.2.el7.x86_64 SMP mod_unload modversions
> signer:         CentOS Linux kernel signing key
> sig_key:        27:F2:04:85:EB:EB:3B:2D:54:AD:D6:1E:57:B3:08:FA:E0:70:F4:1F
> sig_hashalgo:   sha256
> dpdk-17.05.1
> $ ./bind2dpdk_status.sh
> Checking Ethernet port binding with DPDK
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:01:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:01:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
>
> Network devices using kernel driver
> ===================================
> 0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
>
> Other Network devices
> =====================
> <none>
>
> Crypto devices using DPDK-compatible driver
> ===========================================
> <none>
>
> Crypto devices using kernel driver
> ==================================
> <none>
>
> Other Crypto devices
> ====================
> <none>
>
> Eventdev devices using DPDK-compatible driver
> =============================================
> <none>
>
> Eventdev devices using kernel driver
> ====================================
> <none>
>
> Other Eventdev devices
> ======================
> <none>
>
> Mempool devices using DPDK-compatible driver
> ============================================
> <none>
>
> Mempool devices using kernel driver
> ===================================
> <none>
>
> Other Mempool devices
> =====================
> <none>
>
>
> EAL: Detected 72 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:01:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:01:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:03:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.2 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.3 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> nb_ports=1
> valid_num_ports=1
> MBUF_DATA_SIZE=2176 , MAX_QUEUES=8, RTE_TEST_RX_DESC_DEFAULT=1024 , MBUF_CACHE_SIZE=128
> Waiting for data...
> portid=0
> enabled_port_mask=1
> **** MTU is programmed successfully to 9000
> port_init port - 0
> Device supports maximum rx queues are 8
> MAX_QUEUES defined as 8
> max_no_tx_queue = 8 , max_no_rx_queue = 8
> pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
> port=0,rx_rings=8,tx_rings=3
> rx-queue setup successfully q=0
> rx-queue setup successfully q=1
> rx-queue setup successfully q=2
> rx-queue setup successfully q=3
> rx-queue setup successfully q=4
> rx-queue setup successfully q=5
> rx-queue setup successfully q=6
> rx-queue setup successfully q=7
> tx-queue setup successfully q=0
> tx-queue setup successfully q=1
> tx-queue setup successfully q=2
> Port 0: Enabling HW FC
> VHOST_PORT: Max virtio devices supported: 8
> VHOST_PORT: Port 0 MAC: a0 36 9f cb ba 34
> Dump Flow Control 0
> HighWater Martk=33828
> LowWater Martk=32328
> PauseTime=1664
> Send XON Martk=1
> Mode=1
> MAC Control Frame forward=0
> Setting Flow Control = FULL
> Dump Flow Control 0
> HighWater Martk=33828
> LowWater Martk=32328
> PauseTime=1664
> Send XON Martk=1
> Mode=1
> MAC Control Frame forward=0
> **** MTU is programmed successfully to 9000
> VHOST_DATA: ********************* TX - Procesing on Core 40 started
> ********************* TX - Procesing on Core 40 started
> VHOST_DATA: ***************** RX Procesing on Core 41 started
> ***************** RX Procesing on Core 41 started
> vmdq_conf_default.rxmode.mq_mode=4
> vmdq_conf_default.rxmode.max_rx_pkt_len=9728
> vmdq_conf_default.rxmode.split_hdr_size=0
> vmdq_conf_default.rxmode.header_split=0
> vmdq_conf_default.rxmode.hw_ip_checksum=0
> vmdq_conf_default.rxmode.hw_vlan_filter=0
> vmdq_conf_default.rxmode.hw_vlan_strip=1
> vmdq_conf_default.rxmode.hw_vlan_extend=0
> vmdq_conf_default.rxmode.jumbo_frame=1
> vmdq_conf_default.rxmode.hw_strip_crc=1
> vmdq_conf_default.rxmode.enable_scatter=1
> vmdq_conf_default.rxmode.enable_lro=0
> VHOST_CONFIG: vhost-user server: socket created, fd: 23
> VHOST_CONFIG: bind to /tmp/vubr0
>
>

