[dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Mike DeVico mdevico at xcom-labs.com
Wed Sep 18 06:20:46 CEST 2019


As I understand it, RSS and DCB are two completely different things. DCB uses the PCP field in the
VLAN tag to map a packet to a given queue, whereas RSS computes a hash over the IP addresses and
ports and then uses that hash to map the packet to a given queue.
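
For illustration, here is a minimal sketch of how each mode is requested through the
ethdev API in DPDK 18.08. It is not the full vmdq_dcb setup (the pool/TC mapping in
rx_adv_conf.vmdq_dcb_conf is omitted); it only shows where the two schemes diverge:

    #include <rte_ethdev.h>

    /* DCB: the PCP (user-priority) bits of the VLAN tag select a
     * traffic class, and the traffic class selects the Rx queue. */
    struct rte_eth_conf dcb_conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_VMDQ_DCB },
    };

    /* RSS: a hash of the IP addresses and L4 ports selects the Rx queue. */
    struct rte_eth_conf rss_conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
    };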

--Mike

On 9/17/19, 8:33 PM, "Zhang, Xiao" <xiao.zhang at intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike,
    
    You need to add the --enable-rss option when starting the process, like:
    sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3 --enable-rss
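    
    For context, in the 18.08 vmdq_dcb example the flag roughly switches the Rx
    multi-queue mode from plain VMDq+DCB to VMDq+DCB combined with RSS, along
    the lines of this sketch (simplified; based on the example's port
    configuration code):
    
        eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;      /* default */
        if (rss_enable) {                                   /* --enable-rss */
                eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
                eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
                        ETH_RSS_UDP | ETH_RSS_TCP | ETH_RSS_SCTP;
        }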
    
    Thanks,
    Xiao
    
    > -----Original Message-----
    > From: Mike DeVico [mailto:mdevico at xcom-labs.com]
    > Sent: Wednesday, September 18, 2019 2:55 AM
    > To: Thomas Monjalon <thomas at monjalon.net>
    > Cc: users at dpdk.org; Xing, Beilei <beilei.xing at intel.com>; Zhang, Qi Z
    > <qi.z.zhang at intel.com>; Richardson, Bruce <bruce.richardson at intel.com>;
    > Ananyev, Konstantin <konstantin.ananyev at intel.com>; Yigit, Ferruh
    > <ferruh.yigit at intel.com>; Christensen, ChadX M
    > <chadx.m.christensen at intel.com>; Tia Cassett <tiac at xcom-labs.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > Hello,
    >
    > So far I haven't heard back from anyone regarding this issue, and I would like to
    > know its current status. Also, if you have any recommendations or
    > require additional information from me, please let me know.
    >
    > Thank you in advance,
    > --Mike DeVico
    >
    > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas at monjalon.net> wrote:
    >
    >     [EXTERNAL SENDER]
    >
    >     Adding i40e maintainers and a few more.
    >
    >     07/09/2019 01:11, Mike DeVico:
    >     > Hello,
    >     >
    >     > I am having an issue getting the DCB feature to work with an Intel
    >     > X710 Quad SFP+ NIC.
    >     >
    >     > Here’s my setup:
    >     >
    >     > 1.      DPDK 18.08 built with the following I40E configs:
    >     >
    >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >     >
    >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    >     >
    >     >        Network devices using kernel driver
    >     >        ===================================
    >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
    >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
    >     >
    >     >        Other Network devices
    >     >        =====================
    >     >        <none>
    >     >
    >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC that is
    >     > broadcasting packets tagged with VLAN ID 1 and PCP 2.
    >     >
    >     > 4.      I use the vmdq_dcb example app and configure the card with 16 pools
    >     > of 8 queues each, as follows:
    >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     >
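    >     > (For reference, 16 pools x 8 TCs gives 128 VMDq queues, which matches
    >     > the "reading queues 64-191" line below: 128 queues starting at the
    >     > reported vmdq queue base of 64.)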
    >     >
    >     > The app starts up fine and successfully probes the card, as shown below:
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 0 MAC: e8 ea 6a 27 b5 4d
    >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 1 MAC: e8 ea 6a 27 b5 4e
    >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >     >
    >     > Skipping disabled port 2
    >     >
    >     > Skipping disabled port 3
    >     > Core 0(lcore 1) reading queues 64-191
    >     >
    >     > However, when I issue a SIGHUP I see that the packets
    >     > are being put into the first queue of Pool 1, as follows:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 10 0 0 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > Since the packets are tagged with PCP 2, they should be getting
    >     > mapped to the 3rd queue of Pool 1, right?
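    >     >
    >     > (Working that out: with --nb-tcs 8 there are 8 queues per pool; in the
    >     > example app VLAN ID 1 selects pool 1 and PCP 2 selects traffic class 2,
    >     > so the packet should land at queue offset 1 * 8 + 2 = 10 within the
    >     > VMDq range, i.e. the third counter on the Pool 1 line.)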
    >     >
    >     > As a sanity check, I tried the same test using a 2-port 82599ES 10Gb NIC, and
    >     > the packets show up in the expected queue. (Note: to get it to work I had
    >     > to modify the vmdq_dcb app to set the VMDq pool MACs to all FFs.)
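    >     >
    >     > One way to make that change, as a sketch against the example's per-pool
    >     > MAC setup (names here are illustrative rather than the exact main.c
    >     > variables):
    >     >
    >     >     struct ether_addr mac = {
    >     >             .addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }
    >     >     };
    >     >     /* register the all-FFs address as this pool's MAC filter */
    >     >     rte_eth_dev_mac_addr_add(port, &mac, pool);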
    >     >
    >     > Here’s that setup:
    >     >
    >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
    >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
    >     >
    >     > Network devices using kernel driver
    >     > ===================================
    >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
    >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio
    >     >
    >     > Other Network devices
    >     > =====================
    >     > <none>
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > vmdq queue base: 0 pool base 0
    >     > Port 0 MAC: 00 1b 21 bf 71 24
    >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     > vmdq queue base: 0 pool base 0
    >     > Port 1 MAC: 00 1b 21 bf 71 26
    >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     >
    >     > Now when I send a SIGHUP, I see the packets being routed to
    >     > the expected queue:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 0 0 58 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > What am I missing?
    >     >
    >     > Thank you in advance,
    >     > --Mike
    >     >
    >     >
    
    


