[dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Cao, Waterman waterman.cao at intel.com
Wed Oct 29 01:33:39 CET 2014


Hi Yong,

We will recheck it following your instructions.
I will respond to your questions once we get the results.

Thanks
Waterman 


>-----Original Message-----
>From: Yong Wang [mailto:yongwang at vmware.com] 
>Sent: Wednesday, October 29, 2014 3:59 AM
>To: Thomas Monjalon
>Cc: dev at dpdk.org; Cao, Waterman
>Subject: RE: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Thomas/Waterman,
>
>I couldn't reproduce the reported issue on v1.8.0-rc1; both l2fwd and l3fwd work fine using the same commands posted.
>
># dpdk_nic_bind.py --status
>
>Network devices using DPDK-compatible driver
>============================================
>0000:0b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>
>Network devices using kernel driver
>===================================
>0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*
>
>Other network devices
>=====================
><none>
>
>#  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>...
>EAL: TSC frequency is ~2800101 KHz
>EAL: Master core 1 is ready (tid=ee3c6840)
>EAL: Core 2 is ready (tid=de1ff700)
>EAL: PCI device 0000:02:00.0 on NUMA socket -1
>EAL:   probe driver: 8086:100f rte_em_pmd
>EAL:   0000:02:00.0 not managed by UIO driver, skipping
>EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL:   PCI memory mapped at 0x7f8bee3dd000
>EAL:   PCI memory mapped at 0x7f8bee3dc000
>EAL:   PCI memory mapped at 0x7f8bee3da000
>EAL: PCI device 0000:13:00.0 on NUMA socket -1
>EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL:   PCI memory mapped at 0x7f8bee3d9000
>EAL:   PCI memory mapped at 0x7f8bee3d8000
>EAL:   PCI memory mapped at 0x7f8bee3d6000
>Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:7E, Allocated mbuf pool on socket 0
>LPM: Adding route 0x01010100 / 24 (0)
>LPM: Adding route 0x02010100 / 24 (1)
>LPM: Adding route 0x03010100 / 24 (2)
>LPM: Adding route 0x04010100 / 24 (3)
>LPM: Adding route 0x05010100 / 24 (4)
>LPM: Adding route 0x06010100 / 24 (5)
>LPM: Adding route 0x07010100 / 24 (6)
>LPM: Adding route 0x08010100 / 24 (7)
>txq=0,0,0
>Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:72:C6:88, txq=1,0,0 
>
>Initializing rx queues on lcore 1 ... rxq=0,0,0 Initializing rx queues on lcore 2 ... rxq=1,0,0
>done: Port 0
>done: Port 1
>L3FWD: entering main loop on lcore 2
>L3FWD:  -- lcoreid=2 portid=1 rxqueueid=0
>L3FWD: entering main loop on lcore 1
>L3FWD:  -- lcoreid=1 portid=0 rxqueueid=0
>
>I don't have the exact setup, but I suspect this is related: the errors look like a tx queue parameter being used is not supported by the vmxnet3 backend.  The patchset does not touch the txq config path, so it's not clear how it could break rte_eth_tx_queue_setup().  So my questions to Waterman:
>(1) Is this a regression on the same branch, i.e. the unpatched build works but the patched build fails?
>(2) By any chance did you change any values in the following struct in main.c of those sample programs, in particular txq_flags?
>
>static const struct rte_eth_txconf tx_conf = {
>        .tx_thresh = {
>                .pthresh = TX_PTHRESH,
>                .hthresh = TX_HTHRESH,
>                .wthresh = TX_WTHRESH,
>        },
>        .tx_free_thresh = 0, /* Use PMD default values */
>        .tx_rs_thresh = 0, /* Use PMD default values */
>        .txq_flags = (ETH_TXQ_FLAGS_NOMULTSEGS |   <== any changes here?
>                      ETH_TXQ_FLAGS_NOVLANOFFL |
>                      ETH_TXQ_FLAGS_NOXSUMSCTP |
>                      ETH_TXQ_FLAGS_NOXSUMUDP |
>                      ETH_TXQ_FLAGS_NOXSUMTCP) };
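>
>As a minimal, hypothetical sketch (this is not the actual vmxnet3 code; example_dev_tx_queue_setup and the specific flag chosen are illustrative only), a PMD-side check like the following could turn an unsupported txq_flags value into the err=-22 (-EINVAL) seen in the log:
>
>/* Illustrative PMD tx queue setup: reject configs the device can't do. */
>static int
>example_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>                           uint16_t nb_desc, unsigned int socket_id,
>                           const struct rte_eth_txconf *tx_conf)
>{
>        /* Suppose this PMD cannot do multi-segment tx: the app must keep
>         * ETH_TXQ_FLAGS_NOMULTSEGS set, otherwise the queue is rejected
>         * and rte_eth_tx_queue_setup() returns -22 to the caller. */
>        if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) !=
>            ETH_TXQ_FLAGS_NOMULTSEGS)
>                return -EINVAL;
>
>        /* ... normal ring/descriptor allocation would follow here ... */
>        return 0;
>}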
>
>Thanks,
>Yong
>________________________________________
>From: Thomas Monjalon <thomas.monjalon at 6wind.com>
>Sent: Tuesday, October 28, 2014 7:40 AM
>To: Yong Wang
>Cc: dev at dpdk.org; Cao, Waterman
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi Yong,
>
>Is there any progress with this patchset?
>
>Thanks
>--
>Thomas
>
>2014-10-22 07:07, Cao, Waterman:
>> Hi Yong,
>>
>>       We verified your patch with VMware ESXi 5.5 and found that the VMware l2fwd and l3fwd commands can't run.
>>       But when we use the DPDK1.7_rc1 package to validate the VMware regression, it works fine.
>> 1. [Test Environment]:
>>  - VMware ESXi 5.5
>>  - 2 VMs
>>  - FC20 on Host / FC20-64 on VM
>>  - Crown Pass server (E5-2680 v2, Ivy Bridge)
>>  - Niantic 82599
>>
>> 2. [Test Topology]:
>>       Create 2 VMs (Fedora 18, 64-bit).
>>       We pass through one physical port (Niantic 82599) to each VM, and also create one virtual vmxnet3 device in each VM.
>>       To connect the two VMs, we use one vswitch linking the two vmxnet3 interfaces.
>>       Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>>       The traffic flow for l2fwd/l3fwd is as below:
>>       Ixia (traffic generator) -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia
>>
>> 3. [Test Steps]:
>>
>> Untar dpdk1.8.rc1, compile and run;
>>
>> L2fwd:  ./build/l2fwd -c f -n 4 -- -p 0x3
>> L3fwd:  ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>>
>> 4. [Error log]:
>>
>> ---VMware L2fwd:---
>>
>> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7f678ae6e000
>> EAL:   PCI memory mapped at 0x7f678af34000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   PCI memory mapped at 0x7f678af33000
>> EAL:   PCI memory mapped at 0x7f678af32000
>> EAL:   PCI memory mapped at 0x7f678af30000
>> Lcore 0: RX port 0
>> Lcore 1: RX port 1
>> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): 
>> sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 
>> hw_ring=0x7f671b820080 dma_addr=0x100020080
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>> done:
>> Port 0, MAC address: 90:E2:BA:4A:33:78
>>
>> Initializing port 1... EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
>>
>> ---VMware L3fwd:---
>>
>> EAL: TSC frequency is ~2793265 KHz
>> EAL: Master core 1 is ready (tid=9f49a880)
>> EAL: Core 2 is ready (tid=1d7f2700)
>> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7f079f3e4000
>> EAL:   PCI memory mapped at 0x7f079f4aa000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL:   PCI memory mapped at 0x7f079f4a9000
>> EAL:   PCI memory mapped at 0x7f079f4a8000
>> EAL:   PCI memory mapped at 0x7f079f4a6000
>> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...  
>> Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
>> LPM: Adding route 0x01010100 / 24 (0)
>> LPM: Adding route 0x02010100 / 24 (1)
>> LPM: Adding route 0x03010100 / 24 (2)
>> LPM: Adding route 0x04010100 / 24 (3)
>> LPM: Adding route 0x05010100 / 24 (4)
>> LPM: Adding route 0x06010100 / 24 (5)
>> LPM: Adding route 0x07010100 / 24 (6)
>> LPM: Adding route 0x08010100 / 24 (7)
>> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 
>> hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>>
>> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...  Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
>>
>>
>> Can you help recheck this patch with the latest DPDK code?
>>
>> Regards
>> Waterman
>>
>> -----Original Message-----
>> >From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Yong Wang
>> >Sent: Wednesday, October 22, 2014 6:10 AM
>> >To: Patel, Rashmin N; Stephen Hemminger
>> >Cc: dev at dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Rashmin/Stephen,
>> >
>> >Since you have worked on the vmxnet3 pmd driver, I wonder if you could help review this set of patches.  Any other reviews/test verifications are of course welcome.  We have reviewed/tested all patches internally.
>> >
>> >Yong
>> >________________________________________
>> >From: dev <dev-bounces at dpdk.org> on behalf of Yong Wang 
>> ><yongwang at vmware.com>
>> >Sent: Monday, October 13, 2014 2:00 PM
>> >To: Thomas Monjalon
>> >Cc: dev at dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Only the last patch is performance-related, and it merely gives the compiler hints to hopefully make branch prediction more efficient.  It also moves a constant assignment out of the packet polling loop.
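>> >
>> >As a minimal sketch of both techniques (illustrative only, not the actual patch: example_rxq, poll_one_desc and example_recv_pkts are hypothetical names), using the likely()/unlikely() hints from DPDK's rte_branch_prediction.h:
>> >
>> >#include <rte_branch_prediction.h>
>> >#include <rte_mbuf.h>
>> >
>> >struct example_rxq {            /* hypothetical rx queue state */
>> >        uint8_t port_id;
>> >        /* ... descriptor ring, mbuf pool, etc. ... */
>> >};
>> >
>> >/* Hypothetical helper: returns the next completed mbuf, or NULL. */
>> >static struct rte_mbuf *poll_one_desc(struct example_rxq *rxq);
>> >
>> >static uint16_t
>> >example_recv_pkts(struct example_rxq *rxq, struct rte_mbuf **rx_pkts,
>> >                  uint16_t nb_pkts)
>> >{
>> >        uint16_t nb_rx = 0;
>> >        /* Constant assignment hoisted out of the loop: the port id
>> >         * never changes per packet. */
>> >        const uint8_t port_id = rxq->port_id;
>> >
>> >        while (nb_rx < nb_pkts) {
>> >                struct rte_mbuf *m = poll_one_desc(rxq);
>> >
>> >                /* An empty ring is the rare case under load; the hint
>> >                 * lets the compiler keep the hot path straight-line. */
>> >                if (unlikely(m == NULL))
>> >                        break;
>> >                m->port = port_id;
>> >                rx_pkts[nb_rx++] = m;
>> >        }
>> >        return nb_rx;
>> >}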
>> >
>> >We did performance evaluation on a Nehalem box with 2 sockets x 4 cores at 2.8GHz:
>> >On the DPDK side, it runs an l3 forwarding app in a VM on ESXi, with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B TCP packets at line rate.  Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK.  After the patch, we see the same packet rate with only 45% of a core used.  CPU usage is collected factoring out the idle-loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode, running the default number of hypervisor contexts).  I can add this info to the review request.
>> >
>> >Yong
>> >________________________________________
>> >From: Thomas Monjalon <thomas.monjalon at 6wind.com>
>> >Sent: Monday, October 13, 2014 1:29 PM
>> >To: Yong Wang
>> >Cc: dev at dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Hi,
>> >
>> >2014-10-12 23:23, Yong Wang:
>> >> This patch series include various fixes and improvement to the
>> >> vmxnet3 pmd driver.
>> >>
>> >> Yong Wang (5):
>> >>   vmxnet3: Fix VLAN Rx stripping
>> >>   vmxnet3: Add VLAN Tx offload
>> >>   vmxnet3: Fix dev stop/restart bug
>> >>   vmxnet3: Add rx pkt check offloads
>> >>   vmxnet3: Some perf improvement on the rx path
>> >
>> >Please, could you describe the performance gain from these patches?
>> >Benchmark numbers would be appreciated.
>> >
>> >Thanks
>> >--
>> >Thomas

