[dpdk-dev] L2fwd Performance issue with Virtual Machine

Patel, Rashmin N rashmin.n.patel at intel.com
Sat Oct 5 09:40:33 CEST 2013


Vincent, I agree with your explanation: when you move from a regular interrupt-based driver to a PMD, you definitely get better performance in a Linux guest until you reach the bottleneck of the vSwitch. But my point about "optimal performance benefit having PMD" is that it is only possible if the vmxnet3 backend driver supports a vmxnet3 PMD frontend driver inside the guest, and you never know whether VMware will add support for that, but there is a way open.

The motive for having a PMD for para-virtual devices is to get performance close to a shared-memory solution while still supporting standard devices, I believe. Being at Intel, we strive for the optimal solution; pardon me if I created any confusion.

Thanks,
Rashmin
-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vincent JARDIN
Sent: Friday, October 04, 2013 11:33 PM
To: dev at dpdk.org
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine

I disagree, Rashmin. We did measurements with 64-byte packets: the Linux kernel of the guest is the bottleneck, so the vmxnet3 PMD helps to increase the packet rate of the Linux guests.

The PMD helps the packet rate within the guest until (of course) you reach the bottleneck of the host's vSwitch.

In order to accelerate the host's vSwitch, you have to run a fast-path-based vSwitch on the host too.

Best regards,
   Vincent

On 04/10/2013 23:36, Selvaganapathy Chidambaram wrote:
> Thanks Rashmin for your time and help!
>
> So it looks like with the given hardware config, we could probably
> only achieve around 8 Gbps in the VM without using SR-IOV. Once DPDK is
> used in the vSwitch design, we could gain more performance.
>
>
> Thanks,
> Selvaganapathy.C.
>
>
> On Fri, Oct 4, 2013 at 11:02 AM, Patel, Rashmin N
> <rashmin.n.patel at intel.com> wrote:
>
>
>> Correction: "you would NOT get optimal performance benefit having PMD"
>>
>> Thanks,
>> Rashmin
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Patel, Rashmin N
>> Sent: Friday, October 04, 2013 10:47 AM
>> To: Selvaganapathy Chidambaram
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hi,
>>
>> If you are not using SR-IOV or direct device assignment to the VM, your
>> traffic hits the vSwitch in ESX (via the VMware native ixgbe driver and
>> network stack) and is then switched to the E1000/VMXNET3 interface
>> connected to your VM. The vSwitch is not optimized for PMD at present, so
>> you would get optimal performance benefit having PMD, I believe.
>>
>> On the RSS front, I would say you won't see much difference with RSS
>> enabled for 1500-byte frames. In fact, a core is capable of handling
>> such traffic in the VM, but the bottleneck is in the ESXi software
>> switching layer; that's what my initial research shows across multiple hypervisors.
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Selvaganapathy 
>> Chidambaram
>> Sent: Thursday, October 03, 2013 2:39 PM
>> To: dev at dpdk.org
>> Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hello Everyone,
>>
>> I have tried to run the DPDK sample application l2fwd (modified to support
>> multiple queues) in my ESX virtual machine. I see that the performance does
>> not scale with cores. [My apologies for the long email]
>>
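>> For illustration, the receive path is essentially the standard l2fwd
>> pattern, simplified in the sketch below (illustrative only, not the exact
>> patch): each lcore polls its own RX queue of a port, so the NIC has to
>> spread flows across queues (e.g. via RSS) before adding cores can help.
>>
>>   #include <rte_ethdev.h>
>>   #include <rte_mbuf.h>
>>
>>   /* Per-lcore receive loop: each core services one RX queue of the port.
>>    * If all packets land in queue 0 (no RSS), the other cores stay idle. */
>>   static void rx_loop(uint8_t port_id, uint16_t queue_id)
>>   {
>>       struct rte_mbuf *bufs[32];
>>       uint16_t nb, i;
>>
>>       for (;;) {
>>           nb = rte_eth_rx_burst(port_id, queue_id, bufs, 32);
>>           for (i = 0; i < nb; i++)
>>               rte_pktmbuf_free(bufs[i]);   /* forwarding logic omitted */
>>       }
>>   }
>>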
>> *Setup:*
>>
>> Connected the VM to two ports of a Spirent tester over 10 Gig links. Sent
>> 10 Gbps of traffic with L3 packets of 1500 bytes (four different flows)
>> from the Spirent through one port and received it on the second port. Also
>> sent traffic in the reverse direction so that the net traffic is 20 Gbps.
>> SR-IOV and DirectPath I/O are not enabled.
>>
>> *Emulated Driver:*
>>
>> With the default emulated driver, I got 7.3 Gbps with 1 core. Adding
>> multiple cores did not improve the performance. On debugging, I noticed
>> that the function eth_em_infos_get() reports that RSS is not supported.
>>
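>> For context, the application only sees what the PMD advertises through
>> rte_eth_dev_info_get(), and requests RSS via rte_eth_dev_configure(). A
>> minimal sketch of that check (illustrative only; the rss_hf flag names
>> differ between DPDK versions):
>>
>>   #include <string.h>
>>   #include <rte_ethdev.h>
>>
>>   static int try_enable_rss(uint8_t port_id, uint16_t nb_rx_queues)
>>   {
>>       struct rte_eth_dev_info info;
>>       struct rte_eth_conf conf;
>>
>>       rte_eth_dev_info_get(port_id, &info);
>>       /* If the PMD reports fewer RX queues than requested (the emulated
>>        * e1000 path reports only one), RSS cannot spread the load. */
>>       if (info.max_rx_queues < nb_rx_queues)
>>           return -1;
>>
>>       memset(&conf, 0, sizeof(conf));
>>       conf.rxmode.mq_mode = ETH_MQ_RX_RSS;             /* ask for RSS */
>>       conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4; /* hash on IPv4 */
>>
>>       return rte_eth_dev_configure(port_id, nb_rx_queues, 1, &conf);
>>   }
>>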
>> *vmxnet3_usermap:*
>>
>> Then I tried the vmxnet3_usermap extension and got 8.7 Gbps with 1 core.
>> Again, adding another core did not help. On debugging, I noticed that in
>> the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is
>> disabled if *adapter->is_shm* is non-zero. In our case it is set to
>> VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
>>
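>> The check in question is roughly the following (paraphrased from the
>> description above, not the literal driver source; the RSS flag name is
>> only for illustration):
>>
>>   /* In vmxnet3_probe_device(): when the adapter is driven through the
>>    * shared-memory interface (vmxnet3_usermap), is_shm is set to
>>    * VMXNET3_SHM_USERMAP_DRIVER (non-zero) and RSS is forced off. */
>>   if (adapter->is_shm)
>>       rss_enabled = false;   /* hypothetical flag name */
>>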
>> Before trying to enable it, I would like to know whether there is any
>> known limitation that explains why RSS is not enabled in either driver.
>> Please help me understand.
>>
>> *Hardware Configuration:*
>> Hardware          : Intel Xeon 2.4 GHz, 4 CPUs
>> Hyperthreading  : No
>> RAM                 : 16 GB
>> Hypervisor         : ESXi 5.1
>> Ethernet            : Intel 82599EB 10 Gig SFP
>>
>>
>> Guest VM         : 2 vCPU, 2 GB RAM
>> GuestOS          : CentOS 6.2, 32-bit
>>
>> Thanks in advance for your time and help!!!
>>
>> Thanks,
>> Selva.
>>


