[dpdk-dev] L2fwd Performance issue with Virtual Machine

Vincent JARDIN vincent.jardin at 6wind.com
Sat Oct 5 08:32:51 CEST 2013


I disagree, Rashmin. We did measurements with 64-byte packets: the Linux 
kernel of the guest is the bottleneck, so the vmxnet3 PMD helps to 
increase the packet rate of Linux guests.

The PMD helps the guest's packet rate until you reach (of course) the 
bottleneck of the host's vSwitch.

To accelerate the host's vSwitch, you have to run a fast-path-based 
vSwitch on the host too.

Best regards,
   Vincent

On 04/10/2013 23:36, Selvaganapathy Chidambaram wrote:
> Thanks Rashmin for your time and help!
>
> So it looks like with the given hardware config, we could probably only
> achieve around 8 Gbps in a VM without using SR-IOV. Once DPDK is used in
> the vSwitch design, we could gain more performance.
>
>
> Thanks,
> Selvaganapathy.C.
>
>
> On Fri, Oct 4, 2013 at 11:02 AM, Patel, Rashmin N <rashmin.n.patel at intel.com
>> wrote:
>
>> Correction: "you would NOT get optimal performance benefit having PMD"
>>
>> Thanks,
>> Rashmin
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Patel, Rashmin N
>> Sent: Friday, October 04, 2013 10:47 AM
>> To: Selvaganapathy Chidambaram
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hi,
>>
>> If you are not using SR-IOV or direct device assignment to the VM, your
>> traffic hits the vSwitch (via the VMware native ixgbe driver and network
>> stack) in the ESX host and is then switched to the E1000/VMXNET3 interface
>> connected to your VM. The vSwitch is not optimized for PMD at present so
>> you would get optimal performance benefit having PMD, I believe.
>>
>> On the RSS front, I would say you won't see much difference with RSS
>> enabled for 1500-byte frames. In fact, a core is capable of handling such
>> traffic in a VM, but the bottleneck is in the ESXi software switching
>> layer; that's what my initial research shows across multiple hypervisors.
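>>
>> As a rough sanity check (assuming the standard 38 bytes of per-frame
>> Ethernet overhead for header, FCS, preamble and inter-frame gap): 10 Gbps
>> of 1500-byte packets works out to about 10*10^9 / (1538 * 8) ~= 0.81 Mpps
>> per direction, or roughly 1.6 Mpps aggregate for the bidirectional 20 Gbps
>> load. A single core running a poll-mode driver can normally sustain that
>> packet rate, which is consistent with the bottleneck sitting in the
>> hypervisor's switching layer rather than in the guest.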
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Selvaganapathy
>> Chidambaram
>> Sent: Thursday, October 03, 2013 2:39 PM
>> To: dev at dpdk.org
>> Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hello Everyone,
>>
>> I have tried to run the DPDK sample application l2fwd (modified to support
>> multiple queues) in my ESX virtual machine. I see that performance is not
>> scaling with the number of cores. [My apologies for the long email.]
>>
>> *Setup:*
>>
>> Connected the VM to two ports of a Spirent tester over 10 Gig links. Sent
>> 10 Gbps of L3 traffic with 1500-byte packets (four different flows) from
>> the Spirent through one port and received it at the second port. Also sent
>> traffic in the reverse direction, so the net traffic is 20 Gbps. SR-IOV and
>> DirectPath I/O are not enabled.
>>
>> *Emulated Driver:*
>>
>> With the default emulated driver, I got 7.3 Gbps with 1 core. Adding more
>> cores did not improve the performance. On debugging, I noticed that the
>> function eth_em_infos_get() reports that RSS is not supported.
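>>
>> For reference, the multi-queue change boils down to asking for RSS when
>> configuring each port, roughly as in the sketch below (a made-up
>> configure_rss_port() helper, using the DPDK 1.x-era names ETH_MQ_RX_RSS
>> and ETH_RSS_IPV4); whether RSS actually takes effect depends on what the
>> PMD reports for the port:
>>
>>     /* Sketch only: request RSS over several RX queues, falling back to
>>      * one queue when the PMD (e.g. the e1000 emulation) exposes only one. */
>>     #include <rte_ethdev.h>
>>
>>     static int
>>     configure_rss_port(uint8_t port_id, uint16_t nb_rx_queues)
>>     {
>>             struct rte_eth_conf port_conf = {
>>                     .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
>>                     .rx_adv_conf = {
>>                             .rss_conf = { .rss_hf = ETH_RSS_IPV4 },
>>                     },
>>             };
>>             struct rte_eth_dev_info dev_info;
>>
>>             rte_eth_dev_info_get(port_id, &dev_info);
>>             if (nb_rx_queues > dev_info.max_rx_queues)
>>                     nb_rx_queues = 1;   /* device cannot spread flows */
>>
>>             return rte_eth_dev_configure(port_id, nb_rx_queues,
>>                                          nb_rx_queues, &port_conf);
>>     }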
>>
>> *vmxnet3_usermap:*
>>
>> Then I tried the vmxnet3_usermap extension and got 8.7 Gbps with 1 core.
>> Again, adding another core did not help. On debugging, I noticed that in
>> the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is
>> disabled if *adapter->is_shm* is non-zero. In our case, adapter->is_shm is
>> set to VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
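>>
>> As far as I can tell, the check has the following shape; this is only a
>> paraphrased sketch with illustrative field and constant values, not the
>> actual vmxnet3_usermap source:
>>
>>     /* Illustrative sketch of the probe-time guard described above. */
>>     #define VMXNET3_SHM_USERMAP_DRIVER 1
>>
>>     struct vmxnet3_adapter {
>>             int is_shm;   /* VMXNET3_SHM_USERMAP_DRIVER in usermap mode */
>>             int rss;      /* 0 = RSS off, single RX queue */
>>     };
>>
>>     static void
>>     vmxnet3_disable_rss_for_shm(struct vmxnet3_adapter *adapter)
>>     {
>>             /* Any non-zero is_shm value turns RSS off at probe time. */
>>             if (adapter->is_shm)
>>                     adapter->rss = 0;
>>     }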
>>
>> Before trying to enable RSS there, I would like to know whether there is
>> any known limitation that keeps it disabled in both drivers. Please help
>> me understand.
>>
>> *Hardware Configuration:*
>> Hardware       : Intel Xeon 2.4 GHz, 4 CPUs
>> Hyperthreading : No
>> RAM            : 16 GB
>> Hypervisor     : ESXi 5.1
>> Ethernet       : Intel 82599EB 10 Gig SFP
>>
>> Guest VM       : 2 vCPU, 2 GB RAM
>> Guest OS       : CentOS 6.2, 32-bit
>>
>> Thanks in advance for your time and help!!!
>>
>> Thanks,
>> Selva.
>>


