[dpdk-dev] VMXNET 3

Patel, Rashmin N rashmin.n.patel at intel.com
Tue Aug 26 19:11:10 CEST 2014


I forgot to mention that the performance expectation I gave was for "big packets (>512 bytes) consumed by multiple VMs", because ESXi is optimized for enterprise-type workloads.

-----Original Message-----
From: Patel, Rashmin N 
Sent: Tuesday, August 26, 2014 9:30 AM
To: 'Alex Markuze'; dev at dpdk.org
Subject: RE: [dpdk-dev] VMXNET 3

As far as I know, some of us (developers) are submitting patches for the VMXNET3 driver that is inside the DPDK package, in librte_pmd_vmxnet3. I used the other version for benchmarking a while back, but since it was limited to the Linux kernel and I found the librte_pmd_vmxnet3 version more elegant, I moved on to that one.
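For reference, the librte_pmd_vmxnet3 PMD is driven through the standard DPDK ethdev API like any other port, so no vmxnet3-specific calls are needed in the application. Below is a minimal, untested sketch of bringing up a single vmxnet3-backed port and polling it; the function names follow the current ethdev API (e.g. rte_pktmbuf_pool_create), which differs slightly from the 1.7-era calls, and the queue/pool sizes are just placeholder values.

/* Minimal sketch: init EAL, configure one port (assumed to be vmxnet3),
 * and poll it. Constants and port index are illustrative placeholders. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF  8192
#define RX_DESC  512
#define TX_DESC  512
#define BURST_SZ 32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port_id = 0;  /* first enumerated port, assumed vmxnet3 */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
            256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    struct rte_eth_conf port_conf = { 0 };  /* defaults, 1 RX / 1 TX queue */
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port_id, 0, RX_DESC, rte_socket_id(), NULL, mp) < 0 ||
        rte_eth_tx_queue_setup(port_id, 0, TX_DESC, rte_socket_id(), NULL) < 0 ||
        rte_eth_dev_start(port_id) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    struct rte_mbuf *bufs[BURST_SZ];
    for (;;) {
        uint16_t n = rte_eth_rx_burst(port_id, 0, bufs, BURST_SZ);
        for (uint16_t i = 0; i < n; i++)
            rte_pktmbuf_free(bufs[i]);  /* real code would process/forward */
    }
    return 0;
}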

As far as performance is concerned, it is limited by the ESXi vSwitch and the vmxnet3 backend code, so line rate at 10G is still far off unless the vmxnet3 backend processing gets optimized. Today you can achieve roughly 55-60% of 10G in a single VM using VMXNET3; if you want to consume more of the 10G link, you can try increasing the number of VMs.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alex Markuze
Sent: Tuesday, August 26, 2014 3:06 AM
To: dev at dpdk.org
Subject: [dpdk-dev] VMXNET 3

Hi
I'm looking for a reasonable DPDK-based solution in a fully virtualised VMware environment.
From what I've seen, there are several flavours of the VMXNET3 driver for DPDK, and not all of them seem to be alive - the user map one was last updated in May and doesn't compile on DPDK 1.7.

So, to my question: what is the state of the art in DPDK vmxnet3 drivers, and what performance could one expect over a 10/40/56G NIC?

Thanks
Alex.
