[dpdk-dev] Performance issue with vmxnet3 pmd

Patel, Rashmin N rashmin.n.patel at intel.com
Tue Jul 8 02:07:29 CEST 2014


According to my experiments so far, the bottleneck lies in the backend in the hypervisor for para-virtual devices, including Vmxnet3, and hence the different front-end drivers (the stock Vmxnet3 driver or Vmxnet3-PMD) should perform about equally well. I don't have solid numbers to show at the moment, though; I will update you on that.

For now, the main advantage of having a DPDK version of the Vmxnet3 driver is scalability across multiple hypervisors and application portability, keeping in mind that the backend can be optimized and higher throughput can be achieved at a later stage.

Thanks,
Rashmin

From: hyunseok.chang at gmail.com [mailto:hyunseok.chang at gmail.com] On Behalf Of Hyunseok
Sent: Monday, July 07, 2014 4:49 PM
To: Patel, Rashmin N; dev at dpdk.org
Subject: RE: [dpdk-dev] Performance issue with vmxnet3 pmd


Thanks for your response.

I am actually more interested in a comparison of the stock (non-DPDK) vmxnet3 driver vs. the vmxnet3 PMD.

When I forward packets with the stock vmxnet3 driver, I am able to achieve much higher throughput than with the vmxnet3 PMD.  To make the comparison fair, I did not leverage GRO/GSO.

Do any of the overheads you mentioned play a role in this comparison?  Here I am comparing different drivers for the same vmxnet3 interface...

Regards,
Hyunseok
On Jul 7, 2014 7:03 PM, "Patel, Rashmin N" <rashmin.n.patel at intel.com> wrote:
Hi Hyunseok,

We should not compare Vmxnet3-PMD with ixgbe-PMD performance, as the Vmxnet3 device is a para-virtual device; it is not similar to a device directly assigned to a VM either.
There is a VMEXIT/VMENTRY occurrence at the burst-size boundary, and that overhead can't be eliminated unless the design of Vmxnet3 is updated in the future. In addition, the packets are touched in the ESXi hypervisor vSwitch layer between the physical NIC and the virtual machine, which introduces extra overhead that you won't have when a Niantic NIC is used natively or passed through via VT-d to a virtual machine.
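To illustrate where that boundary sits (a rough sketch under my assumptions, not code from either of our tests): in a typical DPDK forwarding loop, the doorbell write that hands each TX burst to the Vmxnet3 device is an MMIO access the hypervisor has to trap, so the exit is taken roughly once per burst call rather than once per packet.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Minimal L2 forwarding loop between two ports (sketch only).
 * On Vmxnet3 the burst of TX descriptors is handed to the device by a
 * doorbell register write; the hypervisor traps that write, so the
 * VMEXIT cost is paid once per burst, not once per packet. */
static void
forward_loop(uint8_t rx_port, uint8_t tx_port)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
                nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
                if (nb_rx == 0)
                        continue;

                nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

                /* Drop whatever the TX ring could not accept. */
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(bufs[i]);
        }
}

A larger burst size amortizes that trap over more packets, but the per-burst exit itself remains, which is the part that can't go away without changing the device model.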

Feature-wise, we can compare it to the Virtio-PMD solution, but again there is a small difference in device handling and backend driver support compared to the Vmxnet3 device, so the performance comparison won't be apples to apples.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Hyunseok
Sent: Monday, July 07, 2014 3:22 PM
To: dev at dpdk.org
Subject: [dpdk-dev] Performance issue with vmxnet3 pmd

Hi,

I was testing l2fwd with the vmxnet3 PMD (included in DPDK).

The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only 2.5 to 2.8 Gbps.

This is in contrast with the ixgbe PMD, with which I could easily achieve a 10 Gbps forwarding rate.

With the original vmxnet3 driver (non-PMD), I could also achieve close to 10 Gbps with multiple iperf streams.  But I can never achieve that rate with the vmxnet3 PMD...

So basically the vmxnet3 PMD doesn't seem that fast.  Is this a known issue?

Thanks,
-Hyunseok
