[dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Yong Wang yongwang at vmware.com
Mon Oct 13 23:00:28 CEST 2014


Only the last one is performance related; it merely gives the compiler hints so that branch prediction is hopefully more efficient, and it also moves a constant assignment out of the packet polling loop.
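
To illustrate the kind of change involved, below is a minimal sketch (not the actual vmxnet3 code): it uses DPDK's unlikely() macro from rte_branch_prediction.h to annotate a rarely taken error branch and hoists a loop-invariant assignment out of the polling loop. The demo_* types and field names are hypothetical and only stand in for the PMD's real receive-queue structures.

#include <stdint.h>
#include <rte_branch_prediction.h>   /* likely()/unlikely() hints */
#include <rte_mbuf.h>

/* Hypothetical descriptor/queue types for illustration only;
 * these are not the vmxnet3 structures. */
struct demo_rx_desc {
        struct rte_mbuf *mbuf;
        uint8_t ready;
        uint8_t error;
};

struct demo_rxq {
        struct demo_rx_desc *ring;
        uint16_t ring_size;
        uint16_t next;
        uint16_t port_id;
        uint64_t rx_errors;
};

static uint16_t
demo_recv_pkts(struct demo_rxq *rxq, struct rte_mbuf **rx_pkts,
               uint16_t nb_pkts)
{
        uint16_t nb_rx = 0;
        /* Loop-invariant assignment hoisted out of the polling loop:
         * read the port id once per burst instead of once per packet. */
        const uint16_t port_id = rxq->port_id;

        while (nb_rx < nb_pkts) {
                struct demo_rx_desc *rxd = &rxq->ring[rxq->next];

                /* An empty descriptor is the normal exit condition. */
                if (!rxd->ready)
                        break;

                rxq->next = (uint16_t)((rxq->next + 1) % rxq->ring_size);

                /* Descriptor errors are rare on a healthy link, so tell
                 * the compiler this branch is unlikely to be taken. */
                if (unlikely(rxd->error)) {
                        rxq->rx_errors++;
                        continue;
                }

                rxd->mbuf->port = port_id;
                rx_pkts[nb_rx++] = rxd->mbuf;
                rxd->ready = 0;
        }
        return nb_rx;
}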

We did a performance evaluation on a Nehalem box with 4 cores at 2.8GHz x 2 sockets:
On the DPDK side, it's running an l3 forwarding app in a VM on ESXi with one core assigned for polling.  The client side is pktgen/dpdk, pumping 64B TCP packets at line rate.  Before the patch, we were seeing ~900K PPS with 65% of a core used by DPDK.  After the patch, we are seeing the same packet rate with only 45% of a core used.  CPU usage is collected factoring out the idle loop cost.  The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts).  I can add this information to the review request.

Yong
________________________________________
From: Thomas Monjalon <thomas.monjalon at 6wind.com>
Sent: Monday, October 13, 2014 1:29 PM
To: Yong Wang
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi,

2014-10-12 23:23, Yong Wang:
> This patch series include various fixes and improvement to the
> vmxnet3 pmd driver.
>
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path

Please, could you describe the performance gain from these patches?
Benchmark numbers would be appreciated.

Thanks
--
Thomas
