[dpdk-dev] [memnic PATCH 0/7] MEMNIC PMD performance improvement

Tetsuya Mukawa mukawa at igel.co.jp
Thu Sep 11 11:11:27 CEST 2014


Hi Shimamoto-san,

(2014/09/11 17:36), Hiroshi Shimamoto wrote:
> Hi Mukawa-san,
>
>> Subject: Re: [dpdk-dev] [memnic PATCH 0/7] MEMNIC PMD performance improvement
>>
>> Hi Shimamoto-san,
>>
>>
>> (2014/09/11 16:45), Hiroshi Shimamoto wrote:
>>> From: Hiroshi Shimamoto <h-shimamoto at ct.jp.nec.com>
>>>
>>> This patchset improves MEMNIC PMD performance.
>>>
>>> The first patch introduces a new benchmark test run in guest,
>>> and will be used to evaluate the following patch effects.
>>>
>>> This patchset improves the throughput results of memnic-tester.
>>> Using Xeon E5-2697 v2 @ 2.70GHz, 4 vCPU.
>> How many cores are you actually using for sending and receiving?
> In this case, I use 4 dedicated cores, one pinned to each vCPU,
> so the answer is 4 cores; more precisely, 2 cores run the test DPDK app.
>
>> I guess 1 dedicated core is used for sending on the host or guest side,
>> and one more dedicated core is used for receiving on the other side.
>> And you got the following performance results.
>> Is this correct?
> I think you can see the test details in the first patch.

Thank you so much. I hadn't checked it yet.
That looks like very nice performance! I'd like to compare it with the
vhost example.

Thanks,
Tetsuya Mukawa

> The test is done entirely in the guest, because I only want to measure
> the PMD performance; the host does nothing during the test.
> In the guest, one thread (1 dedicated core) emulates packet send/recv
> by turning the descriptor flag on and off, while another thread, also
> pinned to 1 dedicated core, calls rx_burst and tx_burst.
> The test measures how many packets the MEMNIC PMD can receive and
> transmit.
> These results show how much throughput the guest application can
> achieve, assuming the host can send and receive packets fast enough.
>
> thanks,
> Hiroshi
>
>> Thanks,
>> Tetsuya Mukawa
>>
>>>  size |  before  |  after
>>>    64 | 4.18Mpps | 5.83Mpps
>>>   128 | 3.85Mpps | 5.71Mpps
>>>   256 | 4.01Mpps | 5.40Mpps
>>>   512 | 3.52Mpps | 4.64Mpps
>>>  1024 | 3.18Mpps | 3.68Mpps
>>>  1280 | 2.86Mpps | 3.17Mpps
>>>  1518 | 2.59Mpps | 2.90Mpps
>>>
>>> Hiroshi Shimamoto (7):
>>>   guest: memnic-tester: PMD benchmark in guest
>>>   pmd: remove needless assignment
>>>   pmd: use helper macros
>>>   pmd: use compiler barrier
>>>   pmd: packet receiving optimization with prefetch
>>>   pmd: add branch hint in recv/xmit
>>>   pmd: split calling mbuf free
>>>
>>>  guest/Makefile        |  20 ++++
>>>  guest/README.rst      |  94 +++++++++++++++++
>>>  guest/memnic-tester.c | 281 ++++++++++++++++++++++++++++++++++++++++++++++++++
>>>  pmd/pmd_memnic.c      |  43 ++++----
>>>  4 files changed, 417 insertions(+), 21 deletions(-)
>>>  create mode 100644 guest/Makefile
>>>  create mode 100644 guest/README.rst
>>>  create mode 100644 guest/memnic-tester.c
>>>
