[dpdk-dev] [PATCH v1 2/2] Test cases for rte_memcmp functions
zhihong.wang at intel.com
Wed Jan 11 02:28:30 CET 2017
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> Sent: Monday, January 9, 2017 7:09 PM
> To: Wang, Zhihong <zhihong.wang at intel.com>
> Cc: Ravi Kerur <rkerur at gmail.com>; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 2/2] Test cases for rte_memcmp
> 2017-01-09 05:29, Wang, Zhihong:
> > From: Thomas Monjalon [mailto:thomas.monjalon at 6wind.com]
> > > 2016-06-07 11:09, Wang, Zhihong:
> > > > From: Ravi Kerur [mailto:rkerur at gmail.com]
> > > > > Zhihong, Thomas,
> > > > >
> > > > > If there is enough interest within DPDK community I can work on
> > > support
> > > > > for 'unaligned access' and 'test cases' for it. Please let me know either
> > > way.
> > > >
> > > > Hi Ravi,
> > > >
> > > > This rte_memcmp has shown better performance than glibc's in many
> > > > cases, so I think it has good value for the DPDK lib.
> > > >
> > > > Though we don't have memcmp in the critical PMD data path, it offers
> > > > a choice for applications that do.
> > >
> > > Re-thinking about this series, could there be some value in having
> > > such an implementation?
> > I think this series (rte_memcmp included) could help:
> > 1. Potentially better performance in hot paths.
> > 2. Agility in tuning.
> > 3. Avoiding performance complications -- unusual but possible,
> >    like the glibc memset issue I met while working on vhost
> >    enqueue.
> > > What is the value compared to glibc one? Why not working on glibc?
> > As to working on glibc, wider design consideration and test
> > coverage might be needed, and we'd face different release
> > cycles; could we have the same agility? Also, working with old
> > glibc versions could be a problem.
> Probably we need both: add the optimized version in DPDK while working
> on a glibc optimization.
> This strategy could be applicable to memcpy, memcmp and memset.
This would help in the long run if it turns out to be feasible.