[dpdk-dev] [PATCH 0/2] dpdk: Allow for dynamic enablement of some isolated features

Neil Horman nhorman at tuxdriver.com
Fri Aug 1 17:06:29 CEST 2014

On Thu, Jul 31, 2014 at 01:25:06PM -0700, Bruce Richardson wrote:
> On Thu, Jul 31, 2014 at 04:10:18PM -0400, Neil Horman wrote:
> > On Thu, Jul 31, 2014 at 11:36:32AM -0700, Bruce Richardson wrote:
> > > On Thu, Jul 31, 2014 at 02:10:32PM -0400, Neil Horman wrote:
> > > > On Thu, Jul 31, 2014 at 10:32:28AM -0400, Neil Horman wrote:
> > > > > On Thu, Jul 31, 2014 at 03:26:45PM +0200, Thomas Monjalon wrote:
> > > > > > 2014-07-31 09:13, Neil Horman:
> > > > > > > On Wed, Jul 30, 2014 at 02:09:20PM -0700, Bruce Richardson wrote:
> > > > > > > > On Wed, Jul 30, 2014 at 03:28:44PM -0400, Neil Horman wrote:
> > > > > > > > > On Wed, Jul 30, 2014 at 11:59:03AM -0700, Bruce Richardson wrote:
> > > > > > > > > > On Tue, Jul 29, 2014 at 04:24:24PM -0400, Neil Horman wrote:
> > > > > > > > > > > Hey all-
> > > 
> > > With regards to the general approach for runtime detection of software
> > > functions, I wonder if something like this can be handled by the
> > > packaging system? Is it possible to ship out a set of shared libs
> > > compiled up for different instruction sets, and then at rpm install
> > > time, symlink the appropriate library? This would push the whole issue
> > > of detection of code paths outside of code, work across all our
> > > libraries and ensure each user got the best performance they could get
> > > from a binary?
> > > Has something like this been done before? The building of all the
> > > libraries could be scripted easily enough: just do multiple builds using
> > > different EXTRA_CFLAGS each time, and move and rename the .so's after
> > > each run.
> > > 
> > 
> > Sorry, I missed this in my last reply.
> > 
> > In answer to your question, the short version is that such a thing is roughly
> > possible from a packaging standpoint, but completely unworkable from a
> > distribution standpoint.  We could certainly build the dpdk multiple times and
> > rename all the shared objects to some variant name representative of the
> > optimizations we build in for certain cpu flags, but then we would be shipping X
> > versions of the dpdk, and any application (say OVS) that made use of the dpdk
> > would need to provide a version linked against each variant to be useful when
> > making a product, and each end user would need to manually select (or run a
> > script to select) which variant is most optimized for the system at hand.  It's
> > just not a reasonable way to package a library.
> Sorry, perhaps I was not clear, having the user have to select the
> appropriate library was not what I was suggesting. Instead, I was
> suggesting that the rpm install "librte_pmd_ixgbe.so.generic",
> "librte_pmd_ixgbe.so.sse42" and "librte_pmd_ixgbe.so.avx". Then the rpm
> post-install script would look at the cpuflags in cpuinfo and then
> symlink librte_pmd_ixgbe.so to the best-match version. That way the user
> only has to link against "librte_pmd_ixgbe.so" and depending on the
> system it's run on, the loader will automatically resolve the symbols
> from the appropriate instruction-set specific .so file.

This is an absolute packaging nightmare: it will potentially break all sorts of
corner cases and support processes.  To cite a few examples:

1) Upgrade support - What if the minimum cpu requirements for dpdk are raised
at some point in the future?  The above strategy has no way to know that a given
update has stricter requirements than a previous update, and when the
update is installed, the previously linked library for the old baseline will
disappear, leaving broken applications behind.

2) Debugging - It's going to be near impossible to support an application built
with a package put together this way, because you'll never be sure which
version of the library was running when the crash occurred.  You can figure it
out, certainly, but requiring support/development people to remember to
figure this out is going to be a major turn-off for them, and the result will be
that they simply won't use the dpdk.  It's anathema to the expectations of Linux
user space.

3) QA - Building multiple versions of a library means needing to QA multiple
versions of a library.  If you have to have 4 builds to support different levels
of optimization, you've created a 4x increase in the amount of testing you need
to do to ensure consistent behavior.  You need to be aware at all times of how
many different builds are shipped in the single rpm, and find systems on which
to QA each of them (as they are, in fact, unique builds).  While a single build
may not exercise every optional code path, it will at least get all the common
paths tested.

The bottom line is that Distribution packaging is all about consistency and
commonality.  If you install something for an arch on multiple systems, it's the
same thing on each system, and it works in the same way, all the time.  This
strategy breaks that.  That's why we do run time checks for things.


> > 
> > When packaging software, the only consideration given to code variance at package
> > time is architecture (x86/x86_64/ppc/s390/etc).  If you install a package for
> > a given architecture, it's expected to run on that architecture.  Optional
> > code paths are just that, optional, and executed based on run time tests.  It's a
> > requirement that we build for the lowest common denominator system that is
> > supported, and enable accelerated code paths optionally at run time when the
> > cpu indicates support for them.
> > 
> > Neil
> > 
