[dpdk-dev] [dpdk-dev, 01/17] build: add initial infrastructure for meson & ninja builds
nhorman at tuxdriver.com
Fri Sep 8 13:57:06 CEST 2017
On Fri, Sep 08, 2017 at 09:50:26AM +0100, Bruce Richardson wrote:
> On Thu, Sep 07, 2017 at 12:21:57PM -0400, Neil Horman wrote:
> > On Fri, Sep 01, 2017 at 11:04:00AM +0100, Bruce Richardson wrote:
> > > To build with meson and ninja, we need some initial infrastructure in
> > > place. The build files for meson always need to be called "meson.build",
> > > and options get placed in meson_options.txt
> > >
> > > This commit adds a top-level meson.build file, which sets up the global
> > > variables for tracking drivers, libraries, etc., and then includes other
> > > build files, before finishing by writing the global build configuration
> > > header file and a DPDK pkgconfig file at the end, using some of those same
> > > globals.
> > >
> > > From the top level build file, the only include file thus far is for the
> > > config folder, which does some other setup of global configuration
> > > parameters, including pulling in architecture specific parameters from an
> > > architectural subdirectory. A number of configuration build options are
> > > provided for the project to tune a number of global variables which will be
> > > used later e.g. max numa nodes, max cores, etc. These settings all make
> > > their way to the global build config header "rte_build_config.h". There is
> > > also a file "rte_config.h", which includes "rte_build_config.h", and this
> > > file is meant to hold other build-time values which are present in our
> > > current static build configuration but are not normally meant for
> > > user-configuration. Ideally, over time, the values placed here should be
> > > moved to the individual libraries or drivers which want those values.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> > > Reviewed-by: Harry van Haaren <harry.van.haaren at intel.com>
> > I feel like I need to underscore my previous concern here. While I'm not
> > opposed per-se to a new build system, I am very concerned about the burden that
> > switching places on downstream consumers, in particular distributions (since I
> > represent one of them). Moving to a new build system with new tools means those
> > tools need to be packaged, tested and shipped, which is a significant work
> > effort. While it might be a net gain long term, it's something you need to keep
> > in mind when making these changes.
> Understood. If there is anything we/I can do to make this transition
> easier, please flag it for consideration.
Thank you, I appreciate that.
> > I know you've said that we will be keeping the existing build system,
> > I just need to be sure everyone understands just how important that
> > is.
> What is your feeling here, in terms of timescale? After any new system
> reaches feature parity, how long would you estimate that we would need
> to support the existing makefile system before it would be safe to
> deprecate it? Should we start a deprecation plan, or is it best just to
> commit to support both until we get all - or almost all - downstream
> consumers switched over? While I wouldn't push for deprecating the old
> system any time soon, and I wouldn't consider maintaining the two
> unduly burdensome, it's not something we want to do in the long term.
I was hoping to avoid putting a specific time frame on it, but it's a fair
question to ask. I feel like any particular timetable is somewhat arbitrary.
Keith suggested a year, which is likely as good as any in my mind. To put a bit
more detail behind it, a RHEL release cycle is anywhere from 6 to 18 months, so
a year fits well. If we assume the clock started a few weeks back, when you
first proposed this change, and that it's going to be merged, that gives us
time to package the build components, build the new package using them, get it
through a QA cycle, and fix anything that pops up as a result. That way, when
the switch is made, it can be done with an immediate deprecation of the old
build system and a level of confidence that some of the more esoteric build
targets/configs will continue to work.
> > Though perhaps the time frame for keeping the current build system as primary is
> > less concerning, as feature parity is even more critical. That is to say, the
> > new build system must be able to produce the same configurations that the
> > current build system does. Without that, I don't think anyone will be able to use
> > it consistently, and that will leave a great number of users in a very poor
> > position. I think getting a little closer to parity with the current system is
> > warranted. I'd suggest as a gating factor:
> > 1) Building on all supported arches
> > 2) Cross building on all supported arches
> > 3) Proper identification of targeted machine (i.e. equivalent of the machine
> > component of the current build system)
> The question there is gating factor for what? Presumably not for merging
> into the staging tree. But for merging into the main tree for releases?
> I'd push back a little on that, as the new system does not interfere in
> any way with the old, and keeping it in a staging tree until it
> reaches full feature parity will make the job considerably harder. For
> example, it means that anyone submitting a new driver or library has to
> submit the code and makefiles in one set and the meson patches in a
> separate one for a separate build tree. It also makes it less likely
> that people will try out the new system and find the issues with it, and
> help fill in the gaps. While I can understand us not recommending the
> new build system until it reaches feature parity, I think there are a
> lot of benefits to be got by making it widely available, even if it's
> not yet the recommended system.
Yes, sorry, the implied "what" here is gating its introduction to mainline. I
have no problem with this going into a development or staging branch/tree, only
with it getting merged to mainline and becoming the primary build system today.
I get that it makes reaching feature parity harder, but not doing so relegates
anyone that hasn't had a chance to test the new build system to second-class
citizen status (or at least potentially does so). To be a bit more specific, I
can see how energized you might be to get this in place now because you've
tested it on a wide array of Intel hardware, but I'm guessing that if it went
in today, people at IBM and Linaro would have to drop active development to
start switching their build environments over to the new system lest they get
left out in the cold. I think it's more about balancing where the hardship lies
here.
As I'm writing this, I wonder if a reasonable compromise couldn't involve the
use of CI? That is to say, what if we integrated the build system now-ish, and
stood up an official CI instance that both:
1) builds the dpdk in all supported configurations, mandating the use of the old
build system (i.e. we implicitly mandate that the current build system stays
working, and is not forgotten), and gates patch merges on that result, and
2) adds a test that any change to a meson file in mainline also includes a
change to the corresponding Makefile.
I'm just spitballing here, but I'm looking for ways to enforce the continued use
of the current build system above and beyond a verbal promise to do so. The
idea is to ensure that it stays operational and primary to the development of
dpdk until build system parity is reached.
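Check 2 above could be sketched as a small script run over a patch's changed-file list. To be clear, the function name and the pairing policy (a same-directory Makefile) are hypothetical illustrations, not an existing DPDK CI rule:

```shell
# Sketch of CI check 2: fail if a change set touches a meson.build file in
# some directory without also touching that directory's Makefile.
# Reads one changed-file path per line on stdin, e.g. from
# "git diff --name-only origin/master...". Assumes paths have no spaces.
check_meson_makefile_pairing() {
    status=0
    changed=$(cat)
    for f in $changed; do
        case "$f" in
        *meson.build)
            dir=$(dirname "$f")
            # Is the sibling Makefile also part of the change set?
            if ! printf '%s\n' "$changed" | grep -qx "$dir/Makefile"; then
                echo "FAIL: $f changed without $dir/Makefile"
                status=1
            fi
            ;;
        esac
    done
    return $status
}
```

It would be wired into the gate as, say, `git diff --name-only HEAD~1 | check_meson_makefile_pairing`.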
> On a semi-related note, some folks here are chomping at the bit for it
> to get mainlined, as they want the improved build time for recompiles
> to speed up their development process. They can work off the posted
> patches, but it's more painful than having it locked in.
I'm sure they are, but they are just one segment of the community. While it's
faster for them, it also potentially causes anyone not on that platform a
headache, because they have to figure out how to integrate the new system into
their workflow.
> > Specific notes inline
> > > ---
> > > config/meson.build | 69 +++++++++++++++++++++++++++++++++++++++++
> > > config/rte_config.h | 50 ++++++++++++++++++++++++++++++
> > > config/x86/meson.build | 70 ++++++++++++++++++++++++++++++++++++++++++
> > > meson.build | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > > meson_options.txt | 6 ++++
> > > 5 files changed, 278 insertions(+)
> > > create mode 100644 config/meson.build
> > > create mode 100644 config/rte_config.h
> > > create mode 100644 config/x86/meson.build
> > > create mode 100644 meson.build
> > > create mode 100644 meson_options.txt
> > >
> > > diff --git a/config/meson.build b/config/meson.build
> > > new file mode 100644
> > > index 000000000..3a6bcc58d
> > > --- /dev/null
> > > +++ b/config/meson.build
> > > @@ -0,0 +1,69 @@
> > > +# BSD LICENSE
> > > +#
> > > +# Copyright(c) 2017 Intel Corporation. All rights reserved.
> > > +# All rights reserved.
> > > +#
> > > +# Redistribution and use in source and binary forms, with or without
> > > +# modification, are permitted provided that the following conditions
> > > +# are met:
> > > +#
> > > +# * Redistributions of source code must retain the above copyright
> > > +# notice, this list of conditions and the following disclaimer.
> > > +# * Redistributions in binary form must reproduce the above copyright
> > > +# notice, this list of conditions and the following disclaimer in
> > > +# the documentation and/or other materials provided with the
> > > +# distribution.
> > > +# * Neither the name of Intel Corporation nor the names of its
> > > +# contributors may be used to endorse or promote products derived
> > > +# from this software without specific prior written permission.
> > > +#
> > > +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > > +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > > +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > > +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> > > +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > > +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> > > +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> > > +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > > +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > > +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> > > +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > > +
> > > +# set the machine type and cflags for it
> > > +machine = get_option('machine')
> > > +dpdk_conf.set('RTE_MACHINE', machine)
> > > +add_project_arguments('-march=@0@'.format(machine), language: 'c')
> > So, in the current build system, arch defined the processor architecture, while
> > 'machine' defined the specific processor family (nhm, ivb, etc). This seems
> > like you are merging those two concepts together. While that seems reasonable,
> > is that going to be workable with non-x86 architectures?
> I'm not sure I am, but I'm not familiar enough with other architectures
> to be sure. I'd appreciate some feedback from those more familiar with
> ARM/PPC to help support the effort to be sure.
> For now, in this set, the machine value is being used as it is now on IA: as
> a tuning flag to be passed to the compiler. The actual architecture is
> pulled from host_machine.cpu_family() - which is the final target
> machine in the case of a cross build.
> > Have you considered using the cross-script option in meson to define a per arch
> > build file? That I think would eliminate some of this top level parsing of arch
> > options
> No, I haven't looked into that yet, but I will do so shortly. I'd still
> look to get this in as a starting baseline and then modify it as needed.
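To make the cross-script suggestion concrete, here is a sketch of what a meson cross file might look like. The file name, toolchain triplet, and CPU values are illustrative assumptions, not a tested DPDK configuration:

```shell
# Write an illustrative meson cross file for a 64-bit ARM target.
# [binaries] and [host_machine] are the sections meson reads; the
# specific compiler triplet and cpu values here are only examples.
cat > aarch64_cross.txt <<'EOF'
[binaries]
c = 'aarch64-linux-gnu-gcc'
ar = 'aarch64-linux-gnu-ar'
strip = 'aarch64-linux-gnu-strip'

[host_machine]
system = 'linux'
cpu_family = 'aarch64'
cpu = 'cortex-a53'
endian = 'little'
EOF

# It would then be passed at configure time, e.g.:
# meson build --cross-file aarch64_cross.txt
```

With such a file, the per-arch detection in the top-level config could in principle key off host_machine values supplied by the cross file rather than parsing arch options itself.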
> > > +# some libs depend on maths lib
> > > +add_project_link_arguments('-lm', language: 'c')
> > > +
> > > +# add -include rte_config to cflags
> > > +add_project_arguments('-include', 'rte_config.h', language: 'c')
> > > +
> > > +# disable any unwanted warnings
> > > +unwanted_warnings = [
> > > + '-Wno-address-of-packed-member',
> > > + '-Wno-format-truncation'
> > > +]
> > > +foreach arg: unwanted_warnings
> > > + if cc.has_argument(arg)
> > > + add_project_arguments(arg, language: 'c')
> > > + endif
> > > +endforeach
> > > +
> > > +compile_time_cpuflags = []
> > > +if host_machine.cpu_family().startswith('x86')
> > > + arch_subdir = 'x86'
> > > + subdir(arch_subdir)
> > > +endif
> > > +dpdk_conf.set('RTE_COMPILE_TIME_CPUFLAGS', ','.join(compile_time_cpuflags))
> > > +
> > Likewise, I think if you use the --cross-script approach, this logic gets
> > eliminated in favor of a file pointer from the command line
> > <snip>
> I'd rather not force the use of a cross-script if not cross-compiling.
> > > +
> > > +# set up some global vars for compiler, platform, configuration, etc.
> > > +cc = meson.get_compiler('c')
> > > +dpdk_conf = configuration_data()
> > > +dpdk_libraries = []
> > > +dpdk_drivers = []
> > > +dpdk_extra_ldflags = []
> > > +
> > > +# for static libs, treat the drivers as regular libraries, otherwise
> > > +# for shared libs, put them in a driver folder
> > > +if get_option('default_library') == 'static'
> > > + driver_install_path = get_option('libdir')
> > > +else
> > > + driver_install_path = '@0@/dpdk/drivers'.format(get_option('prefix'))
> > > +endif
> > > +
> > So, I like this, as it appears to default to shared library builds, which is
> > great. Unfortunately, it doesn't seem to work for me when using this command:
> > meson -Ddefault_library=static -Dlibdir=./build/lib . build
> meson --default-library=static ...
> > If I do that and then run ninja in my build directory, I still get DSO's not
> > static libraries. I am assuming that I'm doing something subtly wrong in my
> > build, but I can't seem to see what it is.
> > On the other hand, if static builds don't work yet, that's going to be an issue.
> > <snip>
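For reference, the configure invocations under discussion might look as follows. The build-directory names are arbitrary, and the commands are only echoed here (running them requires meson); the long-option form follows Bruce's correction above:

```shell
# Sketch: static vs. shared configuration with meson. Builtin options such
# as default_library are set at configure time, using the long-option form
# in this meson era. The functions just print the intended command lines.
configure_static() {
    echo "meson build-static --default-library=static"
}

configure_shared() {
    # default_library defaults to 'shared', so no flag is needed.
    echo "meson build-shared"
}

configure_static
configure_shared
```

After configuring, `ninja -C build-static` (or `build-shared`) would perform the actual build.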
> > > +# configure the build, and make sure configs here and in config folder are
> > > +# able to be included in any file. We also store a global array of include dirs
> > > +# for passing to pmdinfogen scripts
> > > +global_inc = include_directories('.', 'config')
> > > +subdir('config')
> > > +
> > > +# TODO build libs and drivers
> > > +
> > > +# TODO build binaries and installable tools
> > > +
> > This seems outdated, but I think you remove it in a later patch
> Yep. I put them in as comment placeholders in the early patches to show
> where things would go in later ones to try and make the flow clearer
> i.e. they were deliberately put in, even though removed later.
> Thanks for the feedback.