[dpdk-dev] Proposal for a new Committer model

Stephen Hemminger stephen at networkplumber.org
Sun Nov 20 05:17:16 CET 2016


Why aren't some patches marked as trivial and accepted right away?

On Fri, Nov 18, 2016 at 11:06 AM, Jerin Jacob <jerin.jacob at caviumnetworks.com> wrote:

> On Fri, Nov 18, 2016 at 01:09:35PM -0500, Neil Horman wrote:
> > On Thu, Nov 17, 2016 at 09:20:50AM +0000, Mcnamara, John wrote:
> > > Repost from the moving at dpdk.org mailing list to get a wider audience.
> > > Original thread: http://dpdk.org/ml/archives/moving/2016-November/000059.html
> > >
> > >
> > > Hi,
> > >
> > > I'd like to propose a change to the DPDK committer model. Currently
> > > we have one committer for the master branch of the DPDK project.
> > >
> > > One committer to master represents a single point of failure and can
> > > at times be inefficient. There is also no agreed cover for times when
> > > the committer is unavailable, such as vacation, public holidays, etc.
> > > I propose that we change to a multi-committer model for the DPDK
> > > project. We should have three committers for each release who can
> > > commit changes to the master branch.
> > >
> > > There are a number of benefits:
> > >
> > > 1. Greater capacity to commit patches.
> > > 2. No single point of failure - a committer should always be
> > > available if we have three.
> > > 3. More timely committing of patches. More committers should mean a
> > > faster turnaround - ideally, maintainers should also provide feedback
> > > on submitted patches within 2-3 days, as much as possible, to
> > > facilitate this.
> > > 4. It follows best practice in creating a successful multi-vendor
> > > community - to achieve this we must ensure there is a level playing
> > > field for all participants; no single person should be required to
> > > make all of the decisions on patches to be included in the release.
> > >
> > > Having multiple committers will require some degree of co-ordination,
> > > but a number of other communities, such as Apache, OVS, FD.io, and
> > > OpenStack, successfully follow this model, so the approach is workable.
> > >
> > > John
> >
> > I agree that the problems you are attempting to address exist and are
> > worth finding a solution for.  That said, I don't think the solution you
> > are proposing is the ideal or complete fix for any of the issues being
> > addressed.
> >
> > If I may, I'd like to enumerate the issues I think you are trying to
> > address based on your comments above, then make a counter-proposal for a
> > solution:
> >
> > Problems to address:
> >
> > 1) high-availability - There is a desire to make sure that, when patches
> > are proposed, they are integrated in a timely fashion.
> >
> > 2) high-throughput - DPDK has a large volume of patches, more than one
> > person can normally integrate.  There is a desire to shard that work such
> > that it is handled by multiple individuals.
> >
> > 3) Multi-Vendor fairness - There is a desire for multiple vendors to feel
> > as though the project tree maintainer isn't biased toward any individual
> > vendor.
> >
> > To solve these I would propose the following solution (which is similar
> > to, but not quite identical to, yours).
> >
> > A) Further promote subtree maintainership.  This was a conversation that
> > I proposed some time ago, but my proposed granularity was discarded in
> > favor of something that hasn't worked as well (in my opinion).  That is
> > to say, a few driver pmds (i40e and fm10k come to mind) have their own
> > trees that send pull requests to Thomas.  We should be sharding that at
> > a much higher granularity and using it much more consistently.  That is
> > to say, we should have a maintainer for all the ethernet pmds, and
> > another for the crypto pmds, another for the core eal layer, another for
> > misc libraries that have low patch volumes, etc.  Each of those
> > subdivisions should have its own list to communicate on, and each should
> > have a tree that integrates patches for its own subsystem and sends pull
> > requests to Thomas on a regular cycle.  Thomas, in turn, should by and
> > large only be integrating pull requests.  This should address our
> > high-throughput issue, in that it will allow multiple maintainers to
> > share the workload, and integration should be relatively easy.
>
> +1
>
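As a rough illustration of the sub-tree flow described in (A): a sub-tree
maintainer might apply reviewed patches to their tree and, at the agreed
point in the cycle, generate a pull request for the main tree.  The tree
name, tag, and URL below are only assumptions for the sketch, not an agreed
DPDK workflow:

    # apply patches already reviewed on the sub-tree's own list
    # (mbox file name is illustrative)
    git am crypto-reviewed-series.mbox

    # publish the sub-tree so the main tree maintainer can fetch it
    git push origin master

    # summarize everything merged since the last release candidate tag
    # (tag and repository URL are illustrative) for mailing to the
    # main tree maintainer
    git request-pull v16.11-rc1 git://dpdk.org/next/dpdk-next-crypto master > pr.txt

The main tree maintainer then only reviews the diffstat in pr.txt and merges
the sub-tree, rather than re-applying individual patches.
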
> >
> > B) Designate alternates to serve as backups for the maintainer when they
> > are unavailable.  This provides high-availability, and sounds very much
> > like your proposal, but in the interests of clarity, there is still a
> > single maintainer at any one time; it just may change to ensure the
> > continued merging of patches if the primary maintainer isn't available.
> > Ideally, however, those backup alternates aren't needed, because most of
> > the primary maintainer's work is merging pull requests, which are done
> > based on the trust of the submaintainer, and done during a very limited
> > window of time.  This also partially addresses multi-vendor fairness if
> > your subtree maintainers come from multiple participating companies.
>
> +1
>
> >
> > Regards
> > Neil
> >
> >
>

