[dpdk-dev] [PATCH] net/bonding: propagate promiscuous mode in mode 4

Chas Williams 3chas3 at gmail.com
Tue Sep 11 05:31:27 CEST 2018


On Mon, Aug 6, 2018 at 3:35 PM Matan Azrad <matan at mellanox.com> wrote:
>
>
> Hi Chas
>
> From: Chas Williams
> >On Mon, Aug 6, 2018 at 1:46 PM Matan Azrad <matan at mellanox.com> wrote:
> >Hi Chas
> >
> >From: Chas Williams
> >>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <matan at mellanox.com> wrote:
> >>Hi Chas
> >>
> >> From: Chas Williams [mailto:3chas3 at gmail.com] On Thu, Aug 2, 2018 at 1:33
> >>> PM Matan Azrad <matan at mellanox.com> wrote:
> >>> >
> >>> > > I suggest to do it as follows:
> >>> > > add one more parameter for LACP which defines how to configure the
> >>> > > LACP MC group - lacp_mc_grp_conf:
> >>> > > 1. rte_flow.
> >>> > > 2. flow director.
> >>> > > 3. add_mac.
> >>> > > 4. set_mc_add_list
> >>> > > 5. allmulti
> >>> > > 6. promiscuous
> >>> > > Maybe more... or less :)
> >>> > >
> >>> > > This way the user decides how to do it; if it fails for a slave, the
> >>> > > slave should be rejected.
> >>> > > Conflict with another configuration (for example, calling promiscuous
> >>> > > disable while running LACP with lacp_mc_grp_conf=promiscuous) should
> >>> > > raise an error.
> >>> > >
> >>> > > What do you think?
> >>> > >
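(For illustration only - neither the parameter nor the enum below exists in
DPDK; this just spells out the options listed above as a C sketch, with
invented names:)

  /* Hypothetical sketch of the proposed lacp_mc_grp_conf values; the
   * identifiers are invented here and are not part of the bonding PMD. */
  enum lacp_mc_grp_conf {
          LACP_MC_GRP_RTE_FLOW,      /* per-slave rte_flow rule on the LACP MAC  */
          LACP_MC_GRP_FLOW_DIRECTOR, /* legacy flow director filter              */
          LACP_MC_GRP_ADD_MAC,       /* rte_eth_dev_mac_addr_add() on each slave */
          LACP_MC_GRP_MC_ADDR_LIST,  /* rte_eth_dev_set_mc_addr_list()           */
          LACP_MC_GRP_ALLMULTI,      /* rte_eth_allmulticast_enable()            */
          LACP_MC_GRP_PROMISC,       /* rte_eth_promiscuous_enable() (fallback)  */
  };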
> >>> >
> >>> > Supporting an LACP mc group specific configuration does make sense,
> >>> > but I wonder if this could just be handled by default during slave add.
> >>> >
> >>> >
> >>> > 1 and 2 are essentially the same hardware filtering offload mode, and
> >>> > the other modes are irrelevant if this is enabled, it should not be
> >>> > possible to add the slave if the bond is configured for this mode, or
> >>> > possible to change the bond into this mode if an existing slave
> >>> > doesn't support it.
> >>>
> >>> >
> >>> > 3 should be the default expected behavior, but
> >>> > rte_eth_bond_slave_add() should fail if the slave being added doesn't
> >>> > support either adding the MAC to the slave or adding the LACP MC address.
> >>> >
> >>> > Then the user could try either rte_eth_allmulticast_enable() on the
> >>> > bond port and then try to add the slave again, which should fail if
> >>> > existing slave didn't support allmulticast or the add slave would fail
> >>> > again if the slave didn't support allmulticast  and finally just call
> >>> > rte_eth_promiscuous_enable() on the bond and then try to re-add the
> >>> > that slave.
> >>> >
> >>> > but maybe having an explicit configuration parameter would be better.
> >>>
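(A minimal sketch of that fallback order on a single slave, assuming the
current ethdev API where the enable calls return an error code; the helper
name is invented and this is not the bonding PMD's actual code:)

  #include <rte_ethdev.h>
  #include <rte_ether.h>

  static int
  subscribe_lacp_traffic(uint16_t slave_port)
  {
          /* 01:80:C2:00:00:02 is the IEEE 802.3 slow-protocols (LACP) group MAC. */
          struct rte_ether_addr lacp_mac = {
                  .addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
          };

          /* 1. Try a dedicated MAC filter for the LACP group address. */
          if (rte_eth_dev_mac_addr_add(slave_port, &lacp_mac, 0) == 0)
                  return 0;

          /* 2. Fall back to allmulticast. */
          if (rte_eth_allmulticast_enable(slave_port) == 0)
                  return 0;

          /* 3. Last resort: promiscuous mode. */
          return rte_eth_promiscuous_enable(slave_port);
  }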
> >>> I'm not sure you understand exactly what I'm suggesting here, so again:
> >>> I suggest to add a new parameter to the LACP mode called
> >>> lacp_mc_grp_conf (or something else).
> >>> So, when the user configures LACP (mode 4) he must set the
> >>> lacp_mc_grp_conf parameter to one of the options I suggested.
> >>> This parameter is not per slave; it means the bond PMD will use the selected
> >>> option to configure the LACP MC group for all the slave ports.
> >>>
> >>> If one of the slaves doesn't support the selected option it should be rejected.
> >>> Conflicts should raise an error.
> >>>
> >>> I agree here.  Yes, if a slave can't manage to subscribe to the multicast group,
> >>> an error should be raised.  The only way for this to happen is that you don't
> >>> have promisc support, which is the ultimate fallback.
> >>
> >>> The advantages are:
> >>> The user knows which option is better to synchronize with his application.
> >>> The user knows better than the bond PMD what the slaves' capabilities are.
> >>> All the slaves are configured in the same way - consistent traffic.
> >>>
> >>>
> >>> It would be ideal if all the slaves had the same features and
> >>> capabilities.  That wasn't enforced before, so this would be a new restriction
> >>> that would be less flexible than what we currently have.  That doesn't seem like
> >>> an improvement.
> >>
> >>> The bonding user probably doesn't care which mode is used.
> >>> The bonding user just wants bonding to work.  He doesn't care about the details.   If I am writing
> >>> an application with this proposed API, I need to make a list of adapters and
> >>> what they support (and keep this up to date as DPDK evolves).  Ugh.
> >>
> >>Applications commonly know the capabilities of the NICs they work with.
> >>
> >>I know of at least one big application which is really suffering because the bond
> >>configures promiscuous in mode 4 without the application asking (it's considered there as a bug in DPDK).
> >>I think that providing another option will be better.
> >>
> >>I think providing another option will be better as well.  However, we disagree on the option.
> >>If the PMD has no other way to subscribe to the multicast group, it has to use promiscuous mode.
> >
> >>Yes, it is true, but there are a lot of other, better options; promiscuous is greedy! It should be the last alternative to use.
> >
> >Unfortunately, it's the only option implemented.
>
> Yes, I know; I suggest to change it, or at least not to make it worse.
>
> >>Providing a list of options only makes life complicated for the developer and doesn't really
> >>make any difference in the end results.
> >
> >>It makes a big difference, for example:
> >>Let's say the bonding groups 2 devices that support rte_flow.
> >>The user wants neither promiscuous nor allmulticast; he just wants to get his MAC traffic + LACP MC group traffic (a realistic use case).
> >>If he has an option to tell the bond PMD "please use rte_flow to configure the specific LACP MC group", it will be great.
> >>Think how much work these applications must do with the current behavior.
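(A rough sketch, assuming both slaves support rte_flow and using current
rte_flow field names, of the kind of rule such an option could install per
slave; the queue choice and the helper name are assumptions, not existing
bonding PMD behavior:)

  #include <rte_flow.h>
  #include <rte_ether.h>

  static struct rte_flow *
  lacp_mc_grp_flow(uint16_t slave_port)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          /* Match the LACP group destination MAC 01:80:C2:00:00:02 exactly. */
          struct rte_flow_item_eth eth_spec = {
                  .hdr.dst_addr.addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 },
          };
          struct rte_flow_item_eth eth_mask = {
                  .hdr.dst_addr.addr_bytes = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
          };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          /* Deliver matching frames to Rx queue 0. */
          struct rte_flow_action_queue queue = { .index = 0 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_error error;

          return rte_flow_create(slave_port, &attr, pattern, actions, &error);
  }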
> >
> >The bond PMD should already know how to do that itself.
>
> The bond can do it, with a lot of complexity, but again the user must know what the bond chose in order to stay synchronized.
> So, I think it's better that the user defines it, because it is a traffic configuration (the same as the promiscuous configuration - the user configures it).
> >  Again, you are forcing more work on the user to ask them to select between the methods.
>
> We can make promiscuous the default option, as it is now.
>
> >>  For instance, if the least common denominator between the two PMDs is promiscuous mode,
> >> you are going to be forced to run both in promiscuous mode
> >>instead of selecting the best mode for each PMD.
> >
> >>In this case promiscuous is better.
> >>Using a different configuration is worse and against the bonding PMD principle of getting consistent traffic from the slaves.
> >>So, if one uses allmulti and one uses promiscuous, the application may get inconsistent traffic,
> >>and it may trigger a lot of problems and complications for some applications.
> >
> >Those applications should already have those problems.
> >  I can make the counter
> >argument that there are potentially applications relying on the broken behavior.
>
> You're right. So adding allmulticast will require changes in these applications.
>
> >We need to ignore those issues and fix this the "right" way.  The "right" way IMHO
> >is to pass the least amount of traffic possible in each case.
>
> Not at the cost of inconsistency, but it looks like we don't agree here.
>

I have recently run into this issue again with a device that doesn't
support promiscuous, but does let me subscribe to the appropriate
multicast groups.  At this point, I am leaning toward adding another
call to the bonding API so that the user can provide a callback to
set up whatever they want on the slaves.  The default setup routine
would be to enable promiscuous.
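
A rough sketch of the shape of that idea (the names below are invented for
illustration; no such callback exists in the bonding API today):

  /* Hypothetical: called by the bond PMD for each slave instead of the
   * hard-coded promiscuous enable. */
  typedef int (*bond_slave_rx_setup_cb_t)(uint16_t bond_port,
                                          uint16_t slave_port, void *arg);

  /* Hypothetical registration call; the default callback would simply
   * enable promiscuous mode, preserving today's behavior. */
  int rte_eth_bond_8023ad_slave_setup_cb_register(uint16_t bond_port,
                                                  bond_slave_rx_setup_cb_t cb,
                                                  void *arg);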

Comments?

