[dpdk-dev] [PATCH] net/bonding: propagate promiscuous mode in mode 4

Matan Azrad matan at mellanox.com
Mon Aug 6 19:46:11 CEST 2018


Hi Chas

From: Chas Williams 
>On Fri, Aug 3, 2018 at 1:47 AM Matan Azrad <matan at mellanox.com> wrote:
>Hi Chas
>
> From: Chas Williams [mailto:3chas3 at gmail.com] On Thu, Aug 2, 2018 at 1:33
>> PM Matan Azrad <matan at mellanox.com> wrote:
>> >
>> > > I suggest doing it as follows:
>> > > Add one more parameter for LACP which says how to configure the
>> > > LACP MC group - lacp_mc_grp_conf (see the sketch after this list):
>> > > 1. rte_flow.
>> > > 2. flow director.
>> > > 3. add_mac.
>> > > 4. set_mc_addr_list.
>> > > 5. allmulti.
>> > > 6. promiscuous.
>> > > Maybe more... or less :)
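
For illustration only (no such knob exists in DPDK today; the names below
are made up), the parameter could map to an enum like:

    /* Hypothetical: how the bond PMD subscribes every slave to the
     * LACP MC group. */
    enum lacp_mc_grp_conf {
            LACP_MC_GRP_RTE_FLOW,      /* 1. per-slave rte_flow rule */
            LACP_MC_GRP_FLOW_DIRECTOR, /* 2. legacy flow director filter */
            LACP_MC_GRP_ADD_MAC,       /* 3. rte_eth_dev_mac_addr_add() */
            LACP_MC_GRP_SET_MC_LIST,   /* 4. rte_eth_dev_set_mc_addr_list() */
            LACP_MC_GRP_ALLMULTI,      /* 5. rte_eth_allmulticast_enable() */
            LACP_MC_GRP_PROMISC,       /* 6. rte_eth_promiscuous_enable() */
    };
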
>> > >
>> > > This way the user decides how to do it; if it fails for a slave,
>> > > the slave should be rejected.
>> > > A conflict with another configuration (for example, calling
>> > > promiscuous disable while running LACP with lacp_mc_grp_conf=6)
>> > > should raise an error.
>> > >
>> > > What do you think?
>> > >
>> >
>> > Supporting an LACP mc group specific configuration does make sense,
>> > but I wonder if this could just be handled by default during slave add.
>> >
>> >
>> > 1 and 2 are essentially the same hardware filtering offload mode, and
>> > the other modes are irrelevant if this is enabled; it should not be
>> > possible to add the slave if the bond is configured for this mode, or
>> > to change the bond into this mode if an existing slave doesn't
>> > support it.
>> 
>> >
>> > 3 should be the default expected behavior, but
>> > rte_eth_bond_slave_add() should fail if the slave being added doesn't
>> > support either adding the MAC to the slave or adding the LACP MC address.
>> >
>> > Then the user could try rte_eth_allmulticast_enable() on the bond port
>> > and try to add the slave again (which should fail if an existing slave
>> > didn't support allmulticast, or the add would fail again if the new
>> > slave didn't support it), and finally just call
>> > rte_eth_promiscuous_enable() on the bond and then try to re-add that
>> > slave (see the sketch below).
>> >
>> > but maybe having an explicit configuration parameter would be better.
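
For illustration, that fallback ladder could look roughly like the
hypothetical helper below (the function name is made up; error handling
trimmed):

    #include <rte_ethdev.h>

    /* Hypothetical helper: try the cheapest option first,
     * promiscuous last. */
    static int
    slave_join_lacp_mc_grp(uint16_t slave_port, struct ether_addr *lacp_mac)
    {
            /* 3. program the group MAC as an extra MAC filter */
            if (rte_eth_dev_mac_addr_add(slave_port, lacp_mac, 0) == 0)
                    return 0;
            /* 5. fall back to all-multicast */
            rte_eth_allmulticast_enable(slave_port);
            if (rte_eth_allmulticast_get(slave_port) == 1)
                    return 0;
            /* 6. last resort: promiscuous */
            rte_eth_promiscuous_enable(slave_port);
            return rte_eth_promiscuous_get(slave_port) == 1 ? 0 : -1;
    }
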
>> 
>> I'm not sure you understand exactly what I'm suggesting here, so again:
>> I suggest adding a new parameter to the LACP mode called
>> lacp_mc_grp_conf (or something else).
>> So, when the user configures LACP (mode 4) it must configure the
>> lacp_mc_grp_conf parameter to one of the options I suggested.
>> This parameter is not per slave, meaning the bond PMD will use the
>> selected option to configure the LACP MC group for all the slave ports.
>> 
>> If one of the slaves doesn't support the selected option it should be
>> rejected. Conflicts should raise an error.
>> 
>> I agree here.  Yes, if a slave can't manage to subscribe to the multicast
>> group, an error should be raised.  The only way for this to happen is if
>> you don't have promisc support, which is the ultimate fallback.
>
>> The advantages are:
>> The user knows which option synchronizes best with his application.
>> The user knows the slaves' capabilities better than the bond PMD does.
>> All the slaves are configured the same way - consistent traffic.
>> 
>> 
>> It would be ideal if all the slaves had the same features and
>> capabilities.  That wasn't enforced before, so this would be a new
>> restriction that would be less flexible than what we currently have.
>> That doesn't seem like an improvement.
>
>> The bonding user probably doesn't care which mode is used.  The bonding
>> user just wants bonding to work and doesn't care about the details.  If
>> I am writing an application with this proposed API, I need to make a
>> list of adapters and what they support (and keep this up to date as
>> DPDK evolves).  Ugh.
>
>Applications commonly know the capabilities of the NICs they work with.
>
>I know of at least one big application which really suffers because the
>bond configures promiscuous in mode 4 without the application asking
>(it's considered there to be a bug in DPDK).
>I think that providing another option will be better.
>
>I think providing another option will be better as well.  However, we
>disagree on the option.  If the PMD has no other way to subscribe to the
>multicast group, it has to use promiscuous mode.

Yes, it is true, but there are a lot of other and better options;
promiscuous is greedy! It should be the last alternative to use.

>Providing a list of options only makes life complicated for the developer and doesn't really
>make any difference in the end results.

A big difference, for example:
Let's say the bond groups 2 devices that support rte_flow.
The user wants neither promiscuous nor allmulticast; he just wants to get
his MAC traffic + the LACP MC group traffic (a realistic use case).
If he has an option to tell the bond PMD "please use rte_flow to
configure the specific LACP MC group", it will be great (see the sketch
below).
Think how much work these applications must do with the current behavior.
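
For illustration only - a rough sketch of such a per-slave rule matching
the IEEE slow-protocols address 01:80:C2:00:00:02 (the helper name is
made up; error handling trimmed):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Sketch: steer LACP MC group frames to queue 0 of one slave. */
    static int
    slave_add_lacp_flow(uint16_t slave_port)
    {
            static const struct ether_addr lacp_dst = {
                    .addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
            };
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_eth spec, mask;
            struct rte_flow_item pattern[2];
            struct rte_flow_action_queue queue = { .index = 0 };
            struct rte_flow_action actions[2];
            struct rte_flow_error error;

            memset(&spec, 0, sizeof(spec));
            memset(&mask, 0, sizeof(mask));
            ether_addr_copy(&lacp_dst, &spec.dst);
            memset(&mask.dst, 0xFF, ETHER_ADDR_LEN); /* match dst MAC only */

            memset(pattern, 0, sizeof(pattern));
            pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
            pattern[0].spec = &spec;
            pattern[0].mask = &mask;
            pattern[1].type = RTE_FLOW_ITEM_TYPE_END;

            memset(actions, 0, sizeof(actions));
            actions[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
            actions[0].conf = &queue;
            actions[1].type = RTE_FLOW_ACTION_TYPE_END;

            /* If this fails, the slave can't do option 1 and should be
             * rejected (or a fallback chosen). */
            return rte_flow_create(slave_port, &attr, pattern, actions,
                                   &error) == NULL ? -1 : 0;
    }

If every slave accepts the rule, the bond receives exactly its own MAC
traffic plus the LACP frames - no promiscuous needed.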

>  For instance, if the least common denominator between the two PMDs is promiscuous mode,
> you are going to be forced to run both in promiscuous mode
>instead of selecting the best mode for each PMD.

In this case promiscuous is better.
Using a different configuration per slave is worse and against the
bonding PMD principle of getting consistent traffic from the slaves.
So, if one slave uses allmulti and another uses promiscuous, the
application may get inconsistent traffic, and that may trigger a lot of
problems and complications for some applications.
  
>DPDK already has a promiscuous flag for the PMDs:
>
>        RTE_FUNC_PTR_OR_RET(*dev->dev_ops->promiscuous_enable);
>        (*dev->dev_ops->promiscuous_enable)(dev);
>        dev->data->promiscuous = 1;
>
>So the bonding PMD should already be able to tell if it can safely
>propagate the enable/disable for promiscuous mode.  However, for 802.3ad,
>that is always going to be a no until we add some other way to subscribe
>to the multicast group.
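
For illustration, such a check on the bond side might look like the
hypothetical PMD-internal helper below:

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_ethdev_driver.h> /* PMD-internal view of rte_eth_devices */

    /* Hypothetical: only propagate promiscuous enable to a slave whose
     * PMD actually implements the op; otherwise the call is silently
     * ignored by ethdev. */
    static int
    slave_promisc_enable_checked(uint16_t slave_port_id)
    {
            struct rte_eth_dev *dev = &rte_eth_devices[slave_port_id];

            if (dev->dev_ops->promiscuous_enable == NULL)
                    return -ENOTSUP; /* caller must pick another option */
            rte_eth_promiscuous_enable(slave_port_id);
            return 0; /* ethdev has set dev->data->promiscuous = 1 */
    }
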
> 
>
>So, providing a list of options to applications will ease the
>application's life and may be a big improvement, while not hurting the
>current behavior.
>
>Matan   
>

