[dpdk-dev] [PATCH] net/bonding: propagate promiscuous mode in mode 4

Doherty, Declan declan.doherty at intel.com
Thu Aug 2 15:23:50 CEST 2018


On 02/08/2018 7:35 AM, Matan Azrad wrote:
> Hi Chas, Radu
> 
> From: Chas Williams
>> On Wed, Aug 1, 2018 at 9:48 AM Radu Nicolau <radu.nicolau at intel.com>
>> wrote:
>>
>>>
>>>
>>> On 8/1/2018 2:34 PM, Chas Williams wrote:
>>>
>>>
>>>
>>> On Wed, Aug 1, 2018 at 9:04 AM Radu Nicolau <radu.nicolau at intel.com>
>>> wrote:
>>>
>>>> Update the bonding promiscuous mode enable/disable functions to
>>>> propagate the change to all slaves instead of doing nothing; this
>>>> seems to be the correct behaviour according to the standard, and it
>>>> is also what the Linux network stack implements.
>>>>
>>>> Signed-off-by: Radu Nicolau <radu.nicolau at intel.com>
>>>> ---
>>>>   drivers/net/bonding/rte_eth_bond_pmd.c | 8 ++------
>>>>   1 file changed, 2 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> index ad6e33f..16105cb 100644
>>>> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
>>>> @@ -2617,12 +2617,10 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
>>>>          case BONDING_MODE_ROUND_ROBIN:
>>>>          case BONDING_MODE_BALANCE:
>>>>          case BONDING_MODE_BROADCAST:
>>>> +       case BONDING_MODE_8023AD:
>>>>                  for (i = 0; i < internals->slave_count; i++)
>>>>                          rte_eth_promiscuous_enable(internals->slaves[i].port_id);
>>>>                  break;
>>>> -       /* In mode4 promiscus mode is managed when slave is added/removed */
>>>>
>>>
>>> This comment is true (and it appears promiscuous mode is always on in 802.3ad mode):
>>>
>>>          /* use this port as agregator */
>>>          port->aggregator_port_id = slave_id;
>>>          rte_eth_promiscuous_enable(slave_id);
>>>
>>> If we are going to do this here, we should probably get rid of it in
>>> the other location so that future readers aren't confused about which
>>> is the one doing the work.
>>>
>>> Since some adapters don't have group multicast support, we might
>>> already be in promiscuous anyway.  Turning off promiscuous for the
>>> bonding master might turn it off in the slaves where an application
>>> has already enabled it.
>>>
>>>
>>> The idea was to preserve the current behavior except for the explicit
>>> promiscuous disable/enable APIs; an application may disable the
>>> promiscuous mode on the bonding port and then enable it back,
>>> expecting it to propagate to the slaves.
>>>
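
(That is, an application may legitimately issue the plain ethdev calls below on the bond port, where bond_port_id is just a placeholder for the bonded device's port id:

        /* Temporarily drop out of promiscuous mode on the bond port... */
        rte_eth_promiscuous_disable(bond_port_id);
        /* ...and later turn it back on, expecting the change to reach the slaves. */
        rte_eth_promiscuous_enable(bond_port_id);

and with this patch both calls now propagate to every slave.)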
>>
>> Yes, but an application doing that will break 802.3ad because promiscuous
>> mode is used to receive the LAG PDUs which are on a multicast group.
>> That's why this code doesn't let you disable promiscuous when you are in
>> 802.3ad mode.
>>
>> If you want to do this it needs to be more complicated.  In 802.3ad, you should
>> try to add the multicast group to the slave interface.  If that fails, turn on
>> promiscuous mode for the slave and make note of it.  Later, if bonding wants to
>> enable/disable promiscuous mode for the slaves, it needs to check whether a
>> slave needs to remain in promiscuous mode to continue to receive the LAG PDUs.
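
(A minimal sketch of the fallback Chas describes, using only public ethdev calls; the helper names and the per-slave bookkeeping array are made up for illustration, and the struct uses the current rte_ether_addr naming:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Slow Protocols multicast address that LACPDUs are sent to
     * (01-80-C2-00-00-02). */
    static struct rte_ether_addr lacp_mc_addr = {
            .addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
    };

    /* Hypothetical bookkeeping: which slaves had to fall back to
     * promiscuous mode because their PMD cannot filter the group. */
    static bool slave_relies_on_promisc[RTE_MAX_ETHPORTS];

    static void
    slave_enable_lacp_rx(uint16_t slave_port_id)
    {
            /* Prefer a multicast filter for the LACP group (note this
             * call replaces any previously configured MC address list). */
            if (rte_eth_dev_set_mc_addr_list(slave_port_id,
                            &lacp_mc_addr, 1) == 0) {
                    slave_relies_on_promisc[slave_port_id] = false;
                    return;
            }

            /* The PMD cannot filter the group: fall back to promiscuous
             * mode and remember that this slave depends on it. */
            rte_eth_promiscuous_enable(slave_port_id);
            slave_relies_on_promisc[slave_port_id] = true;
    }

    /* Later, when the bond port asks to disable promiscuous mode, skip
     * any slave that still needs it to keep receiving LACPDUs. */
    static void
    slave_disable_promisc_if_possible(uint16_t slave_port_id)
    {
            if (!slave_relies_on_promisc[slave_port_id])
                    rte_eth_promiscuous_disable(slave_port_id);
    }

In the real driver this state would of course live in the per-slave data rather than in a global array.)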
> 
> I agree with Chas that this commit will hurt the current LACP logic, but maybe this is the time to open a discussion about it:
> the current bonding implementation is greedy in that it sets promiscuous mode automatically for LACP;
> the user asks for LACP and gets promiscuous mode along the way.
> 
> So if the user doesn't want promiscuous mode, he must disable it directly on the slave ports and allow LACP reception using rte_flow / flow director / set_mc_addr_list / allmulti...
> 
> I think the best way is to let the user enable LACP reception as he wants, either directly via the slaves or via the bond promiscuous_enable API.
> For sure, it must be documented well.
> 
> Matan.
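
(For reference, the simplest application-side alternative Matan mentions is to act on the slave ports directly; the helper name and the assumption that the application knows the slave port ids are mine:

    #include <rte_ethdev.h>

    /* Enable LACPDU reception on one slave without touching the bond
     * port: accepting all multicast frames covers the Slow Protocols
     * group that LACPDUs are sent to. */
    static void
    app_enable_lacp_rx_on_slave(uint16_t slave_port_id)
    {
            rte_eth_allmulticast_enable(slave_port_id);
    }

rte_flow or rte_eth_dev_set_mc_addr_list would give finer-grained filtering on PMDs that support it, and per-slave rte_eth_promiscuous_enable() remains the fallback of last resort.)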
> 

I'm thinking that promiscuous mode should be disabled by default, and 
that the bond port should fail to start if any of the slave ports can't 
support subscription to the LACP multicast group. At that point the user 
can decide to enable promiscuous mode on the bond port (and therefore on 
all the slaves) and then start the bond. If we have slaves with different 
configurations for multicast subscriptions or promiscuous mode enablement, 
then there is potentially the opportunity for inconsistent traffic 
depending on which slaves are active.
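
(A rough sketch of that start-up check, assuming the slave port ids are passed in as an array and using the current rte_ether_addr naming; the function itself is made up for illustration:

    #include <errno.h>
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Slow Protocols multicast address that LACPDUs are sent to. */
    static struct rte_ether_addr lacp_mc_addr = {
            .addr_bytes = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
    };

    /* Refuse to start the bond unless every slave can subscribe to the
     * LACP multicast group, or the user explicitly enabled promiscuous
     * mode on the bond port beforehand. */
    static int
    bond_8023ad_check_slaves(const uint16_t *slave_ports, uint16_t n_slaves,
                    int bond_promisc_enabled)
    {
            uint16_t i;

            for (i = 0; i < n_slaves; i++) {
                    if (rte_eth_dev_set_mc_addr_list(slave_ports[i],
                                    &lacp_mc_addr, 1) == 0)
                            continue;
                    if (!bond_promisc_enabled)
                            return -ENOTSUP; /* bond start fails here */
                    /* Promiscuous mode was requested on the bond, so it
                     * will be propagated to this slave anyway. */
            }
            return 0;
    }

The real implementation would live in the bonding PMD's start path and use its own slave list, of course.)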

Personally I would prefer that, where possible, all configuration is 
propagated through the bond port. So if a user wants to use a port which 
doesn't support multicast subscription, then all ports in the bond need 
to be in promiscuous mode, and the user needs to enable it explicitly 
through the bond port. That way at least we can guarantee consistent 
traffic irrespective of which ports in the bond are active at any one time.