[dpdk-dev] [PATCH 6/8] bond: handle slaves with fewer queues than bonding device

Stephen Hemminger stephen at networkplumber.org
Tue Jan 5 16:31:19 CET 2016

A common usage scenario is to bond a vNIC such as virtio, which typically has
only a single rx queue, with a VF device that has multiple receive queues.
This is done to support live migration.
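For context, the patch under discussion remaps a bonding-device queue id onto a slave that was configured with fewer queues. A minimal sketch of such a mapping follows; the `bond_slave_txqid` name comes from the snippet quoted below, but the `slave_info` struct and the modulo folding scheme here are illustrative assumptions, not the exact patch code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-slave state: how many tx queues the slave
 * actually configured (possibly fewer than the bonding device). */
struct slave_info {
	uint16_t nb_tx_queues;
};

/* Sketch of a bond_slave_txqid()-style helper: if the bonding
 * device's queue id also exists on the slave, use it unchanged;
 * otherwise fold it onto a valid slave queue.  The modulo scheme
 * is an assumption for illustration, not the patch's logic. */
static uint16_t
bond_slave_txqid(const struct slave_info *slave, uint16_t bond_queue_id)
{
	if (bond_queue_id < slave->nb_tx_queues)
		return bond_queue_id;
	return bond_queue_id % slave->nb_tx_queues;
}
```

Note that any folding scheme of this shape runs straight into the objection raised below: on a two-queue slave, bond queues 1 and 3 both fold to slave queue 1, so two lcores can end up driving the same slave queue.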
On Jan 5, 2016 05:47, "Declan Doherty" <declan.doherty at intel.com> wrote:

> On 04/12/15 19:18, Eric Kinzie wrote:
>> On Fri Dec 04 19:36:09 +0100 2015, Andriy Berestovskyy wrote:
>>> Hi guys,
>>> I'm not quite sure we can support fewer TX queues on a slave that easily:
>>>
>>>   queue_id = bond_slave_txqid(internals, i, bd_tx_q->queue_id);
>>>   num_tx_slave = rte_eth_tx_burst(slaves[i], queue_id,
>>>                                   slave_bufs[i], slave_nb_pkts[i]);
>>>
>>> It seems that two different lcores might end up writing to the same
>>> slave queue at the same time, doesn't it?
>>> Regards,
>>> Andriy
>> Andriy, I think you're probably right about this.  Perhaps it should
>> instead refuse to add, or refuse to activate, a slave with too few
>> tx queues.  We could probably fix this with another layer of buffering,
>> so that an lcore with a valid tx queue could pick up the mbufs later,
>> but that doesn't seem very appealing.
>> Eric
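The locking alternative Eric alludes to would serialize bursts on a shared slave queue with a per-queue lock. A hedged sketch of that idea, using a plain pthread mutex in place of DPDK's spinlocks; `guarded_txq` and `guarded_tx_burst` are hypothetical names, not anything in the patch or in DPDK:

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical guard for the case where two bonding tx queues fold
 * onto one slave queue: serialize access with a per-slave-queue lock.
 * This abandons the lock-free ethdev model, which is the cost that
 * makes the approach unappealing. */
struct guarded_txq {
	pthread_mutex_t lock;
	uint64_t pkts_sent;	/* stands in for the real tx queue state */
};

static uint16_t
guarded_tx_burst(struct guarded_txq *q, uint16_t nb_pkts)
{
	pthread_mutex_lock(&q->lock);
	/* the real rte_eth_tx_burst() call would sit here */
	q->pkts_sent += nb_pkts;
	pthread_mutex_unlock(&q->lock);
	return nb_pkts;
}
```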
>> On Fri, Dec 4, 2015 at 6:14 PM, Stephen Hemminger
>>> <stephen at networkplumber.org> wrote:
>>>> From: Eric Kinzie <ekinzie at brocade.com>
>>>> In the event that the bonding device has a greater number of tx and/or
>>>> rx queues than the slave being added, track the queue limits of the
>>>> slave.  On receive, ignore queue identifiers beyond what the slave
>>>> interface can support.  During transmit, pick a different queue id to
>>>> use if the intended queue is not available on the slave.
>>>> Signed-off-by: Eric Kinzie <ekinzie at brocade.com>
>>>> Signed-off-by: Stephen Hemminger <stephen at networkplumber.org>
>>>> ---
>>> ...
> I don't think there is any straightforward way of supporting slaves with
> different numbers of queues; the initial library was written with the
> assumption that the number of tx/rx queues would always be the same on each
> slave. This is why, when a slave is added to a bonded device, we reconfigure
> its queues. For features like RSS we have to have the same number of rx
> queues, otherwise the flow distribution to an application could change in
> the case of a fail-over event. Also, by supporting different numbers of
> queues between slaves we would no longer be supporting the standard
> behavior of ethdevs in DPDK, where we expect that by using different queues
> we don't require locking to be thread safe.
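The standard ethdev convention Declan describes is that each (port, queue) pair is driven by exactly one lcore, so unsynchronized queue access is safe by construction. A toy simulation of that convention, with plain pthreads standing in for lcores and a counter standing in for a tx queue; none of these names are DPDK APIs:

```c
#include <pthread.h>
#include <stdint.h>

/* Toy per-queue counters standing in for ethdev tx queues.  In DPDK,
 * rte_eth_tx_burst() on a given (port, queue) pair is not thread safe;
 * the convention is that each queue is owned by exactly one lcore,
 * which is what makes locking unnecessary. */
#define NB_QUEUES 2
static uint64_t tx_count[NB_QUEUES];

struct lcore_arg {
	uint16_t queue_id;	/* the one queue this "lcore" owns */
	uint64_t bursts;
};

static void *
lcore_tx_loop(void *p)
{
	struct lcore_arg *a = p;

	/* Only this thread ever touches tx_count[a->queue_id], so the
	 * unsynchronized increment is safe under the one-lcore-per-queue
	 * convention. */
	for (uint64_t i = 0; i < a->bursts; i++)
		tx_count[a->queue_id]++;
	return NULL;
}
```

With disjoint queue ids the counts come out exact without any lock; a slave with fewer queues than the bond breaks exactly this ownership invariant.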
