[dpdk-dev] [PATCH v5] net/ixgbe: fix l3fwd start failed on

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Jan 9 13:39:44 CET 2018


Hi Yanglong,

Sorry, I was just confused by the description of the patch.
The code itself seems ok... you probably just need to rephrase the description
(remove the mention of l3fwd?).
Konstantin  

> Hi, Konstantin
> 
> Thanks for your comments!
> Do you mean that tx_q must be less than rx_q when SRIOV is active, and that the application's case is simply not supported otherwise?
> Do you think my patch will cause 2 (or more) cores to try to TX packets through the same TX queue? As far as I know, how cores use
> TX queues depends on the application (e.g. in l3fwd the number of TX queues equals the number of cores), and having multiple cores use
> the same TX queue is not recommended, since a lock would be needed in that case (a sketch of this per-core mapping follows the quoted
> patch below). So why do you think my patch will lead to multiple cores using the same queue?
> 
> Yanglong Wu
> 
> 
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, January 8, 2018 7:55 PM
> To: Wu, Yanglong <yanglong.wu at intel.com>; dev at dpdk.org
> Subject: RE: [PATCH v5] net/ixgbe: fix l3fwd start failed on
> 
> 
> 
> > -----Original Message-----
> > From: Wu, Yanglong
> > Sent: Monday, January 8, 2018 3:06 AM
> > To: dev at dpdk.org
> > Cc: Ananyev, Konstantin <konstantin.ananyev at intel.com>; Wu, Yanglong
> > <yanglong.wu at intel.com>
> > Subject: [PATCH v5] net/ixgbe: fix l3fwd start failed on
> >
> > L3fwd failed to start on the PF because the tx_q check failed.
> > That occurred when SRIOV is active and tx_q > rx_q.
> > tx_q is limited by nb_q_per_pool, and nb_q_per_pool should equal the
> > max number of queues supported by the HW, not nb_rx_q.
> 
> But then 2 (or more) cores could try to TX packets through the same TX queue?
> Why not just fail to start gracefully (call rte_exit() or so) if such a situation occurs? (A sketch of such a check follows the quoted
> patch below.)
> Konstantin
> 
> >
> > Fixes: 27b609cbd1c6 ("ethdev: move the multi-queue mode check to specific drivers")
> >
> > Signed-off-by: Yanglong Wu <yanglong.wu at intel.com>
> > ---
> > v5:
> > Rework according to comments
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 10 +++++++---
> >  1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index ff19a564a..baaeee5d9 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -95,6 +95,9 @@
> >  /* Timer value included in XOFF frames. */
> >  #define IXGBE_FC_PAUSE 0x680
> >
> > +/*Default value of Max Rx Queue*/
> > +#define IXGBE_MAX_RX_QUEUE_NUM 128
> > +
> >  #define IXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
> >  #define IXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
> >  #define IXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
> > @@ -2194,9 +2197,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
> >  		return -EINVAL;
> >  	}
> >
> > -	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
> > -	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = pci_dev->max_vfs * nb_rx_q;
> > -
> > +	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool =
> > +		IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
> > +	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
> > +		pci_dev->max_vfs * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> >  	return 0;
> >  }
> >
> > --
> > 2.11.0
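
The new computation in the quoted patch is easy to check with a small, standalone example. The SRIOV mode and VF count below are assumed purely for illustration (32-pool mode, 31 VFs); the 128 is IXGBE_MAX_RX_QUEUE_NUM from the patch itself.

    #include <assert.h>

    #define IXGBE_MAX_RX_QUEUE_NUM 128   /* value from the quoted patch */

    int
    main(void)
    {
        /* Assumed for illustration: SRIOV in 32-pool mode with 31 VFs. */
        unsigned int active  = 32;   /* stands in for RTE_ETH_DEV_SRIOV(dev).active */
        unsigned int max_vfs = 31;   /* stands in for pci_dev->max_vfs              */

        unsigned int nb_q_per_pool  = IXGBE_MAX_RX_QUEUE_NUM / active;  /* 128 / 32 = 4 */
        unsigned int def_pool_q_idx = max_vfs * nb_q_per_pool;          /* 31 * 4 = 124 */

        /* The PF's default pool then starts at queue 124, regardless of nb_rx_q. */
        assert(nb_q_per_pool == 4);
        assert(def_pool_q_idx == 124);
        return 0;
    }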
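
Konstantin's alternative, quoted above, is to reject the configuration cleanly instead of resizing the pools. The fragment below is only a sketch of what such a check might look like, not the actual driver code: the helper name check_tx_q_fits_pool is invented, and it assumes the headers already included by ixgbe_ethdev.c (RTE_ETH_DEV_SRIOV and PMD_INIT_LOG are used the way the existing driver code uses them). It returns -EINVAL from the configure path rather than calling rte_exit(), which would terminate the whole application.

    /* Hypothetical helper; relies on the includes already present in ixgbe_ethdev.c. */
    static int
    check_tx_q_fits_pool(struct rte_eth_dev *dev, uint16_t nb_tx_q)
    {
        uint16_t nb_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;

        if (nb_tx_q > nb_q_per_pool) {
            PMD_INIT_LOG(ERR,
                "SRIOV active: nb_tx_q (%d) exceeds queues per pool (%d)",
                nb_tx_q, nb_q_per_pool);
            return -EINVAL;  /* dev_configure then fails instead of starting */
        }
        return 0;
    }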
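
Yanglong's point about per-core TX queue usage, also quoted above, follows the usual DPDK convention: one TX queue per enabled lcore, so each queue is only ever touched by its owning core and no lock is needed around rte_eth_tx_burst(). The sketch below is illustrative only and is not l3fwd's actual code; lcore_tx_queue, assign_tx_queues and send_burst are invented names.

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* lcore id -> TX queue id; illustrative, not l3fwd's real data structure */
    static uint16_t lcore_tx_queue[RTE_MAX_LCORE];

    /* Give every enabled lcore its own TX queue id (queue count == core count). */
    static void
    assign_tx_queues(void)
    {
        unsigned int lcore_id;
        uint16_t q = 0;

        RTE_LCORE_FOREACH(lcore_id)
            lcore_tx_queue[lcore_id] = q++;
    }

    /* Called only on the owning lcore, so no lock is needed around tx_burst. */
    static inline uint16_t
    send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
    {
        return rte_eth_tx_burst(port_id, lcore_tx_queue[rte_lcore_id()], pkts, n);
    }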
