[dpdk-dev] [PATCH 2/2] net/ixgbe: fix l3fwd start failed on PF

Wu, Jingjing jingjing.wu at intel.com
Tue Nov 7 09:39:59 CET 2017



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Thursday, November 2, 2017 10:06 PM
> To: Wu, Yanglong <yanglong.wu at intel.com>; dev at dpdk.org
> Cc: Wu, Yanglong <yanglong.wu at intel.com>
> Subject: Re: [dpdk-dev] [PATCH 2/2] net/ixgbe: fix l3fwd start failed on PF
> 
> Hi,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Yanglong Wu
> > Sent: Thursday, November 2, 2017 5:05 PM
> > To: dev at dpdk.org
> > Cc: Wu, Yanglong <yanglong.wu at intel.com>
> > Subject: [dpdk-dev] [PATCH 2/2] net/ixgbe: fix l3fwd start failed on PF
> >
> > The failure occurred when SRIOV is active and tx_q > rx_q.
> > nb_q_per_pool should equal the max number of queues per pool
> > supported by the HW, not nb_rx_q.
> >
> > Fixes: 27b609cbd1c6 ("ethdev: move the multi-queue mode check to specific drivers")
> >
> > Signed-off-by: Yanglong Wu <yanglong.wu at intel.com>
> > ---
> >  drivers/net/ixgbe/ixgbe_ethdev.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index ae9c44421..0f0641da1 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -2180,7 +2180,7 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
> >  		return -EINVAL;
> >  	}
> >
> > -	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
> > +	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 128/RTE_ETH_DEV_SRIOV(dev).active;
> >  	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = pci_dev->max_vfs * nb_rx_q;
> >
> >  	return 0;
> > --
> > 2.11.0
> 
> Not sure I understand what is the purpose of that patch...
> Do you want to prevent RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1?
> Konstantin
> 
I think his purpose is to set RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool to the max number of queues in one pool,
according to how the queue indexes are split among the pools.
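For illustration, a minimal sketch of that split (pool_idx is a hypothetical variable used only here; using IXGBE_MAX_RX_QUEUE_NUM, which is 128 for this HW, instead of the hard-coded 128 in the patch is my suggestion, not what the patch does):

	/* Sketch only, not the exact driver code: each pool owns a
	 * contiguous block of nb_q_per_pool queues, so nb_q_per_pool
	 * must be the per-pool maximum the HW supports, not nb_rx_q. */
	uint16_t nb_q_per_pool = IXGBE_MAX_RX_QUEUE_NUM /
				 RTE_ETH_DEV_SRIOV(dev).active;
	uint16_t first_q_of_pool = pool_idx * nb_q_per_pool;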

Now, for RSS combined with virtualization mode, ixgbe supports combinations like 2 queues * 64 pools and 4 queues * 32 pools.
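So with 128 HW queues in total, the per-pool count follows directly from the number of active pools (sketch only, restating the arithmetic behind the patch):

	/* 128 HW queues split across the active pools:
	 *   ETH_64_POOLS (64 pools) -> 128 / 64 = 2 queues per pool
	 *   ETH_32_POOLS (32 pools) -> 128 / 32 = 4 queues per pool
	 */
	nb_q_per_pool = 128 / RTE_ETH_DEV_SRIOV(dev).active;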

BTW, I think the title of this patch needs to be reworded. As it stands it is confusing: it reads like an l3fwd issue, while the fix is in ixgbe.