[dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic

Ouyang, Changchun changchun.ouyang at intel.com
Thu Dec 25 02:46:54 CET 2014


Hi,

> -----Original Message-----
> From: Vlad Zolotarov [mailto:vladz at cloudius-systems.com]
> Sent: Wednesday, December 24, 2014 5:59 PM
> To: Ouyang, Changchun; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 0/6] Enable VF RSS for Niantic
> 
> 
> On 12/24/14 07:22, Ouyang Changchun wrote:
> > This patch enables VF RSS for Niantic, which allows each VF to have at
> > most 4 queues.
> > The actual queue number per VF depends on the total number of pools,
> > which is determined by the total number of VFs at PF initialization
> > stage and the number of queues specified in the config:
> > 1) If the number of VFs is in the range from 1 to 32 and the number of
> > rxq is 4 ('--rxq 4' in testpmd), then there are 32 pools in total
> > (ETH_32_POOLS), and each VF has 4 queues;
> >
> > 2) If the number of VFs is in the range from 33 to 64 and the number of
> > rxq is 2 ('--rxq 2' in testpmd), then there are 64 pools in total
> > (ETH_64_POOLS), and each VF has 2 queues;
> >
> > On the host, to enable the VF RSS functionality, the Rx mq mode should
> > be set to ETH_MQ_RX_VMDQ_RSS or ETH_MQ_RX_RSS mode, and SRIOV mode
> > should be activated (max_vfs >= 1).
> > The VF RSS information, such as the hash function, RSS key and RSS key
> > length, also needs to be configured.
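For reference, the host-side setup described above might look roughly like this with the ethdev API of that era (a hedged configuration sketch, not code from the patch; the key bytes and hash-function mask are illustrative):

```c
/* Sketch of the PF port configuration described above (DPDK 1.8-era API). */
static uint8_t rss_key[40]; /* 40-byte RSS hash key, to be filled in */

struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_VMDQ_RSS,  /* VMDq + RSS Rx mode */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key     = rss_key,
			.rss_key_len = sizeof(rss_key),
			.rss_hf      = ETH_RSS_IPV4 | ETH_RSS_IPV6, /* hash functions */
		},
	},
};
/* SR-IOV must also be active: the PF driver is loaded with max_vfs >= 1,
 * which creates the VFs and determines the pool layout. */
```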
> >
> > The limitation of Niantic VF RSS is:
> > the hash function and key are shared among the PF and all VFs
> 
> Hmmm... This kinda contradicts the previous sentence where u say that VF
> on the host should configure hash and RSS key. If PF and VF share the same
> hash and key what's the use of configuring it in VF? Could u clarify, please?

What makes you think there is a contradiction? To be clear, could you please copy and paste the two sentences that you think contradict each other?
I can correct them if they do, but so far I don't see any.
"Shared" means that a VF doesn't have its own hash function, hash key, or RETA table.
 
> > , the 128-entry RETA table is
> > also shared among the PF and all VFs. So it is not a good idea to query
> > the hash and RETA content per VF on the guest; instead, it makes sense
> > to query them on the host (PF).
> 
> On the contrary - it's a very good idea! We use DPDK on Amazon's guests
> with enhanced networking and we have no access to the PF. We still need to
> know the RSS redirection rules for our VF pool. From the 82599 spec, chapter
> 4.6.10.1.1: "redirection table is common to all the pools and only indicates the
> queue inside the pool to use once the pool is chosen". In that case we need
> to get the whole 128 entries of the RETA. Is there a reason why we can't have
> it?
>
Due to a hardware limitation, a VF cannot query its own RETA table, because it does not have one;
the RETA table is shared by the PF and all VFs.
If you need to know its contents, querying them on the PF is the feasible way to do it.

> >
> > v3 change:
> >    - More cleanup;
> >
> > v2 change:
> >    - Update the description;
> >    - Use the receiving queue number ('--rxq <q-num>') specified in the
> >      config to determine the number of pools and the number of queues
> >      per VF;
> >
> > v1 change:
> >    - Config VF RSS;
> >
> > Changchun Ouyang (6):
> >    ixgbe: Code cleanup
> >    ixgbe: Negotiate VF API version
> >    ixgbe: Get VF queue number
> >    ether: Check VMDq RSS mode
> >    ixgbe: Config VF RSS
> >    testpmd: Set Rx VMDq RSS mode
> >
> >   app/test-pmd/testpmd.c              |  10 +++
> >   lib/librte_ether/rte_ethdev.c       |  39 +++++++++--
> >   lib/librte_pmd_ixgbe/ixgbe_ethdev.h |   1 +
> >   lib/librte_pmd_ixgbe/ixgbe_pf.c     |  75 ++++++++++++++++++++-
> >   lib/librte_pmd_ixgbe/ixgbe_rxtx.c   | 127 ++++++++++++++++++++++++++++--------
> >   5 files changed, 219 insertions(+), 33 deletions(-)
> >


