RSS queue problem with i40e on DPDK 20.11.3
Eric Christian
erclists at gmail.com
Mon Jan 24 16:16:35 CET 2022
Hi,
I am curious: did you ever resolve this?
Eric
On Fri, Nov 19, 2021 at 3:49 AM Antti-Pekka Liedes <apl at iki.fi> wrote:
> Hi DPDK experts,
>
> I have a problem after upgrading our software from DPDK 20.11.1 to DPDK
> 20.11.3: the RSS we use on i40e now delivers all packets to queue 0
> only. I'm using the rte_flow API to configure the queue region first,
> and then each flow type one by one, to distribute incoming packets
> across 8 queues with a symmetric Toeplitz hash.
>
> Note that this is C++ code, and the m_-prefixed variables are members of
> the Port object, i.e., port-specific parameters.
>
> The queue region setup is:
>
> const struct rte_flow_attr attr = {
>     .group = 0,
>     .priority = 0,
>     .ingress = 1,
>     .egress = 0,
>     .transfer = 0,
>     .reserved = 0
> };
>
> uint16_t rss_queue[m_num_rx_queues];
> for (int i = 0; i < m_num_rx_queues; i++)
> {
>     rss_queue[i] = i;
> }
>
> {
>     const struct rte_flow_item pattern[] = {
>         { .type = RTE_FLOW_ITEM_TYPE_END }
>     };
>
>     const struct rte_flow_action_rss action_rss = {
>         .level = 0,
>         .types = 0,
>         .key_len = rss_key_len,
>         .queue_num = m_num_rx_queues,
>         .key = rss_key,
>         .queue = rss_queue
>     };
>     const struct rte_flow_action action[] = {
>         {
>             .type = RTE_FLOW_ACTION_TYPE_RSS,
>             .conf = &action_rss
>         },
>         { .type = RTE_FLOW_ACTION_TYPE_END }
>     };
>     struct rte_flow_error flow_error;
>
>     struct rte_flow* flow = rte_flow_create(
>         m_portid,
>         &attr,
>         pattern,
>         action,
>         &flow_error);
> }
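>
> We check the result of each rte_flow_create() roughly like this (a
> sketch; our real error handling is elided):
>
> if (flow == NULL)
> {
>     // flow_error.message may be NULL depending on the PMD.
>     printf("rte_flow_create failed: %s\n",
>            flow_error.message ? flow_error.message : "(no message)");
> }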
>
> where m_num_rx_queues = 8, and rss_key/rss_key_len hold our enforced
> RSS key, originally pulled from an X710; rss_key_len is 52.
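>
> For reference, the key is just a 52-byte array shaped like this (the
> bytes below are placeholders, not our real key):
>
> // 52 bytes is the i40e RSS key width (13 x 32-bit words).
> static const uint8_t rss_key[52] = {
>     0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
>     0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
>     0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
>     0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
>     0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
>     0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
>     0x41, 0x67, 0x25, 0x3d
> };
> static const uint32_t rss_key_len = sizeof(rss_key);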
>
> After this I configure all the flow types:
>
> uint64_t rss_types[] = {
>     ETH_RSS_FRAG_IPV4,
>     ETH_RSS_NONFRAG_IPV4_TCP,
>     ETH_RSS_NONFRAG_IPV4_UDP,
>     ETH_RSS_NONFRAG_IPV4_SCTP,
>     ETH_RSS_NONFRAG_IPV4_OTHER,
>
>     ETH_RSS_FRAG_IPV6,
>     ETH_RSS_NONFRAG_IPV6_TCP,
>     ETH_RSS_NONFRAG_IPV6_UDP,
>     ETH_RSS_NONFRAG_IPV6_SCTP,
>     ETH_RSS_NONFRAG_IPV6_OTHER
> };
>
> and for each type:
>
> const struct rte_flow_attr attr = {
>     .group = 0,
>     .priority = 0,
>     .ingress = 1,
>     .egress = 0,
>     .transfer = 0,
>     .reserved = 0
> };
>
> // Room for L2 to L4.
> struct rte_flow_item pattern[] = {
>     { .type = RTE_FLOW_ITEM_TYPE_ETH },
>     { .type = RTE_FLOW_ITEM_TYPE_END },
>     { .type = RTE_FLOW_ITEM_TYPE_END },
>     { .type = RTE_FLOW_ITEM_TYPE_END }
> };
>
> // Add L2/L3/L4 items to pattern according to rss_type (see the sketch
> // after this code).
>
> const struct rte_flow_action_rss action_rss = {
>     .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
>     .level = 0,
>     .types = rss_type,
>     .key_len = rss_key_len,
>     .queue_num = 0,
>     .key = rss_key,
>     .queue = NULL
> };
> const struct rte_flow_action action[] = {
>     {
>         .type = RTE_FLOW_ACTION_TYPE_RSS,
>         .conf = &action_rss
>     },
>     { .type = RTE_FLOW_ACTION_TYPE_END }
> };
> struct rte_flow_error flow_error;
>
> struct rte_flow* flow = rte_flow_create(
>     m_portid,
>     &attr,
>     pattern,
>     action,
>     &flow_error);
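>
> The elided fill-in step works roughly like this (a sketch, not our exact
> helper; the mapping shown is illustrative):
>
> // Map one ETH_RSS_* type onto the L3/L4 pattern items;
> // pattern[0] is already RTE_FLOW_ITEM_TYPE_ETH.
> switch (rss_type)
> {
> case ETH_RSS_NONFRAG_IPV4_TCP:
>     pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
>     pattern[2].type = RTE_FLOW_ITEM_TYPE_TCP;
>     break;
> case ETH_RSS_NONFRAG_IPV6_UDP:
>     pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV6;
>     pattern[2].type = RTE_FLOW_ITEM_TYPE_UDP;
>     break;
> // ... and so on for the remaining IPv4/IPv6 types.
> default:
>     break;
> }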
>
> We also have a software Toeplitz calculator that agrees with the HW hash
> values under both DPDK 20.11.1 and 20.11.3, so the hash calculation in
> HW seems to be fine. AFAICT, the above matches the RSS setup
> instructions for testpmd in
> https://doc.dpdk.org/guides-20.11/nics/i40e.html, except that we also
> supply our own key.
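>
> For completeness, the core of our software calculator is the usual
> bit-by-bit Toeplitz (sketch below; for the symmetric variant we XOR the
> src and dst fields of the tuple together before hashing, which AFAIK is
> how i40e implements symmetric Toeplitz):
>
> #include <stddef.h>
> #include <stdint.h>
>
> // Slide a 32-bit window over the key, MSB first; XOR the window into
> // the hash for every set bit of the input tuple.
> // Requires key_len >= n + 4, so our 52-byte key is plenty.
> static uint32_t toeplitz_hash(const uint8_t* key,
>                               const uint8_t* data, size_t n)
> {
>     uint32_t hash = 0;
>     uint32_t window = ((uint32_t)key[0] << 24) |
>                       ((uint32_t)key[1] << 16) |
>                       ((uint32_t)key[2] << 8) |
>                       (uint32_t)key[3];
>
>     for (size_t i = 0; i < n; i++)
>     {
>         for (int b = 0; b < 8; b++)
>         {
>             if (data[i] & (0x80u >> b))
>                 hash ^= window;
>             // Shift in the next key bit to keep the window aligned.
>             window <<= 1;
>             if (key[i + 4] & (0x80u >> b))
>                 window |= 1;
>         }
>     }
>     return hash;
> }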
>
> Some random facts:
> - Changing all the rss_queue entries to 3 doesn't affect the
> distribution; all the packets still go to queue 0 (the spread is read
> from the per-queue counters, see the sketch below).
> - I use an Intel X710 for debugging and the observed behavior is from
> there, but according to performance testing, the X722 exhibits the same
> problem.
> - My X710 fw versions are: i40e 0000:01:00.0: fw 8.4.66032 api 1.14 nvm
> 8.40 0x8000aba4 1.2992.0.
> - A quick test on DPDK 20.11.2 shows a correct spread among all 8 RX
> queues, so the problem was probably introduced between 20.11.2 and
> 20.11.3.
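>
> (The per-queue spread in these tests is read from the standard queue
> counters, roughly like this; needs <rte_ethdev.h> and <cinttypes>:)
>
> struct rte_eth_stats stats;
> if (rte_eth_stats_get(m_portid, &stats) == 0)
> {
>     // q_ipackets covers up to RTE_ETHDEV_QUEUE_STAT_CNTRS (16) queues.
>     for (uint16_t q = 0; q < m_num_rx_queues; q++)
>         printf("queue %u: %" PRIu64 " packets\n",
>                q, stats.q_ipackets[q]);
> }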
>
> Thanks,
>
> Antti-Pekka Liedes
>
>