Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
Raslan Darawsheh
rasland at nvidia.com
Thu Sep 30 03:13:12 CEST 2021
Hi Anna,
What you are basically doing is RSS on the eth layer only, and we don't support spreading on that layer.
To make it work you can either add an IP layer to the pattern items so that the RSS happens on L3, or simply set it through the RSS types of the RSS action, which causes an automatic expansion of the items internally in the mlx5 PMD.
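For example, the first option would look roughly like this (an untested sketch based on the pattern in your code quoted below; only the item types change):

/* match eth / ipv4 / end so RSS can spread on the L3 header */
struct rte_flow_item pattern[3];
memset(pattern, 0, sizeof(pattern));
pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
pattern[2].type = RTE_FLOW_ITEM_TYPE_END;

The second option keeps the eth-only pattern and relies on the rss.types field of the RSS action (e.g. ETH_RSS_IP), which the mlx5 PMD expands internally.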
Kindest regards,
Raslan Darawsheh
________________________________
From: Anna A <pacman.n908 at gmail.com>
Sent: Thursday, September 30, 2021 3:29:51 AM
To: Wisam Monther <wisamm at nvidia.com>
Cc: NBU-Contact-Thomas Monjalon <thomas at monjalon.net>; users at dpdk.org <users at dpdk.org>; Matan Azrad <matan at nvidia.com>; Slava Ovsiienko <viacheslavo at nvidia.com>
Subject: Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
Hi Wisam,
I added .rxmode.mq_mode = ETH_MQ_RX_RSS to rte_eth_conf before calling
rte_eth_dev_configure(), but the packets are still sent to a single
queue.
My order of configuration is as follows (a simplified sketch of steps 1
and 2 follows the list):
1. Enable .rxmode.mq_mode = ETH_MQ_RX_RSS.
2. Initialize the port with rte_eth_dev_configure().
3. Set up multiple Rx queues for a single port by calling
rte_eth_rx_queue_setup() on each queue id.
4. Set up a single Tx queue with rte_eth_tx_queue_setup().
5. Start the device with rte_eth_dev_start().
6. Configure rte_flow with the pattern -> flow create port0 ingress
pattern eth / end / action RSS on multiple queues / end.
7. Add the MAC address.
8. Check the port link status.
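In code, steps 1 and 2 look roughly like this (a simplified sketch; the
rss_conf values are illustrative, and the macro names are the ones used
by DPDK 20.05):

/* RSS-enabled port configuration */
static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS, /* enable RSS distribution */
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,       /* keep the PMD's default key */
            .rss_hf  = ETH_RSS_IP, /* hash on IP fields */
        },
    },
};
/* two Rx queues and one Tx queue, matching steps 3 and 4 */
int ret = rte_eth_dev_configure(portid, 2, 1, &port_conf);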
If I try to configure rte_flow before calling rte_eth_dev_start(), I get
the error message "net_mlx5: port 0 is not started when inserting a flow"
and rte_flow_create() returns NULL. I also enabled debug logging with
"--log-level=*:debug" but don't see any errors for flow validation or
flow creation. Please let me know if I'm missing something, or need to
add any other configuration.
Thanks
Anna
On Wed, Sep 29, 2021 at 3:09 AM Wisam Monther <wisamm at nvidia.com> wrote:
> Hi Anna,
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas at monjalon.net>
> > Sent: Wednesday, September 29, 2021 12:54 PM
> > To: Anna A <pacman.n908 at gmail.com>
> > Cc: users at dpdk.org; Matan Azrad <matan at nvidia.com>; Slava Ovsiienko
> > <viacheslavo at nvidia.com>
> > Subject: Re: Using rte_flow to distribute single flow type among
> > multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
> >
> > 29/09/2021 07:26, Anna A:
> > > Hi,
> > >
> > > I'm trying to use rte_flow_action_type_rss to distribute packets, all
> > > of the same flow type, among multiple Rx queues on a single port. A
> > > Mellanox ConnectX-5 Ex and DPDK version 20.05 are used for this
> > > purpose. It doesn't seem to work, and all the packets are sent to a
> > > single queue.
> >
> > Adding mlx5 maintainers Cc.
> >
> > > My queries are :
> > > 1. What am I missing or doing differently?
> > > 2. Should I be doing any other configurations in rte_eth_conf or
> > > rte_eth_rxmode?
>
> Can you please try to add
> .rxmode.mq_mode = ETH_MQ_RX_RSS,
> to the rte_eth_conf and try again?
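>
> As a quick sanity check outside the application, a similar setup can be
> tried with testpmd (a sketch; the port number and queue list here are
> just examples):
>
> testpmd> port config all rss ip
> testpmd> flow create 0 ingress pattern eth / ipv4 / end actions rss
> queues 0 1 end / end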
>
> >
> > Do you see any error log?
> > For info, you can change log level with --log-level.
> > Explore the options with '--log-level help' in recent DPDK.
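> >
> > For example, to get only the mlx5 PMD logs (the binary name depends
> > on your build; the log spec is the part that matters):
> >
> > testpmd -l 0-3 -n 4 --log-level=pmd.net.mlx5:debug -- -i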
> >
> > > My rte_flow configurations:
> > >
> > > struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> > > struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> > > struct rte_flow_attr attr;
> > > struct rte_flow_item_eth eth;
> > > struct rte_flow *flow = NULL;
> > > struct rte_flow_error error;
> > > int ret;
> > > int no_queues = 2;
> > > uint16_t queues[2];
> > > struct rte_flow_action_rss rss;
> > > memset(&error, 0x22, sizeof(error));
> > > memset(&attr, 0, sizeof(attr));
> > > attr.egress = 0;
> > > attr.ingress = 1;
> > >
> > > memset(&pattern, 0, sizeof(pattern));
> > > memset(&action, 0, sizeof(action));
> > > /* setting the eth item to pass all packets */
> > > memset(&eth, 0, sizeof(eth));
> > > pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > > pattern[0].spec = &eth;
> > > pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > >
> > > rss.types = ETH_RSS_IP;
> > > rss.level = 0;
> > > rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> > > rss.key_len = 0;
> > > rss.key = NULL;
> > > rss.queue_num = no_queues;
> > > for (int i = 0; i < no_queues; i++) {
> > > queues[i] = i;
> > > }
> > > rss.queue = queues;
> > > action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> > > action[0].conf = &rss;
> > >
> > > action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > >
> > > ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> > > if (ret < 0) {
> > > printf( "Flow validation failed %s\n", error.message);
> > > return;
> > > }
> > > flow = rte_flow_create(portid, &attr, pattern, action, &error);
> > >
> > > if (flow == NULL)
> > > printf("Cannot create flow: %s\n", error.message);
> > >
> > > And Rx queues configuration:
> > > for (int j = 0; j < no_queues; j++) {
> > >
> > > int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > > rte_eth_dev_socket_id(portid),
> > > NULL, mbuf_pool);
> > > if (ret < 0) {
> > > printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret,
> > > (unsigned) portid);
> > > exit(1);
> > > }
> > > }
> > >
> > > Thanks
> > > Anna
> >
> >
>
> BRs,
> Wisam Jaddo
>