[RFC PATCH] eventdev: adapter API to configure multiple Rx queues
Naga Harish K, S V
s.v.naga.harish.k at intel.com
Thu Jan 30 16:30:48 CET 2025
> -----Original Message-----
> From: Jerin Jacob <jerinj at marvell.com>
> Sent: Wednesday, January 29, 2025 1:13 PM
> To: Naga Harish K, S V <s.v.naga.harish.k at intel.com>; Shijith Thotton
> <sthotton at marvell.com>; dev at dpdk.org
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula at marvell.com>; Pathak,
> Pravin <pravin.pathak at intel.com>; Hemant Agrawal
> <hemant.agrawal at nxp.com>; Sachin Saxena <sachin.saxena at nxp.com>;
> Mattias Rönnblom <mattias.ronnblom at ericsson.com>; Liang Ma
> <liangma at liangbit.com>; Mccarthy, Peter <peter.mccarthy at intel.com>; Van
> Haaren, Harry <harry.van.haaren at intel.com>; Carrillo, Erik G
> <erik.g.carrillo at intel.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar at intel.com>; Amit Prakash Shukla
> <amitprakashs at marvell.com>; Burakov, Anatoly
> <anatoly.burakov at intel.com>
> Subject: RE: [RFC PATCH] eventdev: adapter API to configure multiple Rx
> queues
>
>
>
> > -----Original Message-----
> > From: Naga Harish K, S V <s.v.naga.harish.k at intel.com>
> > Sent: Wednesday, January 29, 2025 10:35 AM
> > To: Shijith Thotton <sthotton at marvell.com>; dev at dpdk.org
> > Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula at marvell.com>; Pathak,
> > Pravin <pravin.pathak at intel.com>; Hemant Agrawal
> > <hemant.agrawal at nxp.com>; Sachin Saxena <sachin.saxena at nxp.com>;
> > Mattias Rönnblom <mattias.ronnblom at ericsson.com>; Jerin Jacob
> > <jerinj at marvell.com>; Liang Ma <liangma at liangbit.com>; Mccarthy, Peter
> > <peter.mccarthy at intel.com>; Van Haaren, Harry
> > <harry.van.haaren at intel.com>; Carrillo, Erik G
> > <erik.g.carrillo at intel.com>; Gujjar, Abhinandan S
> > <abhinandan.gujjar at intel.com>; Amit Prakash Shukla
> > <amitprakashs at marvell.com>; Burakov, Anatoly
> > <anatoly.burakov at intel.com>
> > Subject: [EXTERNAL] RE: [RFC PATCH] eventdev: adapter API to configure
> > multiple Rx queues
> > > >
> > > >This requires a change to the rte_event_eth_rx_adapter_queue_add()
> > > >stable API parameters.
> > > >This is an ABI breakage and may not be possible now.
> > > >It requires changes to many current applications that are using the
> > > >rte_event_eth_rx_adapter_queue_add() stable API.
> > > >
> > >
> > > What I meant by mapping was to retain the stable API parameters as they are.
> > > Internally, the API can use the proposed eventdev PMD operation
> > > (eth_rx_adapter_queues_add) without causing an ABI break, as shown below.
> > >
> > > int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint16_t eth_dev_id,
> > >                 int32_t rx_queue_id,
> > >                 const struct rte_event_eth_rx_adapter_queue_conf *conf)
> > > {
> > >         if (rx_queue_id == -1)
> > >                 return (*dev->dev_ops->eth_rx_adapter_queues_add)(
> > >                         dev, &rte_eth_devices[eth_dev_id], NULL,
> > >                         conf, 0);
> > >         else
> > >                 return (*dev->dev_ops->eth_rx_adapter_queues_add)(
> > >                         dev, &rte_eth_devices[eth_dev_id], &rx_queue_id,
> > >                         conf, 1);
> > > }
> > >
> > > With the above change, the old op (eth_rx_adapter_queue_add) can be removed,
> > > as both APIs (stable and proposed) will use eth_rx_adapter_queues_add.
>
>
> Since this thread is not converging, and it looks like that is due to confusion,
> I am trying to summarize my understanding and define the next steps (e.g., if
> needed, we take it to the tech board if there is no consensus).
>
>
> Problem statement:
> ==================
> 1) The implementation of rte_event_eth_rx_adapter_queue_add() in HW typically
> uses an administrative function to enable it. Typically, it translates to
> sending a mailbox message to the PF driver, etc.
> So this function takes "time" to complete in HW implementations.
> 2) For SW implementations, this won't take time as there are no other actors
> involved.
> 3) There are customer use cases that add 300+
> rte_event_eth_rx_adapter_queue_add() calls on application bootup, which
> introduces significant boot time for the application.
> The number of queues is a function of the number of ethdev ports, the number
> of ethdev Rx queues per port, and the number of event queues.
>
>
> Expected outcome of problem statement:
> ======================================
> 1) For the cases where the application knows the queue mapping (typically at
> boot time), the application can call a burst variant of
> rte_event_eth_rx_adapter_queue_add() to amortize the cost. DPDK uses a
> similar scheme in control-path APIs where latency is critical, such as
> rte_acl_add_rules() or rte_flow via the template scheme.
> 2) The solution should not break ABI or have any impact on SW drivers.
> 3) Avoid duplicating code as much as possible.
>
>
> Proposed solution:
> ==================
> 1) Update the eventdev_eth_rx_adapter_queue_add_t() PMD (internal ABI) API
> to take burst parameters
> 2) Add a new rte_event_eth_rx_adapter_queue*s*_add() function and wire it to
> use the updated PMD API
> 3) Implement rte_event_eth_rx_adapter_queue_add() as
> rte_event_eth_rx_adapter_queue*s*_add(...., 1)
>
> If so, I am not sure what the cons of this approach are; it will allow
> applications to be optimized when
> a) the application knows the queue mapping beforehand (typically at boot time)
> b) HW drivers can optimize without breaking anything for SW drivers
> c) applications can decide between burst and non-burst usage based on their
> needs and performance requirements
The proposed API benefits only some hardware platforms that have optimized the "queue_add" eventdev PMD implementation for burst mode.
It may not benefit SW drivers or other HW platforms.
For those cases, there will not be much difference between calling the existing API (rte_event_eth_rx_adapter_queue_add()) in a loop and using the new API; see the sketch below.
If the new proposed API benefits all platforms, then it is useful.
This is the point I have been making from the beginning; it is not captured in the summary.
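
For illustration, a minimal sketch of the two call patterns being compared. The
rte_event_eth_rx_adapter_queues_add() signature used below (an array of Rx
queue ids, an array of per-queue configs and a count) is only assumed from this
RFC thread, not a finalized API:

#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>

/* Today: one call, and on some HW one mailbox/administrative round trip,
 * per Rx queue.
 */
static int
add_queues_one_by_one(uint8_t id, uint16_t eth_dev_id, uint16_t nb_rx_queues,
                      const struct rte_event_eth_rx_adapter_queue_conf *conf)
{
        uint16_t q;
        int ret;

        for (q = 0; q < nb_rx_queues; q++) {
                ret = rte_event_eth_rx_adapter_queue_add(id, eth_dev_id, q,
                                                         &conf[q]);
                if (ret < 0)
                        return ret;
        }
        return 0;
}

/* Proposed (signature assumed): one burst call covering all Rx queues, so a
 * HW PMD can batch its administrative work; a SW PMD can loop internally.
 */
static int
add_queues_in_one_burst(uint8_t id, uint16_t eth_dev_id, uint16_t nb_rx_queues,
                        const struct rte_event_eth_rx_adapter_queue_conf *conf)
{
        int32_t queues[RTE_MAX_QUEUES_PER_PORT];
        uint16_t q;

        for (q = 0; q < nb_rx_queues; q++)
                queues[q] = q;

        return rte_event_eth_rx_adapter_queues_add(id, eth_dev_id, queues,
                                                   conf, nb_rx_queues);
}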