[dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf structure
Slava Ovsiienko
viacheslavo at mellanox.com
Thu Aug 6 18:39:13 CEST 2020
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit at intel.com>
> Sent: Thursday, August 6, 2020 19:37
> To: Slava Ovsiienko <viacheslavo at mellanox.com>; Andrew Rybchenko
> <arybchenko at solarflare.com>; dev at dpdk.org
> Cc: Matan Azrad <matan at mellanox.com>; Raslan Darawsheh
> <rasland at mellanox.com>; Thomas Monjalon <thomas at monjalon.net>;
> jerinjacobk at gmail.com; stephen at networkplumber.org;
> ajit.khaparde at broadcom.com; maxime.coquelin at redhat.com;
> olivier.matz at 6wind.com; david.marchand at redhat.com
> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
>
> On 8/6/2020 5:29 PM, Slava Ovsiienko wrote:
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit at intel.com>
> >> Sent: Thursday, August 6, 2020 19:16
> >> To: Andrew Rybchenko <arybchenko at solarflare.com>; Slava Ovsiienko
> >> <viacheslavo at mellanox.com>; dev at dpdk.org
> >> Cc: Matan Azrad <matan at mellanox.com>; Raslan Darawsheh
> >> <rasland at mellanox.com>; Thomas Monjalon <thomas at monjalon.net>;
> >> jerinjacobk at gmail.com; stephen at networkplumber.org;
> >> ajit.khaparde at broadcom.com; maxime.coquelin at redhat.com;
> >> olivier.matz at 6wind.com; david.marchand at redhat.com
> >> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
> >>
> >> On 8/3/2020 3:31 PM, Andrew Rybchenko wrote:
> >>> On 8/3/20 1:58 PM, Viacheslav Ovsiienko wrote:
> >>>> The DPDK datapath in the transmit direction is very flexible.
> >>>> Applications can build multi-segment packets and manage almost
> >>>> all data aspects - the memory pools the segments are allocated
> >>>> from, the segment lengths, the memory attributes like external,
> >>>> registered, etc.
> >>>>
> >>>> In the receive direction the datapath is much less flexible: the
> >>>> application can only specify the memory pool to configure the
> >>>> receive queue, and nothing more. To extend the receive datapath
> >>>> capabilities it is proposed to add new fields to the
> >>>> rte_eth_rxconf structure:
> >>>>
> >>>> struct rte_eth_rxconf {
> >>>>     ...
> >>>>     uint16_t rx_split_num;    /* number of segments to split into */
> >>>>     uint16_t *rx_split_len;   /* array of segment lengths */
> >>>>     struct rte_mempool **mp;  /* array of segment memory pools */
> >>>>     ...
> >>>> };
> >>>>
> >>>> A non-zero value of the rx_split_num field configures the receive
> >>>> queue to split ingress packets into multiple segments, placed in
> >>>> mbufs allocated from the various memory pools according to the
> >>>> specified lengths. A zero value of rx_split_num preserves backward
> >>>> compatibility and the queue is configured in the regular way (with
> >>>> single/multiple mbufs of the same data buffer length allocated
> >>>> from a single memory pool).
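> >>>>
> >>>> For illustration, a minimal sketch of the intended queue setup with
> >>>> the proposed fields (hdr_pool/pay_pool and the sizes are hypothetical,
> >>>> dev_info comes from rte_eth_dev_info_get(), and the existing mb_pool
> >>>> argument would be NULL in this mode):
> >>>>
> >>>>     uint16_t seg_len[] = { 128, 2048 };   /* header part, payload part */
> >>>>     struct rte_mempool *seg_mp[] = { hdr_pool, pay_pool };
> >>>>     struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
> >>>>
> >>>>     rxconf.rx_split_num = 2;       /* split each packet in two */
> >>>>     rxconf.rx_split_len = seg_len; /* per-segment data lengths */
> >>>>     rxconf.mp = seg_mp;            /* per-segment memory pools */
> >>>>     rxconf.offloads |= DEV_RX_OFFLOAD_SCATTER;
> >>>>
> >>>>     ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, 512,
> >>>>                                  socket_id, &rxconf, NULL);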
> >>>
> >>> From the above description it is not 100% clear how it will coexist
> >>> with:
> >>> - existing mb_pool argument of the rte_eth_rx_queue_setup()
> >>
> >> +1
> > - mb_pool is supposed to be NULL if the array of lengths/pools is used
> >
> >>
> >>> - DEV_RX_OFFLOAD_SCATTER
> >>> - DEV_RX_OFFLOAD_HEADER_SPLIT
> >>> How will application know that the feature is supported? Limitations?
> >>
> >> +1
> > A new flag, DEV_RX_OFFLOAD_BUFFER_SPLIT, is supposed to be introduced.
> > The feature requires that DEV_RX_OFFLOAD_SCATTER is set.
> > If DEV_RX_OFFLOAD_HEADER_SPLIT is set, an error is returned.
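> >
> > A capability-check sketch with the proposed flag (dev_info and
> > port_conf are the usual ethdev structures; the BUFFER_SPLIT flag name
> > is the proposal, not yet in the tree):
> >
> >     struct rte_eth_dev_info dev_info;
> >
> >     rte_eth_dev_info_get(port_id, &dev_info);
> >     if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_BUFFER_SPLIT)
> >         /* buffer split implies scattered Rx */
> >         port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_BUFFER_SPLIT |
> >                                      DEV_RX_OFFLOAD_SCATTER;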
> >
> >>
> >>> Is it always split by specified/fixed length?
> >>> What happens if the header length is actually different?
> >>
> >> As far as I understand, the intention is to filter specific packets to a
> >> queue first and do the split later, so the header length will be fixed...
> >
> > Not exactly. The filtering should be handled by the rte_flow engine.
> > The intention is to provide a more flexible way to describe Rx
> > buffers. Currently it is a single pool with fixed-size segments; there
> > is no way to split a packet into multiple segments with specified
> > lengths placed in specified pools. What if the packet payload should
> > be stored in physical memory on another device (GPU/storage)? What if
> > caching is not desired for the payload (say, in a pure forwarding
> > application)? We could provide a special non-cached (NC) pool.
> > What if the packet should be split into chunks with specific gaps?
> > In the Tx direction we have the opportunity to gather a packet from
> > various pools in any desired combination, but Rx is much less flexible.
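> >
> > For instance, the header and payload parts could come from two pools
> > with different data room sizes (the names and sizes here are only
> > illustrative):
> >
> >     struct rte_mempool *hdr_pool = rte_pktmbuf_pool_create(
> >         "hdr_pool", 8192, 256, 0,
> >         128 + RTE_PKTMBUF_HEADROOM, socket_id);
> >     struct rte_mempool *pay_pool = rte_pktmbuf_pool_create(
> >         "pay_pool", 8192, 256, 0,
> >         2048 + RTE_PKTMBUF_HEADROOM, socket_id);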
> >
> >>>
> >>>> The new approach would allow splitting ingress packets into
> >>>> multiple parts pushed to memory with different attributes.
> >>>> For example, the packet headers can be pushed to the embedded data
> >>>> buffers within mbufs and the application data into external
> >>>> buffers attached to mbufs allocated from different memory pools.
> >>>> The memory attributes of the split parts may differ as well - for
> >>>> example, the application data may be pushed into external memory
> >>>> located on a dedicated physical device, say a GPU or NVMe drive.
> >>>> This would improve the flexibility of the DPDK receive datapath
> >>>> while preserving compatibility with the existing API.
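> >>>>
> >>>> As a sketch of the external-memory case, the existing extbuf pool
> >>>> helper could back the payload pool (the device memory is assumed to
> >>>> be already allocated and DMA-mapped; gpu_buf/gpu_iova/gpu_len are
> >>>> hypothetical):
> >>>>
> >>>>     struct rte_pktmbuf_extmem ext = {
> >>>>         .buf_ptr  = gpu_buf,   /* external memory area */
> >>>>         .buf_iova = gpu_iova,  /* its DMA/IOVA address */
> >>>>         .buf_len  = gpu_len,
> >>>>         .elt_size = 2048 + RTE_PKTMBUF_HEADROOM,
> >>>>     };
> >>>>     struct rte_mempool *pay_pool = rte_pktmbuf_pool_create_extbuf(
> >>>>         "pay_ext", 8192, 256, 0, ext.elt_size, socket_id, &ext, 1);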
>
> If you don't know the packet types in advance, how can you use fixed
> sizes to split a packet? Won't it end up with random parts of the
> packet in each mempool?
It is a per-queue configuration. We have the rte_flow engine and can
steer the desired packets to the desired queue.
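
For example, a flow rule can pin IPv4/TCP traffic (whose header layout
is then known) to the split-configured queue; a minimal sketch, where
split_queue_id stands for the queue set up with the segment arrays:

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_TCP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue qact = { .index = split_queue_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &qact },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error flow_err;
    struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
                                            actions, &flow_err);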
>
> >>>>
> >>>> Signed-off-by: Viacheslav Ovsiienko <viacheslavo at mellanox.com>
> >>>> ---
> >>>> doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>>> 1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>> b/doc/guides/rel_notes/deprecation.rst
> >>>> index ea4cfa7..cd700ae 100644
> >>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>> @@ -99,6 +99,11 @@ Deprecation Notices
> >>>>    In 19.11 PMDs will still update the field even when the offload is not
> >>>>    enabled.
> >>>>
> >>>> +* ethdev: add new fields to ``rte_eth_rxconf`` to configure the
> >>>> +  receiving queues to split ingress packets into multiple segments
> >>>> +  according to the specified lengths into the buffers allocated
> >>>> +  from the specified memory pools. Backward compatibility with the
> >>>> +  existing API is preserved.
> >>>> +
> >>>> * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
> >>>>   will be deprecated in 20.11 and will be removed in 21.11. Existing
> >>>>   ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
> >>>
> >