[dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf structure

Stephen Hemminger stephen at networkplumber.org
Mon Aug 31 18:59:18 CEST 2020


On Mon, 31 Aug 2020 09:35:18 +0300
Andrew Rybchenko <arybchenko at solarflare.com> wrote:

> >>>>> multisegment packets.    
> >>>>
> >>>> I hope it will be mentioned in the feature documentation in the future, but
> >>>> I'm not 100% sure that it is required. See below.    
> >>> I suppose there is a hierarchy:
> >>> - the application configures DEV_RX_OFFLOAD_SCATTER on the port, telling the driver:
> >>> "Hey, driver, I'm ready to handle multi-segment packets." Readiness in general.
> >>> - the application configures BUFFER_SPLIT and tells the PMD _HOW_ it wants to split, in a particular way:
> >>> "Hey, driver, please, drop ten bytes here, here and here, and the rest - over there"
> >>
> >> My idea is to keep SCATTER and BUFFER_SPLIT independent.
> >> SCATTER is the ability to build multi-segment packets, taking
> >> as many mbufs as required from the main Rx queue mempool.
> >> BUFFER_SPLIT is support for multiple mempools and splitting
> >> received packets as specified.  
> > 
> > No.
> > Once again, drivers should accept anything from the application and rely
> > on internal logic to choose the best path. Modern CPUs have good branch
> > predictors, and making the developer do that work is counterproductive.  
> 
> Please add a bit more detail. I simply can't see the relationship,
> so right now this looks to me like just a misunderstanding.
> 
> Thanks,
> Andrew.

Ok, documenting the existing behaviour is good. I was just concerned that this
would lead to more per-queue flags.
