[dpdk-dev] [RFC] New packet type query API

Yang, Qiming qiming.yang at intel.com
Tue Jan 23 03:46:59 CET 2018


Answered in Adrien's email.

From: Andrew Rybchenko [mailto:arybchenko at solarflare.com]
Sent: Wednesday, January 17, 2018 4:09 PM
To: Adrien Mazarguil <adrien.mazarguil at 6wind.com>; Yang, Qiming <qiming.yang at intel.com>
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] [RFC] New packet type query API

On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
I understand the motivation behind this proposal; however, since new
ideas must be challenged, I have a few comments:

- How about making packet type recognition an optional offload,
  configurable per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)?
  That way the extra processing cost could be avoided for applications
  that do not care. (A minimal sketch follows the list.)

- Depending on HW, packet type information inside RX descriptors may
  not necessarily fit 64-bit, or at least not without transformation.
  This transformation would still cause wasted cycles on the PMD side.

- If enable_ptype_direct is enabled, the PMD may not waste CPU cycles,
  but the subsequent look-up with the proposed API would translate to a
  higher cost on the application side. As a data plane API, how does
  this benefit applications that want to retrieve packet type
  information? (See the second sketch after the list.)

- Without a dedicated mbuf flag, an application cannot tell whether
  enclosed packet type data is in HW format. Even if present, if port
  information is discarded or becomes invalid (e.g. mbuf stored in an
  application queue for lengthy periods or passed as is to an unrelated
  application), there is no way to make sense of the data (also
  illustrated in the second sketch below).

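For illustration, a minimal sketch of the first point, assuming a
hypothetical DEV_RX_OFFLOAD_PTYPE flag that follows the existing
DEV_RX_OFFLOAD_* convention (the rest is the standard ethdev API):

#include <rte_ethdev.h>

/* Hypothetical flag for illustration; not an existing DPDK symbol. */
#define DEV_RX_OFFLOAD_PTYPE (1ULL << 31)

static int
rx_queue_setup_with_ptype(uint16_t port_id, uint16_t queue_id,
                          uint16_t nb_desc, struct rte_mempool *mp)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;
        /* Request ptype recognition on this queue only; queues that do
         * not need it avoid the parsing cost entirely. */
        rxconf.offloads |= DEV_RX_OFFLOAD_PTYPE;
        return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mp);
}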

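And a minimal sketch of the look-up cost and missing-flag concerns;
rte_eth_ptype_query() and PKT_RX_PTYPE_RAW are hypothetical
placeholders for the proposed look-up API and a dedicated mbuf flag,
neither of which exists in DPDK:

#include <rte_mbuf.h>

/* Hypothetical placeholders for illustration only. */
#define PKT_RX_PTYPE_RAW (1ULL << 63)
uint32_t rte_eth_ptype_query(uint16_t port_id, uint32_t raw_ptype);

static inline uint32_t
app_packet_type(const struct rte_mbuf *m)
{
        /* Without a dedicated flag there is no way to tell raw HW data
         * from standard RTE_PTYPE_* values. */
        if (m->ol_flags & PKT_RX_PTYPE_RAW)
                /* The translation cost moves from the PMD into the
                 * application fast path; m->port must still be valid. */
                return rte_eth_ptype_query(m->port, m->packet_type);
        return m->packet_type;
}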


In my opinion, mbufs should only contain data fields in a standardized
format. Managing packet types like an offload which can be toggled at
will seems to be the best compromise. Thoughts?

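For comparison, the standardized format applications rely on today can
be consumed directly, whichever PMD filled it in (handle_ipv4() and
handle_ipv6() are hypothetical application handlers):

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

static void handle_ipv4(struct rte_mbuf *m); /* hypothetical handlers */
static void handle_ipv6(struct rte_mbuf *m);

/* Standard usage today: m->packet_type carries RTE_PTYPE_* values, so
 * dispatch code like this works unchanged across PMDs. */
static void
dispatch_by_ptype(struct rte_mbuf *m)
{
        if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
                handle_ipv4(m);
        else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
                handle_ipv6(m);
        else
                rte_pktmbuf_free(m);
}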

+1

