[dpdk-dev] [PATCH v4 1/2] ethdev: add packet integrity checks

Ferruh Yigit ferruh.yigit at intel.com
Wed Apr 14 15:31:40 CEST 2021


On 4/14/2021 2:27 PM, Ferruh Yigit wrote:
> On 4/14/2021 1:56 PM, Gregory Etelson wrote:
>> From: Ori Kam <orika at nvidia.com>
>>
>> Currently, a DPDK application can offload the checksum check
>> and have the result reported in the mbuf.
>>
>> However, as more and more applications are offloading some or all
>> logic and actions to the HW, there is a need to check the packet
>> integrity so the right decision can be taken.
>>
>> The application logic can be positive, meaning if the packet is
>> valid, jump / do actions; or negative, meaning if the packet is not
>> valid, jump to SW / do actions (like drop), and add a default flow
>> (match all in low priority) that will direct the missed packets
>> to the miss path.
>>
>> Since rte_flow currently works in a positive way, the assumption is
>> that the positive way will be the common one in this case also.
>>
>> When considering the best API to implement such a feature,
>> we need to take the following into account (in no specific order):
>> 1. API breakage.
>> 2. Simplicity.
>> 3. Performance.
>> 4. HW capabilities.
>> 5. rte_flow limitations.
>> 6. Flexibility.
>>
>> First option: Add integrity flags to each of the items.
>> For example, add a checksum_ok bit to the ipv4 item.
>>
>> Pros:
>> 1. No new rte_flow item.
>> 2. Simple, in that on each item the app can see
>> which checks are available.
>>
>> Cons:
>> 1. API breakage.
>> 2. Increases the number of flows, since the app can't add a global rule
>>     and must have a dedicated flow for each combination; for example,
>>     matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>     result in 5 flows.
>>
>> Second option: dedicated item
>>
>> Pros:
>> 1. No API breakage, and none expected for some time thanks to having
>>     extra space (by using bits).
>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>     IPv6.
>> 3. Simplicity: the application can look in just one place to see all
>>     possible checks.
>> 4. Allows future support for more checks.
>>
>> Cons:
>> 1. A new item that holds a number of fields from different items.
>>
>> For starters, the following bits are suggested:
>> 1. packet_ok - all HW checks applicable to the packet's layers have
>>     passed. On some HW this may mean such a flow has to be split into
>>     a number of flows, or fail.
>> 2. l2_ok - all checks for layer 2 have passed.
>> 3. l3_ok - all checks for layer 3 have passed. If the packet doesn't
>>     have an l3 layer this check should fail.
>> 4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
>>     have an l4 layer this check should fail.
>> 5. l2_crc_ok - the layer 2 CRC is OK.
>> 6. ipv4_csum_ok - the IPv4 checksum is OK. It is possible that the
>>     IPv4 checksum is OK while l3_ok is 0; it is not possible that the
>>     checksum is 0 while l3_ok is 1.
>> 7. l4_csum_ok - the layer 4 checksum is OK.
>> 8. l3_len_ok - checks that the reported layer 3 length is smaller than
>>     the frame length.
>>
>> Example of usage:
>> 1. Check packets from all possible layers for integrity:
>>     flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>
>> 2. Check only packets with layer 4 (UDP / TCP):
>>     flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>
>> Signed-off-by: Ori Kam <orika at nvidia.com>
>> ---
>>   doc/guides/prog_guide/rte_flow.rst     | 19 ++++++++++
>>   doc/guides/rel_notes/release_21_05.rst |  5 +++
>>   lib/librte_ethdev/rte_flow.h           | 48 ++++++++++++++++++++++++++
>>   3 files changed, 72 insertions(+)
>>
>> diff --git a/doc/guides/prog_guide/rte_flow.rst 
>> b/doc/guides/prog_guide/rte_flow.rst
>> index e1b93ecedf..4b8723b84c 100644
>> --- a/doc/guides/prog_guide/rte_flow.rst
>> +++ b/doc/guides/prog_guide/rte_flow.rst
>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>   - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>   - Default ``mask`` matches nothing, for all eCPRI messages.
>> +Item: ``PACKET_INTEGRITY_CHECKS``
>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> +
>> +Matches packet integrity.
>> +Some devices require pre-enabling for this item before using it.
>> +
> 
> "pre-enabling" may not be clear enough, what about updating it slightly:
> 
> "Some devices require enabling integrity checks in HW before using this flow 
> item."
> 

Indeed, even with the above it is not clear who should do the enabling, the PMD 
or the application; let me try again:

"For some devices the application needs to enable integrity checks in HW before 
using this flow item."


> For the record, the intention here is to highlight that if the requested 
> integrity check is not enabled in HW, creating the flow rule will fail.
> The application may need to enable the integrity check in HW first.


