[dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
Ori Kam
orika at nvidia.com
Thu Apr 15 17:10:38 CEST 2021
Hi Ajit,
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde at broadcom.com>
> Subject: Re: [PATCH v5 1/2] ethdev: add packet integrity checks
>
> On Wed, Apr 14, 2021 at 9:10 AM Gregory Etelson <getelson at nvidia.com>
> wrote:
> >
> > From: Ori Kam <orika at nvidia.com>
> >
> > Currently, a DPDK application can offload the checksum check
> > and have the result reported in the mbuf.
> >
> > However, as more and more applications offload some or all of their
> > logic and actions to the HW, there is a need to check packet
> > integrity so that the right decision can be taken.
> >
> > The application logic can be positive, meaning that if the packet is
> > valid it jumps / does actions, or negative, meaning that if the packet
> > is not valid it jumps to SW / does actions (like drop), and adds a
> > default flow (match all, in low priority) that directs the missed
> > packets to the miss path.
>
> Unless I missed it,
> How do you specify the negative case?
> Can you provide an example as well?
>
You can express the negative case by setting the spec bit to zero and the mask bit to 1.
This example was taken from the testpmd patch:
flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
It matches all invalid packets and forwards them to the application.
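For reference, the same negative rule built directly in C with the rte_flow
API could look roughly like this (my sketch, not part of the patch; port 0
and queue 0 are just example values):

#include <rte_flow.h>

/* Match packets that failed the HW integrity checks (packet_ok == 0)
 * and redirect them to queue 0 for SW handling.
 */
static struct rte_flow *
create_negative_integrity_flow(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_integrity spec = { .level = 0, .value = 0 };
	struct rte_flow_item_integrity mask = { .level = 0, .value = 0 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_error error;

	mask.packet_ok = 1; /* test only the packet_ok bit; spec leaves it 0 */

	struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
			.spec = &spec,
			.mask = &mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}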
> >
> >
> > Since rte_flow currently works in a positive way, the assumption is
> > that the positive way will be the common one in this case as well.
> >
> > When thinking about the best API to implement such a feature,
> > we need to consider the following (in no specific order):
> > 1. API breakage.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexibility.
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple, in that for each item the app can see
> > which checks are available.
> >
> > Cons:
> > 1. API breakage.
> > 2. Increased number of flows, since the app can't add a global rule and
> > must have a dedicated flow for each of the flow combinations; for example,
> > matching on ICMP traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be none for some time thanks to the
> > extra space (by using bits).
> > 2. Just one flow to support ICMP or UDP/TCP traffic with IPv4 /
> > IPv6.
> > 3. Simplicity: the application can look in just one place to see all
> > possible checks.
> > 4. Allows future support for more tests.
> >
> > Cons:
> > 1. A new item that holds a number of fields from different items.
> >
> > For starters, the following bits are suggested:
> > 1. packet_ok - all HW checks depending on the packet layers have
> > passed. This may mean that in some HW such a flow should be split into
> > a number of flows, or fail.
> > 2. l2_ok - all checks for layer 2 have passed.
> > 3. l3_ok - all checks for layer 3 have passed. If the packet doesn't
> > have an l3 layer this check should fail.
> > 4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
> > have an l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 CRC is O.K.
> > 6. ipv4_csum_ok - the IPv4 checksum is O.K. It is possible that the
> > IPv4 checksum is O.K. but l3_ok is 0; it is not possible that
> > ipv4_csum_ok is 0 while l3_ok is 1.
> > 7. l4_csum_ok - the layer 4 checksum is O.K.
> > 8. l3_len_ok - the reported layer 3 len is smaller than the
> > frame len.
> >
> > Example of usage:
> > 1. Check packets from all possible layers for integrity:
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packets with layer 4 (UDP / TCP):
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
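As a rough C sketch of example 2 (mine, not from the patch), the spec/mask
pair maps onto the new item like this:

	struct rte_flow_item_integrity spec = { .level = 0 };
	struct rte_flow_item_integrity mask = { .level = 0 };

	spec.l3_ok = 1;	/* require a valid L3 */
	spec.l4_ok = 1;	/* require a valid L4 */
	mask.l3_ok = 1;	/* only these two bits take part in the match */
	mask.l4_ok = 1;

	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
		.spec = &spec,
		.mask = &mask,
	};

This single flow covers UDP and TCP over both IPv4 and IPv6, which is the
point of the dedicated item compared to per-item flags.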
> > Signed-off-by: Ori Kam <orika at nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> > doc/guides/rel_notes/release_21_05.rst | 5 +++
> > lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> > 3 files changed, 74 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> > index e1b93ecedf..1dd2301a07 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +For some devices the application needs to enable integrity checks in HW
> > +before using this item.
> > +
> > +- ``level``: the encapsulation level that should be checked. Level 0 means
> > +  the default PMD mode (can be innermost / outermost), a value of 1 means
> > +  outermost, and higher values mean inner headers. See also RSS level.
> > +- ``packet_ok``: all HW packet integrity checks have passed based on the
> > +  max layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 4 HW integrity checks passed.
> > +- ``l2_crc_ok``: layer 2 CRC check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> > +
> > Actions
> > ~~~~~~~
> >
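To illustrate the ``level`` semantics above (my sketch, not from the patch,
and assuming the PMD reads ``level`` from the spec): checking the inner
headers of tunnelled traffic would use a level greater than 1, following
the same convention as RSS level:

	/* Level 2 = inner headers of a tunnelled packet (1 = outermost). */
	struct rte_flow_item_integrity spec = { .level = 2 };
	struct rte_flow_item_integrity mask = { .level = 0 };

	spec.l4_csum_ok = 1;	/* inner L4 checksum must be valid */
	mask.l4_csum_ok = 1;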
> > diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> > index a0b907994a..986f749384 100644
> > --- a/doc/guides/rel_notes/release_21_05.rst
> > +++ b/doc/guides/rel_notes/release_21_05.rst
> > @@ -168,6 +168,11 @@ New Features
> > the events across multiple stages.
> > * This also reduced the scheduling overhead on a event device.
> >
> > +* **Added packet integrity match to RTE flow rules.**
> > +
> > + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
> > + * Added ``rte_flow_item_integrity`` data structure.
> > +
> > * **Updated testpmd.**
> >
> > * Added a command line option to configure forced speed for Ethernet port.
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index c476a0f59d..446ff48140 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + * For some devices the application needs to enable integrity checks
> > + * in HW before using this item.
> > + *
> > + * See struct rte_flow_item_integrity.
> > + */
> > + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> > };
> >
> > /**
> > @@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
> > };
> > #endif
> >
> > +__extension__
> > +struct rte_flow_item_integrity {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /**< The packet is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer crc is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< IPv4 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the frame len. */
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_integrity
> > +rte_flow_item_integrity_mask = {
> > + .level = 0,
> > + .value = 0,
> > +};
> > +#endif
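A usage note on the union above: the ``value`` alias lets an application
set or test all defined bits at once instead of naming each bitfield, e.g.
(my sketch, not part of the patch):

	/* Build a mask covering all eight defined integrity bits at once. */
	struct rte_flow_item_integrity mask = { .level = 0 };

	mask.value = 0xFF;	/* packet_ok .. l3_len_ok; reserved stays 0 */

And since the default mask (rte_flow_item_integrity_mask) is all zeros,
the application always has to provide an explicit mask with this item.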
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > --
> > 2.25.1
> >