[dpdk-dev] [PATCH v15 3/7] ethdev: add validation to offloads set by PMD

Matan Azrad matan at mellanox.com
Thu Nov 7 07:56:51 CET 2019


Hi

From: Andrew Rybchenko
> On 11/6/19 9:58 AM, Matan Azrad wrote:
> >
> >
> > From: Andrew Rybchenko
> >> On 11/5/19 5:05 PM, Matan Azrad wrote:
> >>> From: Andrew Rybchenko
> >>>> On 11/3/19 6:16 PM, Matan Azrad wrote:
> >>>>> From: Andrew Rybchenko
> >>>>>> On 11/3/19 9:57 AM, Matan Azrad wrote:
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> From: Andrew Rybchenko
> >>>>>>>> On 10/31/19 7:33 PM, Pavan Nikhilesh Bhagavatula wrote:
> >>>>>>>>>> From: Pavan Nikhilesh Bhagavatula
> >>>>>>>>>>> Hi Matan,
> >>>>>>>>>>>
> >>>>>>>>>>>> Hi Pavan
> >>>>>>>>>>>>
> >>>>>>>>>>>> From: Pavan Nikhilesh
> >>>>>>>>>>>> <pbhagavatula at marvell.com>
> >>>>>>>>>>>>> Some PMDs cannot work when certain offloads are
> >>>>>>>>>>>>> enabled/disabled; as a workaround, PMDs auto
> >>>>>>>>>>>>> enable/disable offloads internally and expose it
> >>>>>>>>>>>>> through dev->data->dev_conf.rxmode.offloads.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> After the device-specific dev_configure is called,
> >>>>>>>>>>>>> compare the requested offloads to the offloads exposed
> >>>>>>>>>>>>> by the PMD and, if the PMD failed to enable a given
> >>>>>>>>>>>>> offload, log it and return -EINVAL from
> >>>>>>>>>>>>> rte_eth_dev_configure; else, if the PMD failed to
> >>>>>>>>>>>>> disable a given offload, log it and continue with
> >>>>>>>>>>>>> rte_eth_dev_configure.
> >>>>>>>>>>>>>
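For reference, a minimal sketch of the check described above, as it
could sit at the end of rte_eth_dev_configure() after the PMD's
dev_configure callback returns. The helper name and the log wording
are illustrative, not copied from the patch (assumes <inttypes.h>
for PRIx64):

/* Compare the offloads requested by the application with the
 * offloads the PMD actually applied. An offload requested but not
 * enabled is fatal (-EINVAL); an offload enabled although not
 * requested only produces a warning and configure continues.
 */
static int
eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
			  uint64_t set_offloads, const char *offload_type)
{
	uint64_t offloads_diff = req_offloads ^ set_offloads;
	uint64_t offload;
	int ret = 0;

	while (offloads_diff != 0) {
		/* Take the lowest bit that differs. */
		offload = 1ULL << __builtin_ctzll(offloads_diff);
		if (offload & req_offloads) {
			/* Requested by the app, not enabled by the PMD. */
			RTE_ETHDEV_LOG(ERR,
				"Port %u failed to enable %s offload 0x%" PRIx64 "\n",
				port_id, offload_type, offload);
			ret = -EINVAL;
		} else {
			/* Enabled by the PMD although not requested. */
			RTE_ETHDEV_LOG(WARNING,
				"Port %u %s offload 0x%" PRIx64 " is not requested but enabled\n",
				port_id, offload_type, offload);
		}
		offloads_diff &= ~offload;
	}
	return ret;
}

It would be called twice, once with the Rx offloads and once with the
Tx offloads.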
> >>>>>>>>>>>> rte_eth_dev_configure can be called more than once in
> >>>>>>>>>>>> the device's lifetime. How can you know the minimum
> >>>>>>>>>>>> offload configuration required by the port after the
> >>>>>>>>>>>> first call?
> >>>>>>>>>>>> Maybe putting it in dev info is better, what do you think?
> >>>>>>>>>>>>
> >>>>>>>>>>> We only return -EINVAL in the case where we enable an
> >>>>>>>>>>> offload advertised by dev_info and the port still fails to
> >>>>>>>>>>> enable it.
> >>>>>>>>>> Are you sure it is OK that devices may disable/enable
> >>>>>>>>>> offloads under the hood without user notification?
> >>>>>>>>> Some devices already do it. The above check adds validation
> >>>>>>>>> for the same.
> >>>>>>>> The problem is that some offloads cannot be disabled.
> >>>>>>> Yes, I understand it.
> >>>>>>>
> >>>>>>>> If the application does not request Rx checksum offload
> >>>>>>>> since it does not use it, it is not a problem to report it.
> >>>>>>> Yes, for Rx checksum I tend to agree that the application
> >>>>>>> doesn't care if the PMD calculates the checksum even though
> >>>>>>> the offload is disabled.
> >>>>>>>
> >>>>>>> But what about other offloads? For example, in Rx: LRO,
> >>>>>>> KEEP_CRC, VLAN_STRIP, JUMBO. If the PMD keeps them on while
> >>>>>>> the app disables them, it can cause problems for the
> >>>>>>> application (it affects the packet length).
> >>>>>>
> >>>>>> Yes, I agree that it is critical for some offloads to be
> >>>>>> disabled, but RSS_HASH discussed in the changeset is not
> >>>>>> critical.
> >>>>>
> >>>>> So, do you agree it should not be checked globally for all
> >>>>> the offloads in the ethdev layer?
> >>>>
> >>>> If an offload is not requested but enabled (since the PMD
> >>>> cannot disable it), right now it will not fail configure, but
> >>>> it will warn about it in the logs.
> >>>>
> >>>
> >>> In this case a warning print is not enough, since it can be
> >>> critical for the application for some offloads.
> >>> It can be very weird for the application to see that some
> >>> offloads are on while the application doesn't expect them to be on.
> >>> It can even cause an app crash (at least for the Rx offloads I
> >>> wrote about above).
> >>
> >> The patch improves the situation. Earlier it was silent, now it will
> >> be at least visible.
> >
> > We can make it visible inside the limited PMDs.
> 
> Why?

Because this does not match what the application should understand from the ethdev API.

> >> I'm afraid that in the 19.11 release cycle we cannot change it
> >> to fail dev_configure. I think it would be too destructive.
> >> Future improvements should be discussed separately.
> >
> > So we can remove this ethdev patch now and let the PMD do it
> > until we find a better solution later.
> 
> Sorry, but I don't think so.
> 
> >>>>> It will be even more problematic if the dynamic offload field
> >>>>> in the mbuf does not exist at all.
> >>>
> >>> Any answer here?
> >
> > An Rx offload that requires a dynamic mbuf field cannot stay
> > enabled while the app disables it, because the dynamic mbuf field
> > is probably not set in the mbuf. It may cause problems.
> >
> >> Please, clarify the question.
> >>

No answer here.
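To make the concern concrete: with the rte_mbuf_dyn API (new in
19.11), an offload result that lives in a dynamic field is only
usable if somebody registered the field. A sketch with a
hypothetical field name:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_debug.h>

/* Hypothetical dynamic field carrying an Rx offload result. */
static const struct rte_mbuf_dynfield rx_result_desc = {
	.name = "example_rx_offload_result",
	.size = sizeof(uint32_t),
	.align = __alignof__(uint32_t),
};

static int rx_result_offset = -1;

/* The application registers the field only when it enables the
 * offload. If the PMD keeps the offload enabled although the app
 * disabled it, the field is never registered, nothing is set in the
 * mbuf, and the offload state and the mbuf contents are out of sync.
 */
static void
enable_offload_result(void)
{
	rx_result_offset = rte_mbuf_dynfield_register(&rx_result_desc);
	if (rx_result_offset < 0)
		rte_panic("cannot register dynamic mbuf field\n");
}

static uint32_t
read_offload_result(const struct rte_mbuf *m)
{
	return *RTE_MBUF_DYNFIELD(m, rx_result_offset, uint32_t *);
}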

> >>>>>>
> >>>>>>> For example, in Tx: TSO, VLAN, MULTI_SEG...
> >>>>>>
> >>>>>> Tx is not that critical, since the application should not
> >>>>>> request these offloads per-packet. Tx offloads are mainly
> >>>>>> required to ensure that the application may request the
> >>>>>> offload per packet and it will be done.
> >>>>>
> >>>>> Yes, you're right. In Tx it looks less critical (for now).
> >>>>>
> >>>>>>
> >>>>>>>> Of course, it could be a problem if the offload is used,
> >>>>>>>> but the application wants to disable it, for example, for
> >>>>>>>> debugging purposes. In this case, the solution is to mask
> >>>>>>>> offloads at the application level, which is not ideal either.
> >>>>>>> Why not ideal?
> >>>>>>
> >>>>>> It eats CPU cycles.
> >>>>>
> >>>>> Sorry, I don't understand your use case here.
> >>>>
> >>>> If the application wants to try a code path without, for
> >>>> example, Rx checksum offload, disabling the offload could be
> >>>> insufficient right now; it is also required to clean up the
> >>>> offload result flags in each mbuf (if the PMD does not support
> >>>> disabling the offload).
> >>>
> >>> What is "right now"? Configuration time?
> >>
> >> "Right now" is the current state of some drivers in the DPDK tree.
> >>
> >
> > OK. I think the offload configuration happens at configuration
> > time, not in the data path.
> >
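For illustration, this is the data-path cost being discussed: if the
PMD cannot really disable Rx checksum offload, an application that
wants to exercise its no-offload code path has to scrub the result
flags from every received mbuf. A sketch with 19.11-era flag names
(BURST_SIZE is illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static uint16_t
rx_burst_scrubbed(uint16_t port_id, uint16_t queue_id,
		  struct rte_mbuf **pkts)
{
	uint16_t i, nb_rx;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
	/* Extra per-packet work: clear the checksum result flags the
	 * PMD set although the offload was not requested.
	 */
	for (i = 0; i < nb_rx; i++)
		pkts[i]->ol_flags &=
			~(PKT_RX_IP_CKSUM_MASK | PKT_RX_L4_CKSUM_MASK);
	return nb_rx;
}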
> >>> If the application knows at configuration time that the PMD
> >>> cannot disable the Rx checksum offload, it can plan not to
> >>> clean this flag in each Rx mbuf.
> >>
> >> Yes, and the application has a way to know it - take a look at
> >> dev->data->dev_conf.rxmode.offloads.
> >
> > As I understand it, before this patch this field was used by the
> > ethdev layer to track the application's Rx offload configuration.
> > Am I wrong?
> 
> I think it is just the Rx offloads configuration.
> It is better to have the real offloads here, since it is used in Rx
> queue setup to mask already-enabled offloads.

And in DPDK, or any SW management that controls a device, the configuration must be set by the user.
So, it should reflect the user configuration as is.
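As a sketch of what an application can do today to detect the
mismatch itself - note that peeking into rte_eth_devices[] is
exactly the kind of workaround being argued about here, not a stable
contract (queue counts are illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
check_applied_rx_offloads(uint16_t port_id)
{
	uint64_t requested = DEV_RX_OFFLOAD_CHECKSUM;
	struct rte_eth_conf conf = {
		.rxmode = { .offloads = requested },
	};
	uint64_t applied, extra;

	if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0) {
		printf("port %u: configure failed\n", port_id);
		return;
	}

	/* Internal field discussed above; not a stable public API. */
	applied = rte_eth_devices[port_id].data->dev_conf.rxmode.offloads;
	extra = applied & ~requested;
	if (extra != 0)
		printf("port %u: PMD kept offloads 0x%" PRIx64 " enabled\n",
		       port_id, extra);
}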

 
> > And if the meaning is the configuration set by the PMD (which,
> > weirdly, can be different from what the application wants), I
> > think it should be an error - because the app doesn't follow the API.
> 
> Which app? Which API?

App - the DPDK application which configures an offload that cannot be masked.
API - the Rx offload field in the ethdev data (which, weirdly, means what was configured by the PMD).

> 
> >>> It looks to me like a PMD limitation which can be solved by 2
> >>> options:
> >>> 1. Capability information which tells the app which offloads
> >>> may not be disabled.
> >>> 2. Add the limitation to the PMD documentation and print a
> >>> warning/error message from the PMD.
> >>
> >> Yes, right now we are going way (2).
> >>
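Option (1) could look something like the sketch below - a purely
hypothetical extension, nothing like it exists in the ethdev API at
the time of this thread:

#include <stdint.h>

/* Hypothetical capability: a mask, reported alongside the usual
 * offload capabilities in device info, of Rx offloads the device
 * cannot turn off.
 */
struct rte_eth_dev_info_ext {
	uint64_t rx_offload_capa;  /* offloads that can be enabled */
	uint64_t rx_offload_fixed; /* offloads that are always on */
};

/* The application can then reject the port up front instead of
 * discovering the mismatch after configure.
 */
static int
port_acceptable(const struct rte_eth_dev_info_ext *info, uint64_t wanted)
{
	return (info->rx_offload_fixed & ~wanted) == 0;
}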
> >>>>>>> If the application can know the limitations on disabling
> >>>>>>> offloads (for example, by reading a capability for it), the
> >>>>>>> application has all the information to make decisions.
> >>>>>>>
> >>>>>>>> Anyway, the patch just tries to highlight the difference
> >>>>>>>> between applied and requested. So, it is a step forward.
> >>>>>>>> Also, the patch will fail configure if an offload is
> >>>>>>>> requested but not enabled.
> >>>>>>>>
> >>>>>>>>>> Can't it break applications? Why does the device expose
> >>>>>>>>>> unsupported offloads in dev info? Does it update the
> >>>>>>>>>> running offloads asynchronously? A race?
> >>>>>>>>>> Can you also explain your specific use case?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>>> Matan
> >>>>>
> >>>
> >


