[dpdk-dev] [PATCH v2 2/2] test/compress: im buffer too small - add unit tests

Akhil Goyal akhil.goyal at nxp.com
Fri Apr 17 17:39:17 CEST 2020


> > > > > > Hi Fiona/Adam,
> > > > > >
> > > > > > > This patch adds a new test suite for verification of the "internal
> > > > > > > QAT IM buffer too small" case handling. These unit tests are
> > > > > > > specific to the QAT PMD only - that's why they are contained in
> > > > > > > a separate test suite.
> > > > > > >
> > > > > > > Signed-off-by: Adam Dybkowski <adamx.dybkowski at intel.com>
> > > > > > > ---
> > > > > >
> > > > > > Why do we need to have a separate test suite for QAT?
> > > > > > Can't we have a single one and, based on the capability of the driver,
> > > > > > determine which tests need to be skipped in case they are not supported?
> > > > > > This would create a mess in the longer run, just like cryptodev.
> > > > > >
> > > > > > Please fix this, we cannot take this patch as is.
> > > > >
> > > > > [Fiona] Yes, I understand your concern and we considered including these
> > > > > in the main suite.
> > > > > However, these tests are not based on something that can be checked in
> > > > > capabilities. They home in on a specific corner case caused by a QAT
> > > > > limitation in its intermediate buffer size. So some of the tests validate
> > > > > that the recent changes we made in the PMD correctly work around that
> > > > > limitation, while other tests are negative and expected to fail, as they
> > > > > provoke a corner case that still exists. Other devices would probably not
> > > > > fail the same tests.
> > > >
> > > > Does that mean that all PMDs will pass with the newly added test case,
> > > > which is for a corner case in QAT? If that is the case, what is the issue
> > > > in adding it to the main test suite? It will pass on all PMDs, won't it?
> > > > Am I missing something?
> > > >
> > > > I believe we should not have PMD-specific test suites; rather, it should be
> > > > based on capabilities to identify the cases which should be run for that
> > > > particular PMD.
> > > [Fiona] yes, several of the cases should pass on all PMDs.
> > > So we could move those into the main suite.
> > > But what to do about the negative tests?
> > > Example: If a very large data buffer is passed to QAT to compress with
> > > dyn compression, it will get split in the PMD into many smaller requests
> > > to the hardware. However if the number of requests is bigger than can fit
> > > on the qp then this will never succeed. The test validates that the PMD
> > > behaves appropriately in this expected error case. That same case would
> > > probably not have an error on another device. Maybe we should just leave
> > > out such negative tests, but I find them useful as they validate the
> > > known behaviour. The buffer size used in the test is based on the known
> > > size QAT can handle and the corner case in which QAT will return an
> > > error.
> > >
> > > I see 4 options to handle this:
> > > 1. Leave out those tests
> > > 2. Use a qat-specific test suite only for negative cases which are constructed
> > > based on specific qat internal meta-data.
> > > 3. Include the negative tests in the main suite, but only run them on QAT (by
> > > checking driver type)
> > > 4. Include the negative tests in the main suite, run them on all,
> > > expecting a FAIL from QAT and a PASS from other devices.
> > >
> > > My preference is for 2.
> > > But up to you.
> > >
> > I would say 4 is better. And why do you say negative cases will fail on QAT
> > and pass on all others? The test cases are to test the library APIs, which
> > are the same for all the PMDs, and the behavior should be the same.
> [Fiona] I've explained above why QAT fails, sorry if it isn't clear.
> Any device can have errors - it's not an API or capability issue, it's a device
> limitation in a very unlikely corner case.
> So is 4 ok? I.e., is it acceptable if there is conditional code in the UT
> expecting a different result depending on the PMD type?
> If not, we'll revert to 1 and leave out those tests.
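
To be concrete, option 4 would mean conditional code like the below inside the
unit test. This is only a rough sketch of my understanding of the proposal; the
"compress_qat" driver name string, the run_huge_dynamic_deflate_op() helper and
the TEST_SUCCESS/TEST_FAILED values from the test app are my assumptions, not
something taken from this patch:

#include <string.h>
#include <rte_compressdev.h>
#include "test.h"	/* TEST_SUCCESS / TEST_FAILED from the test app */

/* Hypothetical helper standing in for the suite's real comp/decomp routine. */
static int run_huge_dynamic_deflate_op(void);

/* Sketch of option 4: branch on the driver name and expect a different
 * result from QAT than from every other device. */
static int
test_huge_dynamic_buffer(void)
{
	struct rte_compressdev_info dev_info;
	int ret;

	rte_compressdev_info_get(0, &dev_info);

	ret = run_huge_dynamic_deflate_op();

	if (strcmp(dev_info.driver_name, "compress_qat") == 0)
		/* QAT: the split requests exceed the qp, expect an error */
		return ret < 0 ? TEST_SUCCESS : TEST_FAILED;

	/* any other PMD: the same op is expected to succeed */
	return ret == 0 ? TEST_SUCCESS : TEST_FAILED;
}

This kind of driver-name branching is exactly the sort of PMD-specific logic I
would prefer to keep out of the main suite.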

I am still not convinced that different PMDs will behave differently for a particular case.
Even if QAT (or any other PMD) has a corner case, the test case will simply fail when it hits it.
Do you mean you want to make that case pass when the corner case is hit, because you have
a known issue reported for it and you don't want to highlight that in the test summary?
I am not sure that is a good thing to do.
If a case is failing, it should be reported as failed, even if there is a known issue filed for it.

We don't need to add any checks for PMD types.
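
A plain capability check per test case should be enough to skip what a device
cannot do. A minimal sketch of what I mean, assuming the feature in question is
discoverable via rte_compressdev_capability_get() (the RTE_COMP_FF_* flag shown
is just an example; TEST_SKIPPED/TEST_SUCCESS come from the test app):

#include <rte_comp.h>
#include <rte_compressdev.h>
#include "test.h"	/* TEST_SKIPPED / TEST_SUCCESS from the test app */

/* Skip the case if the device does not advertise dynamic Huffman support;
 * otherwise let it run unmodified. No driver-name checks involved. */
static int
check_dynamic_huffman_support(uint8_t dev_id)
{
	const struct rte_compressdev_capabilities *cap;

	cap = rte_compressdev_capability_get(dev_id, RTE_COMP_ALGO_DEFLATE);
	if (cap == NULL ||
	    !(cap->comp_feature_flags & RTE_COMP_FF_HUFFMAN_DYNAMIC))
		return TEST_SKIPPED;

	return TEST_SUCCESS;
}

A gate like this keeps the main suite PMD-agnostic: any device that advertises
the feature runs the case, and anything else is reported as skipped rather than
special-cased by driver type.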

Regards,
Akhil


