[PATCH 1/1] tools: check for pending test status when parsing emails

Patrick Robb probb at iol.unh.edu
Thu May 23 23:47:59 CEST 2024


On Tue, May 21, 2024 at 12:08 PM Thomas Monjalon <thomas at monjalon.net> wrote:
>
> 20/05/2024 23:36, Patrick Robb:
> > 2. UNH Lab triggers some testrun pipelines in our CI system (jenkins).
> > The first action the pipeline takes is to create in our database a
> > test result record for each testrun, setting the status to PENDING. It
> > is important to note that one patchwork context, like
> > "iol-compile-amd64-testing," may consist of many individual testruns,
> > each for a different distro, hardware, environment, etc.
> > 3. When each testrun completes, it will send a report to Patchwork
> > with the new result (pass or fail). When it does this it will update
> > the context's results table, changing the environment's result from
> > pending to pass/fail. So, when the first report comes in for, say,
> > context "iol-compile-amd64-testing," you would see 1 pass/fail, 12
> > pending, or similar. Then, as subsequent testruns complete and
> > report their results, each new report carries the updated table.
> > The overall context result (the _Testing {PASS/FAIL/PENDING}_ at
> > the top of the test report email) is determined in the manner you
> > might expect: if there is at least one testrun fail result, the
> > overall context result is fail; else if there is at least one
> > pending result, the overall context result is pending; else all
> > results are passing and the overall result is pass. As an example,
> > when testing is nearly complete, the top of
> > the report email may look like this:
> >
> > _Testing PENDING_
> >
> > Branch: tags/v22.11
> >
> > a409653a123bf105970a25c594711a3cdc44d139 --> testing pass
> >
> > Test environment and result as below:
> >
> > +------------------------------------+--------------------+
> > |            Environment             | dpdk_meson_compile |
> > +====================================+====================+
> > | Ubuntu 20.04 ARM SVE               | PASS               |
> > +------------------------------------+--------------------+
> > | Debian 12 with MUSDK               | PENDING            |
> > +------------------------------------+--------------------+
> > | Fedora 37 (ARM)                    | PASS               |
> > +------------------------------------+--------------------+
> > | Ubuntu 20.04 (ARM)                 | PASS               |
> > +------------------------------------+--------------------+
> > | Fedora 38 (ARM)                    | PASS               |
> > +------------------------------------+--------------------+
> > | Fedora 39 (ARM)                    | PENDING            |
> > +------------------------------------+--------------------+
> > | Debian 12 (arm)                    | PASS               |
> > +------------------------------------+--------------------+
> > | CentOS Stream 9 (ARM)              | PASS               |
> > +------------------------------------+--------------------+
> > | Debian 11 (Buster) (ARM)           | PASS               |
> > +------------------------------------+--------------------+
> > | Ubuntu 20.04 ARM GCC Cross Compile | PASS               |
> > +------------------------------------+--------------------+
>
> It is quite strange to receive a new email each time a line of the table is updated.
>
> > 4. Eventually, all testruns are complete for a patchwork context, and
> > the table switches from pending to pass or fail.
> >
> > This does not slow the delivery of results, nor does it increase the
> > number of test report emails sent. We still send only 1 email per
> > testrun.
>
> I had not realised that so many emails are sent.
> I thought it was 1 patchwork context == 1 email.

This is how it worked until last year, but our test result delivery
was slow in some cases. So, we implemented "tail reporting", i.e. the
ability for an environment to report its own testrun as soon as it
finishes testing, rather than relying on a testrun aggregator which
runs later. If a testrun fails, we want to share that information as
soon as we can, not block it on other testing.
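
For reference, a minimal sketch of the aggregation rule described
above (the helper name is just illustrative, not our actual tooling)
could look like this in Python:

    def overall_context_result(testrun_results):
        """Reduce per-testrun results ('PASS'/'FAIL'/'PENDING') to the
        overall context result shown at the top of the report email."""
        results = {r.upper() for r in testrun_results}
        if "FAIL" in results:
            return "FAIL"      # any failing testrun fails the context
        if "PENDING" in results:
            return "PENDING"   # no failures yet, but runs still outstanding
        return "PASS"          # every testrun has completed and passed

    # e.g. the example table above, with two environments still pending,
    # would currently reduce to "PENDING".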

