distributed tests of a patch

Thomas F Herbert therbert at redhat.com
Fri Jul 24 00:13:02 AEST 2015



On 7/22/15 7:26 PM, Thomas Monjalon wrote:
> Hello,
>
> Summary: it would be very nice for patchwork to take distributed test
> results into account.
> Longer text below...
>
> Patchwork is incredibly useful for listing the patches received on the
> mailing list of an Open Source project and classifying them by their
> associated status. A patch is accepted after some review, reading the
> discussion, checking the specific associated tags (A/R/T column) and
> testing. This last point may be automated, but there is currently no way
> to gather test results from many contributors into the central view that
> patchwork provides.
> It would be really useful to add a new column to count the automated test
> results Success, Warnings and Errors (S/W/E) as it is done for manual tags
> Acked-by/Reviewed-by/Tested-by.
>
> As is done for managing the patches, an XML-RPC API could be used to add
> a new test report associated with a patch ID. A test report could also
> be received by email on a mailing list dedicated to test reports.
> One advantage of sending test reports to a mailing list is that they can
> be received without needing patchwork at all, and optionally given some
> custom processing. It also makes it possible to offload the storage of
> the test reports to the mailing list archives. A last advantage: mailing
> list administration makes it possible to whitelist the test contributors.
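
For illustration, reporting a result through the XML-RPC interface could look
roughly like the sketch below; the method name and its arguments are
hypothetical, since no such API exists in patchwork yet:

    import xmlrpc.client

    # Patchwork exposes its XML-RPC interface at /xmlrpc/; the host is made up.
    rpc = xmlrpc.client.ServerProxy('https://patchwork.example.org/xmlrpc/')

    # Hypothetical method: attach a test result to an existing patch ID.
    rpc.test_result_create(12345, {
        'label': 'checkpatch',   # test suite / test name
        'status': 'SUCCESS',     # SUCCESS, WARNING or ERROR
        'url': 'https://lists.example.org/reports/msg00042.html',
    })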
>
> With such a distributed test system, many contributors can run different test
> suites in their own hardware / OS environments for each submitted patch,
> improving the quality of the impacted project.
>
> Below is a workflow giving more details.
>
> Let's consider a project having 3 mailing lists:
> - devel@ to receive the patches and comments.
> - test@ is a mailing list, without archives, used to send patches out for testing.
> - reports@ is a mailing list with archives to store the test reports.
> Each patch from devel@ is sent to test@ with a [patchwork id] prefix.
> Private tests may be requested by sending patches directly to test@.
> If an email received on test@ has no [PATCH] keyword, it is filtered out.
> Each email sent to test@ is forwarded to every registered test platform
> (i.e. testers are members of the test@ mailing list).
> Each test platform applies the patch (and the previous patches of the series)
> on its local git tree.
> Each test platform must reply with one or more reports, keeping exactly the
> same subject, to the original sender, or to reports@ if the List-Id was devel@.
> A test report may be linked to a test suite doing several tests.
> The first line begins with Test-Label: followed by a test name.
> The second line begins with Test-Status: followed by SUCCESS, WARNING or ERROR.
> The WARNING keyword may be used by tests known to have some false positives,
> or by tests which are not so important.
> The detailed report must be inline, to allow easy reading from the mail archives.
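
As a concrete illustration (every detail below is made up), a report mail
following this format could look like:

    Subject: [pw12345] foo: fix crash on empty input

    Test-Label: checkpatch
    Test-Status: SUCCESS

    Patch applied cleanly on master.
    total: 0 errors, 0 warnings, 87 lines checked
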
> When the report is received on reports@, the patchwork entry for that id is
> updated with a new test entry holding the test label, the status and the URL
> of the report in the archives.
> The new test result is visible in 2 patchwork views.
> The patch list has a column displaying the counts of test successes, warnings
> and errors. These counters are a link to the patch view with the test section
> unfolded.
> The detailed patch view has a test section (folded by default) like the headers
> section. When folded, it shows only the counters. When unfolded, it shows one
> test result per line: the test label followed by the status, which is an HTML
> link to the full report.
I have also been working on this in the context of the DPDK community 
and have already made some client-side changes. I think it is important 
to note that the actual changes we are making are relatively minor. I 
have a few points to add to Thomas Monjalon's explanation.

We want to make these changes as transparent as possible and work in 
such a way that deployers of patchwork that want to upgrade for other 
reasons will not be forced to use the new features.

First: We will add some additional parameters to the patch_set API to 
indicate the test status.
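
Roughly, the call could carry the result in the params dict that patch_set()
already takes; the test-related keys below are placeholders, not settled names:

    import xmlrpc.client

    # patch_set() needs an authenticated proxy in patchwork;
    # credentials are omitted from this sketch.
    rpc = xmlrpc.client.ServerProxy('https://patchwork.example.org/xmlrpc/')

    rpc.patch_set(12345, {
        'state': 'Under Review',     # existing parameter
        'test_label': 'unit-tests',  # proposed: which test reported
        'test_status': 'WARNING',    # proposed: SUCCESS / WARNING / ERROR
    })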

Second: We will add a column to the main page showing the test status for 
each patch ID, with three counts for Success/Warning/Error.

Third: We will be updating the patch detailed view analogously to 
"headers show", with summary descriptions of the submitted test results.

Fourth: We will have an additional new main page showing "incoming test 
results", indexed by patch_id, with a list of incoming test results. 
Clicking on each test result will show the longer test output, generally 
Jenkins console dumps. (However, nothing in this proposal mandates 
Jenkins or any other CI tool.)

Fifth: There will be additional email parsing in the client-side tools 
to parse mail from the additional mailing lists, "results" and "tests".
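
On the parsing side, a minimal sketch of pulling label/status pairs out of a
report mail; the two field names are the ones proposed above, everything else
is illustrative:

    import email
    import email.policy

    def parse_report(raw_bytes):
        """Extract (label, status) pairs from a test report mail."""
        msg = email.message_from_bytes(raw_bytes, policy=email.policy.default)
        body = msg.get_body(preferencelist=('plain',)).get_content()
        results, label = [], None
        for line in body.splitlines():
            if line.startswith('Test-Label:'):
                label = line.split(':', 1)[1].strip()
            elif line.startswith('Test-Status:') and label:
                results.append((label, line.split(':', 1)[1].strip()))
                label = None
        return results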

> When refreshing one of these views, it is possible to see the test progress,
> i.e. how many tests are complete.
> In this scheme, there is no definitive global test status.
> The idea is to wait for a sufficient number of successes and no errors
> (discounting possible false positives) before considering a patch as good.
> This judgement is made by the maintainers.
> In case a test needs to be re-run, it is possible to send an email to test@,
> keeping the patch id and the test label. When the new report is received,
> the counters and the detailed view show only the latest result of this test.
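
In other words, results end up keyed by (patch id, test label), and a re-run
simply replaces the earlier entry. A toy illustration of this "latest result
wins" rule, not patchwork code:

    results = {}

    def record(patch_id, label, status, url):
        # A re-run of the same test on the same patch overwrites the old row.
        results[(patch_id, label)] = (status, url)

    record(12345, 'checkpatch', 'ERROR', 'https://lists.example.org/r/1')
    record(12345, 'checkpatch', 'SUCCESS', 'https://lists.example.org/r/2')
    # The counters now show one SUCCESS for this label, not one of each.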
>
> That's the end of this long story. Hope you like it and welcome the idea!
>

-- 
Thomas F Herbert Red Hat

