distributed tests of a patch
damien.lespiau at intel.com
Thu Sep 24 21:01:22 AEST 2015
On Thu, Jul 23, 2015 at 01:26:22AM +0200, Thomas Monjalon wrote:
> Summary: it would be very nice to take into account some distributed tests
> in patchwork.
> Longer text below...
> Patchwork is incredibly useful to list the patches received in the mailing
> list of an Open Source project and classify them thanks to the associated
> status. A patch is accepted after doing some review, reading the discussion,
> having some specific associated tags (A/R/T column) and testing. This last
> point may be automated but there is currently no way of gathering test
> results from many contributors into the central view that patchwork provides.
> It would be really useful to add a new column counting the automated test
> results as Success, Warnings and Errors (S/W/E), as is done for the manual tags.
> As is done for managing the patches, an XML-RPC API could be used to add
> a new test report associated to a patch ID. A test report could be also
> received by email on a mailing list dedicated to test reports.
> An advantage of sending test reports to a mailing list is being able
> to receive them without needing patchwork, and possibly to do some
> custom processing. It is also possible to offload the storage of the
> test reports to the mailing list archives. A last advantage: the mailing list
> administration allows whitelisting the test contributors.
> With such a distributed test system, many contributors can run different test
> suites on their own set of hardware / OS environment for each submitted patch,
> improving the quality of the impacted project.
> Below is a workflow giving more details.
> Let's consider a project having 3 mailing lists:
> - devel@ to receive the patches and comments.
> - test@ is a mailing list without archive to send patches to test.
> - reports@ is a mailing list with archives to store the test reports.
> Each patch from devel@ is sent to test@ with prefix [patchwork id].
> Private tests may be requested by sending patches to test@.
> If an email is received in test@ without [PATCH] keyword, it is filtered out.
> Each email sent to test@ is forwarded to each registered test platform
> (i.e. testers are members of the test@ mailing list).
> Each test platform applies the patch (and previous patches of the series) on its
> local git tree.
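As an illustration only, the filtering and id-extraction steps of the quoted workflow could look like this; the exact `[patchwork id N]` subject syntax is my assumption, not something specified in the thread:

```python
import re

# Assumed subject shape for mails forwarded to test@, e.g.
# "[patchwork id 12345] [PATCH v2 1/3] foo: fix bar".
# The exact prefix syntax is not specified in the proposal.
SUBJECT_RE = re.compile(r"\[patchwork id (?P<pw_id>\d+)\]")

def extract_patchwork_id(subject):
    """Return the patchwork id embedded in a test@ mail subject, or None."""
    m = SUBJECT_RE.search(subject)
    return int(m.group("pw_id")) if m else None

def should_test(subject):
    """Filter rule from the proposal: drop mails without a [PATCH] keyword."""
    return "[PATCH" in subject and extract_patchwork_id(subject) is not None
```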
This relies on parsing more mails. Parsing mailing lists is done because
patches are sent as mail, but I wouldn't use the same mechanism for what
should be more of an API, or at least more structured data.
You're also making the understanding of what a series is part of the
client; I did it as part of patchwork itself. There are quite complex
things about series: e.g. knowing that a v2 patch is actually part of a
bigger series, but that just that patch was re-sent as a reply to the v1
because the remaining patches are all r-b'ed. These complex things are
done in patchwork, exposing high level concepts to the test systems.
What I have in store: patchwork knows what a series is and presents a
Series object with all the info needed, plus an RSS feed with extra elements
in a patchwork namespace to expose a list of events and API entry points
("new-series-revision", REST URL). From there it's easy to retrieve the
list of the patches to apply. The logic of knowing to re-test a full
series when a single patch is sent as v2 (and other interesting mailing-list
usages) is abstracted away by patchwork.
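A rough sketch of what a client consuming such events might do. Every field name here ("name", "series", "patches", "mbox") is a hypothetical schema chosen for illustration, not the actual patchwork API:

```python
# Sketch of a client consuming series events from patchwork, assuming a
# hypothetical event schema: each event has a "name" field and, for
# "new-series-revision", a "series" object listing patch mbox URLs.
# None of these field names are defined in the thread.

def patches_to_apply(events):
    """Collect, per series revision, the ordered list of patches to apply.

    Patchwork (not the client) has already resolved tricky cases such as
    a single v2 patch resent in reply to a v1 series: the event carries
    the full, up-to-date patch list for the revision.
    """
    jobs = []
    for event in events:
        if event.get("name") != "new-series-revision":
            continue
        series = event["series"]
        jobs.append((series["id"], [p["mbox"] for p in series["patches"]]))
    return jobs
```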
> Each test platform must reply with 1 or more reports with exactly the same
> subject, sent to the original sender or to reports@ if List-Id was devel@.
> A test report may be linked to a test suite doing several tests.
> The first line begins with Test-Label: followed by a test name.
> The second line begins with Test-Status: followed by SUCCESS, WARNING or ERROR.
> The WARNING keyword may be used by tests known to have some false positives
> or considered less important.
> The detailed report must be inline to allow easy reading from mail archives.
> When the report is received in reports@, the patchwork id entry is updated to
> be associated to a new test entry including the test label, status and the URL
> to the report archives.
So. You are now decoupling the patch from the test results email. I'd
argue that a report mail that is not a reply to the patch is not as
useful as a direct reply, because the context is lost and one needs to
match the patch mail with the result manually. Sure, patchwork does that,
but it means the emails themselves are roughly useless.
I'd send the test results directly as replies to the original patches, with
consolidated results for a defined list of tests. That way, the report
mail is useful on its own.
This also works for tests that would only run on a full series (vs.
running on each individual patch of a series), by replying to the
cover letter (or to the first patch if there is no cover letter).
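For illustration, the Test-Label/Test-Status report format quoted above could be parsed along these lines; this is a sketch, not existing patchwork code:

```python
# Parser for the report format described in the proposal: the first line is
# "Test-Label: <name>", the second "Test-Status: SUCCESS|WARNING|ERROR",
# and the detailed report follows inline.
VALID_STATUSES = {"SUCCESS", "WARNING", "ERROR"}

def parse_report(body):
    """Return (label, status, details) from a report mail body.

    Raises ValueError if the body does not follow the expected format.
    """
    lines = body.splitlines()
    if len(lines) < 2:
        raise ValueError("report too short")
    if not lines[0].startswith("Test-Label: "):
        raise ValueError("missing Test-Label")
    if not lines[1].startswith("Test-Status: "):
        raise ValueError("missing Test-Status")
    label = lines[0][len("Test-Label: "):].strip()
    status = lines[1][len("Test-Status: "):].strip()
    if status not in VALID_STATUSES:
        raise ValueError("unknown status %r" % status)
    return label, status, "\n".join(lines[2:]).strip()
```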
> The new test result is visible in 2 patchwork views.
> The patch list has a column to display the count of test success, warnings and
> errors. These counters are a link to the patch view with the test section unfolded.
> The detailed patch view has a test section (folded by default) like the headers
> section. When folded, it shows only the counters. When unfolded, it shows one test
> result per line: the test label is followed by the status being a HTML link to
> the full report.
> When refreshing one of these views, it is possible to see the test progress,
> i.e. how many tests are complete.
> In this scheme, there is no definitive global test status.
> The idea is to wait for a consistent number of successes and no errors (excluding
> possible false positives) before considering a patch as good. This judgement is
> done by the maintainers.
I'd really like some idea of test completion. I think it's quite key to
even consider doing more work with the patch: make sure all tests have completed.
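The S/W/E counters from the proposed list view reduce to a simple tally per patch; a minimal sketch, assuming test results arrive as (label, status) pairs:

```python
from collections import Counter

def swe_counters(test_results):
    """Count SUCCESS/WARNING/ERROR results for one patch, as the proposed
    patch-list column would display them."""
    counts = Counter(status for _label, status in test_results)
    return counts["SUCCESS"], counts["WARNING"], counts["ERROR"]
```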