openbmc/telemetry: First complaint of unresponsiveness
Patrick Williams
patrick at stwcx.xyz
Thu Dec 21 12:54:00 AEDT 2023
On Thu, Dec 21, 2023 at 09:28:06AM +1030, Andrew Jeffery wrote:
> On Wed, 2023-12-20 at 10:56 -0600, Patrick Williams wrote:
> > My gripe is that you should not be holding up the open-source project for
> > your own unpublished, undocumented, non-aligned tests.
>
> My understanding is that, now that Adrian is aware of the patches, he is
> doing some of his own testing to build confidence in merging them. *That*
> latency should probably be measured from around the time I sent the
> initial email, as I suspect that's when he became aware of the patches.
> So far that's a few days, which isn't unreasonable to me. As a
> contributor I sympathise with measuring from when you pushed the
> patches for review, and that this seems like yet more delay, but
> hopefully we can separate out the events here.
My concern is not about these specific commits and the timelines on
getting them merged. My concern is with the general concept of secret
automated tests and/or maintainer-intensive "test driving" of every code
change.
I've seen a few other maintainers say something similar, along the lines
of "this code tested fine on my system", so this isn't a one-off. Don't
read this as saying I don't want people testing code, but, especially for
trivial changes, I don't think we should be constraining the review and
merge process with separate "test driving". If that is the expectation, I
can see why nobody wants to be a maintainer...
Even so, I have no idea what the process is if any code fails the
maintainer's "test driving". Can I become a maintainer of any repository
and require testing on my super-secret hardware before merge? And if
it fails, too bad, the code doesn't get merged? (I know this isn't what
you're suggesting, and I'm taking this to an extreme here.) I frankly
don't see what value this kind of "test driving" provides for the
community, other than being a time sink and a road block.
A big portion of our repositories don't touch hardware enough to need
any testing on hardware (certainly not openbmc/telemetry, the repository
in question here). If we can't get sufficient coverage in unit tests,
something is missing. If we absolutely need some integration tests,
those should go in openbmc-test-automation and aren't even
single-repository-dependent. I don't understand what the motivation is
for "other testing".
> I'd be more concerned about a knee-jerk merge due to getting a mildly
> stern email and having the merge break things. The fact that he's
> testing them to build his confidence seems like reasonable maintainer
> practice to me. The fact that it's felt that tests are required in
> addition to the automated testing is a concern, but I wouldn't yet
> class this effort as "holding things up" in a tar-pit sense.
>
> Andrew
--
Patrick Williams