Pain points in Git's patch flow

Sebastian Schuberth sschuberth at gmail.com
Tue Apr 20 05:23:14 AEST 2021


On Mon, Apr 19, 2021 at 10:26 AM Ævar Arnfjörð Bjarmason
<avarab at gmail.com> wrote:

> > If you send around code patches by mail instead of directly working on
> > Git repos plus some UI, that feels to me like serializing a data class
> > instance to JSON, printing the JSON string to paper, taking that sheet
> > of paper to another PC with a scanner, using OCR to scan it into a
> > JSON string, and then deserializing it again into a new data class
> > instance, when you could have just used a REST API to push the data
> > from one PC to the other.
>
> That's not inherent to the E-Mail workflow; e.g. Linus on the LKML
> also pulls from remotes.

Yeah, I was vaguely aware of this. To me, the question is why "also"?
Why not *only* pull from remotes? What's the feature gap email patches
try to close?
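
Just to make sure we're comparing the same things, the two flows I have
in mind are roughly the following (branch names made up, commands only
as a sketch):

    # contributor: turn local commits into mails
    $ git format-patch --cover-letter origin/master..topic
    $ git send-email --to=git@vger.kernel.org *.patch

    # maintainer: apply the series from an mbox
    $ git am --3way series.mbox

versus the contributor simply publishing the branch somewhere public
and the maintainer doing "git fetch" plus "git merge" (or reviewing
with "git log -p master..FETCH_HEAD").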

> It does ensure that e.g. if someone submits patches and then deletes
> their GitHub account the patches are still on the ML.

Ah, so it's basically just about a backup? That could also be solved
differently by forking / syncing Git repos.
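
Just as a sketch of what I mean by syncing (the URL is a placeholder):

    # one-time setup of a self-hosted backup copy
    $ git clone --mirror https://example.com/contributor/repo.git
    # periodic refresh, e.g. from a cron job
    $ git -C repo.git remote update --prune

Any periodic fetch into a repo you control would preserve submitted
branches just as well as the list archive does.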

> To begin with if we'd have used the bugtracker solution from the
> beginning we'd probably be talking about moving away from Bugzilla
> now. I.e. using those things means your data becomes entangled with
> their opinionated data models.

Indeed, it's an art to choose the right tool at the right time, and to
make sure you don't run into some "vendor lock-in" because data export
is made too hard. And aligning with someone's "opinionated data model"
is not necessarily a bad thing, as standardization can also help
interoperability and smooth out workflows.

> > Not necessarily. As many of these tools have (REST) APIs, there are
> > also different API clients that you could try.
>
> API use usually (always?) requires an account/EULA with some entity
> holding the data, and as a practical concern, getting all the data
> usually means some huge number of API requests.

I'm not sure how relevant that concern really is, but in any case it
would be irrelevant for a self-hosted solution.
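
For what it's worth, my understanding of the export concern is paging
through a REST endpoint, e.g. hypothetically for GitHub review
discussions (owner/repo being placeholders):

    # 100 comments per request, then page=2, page=3, ...
    $ curl -s "https://api.github.com/repos/<owner>/<repo>/issues/comments?per_page=100&page=1"

That's tedious and needs an authenticated token once rate limits kick
in, but with a self-hosted instance you own the database to begin with.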

> >> And, to pick one good example, using git-range-diff (as is the common
> >> convention on this list) to display a diff against the "last rebased
> >> revision" would mean some long feature cycle in those tools, if they're
> >> even interested in implementing such a thing at all.
> >
> > AFAIK Gerrit can already do that.
>
> Sure, FWIW the point was that you needed Gerrit to implement that, and
> to ask what would happen if they weren't interested. Would you need to
> maintain a forked Gerrit?

Sorry, I can't follow that. Why would you need to maintain a fork of
Gerrit if Gerrit already has the feature you're looking for? Is it a
hypothetical question about what to do if Gerrit did not have the
feature yet?
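
For reference, the comparison itself is a single command locally, e.g.
for two iterations of a topic branch (hypothetical branch names):

    $ git range-diff master topic-v1 topic-v2

So the question is really just which tools surface that output natively
in their review UI.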

> Anyway, as before, don't take any of the above as arguing; FWIW I
> wouldn't mind using one of these websites overall if it helped
> development velocity in the project.

I appreciate that open mindset of yours here.

> I just wanted to help bridge the gap between the distributed E-Mail
> vs. centralized website flow.

Maybe, instead of jumping into something like an email vs. Gerrit
discussion, what would help is to take a step back and gather the
abstract requirements. Then, with a fresh and unbiased mind, look at
all the tools and infrastructure out there that are able to fulfill
those needs, and then make a choice.

-- 
Sebastian Schuberth

