Pain points in Git's patch flow
tytso at mit.edu
Tue Apr 20 08:34:17 AEST 2021
On Mon, Apr 19, 2021 at 09:23:14PM +0200, Sebastian Schuberth wrote:
> > That's not inherent with the E-Mail workflow, e.g. Linus on the LKML
> > also pulls from remotes.
> Yeah, I was vaguely aware of this. To me, the question is why "also"?
> Why not *only* pull from remotes? What's the feature gap email patches
> try to close?
Linus mostly pulls from git trees. The e-mail workflow tends to be
used by maintainers, who are reviewing submissions from their
contributors. People submitting changes relating to ext4 know to send
them to the linux-ext4 mailing list; people submitting changes to the
xfs file system send them to linux-xfs, etc.
> > It does ensure that e.g. if someone submits patches and then deletes
> > their GitHub account the patches are still on the ML.
> Ah, so it's basically just about a backup? That could also be solved
> differently by forking / syncing Git repos.
The primary reason why the kernel uses mailing lists is because code
reviews are fundamentally *discussions*, and people are used to using
inboxes. Sure, you can have a gerrit server send e-mail notifications
about code reviews, but then you have to reply by going to the gerrit
server (and gerrit really doesn't work well on slow network links such
as those found on airplanes and cruise ships). I'd say that most
maintainers simply find e-mail reviews to be more *convenient*
than using gerrit. And over time, we've used other tools, such as
patchwork, to track metadata about the status of a patch.
> > I just wanted to help bridge the gap between the distributed E-Mail vs.
> > centralized website flow.
> Maybe, instead of jumping into something like an email vs Gerrit
> discussion, what would help is to get back one step and gather the
> abstract requirements. Then, with a fresh and unbiased mind, look at
> all the tools and infrastructure out there that are able to fulfill
> the needs, and then make a choice.
I'll note that the kernel folks have done this, starting with a 2019
Kernel Summit talk at the Linux Plumbers Conference in Lisbon. A
description of the follow-up discussions from that talk, along with a
collection of requirements, can be found on a thread on the newly
created workflows mailing list at vger.kernel.org. This has led to a
number of proposals to make improvements to git, public-inbox,
patchwork, the kernel.org infrastructures, etc., some of which were
funded by the Linux Foundation last year.
Konstantin Ryabitsev has been driving a large amount of that work, and
one of the things that has come out of that is b4. (Yes, that's a
Star Trek reference... https://memory-alpha.fandom.com/wiki/B-4)
Obviously, this isn't intended to be a solution for everyone, and I'm
sure there are many projects that are happy forcing developers to use,
say, Gerrit, which might be a better solution for them.
However, there are a number of core kernel developers who are
super-allergic to solutions which force users to use web interfaces.
So solutions that offer a CLI as well as a web interface are probably
going to be the right approach. Things like pwclient and b4 are
exciting starting points for improved kernel workflows.
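To make the CLI side concrete, here is a rough sketch of what a
pwclient session looks like; the project name "kernel" and the patch
ID 12345 are placeholders for illustration, not real values from this
thread.

```shell
# List patches in the "New" state for a hypothetical Patchwork project.
pwclient list -p kernel -s New

# Download a patch by its Patchwork ID (12345 is a made-up example).
pwclient get 12345

# Apply that patch directly to the current branch with git am.
pwclient git-am 12345
```

All of this happens in the terminal; the web interface is only needed
if you want it.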
Of course, we've gone a bit farther afield from the original question
which is what git's development workflows should be. Given
that git is using some of the kernel.org infrastructures, certainly
some of the kernel workflow tools are options for the git development
community to consider.
One of the advantages of the kernel workflow model is that we don't
force users onto github or gitlab or gerrit, and we don't have to make
a global decision for the entire community. For example, if some
developers want to start using b4 to download patch series for git,
they could start doing that today.
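As a sketch of what that looks like in practice, b4 can pull an entire
patch series out of the lore.kernel.org archives given a Message-ID;
the Message-ID below is a placeholder, not a real series.

```shell
# Fetch a series from the public-inbox archives as an mbox ready
# for git am; b4 collects all patches in the thread, picks the
# latest revision, and adds collected review trailers.
b4 am 20210419.example@msgid.invalid

# Apply the resulting mbox file (b4 names it after the series).
git am ./*.mbx
```

No account on any forge is required; everything flows through the
mailing list archive.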
More information about the Patchwork project is available on its website.