[PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
David Hildenbrand
david at redhat.com
Sat Apr 27 07:33:08 AEST 2024
>>
>>> Hmm, so when I enable 2M hugetlb I found ./cow is even failing on x86.
>>>
>>> # ./cow | grep -B1 "not ok"
>>> # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB)
>>> not ok 161 No leak from parent into child
>>> --
>>> # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB)
>>> not ok 215 No leak from parent into child
>>> --
>>> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB)
>>> not ok 269 No leak from child into parent
>>> --
>>> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB)
>>> not ok 323 No leak from child into parent
>>>
>>> And it looks like it was always failing.. perhaps since the start? We
>>
>> Yes!
>>
>> commit 7dad331be7816103eba8c12caeb88fbd3599c0b9
>> Author: David Hildenbrand <david at redhat.com>
>> Date: Tue Sep 27 13:01:17 2022 +0200
>>
>> selftests/vm: anon_cow: hugetlb tests
>> Let's run all existing test cases with all hugetlb sizes we're able to
>> detect.
>> Note that some test cases still fail. This will, for example, be fixed
>> once vmsplice properly uses FOLL_PIN instead of FOLL_GET for pinning.
>> With 2 MiB and 1 GiB hugetlb on x86_64, the expected failures are:
>> # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB)
>> not ok 23 No leak from parent into child
>> # [RUN] vmsplice() + unmap in child ... with hugetlb (1048576 kB)
>> not ok 24 No leak from parent into child
>> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB)
>> not ok 35 No leak from child into parent
>> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (1048576 kB)
>> not ok 36 No leak from child into parent
>> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB)
>> not ok 47 No leak from child into parent
>> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (1048576 kB)
>> not ok 48 No leak from child into parent
>>
>> As it keeps confusing people (until somebody cares enough to fix vmsplice), I already
>> thought about just disabling the test and adding a comment explaining why it happens and
>> why nobody cares.
>
> I think we should, and when doing so maybe add a rich comment in
> hugetlb_wp() too explaining everything?
Likely yes. Let me think of something.
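Something along these lines, perhaps; just a rough sketch (the helper and its
parameters are made up, this is not an actual patch) of how cow.c could report
the known vmsplice() + hugetlb failures as SKIP, with a comment, instead of a
confusing "not ok":

/* Sketch only: a hypothetical helper for tools/testing/selftests/mm/cow.c. */
#include <stdbool.h>
#include "../kselftest.h"

static void report_no_leak_result(bool leaked, bool hugetlb, bool vmsplice)
{
	if (leaked && hugetlb && vmsplice) {
		/*
		 * Expected failure: vmsplice() only takes FOLL_GET
		 * references (not FOLL_PIN | FOLL_LONGTERM), and hugetlb
		 * CoW reuse is deliberately conservative because a
		 * spurious copy could exhaust the hugetlb pool and crash
		 * the program.
		 */
		ksft_test_result_skip("Known vmsplice() leak with hugetlb\n");
		return;
	}
	ksft_test_result(!leaked, "No leak\n");
}

Whether to report these as SKIP or as an expected failure is a detail; the
point is to stop them from looking like regressions.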
>
>>
>>> didn't do the same for hugetlb as for normal anon in that regard in the
>>> vmsplice() fix.
>>>
>>> I drafted a patch to allow the same refcount>1 detection, and then all tests
>>> pass for me, as below.
>>>
>>> David, I'd like to double check with you before I post anything: is that
>>> your intention to do so when working on the R/O pinning or not?
>>
>> Here, the "if it were easy, it would already have been done" principle certainly applies. :)
>>
>> The issue is the following: hugetlb pages are scarce resources that cannot usually
>> be overcommitted. For ordinary memory, we don't care if we COW in some corner case
>> because there is an unexpected reference. You temporarily consume an additional page
>> that gets freed as soon as the unexpected reference is dropped.
>>
>> For hugetlb, it is problematic. Assume you have reserved a single 1 GiB hugetlb page
>> and your process uses that in a MAP_PRIVATE mapping. Then it calls fork() and the
>> child quits immediately.
>>
>> If you decide to COW, you would need a second hugetlb page, which we don't have, so
>> you have to crash the program.
>>
>> And in hugetlb it's extremely easy to not get folio_ref_count() == 1:
>>
>> hugetlb_fault() will do a folio_get(folio) before calling hugetlb_wp()!
>>
>> ... so you essentially always copy.
>
> Hmm yes there's one extra refcount. I think this is all fine, we can simply
> take all of them into account when making a CoW decision. However, crashing
> a userspace process can be a problem for sure.
Right, and a simple reference from page migration or some other PFN
walker would be sufficient for that.
I did not dare to be responsible for that, even though such races are rare :)
The vmsplice leak is not worth that: hugetlb with MAP_PRIVATE to
COW-share data between processes with different privilege levels is not
really common.
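To spell out what "taking the extra reference into account" would mean -- a
rough sketch only, not actual kernel code (the helper name and the expected
count are made up, and all the locking/ordering details a real reuse check
needs are ignored):

#include <linux/mm.h>

/*
 * Sketch: assuming the only expected references are the one held for the
 * single page table mapping and the one hugetlb_fault() takes via
 * folio_get() before calling hugetlb_wp(), the "nobody else has this
 * folio" case is refcount == 2, not == 1. The exact accounting is what an
 * actual patch would have to get right. Any additional reference --
 * vmsplice()'s FOLL_GET, page migration, some other PFN walker -- forces
 * a copy, and without a spare hugetlb page that means crashing the
 * program.
 */
static bool hugetlb_can_reuse_anon_folio(struct folio *folio)
{
	return folio_test_anon(folio) && folio_ref_count(folio) == 2;
}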
>
>>
>>
>> At that point I walked away, hoping vmsplice() would eventually get fixed. Dave
>> Howells was close at some point IIRC ...
>>
>> I had some ideas about retrying until the other reference is gone (which cannot be a
>> longterm GUP pin), but as vmsplice essentially gets by without FOLL_PIN|FOLL_LONGTERM,
>> it's quite hopeless to resolve that as long as vmsplice holds longterm references the wrong
>> way.
>>
>> ---
>>
>> One could argue that fork() with hugetlb and MAP_PRIVATE is stupid and fragile: assume
>> your child's MM is torn down deferred and will only unmap the hugetlb page later. Or assume
>> you access the page concurrently with fork(). You'd have to COW and crash the program.
>> BUT, there is a horribly ugly hack in the hugetlb COW code where you *steal* the page from
>> the child process and crash your child. I'm not making that up; it's horrible.
>
> I didn't notice that code before; doesn't sound like a very responsible
> parent..
>
> Looks like either a hugetlb guru comes along who can make the decision to
> break the hugetlb ABI at some point, knowing that nobody will really be
> affected by it, or that remains uncharted territory for whoever needs to
> introduce hugetlb v2.
I raised this topic in the past, and IMHO we either (a) should never
have added COW support; or (b) should have added COW support by using
ordinary anonymous memory (hey, partial mappings of hugetlb pages! ;) ).
After all, COW is an optimization to speed up fork and defer copying. It
relies on memory overcommit, but that doesn't really apply to hugetlb,
so we fake it ...
One easy ABI break I had in mind was to simply *not* allow COW-sharing
of anon hugetlb folios anymore; for example, by just not copying the page
into the child. Chances are there are not many child processes that
would fail ... but likely we would break *something*. So there is no
easy way out :(
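For context, the pattern such an ABI break would affect is roughly the
following (userspace sketch only; it needs hugetlb pages reserved, e.g. via
/proc/sys/vm/nr_hugepages): a child that expects to read the parent's data
through a MAP_PRIVATE hugetlb mapping inherited across fork().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)	/* assumes 2 MiB default hugepage size */

int main(void)
{
	char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	pid_t pid;

	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	memset(p, 0x5a, HPAGE_SIZE);	/* parent populates the hugetlb page */

	pid = fork();
	if (pid == 0)
		/* Child relies on COW-sharing the parent's anon hugetlb folio. */
		_exit(p[0] == 0x5a ? EXIT_SUCCESS : EXIT_FAILURE);
	waitpid(pid, NULL, 0);
	munmap(p, HPAGE_SIZE);
	return EXIT_SUCCESS;
}

Depending on how "don't copy" would be implemented, that child read would no
longer see the parent's data -- which is exactly the *something* we might break.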
--
Cheers,
David / dhildenb