[PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
Ryan Roberts
ryan.roberts at arm.com
Thu Feb 1 00:58:04 AEDT 2024
On 31/01/2024 13:38, David Hildenbrand wrote:
>>>> Nope: looks the same. I've taken my test harness out of the picture and done
>>>> everything manually from the ground up, with the old tests and the new.
>>>> Headline
>>>> is that I see similar numbers from both.
>>>
>>> It took me a while to get really reproducible numbers on Intel. Most importantly:
>>> * Set a fixed CPU frequency, disabling any boost and avoiding any
>>> thermal throttling.
>>> * Pin the test to CPUs and set a nice level (a sketch of this step is below).
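[For concreteness, a minimal sketch of that pinning step in C; the CPU number and nice value are arbitrary illustrations, not values from this thread:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

/* Pin the calling process to one CPU and bump its priority. */
static void pin_and_prioritize(int cpu, int nice_val)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		exit(1);
	}
	/* Negative nice values need CAP_SYS_NICE (usually root). */
	if (setpriority(PRIO_PROCESS, 0, nice_val)) {
		perror("setpriority");
		exit(1);
	}
}

Called at the top of main(), before any timing starts, e.g. pin_and_prioritize(0, -20).]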
>>
>> I'm already pinning the test to CPU 0. But for M2, at least, I'm running in a VM
>> on top of macOS, and I don't have a mechanism to pin the QEMU threads to the
>> physical CPUs. Anyway, I don't think these are problems, because for a given
>> kernel build I can accurately reproduce the numbers.
>
> Oh, you do have a layer of virtualization in there. I *suspect* that might
> amplify some odd things regarding code layout, caching effects, etc.
>
> I guess especially the fork() benchmark is too sensitive (fast) for things like
> that, so I would just focus on bare metal results where you can control the
> environment completely.
Yeah, maybe. OK I'll park M2 for now.
>
> Note that regarding NUMA effects, I mean that some memory accesses within the
> same socket are faster/slower even with only a single node. On AMD EPYC that's
> possible, depending on which core you are running on and on which memory
> controller the memory you want to access is located. If both are in different
> quadrants, IIUC, the access latency will be different.
I've configured NUMA to only bring the RAM and CPUs for a single socket
online, so I shouldn't be seeing any of these effects. Anyway, I've been using
the Altra as a secondary because it's so much slower than the M2. Let me move
over to it and see if everything looks more straightforward there.
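[An alternative to offlining a whole socket is to bind both execution and memory to one node with libnuma; a sketch, assuming node 0 and linking with -lnuma. Note this only removes cross-node effects; it cannot rule out the intra-socket quadrant effects described above, which are invisible to the NUMA API when the socket is a single node:

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

/* Restrict both execution and memory allocation to one NUMA node. */
static void bind_to_node(int node)
{
	struct bitmask *mask;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		exit(1);
	}
	if (numa_run_on_node(node)) {	/* run only on this node's CPUs */
		perror("numa_run_on_node");
		exit(1);
	}
	mask = numa_allocate_nodemask();
	numa_bitmask_setbit(mask, node);
	numa_set_membind(mask);		/* strict: allocate only from it */
	numa_free_nodemask(mask);
}]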
>
>>> But yes: I was observing something similar on AMD EPYC, where you get
>>> consecutive pages from the buddy, but once you allocate from the PCP
>>> (per-CPU pages) lists they might no longer be consecutive.
>>>
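[One way to observe this buddy-vs-PCP effect from userspace is /proc/self/pagemap. A sketch, with error handling omitted for brevity; reading PFNs requires CAP_SYS_ADMIN on recent kernels, and all names here are illustrative:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static uint64_t pfn_of(int fd, void *addr, long pagesz)
{
	uint64_t entry;

	pread(fd, &entry, sizeof(entry),
	      ((uintptr_t)addr / pagesz) * sizeof(entry));
	return entry & ((1ULL << 55) - 1);	/* bits 0-54: PFN */
}

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	int n = 16, fd = open("/proc/self/pagemap", O_RDONLY);
	char *buf = mmap(NULL, n * pagesz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 1, n * pagesz);	/* fault every page in */
	for (int i = 1; i < n; i++)
		if (pfn_of(fd, buf + i * pagesz, pagesz) !=
		    pfn_of(fd, buf + (i - 1) * pagesz, pagesz) + 1)
			printf("discontiguity at page %d\n", i);
	return 0;
}]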
>>>> - the test is 5-10% slower when output is printed to the terminal than when
>>>> redirected to a file. I've always effectively been redirecting. Not sure if
>>>> this overhead could start to dominate the regression and that's why you
>>>> don't see it?
>>>
>>> That's weird, because we don't print while measuring? Anyhow, 5-10% variance on
>>> some systems is not the end of the world.
>>
>> I imagine it's cache effects? The extra work to print the output could be
>> evicting some code that's in the benchmark path?
>
> Maybe. Do you also see these oddities on the bare metal system?
>
>>
>>>
>>>>
>>>> I'm inclined to run this test for the last N kernel releases and, if the number
>>>> moves around significantly, conclude that these tests don't really matter.
>>>> Otherwise it's an exercise in randomly refactoring code until it works well, but
>>>> that's just overfitting to the compiler and hardware. What do you think?
>>>
>>> Personally, I wouldn't lose sleep if you see weird, unexplainable behavior on
>>> some system (not even architecture!). Trying to optimize for that would indeed
>>> be random refactorings.
>>>
>>> But I would not be so fast to say that "these tests don't really matter" and
>>> then go wild and degrade them as much as you want. There are use cases that care
>>> about fork performance especially with order-0 pages -- such as Redis.
>>
>> Indeed. But also remember that my fork baseline time is ~2.5ms, and I think you
>> said yours was 14ms :)
>
> Yes, no idea why M2 is that fast (BTW, which page size? 4K or 16K?) :)
The guest kernel is using 4K pages. I'm not quite sure what is happening at
stage 2; QEMU doesn't expose any options to explicitly request huge pages on
macOS, AFAICT.
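[For context, a minimal, hypothetical version of the kind of fork() microbenchmark being timed here, not the actual test harness from this thread; it populates a large anonymous mapping so fork() has page tables to copy, and the mapping size and iteration count are arbitrary:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	size_t sz = 1ul << 30;	/* 1 GiB of mapped memory; arbitrary */
	int iters = 100;
	struct timespec t0, t1;

	printf("page size: %ld\n", sysconf(_SC_PAGESIZE));

	/* Populate an anonymous mapping so fork() must copy its PTEs. */
	char *mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(mem, 1, sz);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < iters; i++) {
		pid_t pid = fork();

		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("fork+wait: %.3f ms/iter\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e3 +
		(t1.tv_nsec - t0.tv_nsec) / 1e6) / iters);
	return 0;
}]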
>
>>
>> I'll continue to mess around with it until the end of the day. But if I'm not
>> making any headway by then, I'll change tack: I'll just measure the performance
>> of my contpte changes using your fork/zap stuff as the baseline, and post based
>> on that.
>
> You should likely not focus on M2 results. Just pick a representative bare metal
> machine where you get consistent, explainable results.
>
> Nothing in the code is fine-tuned for a particular architecture so far, only
> order-0 handling is kept separate.
>
> BTW: I see the exact same speedups for dontneed that I see for munmap. For
> example, for order-9, it goes from 0.023412s -> 0.009785s, so -58%. So I'm
> curious why you see a speedup for munmap but not for dontneed.
Ugh... ok, coming up.
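[For reference, a sketch of the two teardown paths being compared above, not the actual benchmark code from this thread; both end up zapping PTEs over the same range, which is why comparable speedups from batched zapping would be expected:

#include <sys/mman.h>

/* Zap the page range but keep the VMA; the next touch refaults. */
static void zap_dontneed(char *mem, size_t sz)
{
	madvise(mem, sz, MADV_DONTNEED);
}

/* Zap the same range and remove the VMA as well. */
static void zap_munmap(char *mem, size_t sz)
{
	munmap(mem, sz);
}]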