[PATCH v2 00/33] Per-VMA locks

Punit Agrawal punit.agrawal at bytedance.com
Thu Feb 16 04:32:58 AEDT 2023


Suren Baghdasaryan <surenb at google.com> writes:

> Previous version:
> v1: https://lore.kernel.org/all/20230109205336.3665937-1-surenb@google.com/
> RFC: https://lore.kernel.org/all/20220901173516.702122-1-surenb@google.com/
>
> LWN article describing the feature:
> https://lwn.net/Articles/906852/
>
> The per-VMA locks idea was discussed during the SPF [1] session at
> LSF/MM last year [2], which concluded with the suggestion that “a
> reader/writer semaphore could be put into the VMA itself; that would
> have the effect of using the VMA as a sort of range lock. There would
> still be contention at the VMA level, but it would be an improvement.”
> This patchset implements the suggested approach.
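
As a reminder of the approach, here is an illustrative userspace
sketch using POSIX rwlocks. The names and structure below are mine,
not the kernel implementation; the point is only that each "VMA"
carries its own reader/writer lock, so faults on different VMAs never
contend and readers of the same VMA proceed in parallel:

/*
 * Userspace analogy of a per-VMA reader/writer lock. Struct and
 * function names are illustrative, not taken from the patches.
 */
#include <pthread.h>

struct vma {
	unsigned long start, end;	/* address range covered */
	pthread_rwlock_t lock;		/* per-VMA rwsem analogue */
};

/* Fault path: shared lock, so faults on one VMA run concurrently. */
static void handle_fault(struct vma *vma, unsigned long addr)
{
	pthread_rwlock_rdlock(&vma->lock);
	/* ... install a page table entry for addr ... */
	pthread_rwlock_unlock(&vma->lock);
}

/* Modification path (munmap/mprotect/...): exclusive lock. */
static void modify_vma(struct vma *vma)
{
	pthread_rwlock_wrlock(&vma->lock);
	/* ... faulting threads on this VMA wait; others do not ... */
	pthread_rwlock_unlock(&vma->lock);
}

int main(void)
{
	struct vma v = { .start = 0x1000, .end = 0x2000 };

	pthread_rwlock_init(&v.lock, NULL);
	handle_fault(&v, 0x1800);
	modify_vma(&v);
	pthread_rwlock_destroy(&v.lock);
	return 0;
}

With a single mmap_lock, every concurrent fault serialises against any
mapping change in the whole address space; with a per-VMA lock, only
threads touching the same VMA contend.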

I took the patches for a spin on a 2-socket system with 32 cores (64
threads) per socket (Intel Xeon 8336C, Ice Lake) and 512GB of RAM.

For the initial testing, "pft-threads" from the mmtests suite[0] was
used. The test mmaps a memory region (~100GB on the test system) and
triggers accesses to it from a number of threads executing in
parallel. For each degree of parallelism, the test is repeated 10
times to get a better feel for the behaviour. Below is an excerpt of
the harmonic means reported by the 'compare-kernels.sh' script[1]
included with mmtests.
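
For reference, a much-simplified sketch of what this kind of page
fault benchmark does (sizes and thread count are shrunk here, and the
real harness lives in mmtests; this is only the shape of the
workload):

/*
 * Simplified page-fault microbenchmark: mmap an anonymous region and
 * have N threads fault it in by first-touch, one slice per thread.
 * Build with: cc -O2 -pthread pft_sketch.c
 */
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>

#define REGION   (1UL << 30)	/* 1 GiB here; ~100 GiB in the test */
#define NTHREADS 4

static char *region;

static void *touch_slice(void *arg)
{
	long id = (long)arg;
	size_t slice = REGION / NTHREADS;

	/* Writing one byte per page triggers one page fault per page. */
	for (size_t off = id * slice; off < (id + 1) * slice; off += 4096)
		region[off] = 1;
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	region = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED)
		return 1;

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, touch_slice, (void *)i);
	for (long i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	munmap(region, REGION);
	return 0;
}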

The first column shows results for mm-unstable as of 2023-02-10, the
second column the patches posted here, and the third column the
patches plus optimizations to reclaim some of the observed regression.

From the results, there is a drop in page faults/second at low CPU
counts but a good improvement at higher CPU counts. (Differences
marked with '*' are flagged as statistically significant by the
compare script; those in parentheses are not.)

                                        6.2.0-rc4                6.2.0-rc4                6.2.0-rc4
                             mm-unstable-20230210                   pvl-v2               pvl-v2+opt

Hmean     faults/cpu-1     898792.9338 (   0.00%)   894597.0474 *  -0.47%*   895933.2782 *  -0.32%*
Hmean     faults/cpu-4     751903.9803 (   0.00%)   677764.2975 *  -9.86%*   688643.8163 *  -8.41%*
Hmean     faults/cpu-7     612275.5663 (   0.00%)   565363.4137 *  -7.66%*   597538.9396 *  -2.41%*
Hmean     faults/cpu-12    434460.9074 (   0.00%)   410974.2708 *  -5.41%*   452501.4290 *   4.15%*
Hmean     faults/cpu-21    291475.5165 (   0.00%)   293936.8460 (   0.84%)   308712.2434 *   5.91%*
Hmean     faults/cpu-30    218021.3980 (   0.00%)   228265.0559 *   4.70%*   241897.5225 *  10.95%*
Hmean     faults/cpu-48    141798.5030 (   0.00%)   162322.5972 *  14.47%*   166081.9459 *  17.13%*
Hmean     faults/cpu-79     90060.9577 (   0.00%)   107028.7779 *  18.84%*   109810.4488 *  21.93%*
Hmean     faults/cpu-110    64729.3561 (   0.00%)    80597.7246 *  24.51%*    83134.0679 *  28.43%*
Hmean     faults/cpu-128    55740.1334 (   0.00%)    68395.4426 *  22.70%*    69248.2836 *  24.23%*

Hmean     faults/sec-1     898781.7694 (   0.00%)   894247.3174 *  -0.50%*   894440.3118 *  -0.48%*
Hmean     faults/sec-4    2965588.9697 (   0.00%)  2683651.5664 *  -9.51%*  2726450.9710 *  -8.06%*
Hmean     faults/sec-7    4144512.3996 (   0.00%)  3891644.2128 *  -6.10%*  4099918.8601 (  -1.08%)
Hmean     faults/sec-12   4969513.6934 (   0.00%)  4829731.4355 *  -2.81%*  5264682.7371 *   5.94%*
Hmean     faults/sec-21   5814379.4789 (   0.00%)  5941405.3116 *   2.18%*  6263716.3903 *   7.73%*
Hmean     faults/sec-30   6153685.3709 (   0.00%)  6489311.6634 *   5.45%*  6910843.5858 *  12.30%*
Hmean     faults/sec-48   6197953.1327 (   0.00%)  7216320.7727 *  16.43%*  7412782.2927 *  19.60%*
Hmean     faults/sec-79   6167135.3738 (   0.00%)  7425927.1022 *  20.41%*  7637042.2198 *  23.83%*
Hmean     faults/sec-110  6264768.2247 (   0.00%)  7813329.3863 *  24.72%*  7984344.4005 *  27.45%*
Hmean     faults/sec-128  6460727.8216 (   0.00%)  7875664.8999 *  21.90%*  8049910.3601 *  24.60%*

[0] https://github.com/gormanm/mmtests
[1] https://github.com/gormanm/mmtests/blob/master/compare-kernels.sh

