[PATCH v4] mm: Avoid unnecessary page fault retires on shared memory types
Heiko Carstens
hca at linux.ibm.com
Mon May 30 06:33:23 AEST 2022
On Fri, May 27, 2022 at 03:39:36PM -0400, Peter Xu wrote:
> diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> index e173b6187ad5..4608cc962ecf 100644
> --- a/arch/s390/mm/fault.c
> +++ b/arch/s390/mm/fault.c
> @@ -433,6 +433,17 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
>  			goto out_up;
>  		goto out;
>  	}
> +
> +	/* The fault is fully completed (including releasing mmap lock) */
> +	if (fault & VM_FAULT_COMPLETED) {
> +		/*
> +		 * Gmap will need the mmap lock again, so retake it. TODO:
> +		 * only conditionally take the lock when CONFIG_PGSTE set.
> +		 */
> +		mmap_read_lock(mm);
> +		goto out_gmap;
> +	}
> +
>  	if (unlikely(fault & VM_FAULT_ERROR))
>  		goto out_up;
>
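For readers following along: VM_FAULT_COMPLETED tells the arch fault handler
that handle_mm_fault() already finished the fault and dropped mmap_lock, so
the handler must neither unlock nor retry. A minimal sketch of the generic
pattern (simplified, not the exact code of any particular architecture):

	fault = handle_mm_fault(vma, address, flags, regs);

	/*
	 * The fault is fully completed and mmap_lock has already been
	 * released inside handle_mm_fault(); do not touch the lock and
	 * do not retry, just leave the fault handler.
	 */
	if (fault & VM_FAULT_COMPLETED)
		return;

	/* ... VM_FAULT_ERROR / retry handling, then mmap_read_unlock(mm) ... */

s390 is the odd one out because the gmap handling behind the out_gmap label
still needs mmap_lock, which is why the hunk above retakes it.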
I guess the patch below, on top of your patch, is what we want.

Just for clarification: if gmap is not NULL, then the process is a KVM
process. So, depending on the workload, this optimization makes sense.

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 4608cc962ecf..e1d40ca341b7 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -436,12 +436,11 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 
 	/* The fault is fully completed (including releasing mmap lock) */
 	if (fault & VM_FAULT_COMPLETED) {
-		/*
-		 * Gmap will need the mmap lock again, so retake it. TODO:
-		 * only conditionally take the lock when CONFIG_PGSTE set.
-		 */
-		mmap_read_lock(mm);
-		goto out_gmap;
+		if (gmap) {
+			mmap_read_lock(mm);
+			goto out_gmap;
+		}
+		goto out;
 	}
 
 	if (unlikely(fault & VM_FAULT_ERROR))
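With that change applied, the completed-fault branch in do_exception() would
read roughly like this (condensed sketch of the resulting code, relying on
the existing gmap variable and the out_gmap/out labels in
arch/s390/mm/fault.c):

	/* The fault is fully completed (including releasing mmap lock) */
	if (fault & VM_FAULT_COMPLETED) {
		if (gmap) {
			/* KVM guest: gmap fixup at out_gmap still needs mmap_lock */
			mmap_read_lock(mm);
			goto out_gmap;
		}
		/* Regular process: nothing left to do, the lock stays dropped */
		goto out;
	}

The existing labels make this work out: the out_gmap path falls through to
out_up, which does mmap_read_unlock(mm), while out returns without touching
the lock, which is exactly what is needed when handle_mm_fault() has already
released it.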