[1/1] powerpc: Update page in counter for CMM

Milton Miller miltonm at bga.com
Tue Oct 21 15:36:40 EST 2008


X-Patchwork-Id: 5144

On Mon Oct 20, 2008 near 12:19:21 GMT, Brian King wrote:
> 
> A new field has been added to the VPA as a method for
> the client OS to communicate to firmware the number of
> page ins it is performing when running collaborative
> memory overcommit. The hypervisor will use this information
> to better determine if a partition is experiencing memory
> pressure and needs more memory allocated to it.
> 
> Signed-off-by: Brian King <brking at linux.vnet.ibm.com>
> ---
> 
>  arch/powerpc/include/asm/lppaca.h |    3 ++-
>  arch/powerpc/kernel/paca.c        |    1 +
>  arch/powerpc/mm/fault.c           |    8 ++++++--
>  3 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff -puN arch/powerpc/mm/fault.c~powerpc_vrm_mm_pressure arch/powerpc/mm/fault.c
> --- linux-2.6/arch/powerpc/mm/fault.c~powerpc_vrm_mm_pressure	2008-10-20 17:13:25.000000000 -0500
> +++ linux-2.6-bjking1/arch/powerpc/mm/fault.c	2008-10-20 17:13:25.000000000 -0500
..
> @@ -318,9 +320,11 @@ good_area:
>  			goto do_sigbus;
>  		BUG();
>  	}
> -	if (ret & VM_FAULT_MAJOR)
> +	if (ret & VM_FAULT_MAJOR) {
>  		current->maj_flt++;
> -	else
> +		if (firmware_has_feature(FW_FEATURE_CMO))
> +			atomic_inc((atomic_t *)(&(get_lppaca()->page_ins)));
> +	} else
>  		current->min_flt++;
>  	up_read(&mm->mmap_sem);
>  	return 0;

(1) Why do we need atomic_inc, and the hundreds or thousands of cycles
an atomic operation costs, for a counter that lives in a per-cpu area?

(2) Assuming we make this a normal increment, should we keep the
feature test or just do it unconditionally?  (That is, is the
additional load and branch worse than just doing the load and store of
the counter -- the address was previously reserved, right?)  There is
no question if it has to stay atomic.


<Ramble: things one might consider>

Ben asked if we need to worry about the hypervisor clearing the
count.  If they treat it as only-incrementing then we don't need to
worry.  And since it's only an indicator, we may not care about a big
count caused by them interrupting us between the load and the store.

If we are worried about Linux preemption, then we need to disable it
to avoid crossing per-cpu variables, or getting to this point multiple
times; a sketch of what that might look like follows below.  (I have
not looked at the context to see whether preemption is already
disabled here.)

</Ramble>
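
If preemption does turn out to be enabled here and we care, the plain
increment could be wrapped along these lines (again only a sketch,
using the stock preempt_disable()/preempt_enable() pair; whether the
fault path already runs non-preemptible is the part I have not
checked):

	preempt_disable();	/* stay on this cpu across the load/store */
	if (firmware_has_feature(FW_FEATURE_CMO))
		get_lppaca()->page_ins++;
	preempt_enable();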


milton


