[PATCH] Fix race between pte_free and hash_page

Anton Blanchard anton at samba.org
Sat Dec 13 03:17:01 EST 2003


Hi Ben,

> +static inline void __pte_free_tlb(struct mmu_gather *tlb, struct page *ptepage)
> +{
> +	/* This is safe as we are holding page_table_lock */
> +	cpumask_t local_cpumask = cpumask_of_cpu(smp_processor_id());
> +	struct pte_freelist_batch **batchp = &__get_cpu_var(pte_freelist_cur);
> +
> +	if (cpus_equal(tlb->mm->cpu_vm_mask, local_cpumask) ||
> +	    cpus_equal(tlb->mm->cpu_vm_mask, CPU_MASK_NONE)) {
> +		pte_free(ptepage);
> +		return;
> +	}

Looks good. Since we hold the page_table_lock, can we also check for
mm_users == 1 and take the fast path?

Anton

** Sent via the linuxppc64-dev mail list. See http://lists.linuxppc.org/