[PATCH V2 4/5] ocxl: Add mmu notifier

Christophe Lombard clombard at linux.vnet.ibm.com
Wed Nov 25 03:48:55 AEDT 2020


On 24/11/2020 at 14:45, Jason Gunthorpe wrote:
> On Tue, Nov 24, 2020 at 09:17:38AM +0000, Christoph Hellwig wrote:
>
>>> @@ -470,6 +487,26 @@ void ocxl_link_release(struct pci_dev *dev, void *link_handle)
>>>   }
>>>   EXPORT_SYMBOL_GPL(ocxl_link_release);
>>>   
>>> +static void invalidate_range(struct mmu_notifier *mn,
>>> +			     struct mm_struct *mm,
>>> +			     unsigned long start, unsigned long end)
>>> +{
>>> +	struct pe_data *pe_data = container_of(mn, struct pe_data, mmu_notifier);
>>> +	struct ocxl_link *link = pe_data->link;
>>> +	unsigned long addr, pid, page_size = PAGE_SIZE;
> The page_size variable seems unnecessary
>
>>> +
>>> +	pid = mm->context.id;
>>> +
>>> +	spin_lock(&link->atsd_lock);
>>> +	for (addr = start; addr < end; addr += page_size)
>>> +		pnv_ocxl_tlb_invalidate(&link->arva, pid, addr);
>>> +	spin_unlock(&link->atsd_lock);
>>> +}
>>> +
>>> +static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = {
>>> +	.invalidate_range = invalidate_range,
>>> +};
>>> +
>>>   static u64 calculate_cfg_state(bool kernel)
>>>   {
>>>   	u64 state;
>>> @@ -526,6 +563,8 @@ int ocxl_link_add_pe(void *link_handle, int pasid, u32 pidr, u32 tidr,
>>>   	pe_data->mm = mm;
>>>   	pe_data->xsl_err_cb = xsl_err_cb;
>>>   	pe_data->xsl_err_data = xsl_err_data;
>>> +	pe_data->link = link;
>>> +	pe_data->mmu_notifier.ops = &ocxl_mmu_notifier_ops;
>>>   
>>>   	memset(pe, 0, sizeof(struct ocxl_process_element));
>>>   	pe->config_state = cpu_to_be64(calculate_cfg_state(pidr == 0));
>>> @@ -542,8 +581,16 @@ int ocxl_link_add_pe(void *link_handle, int pasid, u32 pidr, u32 tidr,
>>>   	 * by the nest MMU. If we have a kernel context, TLBIs are
>>>   	 * already global.
>>>   	 */
>>> -	if (mm)
>>> +	if (mm) {
>>>   		mm_context_add_copro(mm);
>>> +		if (link->arva) {
>>> +			/* Use MMIO registers for the TLB Invalidate
>>> +			 * operations.
>>> +			 */
>>> +			mmu_notifier_register(&pe_data->mmu_notifier, mm);
> Every other place doing stuff like this is de-duplicating the
> notifier. If you have multiple clients this will do multiple redundant
> invalidations?

We could have multiple clients, although that is not a common setup for
us: there is only one attach per process. But when there are several,
we must still perform the invalidation for each of them.

>
> The notifier get/put API is designed to solve that problem, you'd get
> a single notifier for the mm and then add the impacted arva's to some
> list at the notifier.

Thanks for the information.
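
For reference, the de-duplicated scheme Jason describes would look roughly
like the sketch below, using the mmu_notifier get/put API (which requires
the ops to provide alloc_notifier/free_notifier). The per-mm arva list and
its locking here are assumptions for illustration, not the final design:

```c
/* Hypothetical de-duplicated notifier: one notifier per mm, with all
 * impacted arva regions hanging off a list at the notifier.
 */
struct ocxl_mmu_notifier {
	struct mmu_notifier mn;
	struct list_head arva_list;	/* assumed: arvas sharing this mm */
	spinlock_t lock;
};

static struct mmu_notifier *ocxl_alloc_notifier(struct mm_struct *mm)
{
	struct ocxl_mmu_notifier *on = kzalloc(sizeof(*on), GFP_KERNEL);

	if (!on)
		return ERR_PTR(-ENOMEM);
	INIT_LIST_HEAD(&on->arva_list);
	spin_lock_init(&on->lock);
	return &on->mn;
}

static void ocxl_free_notifier(struct mmu_notifier *mn)
{
	kfree(container_of(mn, struct ocxl_mmu_notifier, mn));
}

static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = {
	.invalidate_range = invalidate_range,
	.alloc_notifier	  = ocxl_alloc_notifier,
	.free_notifier	  = ocxl_free_notifier,
};

/* In ocxl_link_add_pe(): mmu_notifier_get() returns the existing
 * notifier registered on this mm, or allocates a new one via
 * ocxl_alloc_notifier() on first use; invalidate_range() then walks
 * arva_list once per mm instead of firing per-client notifiers.
 */
```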
>
> Jason