On 24/11/2020 at 14:45, Jason Gunthorpe wrote:
> On Tue, Nov 24, 2020 at 09:17:38AM +0000, Christoph Hellwig wrote:
>>> @@ -470,6 +487,26 @@ void ocxl_link_release(struct pci_dev *dev, void *link_handle)
>>>  }
>>>  EXPORT_SYMBOL_GPL(ocxl_link_release);
>>>  
>>> +static void invalidate_range(struct mmu_notifier *mn,
>>> +			     struct mm_struct *mm,
>>> +			     unsigned long start, unsigned long end)
>>> +{
>>> +	struct pe_data *pe_data = container_of(mn, struct pe_data, mmu_notifier);
>>> +	struct ocxl_link *link = pe_data->link;
>>> +	unsigned long addr, pid, page_size = PAGE_SIZE;
> 
> The page_size variable seems unnecessary
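
Agreed, the intermediate variable doesn't buy us anything; the loop could
just step by PAGE_SIZE directly, something like this (untested sketch,
everything else unchanged):

	spin_lock(&link->atsd_lock);
	for (addr = start; addr < end; addr += PAGE_SIZE)
		pnv_ocxl_tlb_invalidate(&link->arva, pid, addr);
	spin_unlock(&link->atsd_lock);
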
>>> +
>>> +	pid = mm->context.id;
>>> +
>>> +	spin_lock(&link->atsd_lock);
>>> +	for (addr = start; addr < end; addr += page_size)
>>> +		pnv_ocxl_tlb_invalidate(&link->arva, pid, addr);
>>> +	spin_unlock(&link->atsd_lock);
>>> +}
>>> +
>>> +static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = {
>>> +	.invalidate_range = invalidate_range,
>>> +};
>>> +
>>>  static u64 calculate_cfg_state(bool kernel)
>>>  {
>>>  	u64 state;
>>> @@ -526,6 +563,8 @@ int ocxl_link_add_pe(void *link_handle, int pasid, u32 pidr, u32 tidr,
>>>  	pe_data->mm = mm;
>>>  	pe_data->xsl_err_cb = xsl_err_cb;
>>>  	pe_data->xsl_err_data = xsl_err_data;
>>> +	pe_data->link = link;
>>> +	pe_data->mmu_notifier.ops = &ocxl_mmu_notifier_ops;
>>>  
>>>  	memset(pe, 0, sizeof(struct ocxl_process_element));
>>>  	pe->config_state = cpu_to_be64(calculate_cfg_state(pidr == 0));
>>> @@ -542,8 +581,16 @@ int ocxl_link_add_pe(void *link_handle, int pasid, u32 pidr, u32 tidr,
>>>  	 * by the nest MMU. If we have a kernel context, TLBIs are
>>>  	 * already global.
>>>  	 */
>>> -	if (mm)
>>> +	if (mm) {
>>>  		mm_context_add_copro(mm);
>>> +		if (link->arva) {
>>> +			/* Use MMIO registers for the TLB Invalidate
>>> +			 * operations.
>>> +			 */
>>> +			mmu_notifier_register(&pe_data->mmu_notifier, mm);
> 
> Every other place doing stuff like this is de-duplicating the
> notifier. If you have multiple clients this will do multiple redundant
> invalidations?

We could have multiple clients, although that's not something we have
often. We only have one attach per process. But if there are several,
we still need an invalidation for each of them.

> 
> The notifier get/put API is designed to solve that problem, you'd get
> a single notifier for the mm and then add the impacted arva's to some
> list at the notifier.

Thanks for the information.
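
If I follow, that would mean adding alloc/free_notifier ops and getting
the notifier with mmu_notifier_get() at attach time, roughly along these
lines (only a rough, untested sketch; "ocxl_mm_notifier" and "arva_entry"
are made-up names here, and the pnv_ocxl_tlb_invalidate() arguments are
assumed to stay as in the patch):

	/* One notifier per mm, shared by every attached context */
	struct ocxl_mm_notifier {
		struct mmu_notifier mn;
		spinlock_t lock;
		struct list_head arvas;		/* list of arva_entry */
	};

	struct arva_entry {
		struct list_head list;
		void __iomem *arva;
	};

	static struct mmu_notifier *ocxl_alloc_notifier(struct mm_struct *mm)
	{
		struct ocxl_mm_notifier *n = kzalloc(sizeof(*n), GFP_KERNEL);

		if (!n)
			return ERR_PTR(-ENOMEM);
		spin_lock_init(&n->lock);
		INIT_LIST_HEAD(&n->arvas);
		return &n->mn;
	}

	static void ocxl_free_notifier(struct mmu_notifier *mn)
	{
		kfree(container_of(mn, struct ocxl_mm_notifier, mn));
	}

	static void invalidate_range(struct mmu_notifier *mn,
				     struct mm_struct *mm,
				     unsigned long start, unsigned long end)
	{
		struct ocxl_mm_notifier *n = container_of(mn, struct ocxl_mm_notifier, mn);
		struct arva_entry *e;
		unsigned long addr;

		/* one pass over the range per registered arva */
		spin_lock(&n->lock);
		list_for_each_entry(e, &n->arvas, list)
			for (addr = start; addr < end; addr += PAGE_SIZE)
				pnv_ocxl_tlb_invalidate(&e->arva, mm->context.id, addr);
		spin_unlock(&n->lock);
	}

	static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = {
		.invalidate_range = invalidate_range,
		.alloc_notifier	  = ocxl_alloc_notifier,
		.free_notifier	  = ocxl_free_notifier,
	};

ocxl_link_add_pe() would then call mmu_notifier_get(&ocxl_mmu_notifier_ops, mm)
instead of mmu_notifier_register(), add link->arva to the returned notifier's
list, and the remove path would drop the entry and call mmu_notifier_put().
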
> Jason