[PATCH] parser: Validate no-references, no-cover behavior

Stephen Finucane stephen at that.guru
Mon Jun 26 06:54:36 AEST 2017


This already works as expected, but let's add a test to prevent regressions.
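
For context, the useful property of the new fixture is that none of the three
mails (cover letter plus two patches) carries a References or In-Reply-To
header, so the parser cannot rely on threading information when grouping them
into a series. A quick way to confirm that property locally (a minimal sketch
using only the standard-library mailbox module, run from the top of the
patchwork tree):

    import mailbox

    # Sketch: verify the fixture really carries no threading headers.
    path = 'patchwork/tests/series/base-no-references-no-cover.mbox'
    for msg in mailbox.mbox(path):
        assert msg['References'] is None
        assert msg['In-Reply-To'] is None
        print(msg['Subject'])

The test itself just drives the existing _parse_mbox and assertSerialized
helpers, as seen in the test_series.py hunk below.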

Signed-off-by: Stephen Finucane <stephen at that.guru>
---
 .../tests/series/base-no-references-no-cover.mbox  | 304 +++++++++++++++++++++
 patchwork/tests/test_series.py                     |  19 ++
 2 files changed, 323 insertions(+)
 create mode 100644 patchwork/tests/series/base-no-references-no-cover.mbox

diff --git a/patchwork/tests/series/base-no-references-no-cover.mbox b/patchwork/tests/series/base-no-references-no-cover.mbox
new file mode 100644
index 0000000..9e1752f
--- /dev/null
+++ b/patchwork/tests/series/base-no-references-no-cover.mbox
@@ -0,0 +1,304 @@
+From mwb at linux.vnet.ibm.com Tue May 23 15:15:11 2017
+To: linuxppc-dev at lists.ozlabs.org, linux-kernel at vger.kernel.org
+From: Michael Bringmann <mwb at linux.vnet.ibm.com>
+Subject: [PATCH 0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs
+ and memory
+Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>,
+        Paul Mackerras <paulus at samba.org>,
+        Michael Ellerman <mpe at ellerman.id.au>,
+        Reza Arbab <arbab at linux.vnet.ibm.com>,
+        Thomas Gleixner <tglx at linutronix.de>,
+        Bharata B Rao <bharata at linux.vnet.ibm.com>,
+        Balbir Singh <bsingharora at gmail.com>,
+        Nathan Fontenot <nfont at linux.vnet.ibm.com>,
+        Michael Bringmann <mwb at linux.vnet.ibm.com>,
+        Shailendra Singh <shailendras at nvidia.com>,
+        "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>,
+        Sebastian Andrzej Siewior <bigeasy at linutronix.de>,
+        Andrew Donnellan <andrew.donnellan at au1.ibm.com>,
+        John Allen <jallen at linux.vnet.ibm.com>,
+        Tyrel Datwyler <tyreld at linux.vnet.ibm.com>,
+        Sahil Mehta <sahilmehta17 at gmail.com>,
+        Rashmica Gupta <rashmicy at gmail.com>,
+        Ingo Molnar <mingo at kernel.org>,
+        Thomas Gleixner <tglx at linutronix.de>
+Date: Tue, 23 May 2017 10:15:11 -0500
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf-8
+Content-Transfer-Encoding: 8bit
+Message-Id: <790af26b-7055-7997-2080-f967aef2d26d at linux.vnet.ibm.com>
+List-ID: <linux-kernel.vger.kernel.org>
+
+powerpc/numa: Correct the currently broken capability to set the
+topology for shared CPUs in LPARs.  At boot time for shared CPU
+lpars, the topology for each shared CPU is set to node zero, however,
+this is now updated correctly using the Virtual Processor Home Node
+(VPHN) capabilities information provided by the pHyp. The VPHN handling
+in Linux is disabled, if PRRN handling is present.
+
+powerpc/hotplug-memory: Removing or adding memory via the PowerPC
+hotplug interface shows anomalies in the association between memory
+and nodes.  The code was updated to better take advantage of defined
+nodes in order to associate memory to nodes not needed at boot time,
+but relevant to dynamically added memory.
+
+Signed-off-by: Michael Bringmann <mwb at linux.vnet.ibm.com>
+
+Michael Bringmann (2):
+  powerpc/numa: Update CPU topology when VPHN enabled
+  powerpc/hotplug-memory: Fix hot-add memory node assoc
+
+
+From mwb at linux.vnet.ibm.com Tue May 23 15:15:29 2017
+To: linuxppc-dev at lists.ozlabs.org, linux-kernel at vger.kernel.org
+From: Michael Bringmann <mwb at linux.vnet.ibm.com>
+Subject: [PATCH 1/2] powerpc/numa: Update CPU topology when VPHN enabled
+Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>,
+        Paul Mackerras <paulus at samba.org>,
+        Michael Ellerman <mpe at ellerman.id.au>,
+        Reza Arbab <arbab at linux.vnet.ibm.com>,
+        Thomas Gleixner <tglx at linutronix.de>,
+        Bharata B Rao <bharata at linux.vnet.ibm.com>,
+        Balbir Singh <bsingharora at gmail.com>,
+        Michael Bringmann <mwb at linux.vnet.ibm.com>,
+        Shailendra Singh <shailendras at nvidia.com>,
+        "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>,
+        Sebastian Andrzej Siewior <bigeasy at linutronix.de>,
+        Nathan Fontenot <nfont at linux.vnet.ibm.com>,
+        Andrew Donnellan <andrew.donnellan at au1.ibm.com>,
+        John Allen <jallen at linux.vnet.ibm.com>,
+        Tyrel Datwyler <tyreld at linux.vnet.ibm.com>,
+        Sahil Mehta <sahilmehta17 at gmail.com>,
+        Rashmica Gupta <rashmicy at gmail.com>,
+        Ingo Molnar <mingo at kernel.org>
+Date: Tue, 23 May 2017 10:15:29 -0500
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf-8
+Content-Transfer-Encoding: 8bit
+Message-Id: <4a1bec9a-d3d2-c0bd-3956-e6e402be334c at linux.vnet.ibm.com>
+List-ID: <linux-kernel.vger.kernel.org>
+
+powerpc/numa: Correct the currently broken capability to set the
+topology for shared CPUs in LPARs.  At boot time for shared CPU
+lpars, the topology for each shared CPU is set to node zero, however,
+this is now updated correctly using the Virtual Processor Home Node
+(VPHN) capabilities information provided by the pHyp. The VPHN handling
+in Linux is disabled, if PRRN handling is present.
+
+Signed-off-by: Michael Bringmann <mwb at linux.vnet.ibm.com>
+---
+ arch/powerpc/mm/numa.c                       |   19 ++++++++++++++++++-
+ arch/powerpc/platforms/pseries/dlpar.c       |    2 ++
+ arch/powerpc/platforms/pseries/hotplug-cpu.c |    3 ++-
+ 3 files changed, 22 insertions(+), 2 deletions(-)
+
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 371792e..15c2dd5 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -29,6 +29,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/uaccess.h>
+ #include <linux/slab.h>
++#include <linux/sched.h>
+ #include <asm/cputhreads.h>
+ #include <asm/sparsemem.h>
+ #include <asm/prom.h>
+@@ -42,6 +43,8 @@
+ #include <asm/vdso.h>
+ 
+ static int numa_enabled = 1;
++static int topology_inited;
++static int topology_update_needed;
+ 
+ static char *cmdline __initdata;
+ 
+@@ -1321,8 +1324,11 @@ int arch_update_cpu_topology(void)
+ 	struct device *dev;
+ 	int weight, new_nid, i = 0;
+ 
+-	if (!prrn_enabled && !vphn_enabled)
++	if (!prrn_enabled && !vphn_enabled) {
++		if (!topology_inited)
++			topology_update_needed = 1;
+ 		return 0;
++	}
+ 
+ 	weight = cpumask_weight(&cpu_associativity_changes_mask);
+ 	if (!weight)
+@@ -1361,6 +1367,8 @@ int arch_update_cpu_topology(void)
+ 			cpumask_andnot(&cpu_associativity_changes_mask,
+ 					&cpu_associativity_changes_mask,
+ 					cpu_sibling_mask(cpu));
++			pr_info("Assoc chg gives same node %d for cpu%d\n",
++					new_nid, cpu);
+ 			cpu = cpu_last_thread_sibling(cpu);
+ 			continue;
+ 		}
+@@ -1377,6 +1385,9 @@ int arch_update_cpu_topology(void)
+ 		cpu = cpu_last_thread_sibling(cpu);
+ 	}
+ 
++	if (i)
++		updates[i-1].next = NULL;
++
+ 	pr_debug("Topology update for the following CPUs:\n");
+ 	if (cpumask_weight(&updated_cpus)) {
+ 		for (ud = &updates[0]; ud; ud = ud->next) {
+@@ -1423,6 +1434,7 @@ int arch_update_cpu_topology(void)
+ 
+ out:
+ 	kfree(updates);
++	topology_update_needed = 0;
+ 	return changed;
+ }
+ 
+@@ -1600,6 +1612,11 @@ static int topology_update_init(void)
+ 	if (!proc_create("powerpc/topology_updates", 0644, NULL, &topology_ops))
+ 		return -ENOMEM;
+ 
++	topology_inited = 1;
++	if (topology_update_needed)
++		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
++					nr_cpumask_bits);
++
+ 	return 0;
+ }
+ device_initcall(topology_update_init);
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index bda18d8..5106263 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -592,6 +592,8 @@ static ssize_t dlpar_show(struct class *class, struct class_attribute *attr,
+ 
+ static int __init pseries_dlpar_init(void)
+ {
++	arch_update_cpu_topology();
++
+ 	pseries_hp_wq = alloc_workqueue("pseries hotplug workqueue",
+ 					WQ_UNBOUND, 1);
+ 	return sysfs_create_file(kernel_kobj, &class_attr_dlpar.attr);
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 7bc0e91..b5eff35 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -619,7 +619,8 @@ static int dlpar_cpu_remove_by_index(u32 drc_index)
+ 	}
+ 
+ 	rc = dlpar_cpu_remove(dn, drc_index);
+-	of_node_put(dn);
++	if (rc)
++		of_node_put(dn);
+ 	return rc;
+ }
+ 
+
+
+From mwb at linux.vnet.ibm.com Tue May 23 15:15:44 2017
+To: linuxppc-dev at lists.ozlabs.org, linux-kernel at vger.kernel.org
+Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org>,
+        Paul Mackerras <paulus at samba.org>,
+        Michael Ellerman <mpe at ellerman.id.au>,
+        Reza Arbab <arbab at linux.vnet.ibm.com>,
+        Thomas Gleixner <tglx at linutronix.de>,
+        Bharata B Rao <bharata at linux.vnet.ib>,
+        Balbir Singh <bsingharora at gmail.com>,
+        Michael Bringmann <mwb at linux.vnet.ibm.com>,
+        Shailendra Singh <shailendras at nvidia.com>,
+        "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>,
+        Sebastian Andrzej Siewior <bigeasy at linutronix.de>
+From: Michael Bringmann <mwb at linux.vnet.ibm.com>
+Subject: [Patch 2/2]: powerpc/hotplug/mm: Fix hot-add memory node assoc
+Date: Tue, 23 May 2017 10:15:44 -0500
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf-8
+Content-Transfer-Encoding: 8bit
+Message-Id: <3bb44d92-b2ff-e197-4bdf-ec6d588d6dab at linux.vnet.ibm.com>
+List-ID: <linux-kernel.vger.kernel.org>
+
+Removing or adding memory via the PowerPC hotplug interface shows
+anomalies in the association between memory and nodes.  The code
+was updated to initialize more possible nodes to make them available
+to subsequent DLPAR hotplug-memory operations, even if they are not
+needed at boot time.
+
+Signed-off-by: Michael Bringmann <mwb at linux.vnet.ibm.com>
+---
+ arch/powerpc/mm/numa.c |   44 ++++++++++++++++++++++++++++++++------------
+ 1 file changed, 32 insertions(+), 12 deletions(-)
+
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 15c2dd5..3d58c1f 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -870,7 +870,7 @@ void __init dump_numa_cpu_topology(void)
+ }
+ 
+ /* Initialize NODE_DATA for a node on the local memory */
+-static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
++static void setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+ {
+ 	u64 spanned_pages = end_pfn - start_pfn;
+ 	const size_t nd_size = roundup(sizeof(pg_data_t), SMP_CACHE_BYTES);
+@@ -878,23 +878,41 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+ 	void *nd;
+ 	int tnid;
+ 
+-	nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+-	nd = __va(nd_pa);
++	if (!node_data[nid]) {
++		nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
++		nd = __va(nd_pa);
+ 
+-	/* report and initialize */
+-	pr_info("  NODE_DATA [mem %#010Lx-%#010Lx]\n",
+-		nd_pa, nd_pa + nd_size - 1);
+-	tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
+-	if (tnid != nid)
+-		pr_info("    NODE_DATA(%d) on node %d\n", nid, tnid);
++		node_data[nid] = nd;
++		memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
++		NODE_DATA(nid)->node_id = nid;
++
++		/* report and initialize */
++		pr_info("  NODE_DATA [mem %#010Lx-%#010Lx]\n",
++			nd_pa, nd_pa + nd_size - 1);
++		tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
++		if (tnid != nid)
++			pr_info("    NODE_DATA(%d) on node %d\n", nid, tnid);
++	} else {
++		nd_pa = (u64) node_data[nid];
++		nd = __va(nd_pa);
++	}
+ 
+-	node_data[nid] = nd;
+-	memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
+-	NODE_DATA(nid)->node_id = nid;
+ 	NODE_DATA(nid)->node_start_pfn = start_pfn;
+ 	NODE_DATA(nid)->node_spanned_pages = spanned_pages;
+ }
+ 
++static void setup_nodes(void)
++{
++	int i, l = 32 /* MAX_NUMNODES */;
++
++	for (i = 0; i < l; i++) {
++		if (!node_possible(i)) {
++			setup_node_data(i, 0, 0);
++			node_set(i, node_possible_map);
++		}
++	}
++}
++
+ void __init initmem_init(void)
+ {
+ 	int nid, cpu;
+@@ -914,6 +932,8 @@ void __init initmem_init(void)
+ 	 */
+ 	nodes_and(node_possible_map, node_possible_map, node_online_map);
+ 
++	setup_nodes();
++
+ 	for_each_online_node(nid) {
+ 		unsigned long start_pfn, end_pfn;
+ 
diff --git a/patchwork/tests/test_series.py b/patchwork/tests/test_series.py
index 181fc6d..0a33b75 100644
--- a/patchwork/tests/test_series.py
+++ b/patchwork/tests/test_series.py
@@ -222,6 +222,25 @@ class BaseSeriesTest(_BaseTestCase):
 
         self.assertSerialized(patches, [2])
 
+    def test_no_references_no_cover(self):
+        """Series received with a cover letter but no reference headers.
+
+        Parse a series with a cover letter and two patches, received
+        without any reference headers.
+
+        Input:
+          - [PATCH 0/2] powerpc/dlpar: Correct display of hot-add/hot-remove
+                CPUs
+            - [PATCH 1/2] powerpc/numa: Update CPU topology when VPHN enabled
+            - [Patch 2/2]: powerpc/hotplug/mm: Fix hot-add memory node assoc
+        """
+        covers, patches, _ = self._parse_mbox(
+            'base-no-references-no-cover.mbox', [1, 2, 0])
+
+        self.assertSerialized(patches, [2])
+        self.assertSerialized(covers, [1])
+
+
 
 class RevisedSeriesTest(_BaseTestCase):
     """Tests for a series plus a single revision.
-- 
2.9.4


