[PATCH 5/7] powerpc/64: handle linker stubs in low .text code

Nicholas Piggin npiggin at gmail.com
Wed Oct 19 14:15:58 AEDT 2016


Very large kernels require linker stubs, but the linker tends to place
them in locations that are very difficult to detect programmatically
with the assembler when taking absolute real (physical) addresses.
This breaks the early boot code. Create a small section just before
the .text section with 256 - 4 bytes of empty space, and adjust the
start of the .text section to match. The linker will tend to put stubs
in that section and not break our start-of-.text section label.
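
To illustrate the breakage (a simplified sketch in the style of the
head-64.h macros, not the exact code):

    /* Early boot code computes a label's real (physical) address by
     * offsetting from the start of .text, roughly: */
    #define ABS_ADDR(label)	(label - start_text + text_start)

If the linker places a branch stub group ahead of the first .text input
section, start_text gets pushed past the address that text_start names,
and every ABS_ADDR() result is off by the size of the stub group.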

This is a tiny waste of space on common kernels, but allows large
kernels to build and boot, which is convenient enough to outweigh the
cost.

This is a sad hack, which I will improve on if I can find a better way
to achieve it. Until then, it seems to work.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/head-64.h | 16 +++++++++++++---
 arch/powerpc/kernel/vmlinux.lds.S  |  2 ++
 arch/powerpc/tools/head_check.sh   |  5 +++++
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index 492ebe7..f7131cf 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -63,11 +63,21 @@
 	. = 0x0;						\
 start_##sname:
 
+/*
+ * The .linker_stub_catch section catches linker stubs that would otherwise
+ * be inserted into our .text section above the start_text label (which
+ * breaks the ABS_ADDR calculation). See kernel/vmlinux.lds.S and
+ * tools/head_check.sh for details. We would prefer to reserve just a
+ * cacheline (0x80), but 0x100 seems to be how the linker aligns stub groups.
+ */
 #define OPEN_TEXT_SECTION(start)				\
-	text_start = (start);					\
+	.section ".linker_stub_catch","ax",@progbits;		\
+linker_stub_catch:						\
+	. = 0x4;						\
 	.section ".text","ax",@progbits;			\
-	. = 0x0;						\
-start_text:
+	text_start = (start) + 0x100;				\
+	.balign 0x100;						\
+start_text:
 
 #define ZERO_FIXED_SECTION(sname, start, end)			\
 	sname##_start = (start);				\
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 7de0b05..a09c666 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -73,6 +73,8 @@ SECTIONS
 	} :kernel
 
 	.text : AT(ADDR(.text) - LOAD_OFFSET) {
+		*(.linker_stub_catch);
+		. = . ;
 		/* careful! __ftr_alt_* sections need to be close to .text */
 		ALIGN_FUNCTION();
 		*(.text .fixup __ftr_alt_* .ref.text);
diff --git a/arch/powerpc/tools/head_check.sh b/arch/powerpc/tools/head_check.sh
index 9635fe7..ae9faeb 100755
--- a/arch/powerpc/tools/head_check.sh
+++ b/arch/powerpc/tools/head_check.sh
@@ -23,6 +23,11 @@
 # etc) in the fixed section region (0 - 0x8000ish). Check what places are
 # calling those stubs.
 #
+# A ".linker_stub_catch" section is used to catch some stubs generated by
+# early .text code, which tend to get placed at the start of the section.
+# If there are too many such stubs, they can overflow this section. Expanding
+# it may help (or reducing the number of stub branches).
+#
 # Linker stubs use the TOC pointer, so even if fixed section code could
 # tolerate them being inserted into head code, they can't be allowed in low
 # level entry code (boot, interrupt vectors, etc) until r2 is set up. This
-- 
2.9.3
