[PATCH] kernel/module_64.c: Add REL24 relocation support of livepatch symbols

Kamalesh Babulal kamalesh at linux.vnet.ibm.com
Wed Oct 4 19:15:15 AEDT 2017


On Tuesday 03 October 2017 03:00 PM, Naveen N. Rao wrote:

Hi Naveen,

[snip]
> 
> A few small nits focusing on just the trampoline...
> 
>> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> index c98e90b..708a96d 100644
>> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> @@ -249,6 +249,83 @@ livepatch_handler:
>>
>>  	/* Return to original caller of live patched function */
>>  	blr
>> +
>> +	/*
>> +	 * This livepatch stub code is called from a livepatch module to
>> +	 * jump into the kernel or other modules. It replicates the
>> +	 * livepatch_handler code, with the exception that it jumps to the
>> +	 * trampoline instead of the patched function.
>> +	 */
>> +	.global klp_stub_insn
>> +klp_stub_insn:
>> +	CURRENT_THREAD_INFO(r12, r1)
>> +
>> +	/* Allocate 3 x 8 bytes */
>> +	ld      r11, TI_livepatch_sp(r12)
>> +	addi    r11, r11, 24
>> +	std     r11, TI_livepatch_sp(r12)
>> +
>> +	/* Save toc & real LR on livepatch stack */
>> +	std     r2,  -24(r11)
>> +	mflr    r12
>> +	std     r12, -16(r11)
>> +
>> +	/* Store stack end marker */
>> +	lis     r12, STACK_END_MAGIC@h
>> +	ori     r12, r12, STACK_END_MAGIC@l
>> +	std     r12, -8(r11)
> 
> Seeing as this is the same as livepatch_handler() except for this part 
> in the middle, does it make sense to reuse livepatch_handler() with 
> appropriate labels added for your use?  You could patch in the below 5 
> instructions using the macros from ppc-opcode.h...

Thanks for the review. The current upstream livepatch_handler code
is a bit different. I have posted a bug fix at
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-September/163824.html
which alters livepatch_handler to have similar code. Re-using the
livepatch_handler code is a good idea; I will re-spin v2 on top of
the bug fix posted earlier.
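
Roughly, I was thinking of something like the below on the
module_64.c side, once the handler text can be copied and patched in
place (sketch only; klp_stub_insn/klp_stub_insn_end are the labels
around the trampoline text, and KLP_STUB_ENTRY_IDX /
KLP_STUB_FUNCDATA_OFFSET are placeholder names for this mail, not
final code):

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <asm/ppc-opcode.h>		/* PPC_HA(), PPC_LO() */

/* Sketch: fill one free klp stub slot for the given target address. */
static int klp_patch_stub(u32 *stub, unsigned long func_addr,
			  unsigned long my_r2)
{
	extern u32 klp_stub_insn[], klp_stub_insn_end[];
	/* The stub loads its own address into r11, TOC-relative. */
	long reladdr = (unsigned long)stub - my_r2;

	if (reladdr > 0x7fffffff || reladdr < -(0x80000000L))
		return -ERANGE;	/* not reachable via addis/addi from r2 */

	/* Copy the trampoline text into the free stub slot... */
	memcpy(stub, klp_stub_insn,
	       (unsigned long)klp_stub_insn_end - (unsigned long)klp_stub_insn);

	/* ...and patch "addis r11,r2,X@ha; addi r11,r11,X@l" at the
	 * klp_stub_entry offset, where the copied immediates are 0. */
	stub[KLP_STUB_ENTRY_IDX]     |= PPC_HA(reladdr);
	stub[KLP_STUB_ENTRY_IDX + 1] |= PPC_LO(reladdr);

	/* Target address, picked up by the "ld r12, ...(r11)" in the stub. */
	*(unsigned long *)((char *)stub + KLP_STUB_FUNCDATA_OFFSET) = func_addr;

	return 0;
}

That is the same PPC_HA()/PPC_LO() fixup create_stub() already does
for the regular ppc64 module stubs.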

> 
>> +
>> +	/*
>> +	 * Stub memory is allocated dynamically, during the module load.
>> +	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
>> +	 * identifies the available free stub slot and loads the address into
>> +	 * r11 with two instructions.
>> +	 *
>> +	 * addis r11, r2, stub_address@ha
>> +	 * addi  r11, r11, stub_address@l
>> +	 */
>> +	.global klp_stub_entry
>> +klp_stub_entry:
>> +	addis   r11, r2, 0
>> +	addi    r11, r11, 0
>> +
>> +	/* Load the r12 with called function address from entry->funcdata */
>> +	ld      r12, 128(r11)
>> +
>> +	/* Move r12 into ctr for global entry and branch there */
>> +	mtctr   r12
>> +	bctrl
>> +
>> +	/*
>> +	 * Now we are returning to the patched function. We are free to
>> +	 * use r11, r12 and we can use r2 until we restore it.
>> +	 */
>> +	CURRENT_THREAD_INFO(r12, r1)
>> +
>> +	ld      r11, TI_livepatch_sp(r12)
>> +
>> +	/* Check stack marker hasn't been trashed */
>> +	lis     r2,  STACK_END_MAGIC@h
>> +	ori     r2,  r2, STACK_END_MAGIC@l
>> +	ld      r12, -8(r11)
>> +2:	tdne    r12, r2
>> +	EMIT_BUG_ENTRY 2b, __FILE__, __LINE__ - 1, 0
> 
> If you plan to keep this trampoline separate from livepatch_handler(), 
> note that the above bug entry is not required since you copy only the 
> text of this trampoline elsewhere and you won't have an associated bug 
> entry for that new stub address.
> 

Agreed, the klp_stub_entry trampoline will go away in v2.
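
For reference, the R_PPC_REL24 handling this hooks into on the
module_64.c side is roughly of this shape (sketch, not the exact v2
code; klp_stub_for_addr() is the helper named in the stub comment
above, and the SHN_LIVEPATCH check is only meant to illustrate how
the livepatch symbols are told apart):

	case R_PPC_REL24:
		if (sym->st_shndx == SHN_UNDEF ||
		    sym->st_shndx == SHN_LIVEPATCH) {
			/* External call: has to go via a stub. */
			if (!restore_r2((u32 *)location, me))
				return -ENOEXEC;

			if (sym->st_shndx == SHN_LIVEPATCH)
				/* Livepatch symbol: use the klp stub so the
				 * caller's TOC/LR are saved on the livepatch
				 * stack, as in livepatch_handler. */
				value = klp_stub_for_addr(sechdrs, value, me);
			else
				value = stub_for_addr(sechdrs, value, me);

			if (!value)
				return -ENOENT;
		} else
			value += local_entry_offset(sym);

		/* ... existing code turns value into the 24-bit branch ... */
		break;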

-- 
cheers,
Kamalesh.


