[PATCH v3 3/5] powerpc/64: Convert patch_instruction() to patch_u32()

Christophe Leroy christophe.leroy at csgroup.eu
Tue May 14 14:39:30 AEST 2024



On 14/05/2024 at 04:59, Benjamin Gray wrote:
> On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
>> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
>>> This use of patch_instruction() works on 32-bit data and can fail
>>> if the data looks like a prefixed instruction and the extra write
>>> crosses a page boundary. Use patch_u32() to fix the write size.
>>>
>>> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
>>> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
>>> Signed-off-by: Benjamin Gray <bgray at linux.ibm.com>
>>>
>>> ---
>>>
>>> v2: * Added the fixes tag, it seems appropriate even if the subject does
>>>        mention a more robust solution being required.
>>>
>>> patch_u64() should be more efficient, but judging from the bug report
>>> it doesn't seem like the data is doubleword aligned.
>>
>> Asking again, is that still the case? It looks like at least the first
>> fix below can be converted to patch_u64().
>>
>> - Naveen
> 
> Sorry, I think I forgot this question last time. Reading the commit
> descriptions you linked, I don't see any mention of "entry->funcdata
> will always be doubleword aligned because XYZ". If the patch makes it
> doubleword aligned anyway, I wouldn't be confident asserting all
> callers will always do this without looking into it a lot more.
> 
> Perhaps a separate series could optimise it with appropriate
> justification/assertions to catch bad alignment. But I think leaving it
> out of this series is fine because the original works in words, so it's
> not regressing anything.

As far as I can see, the struct is 64-bit aligned by definition, so the
funcdata field is aligned too, as there are just 8 x u32 (32 bytes) before it:

struct ppc64_stub_entry {
	/*
	 * 28 byte jump instruction sequence (7 instructions) that can
	 * hold ppc64_stub_insns or stub_insns. Must be 8-byte aligned
	 * with PCREL kernels that use prefix instructions in the stub.
	 */
	u32 jump[7];
	/* Used by ftrace to identify stubs */
	u32 magic;
	/* Data for the above code */
	func_desc_t funcdata;
} __aligned(8);
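
If we wanted to rely on that, the assumption could be documented with a 
compile-time check, for instance (just a sketch, not part of the patch; it 
would have to live inside a function such as create_stub(), and it assumes 
the usual BUILD_BUG_ON() and offsetof() helpers are available there, which 
they normally are via the common module includes):

	/*
	 * Sketch: with 8 x u32 (32 bytes) before funcdata and the struct
	 * aligned to 8 bytes, funcdata sits on a doubleword boundary, so
	 * patch_u64() could in principle be used on it.
	 */
	BUILD_BUG_ON(offsetof(struct ppc64_stub_entry, funcdata) % 8);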

> 
>>
>>> ---
>>>   arch/powerpc/kernel/module_64.c | 5 ++---
>>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
>>> index 7112adc597a8..e9bab599d0c2 100644
>>> --- a/arch/powerpc/kernel/module_64.c
>>> +++ b/arch/powerpc/kernel/module_64.c
>>> @@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
>>>   	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
>>>   	desc = func_desc(addr);
>>>   	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
>>> -		if (patch_instruction(((u32 *)&entry->funcdata) + i,
>>> -				      ppc_inst(((u32 *)(&desc))[i])))
>>> +		if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
>>>   			return 0;
>>>   	}
>>>   
>>> -	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
>>> +	if (patch_u32(&entry->magic, STUB_MAGIC))
>>>   		return 0;
>>>   
>>>   	return 1;
>>> -- 
>>> 2.44.0
>>>
> 

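Assuming patch_u64() takes a u64 pointer and value like the other patch_u*() 
helpers, and given that func_desc_t is 8 or 16 bytes (a whole number of 
doublewords for both ABIs), the copy above could in principle become 
something like this untested sketch, reusing the existing loop counter:

	/* Untested sketch: copy the descriptor one doubleword at a time. */
	for (i = 0; i < sizeof(func_desc_t) / sizeof(u64); i++) {
		if (patch_u64(((u64 *)&entry->funcdata) + i, ((u64 *)&desc)[i]))
			return 0;
	}

Whether to actually make that change is a separate question and could, as 
said above, be left to a follow-up series once the alignment of all callers 
is confirmed.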
