[Patch v2] kexec: increase max of kexec segments and use dynamic allocation

Cong Wang amwang at redhat.com
Thu Jul 29 16:42:25 EST 2010


On 07/27/10 18:00, Milton Miller wrote:
> [ Added kexec at lists.infradead.org and linuxppc-dev at lists.ozlabs.org ]
>
>>
>> Currently KEXEC_SEGMENT_MAX is only 16, which is too small for machines with
>> many memory ranges.  When hibernating on a machine with disjoint memory we
>> need one segment for each memory region.  Increase this hard limit to 16K,
>> which is reasonably large.
>>
>> Also change ->segment from a static array to dynamically allocated memory.
>>
>> Cc: Neil Horman <nhorman at redhat.com>
>> Cc: huang ying <huang.ying.caritas at gmail.com>
>> Cc: Eric W. Biederman <ebiederm at xmission.com>
>> Signed-off-by: WANG Cong <amwang at redhat.com>
>>
>> ---
>> diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
>> index ed31a29..f115585 100644
>> --- a/arch/powerpc/kernel/machine_kexec_64.c
>> +++ b/arch/powerpc/kernel/machine_kexec_64.c
>> @@ -131,10 +131,7 @@ static void copy_segments(unsigned long ind)
>>   void kexec_copy_flush(struct kimage *image)
>>   {
>>   	long i, nr_segments = image->nr_segments;
>> -	struct  kexec_segment ranges[KEXEC_SEGMENT_MAX];
>> -
>> -	/* save the ranges on the stack to efficiently flush the icache */
>> -	memcpy(ranges, image->segment, sizeof(ranges));
>> +	struct  kexec_segment range;
>
> I'm glad you found our copy on the stack and removed the stack overflow
> that comes with this bump, but ...
>
>>
>>   	/*
>>   	 * After this call we may not use anything allocated in dynamic
>> @@ -148,9 +145,11 @@ void kexec_copy_flush(struct kimage *image)
>>   	 * we need to clear the icache for all dest pages sometime,
>>   	 * including ones that were in place on the original copy
>>   	 */
>> -	for (i = 0; i < nr_segments; i++)
>> -		flush_icache_range((unsigned long)__va(ranges[i].mem),
>> -			(unsigned long)__va(ranges[i].mem + ranges[i].memsz));
>> +	for (i = 0; i < nr_segments; i++) {
>> +		memcpy(&range, &image->segment[i], sizeof(range));
>> +		flush_icache_range((unsigned long)__va(range.mem),
>> +			(unsigned long)__va(range.mem + range.memsz));
>> +	}
>>   }
>
> This is executed after the copy, so as it says,
> "we may not use anything allocated in dynamic memory".
>
> We could allocate control pages to copy the segment list into.
> Actually ppc64 doesn't use the existing control page, but that
> is only 4kB today.
>
> We need the list to icache-flush all the pages in all the segments,
> as the indirect list doesn't include pages that were allocated at
> their destination.
>
> Or maybe the icache flush should be done in the generic code
> like it does for crash load segments?
>

I don't get the point here; according to the comments, the list is
copied onto the stack so the icache flush can be done efficiently.
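
That said, if the worry is that image->segment itself may already be
overwritten by the time the flush loop runs, I suppose the list could be
stashed in a control page at load time, since control pages are allocated
to avoid the destination ranges. A rough, untested sketch -- the
segment_copy field is made up for illustration and is not part of this
patch:

	/* At load time: keep a copy of the segment list in control
	 * pages, which don't collide with any destination page and
	 * therefore survive the copy. */
	static int kimage_stash_segments(struct kimage *image)
	{
		size_t sz = image->nr_segments * sizeof(struct kexec_segment);
		struct page *page;

		page = kimage_alloc_control_pages(image, get_order(sz));
		if (!page)
			return -ENOMEM;

		/* hypothetical field, for illustration only */
		image->segment_copy = page_address(page);
		memcpy(image->segment_copy, image->segment, sz);
		return 0;
	}

kexec_copy_flush() could then walk image->segment_copy instead of
image->segment after the copy is done.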

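For reference, since the diff above only shows the powerpc flush side: the
generic half of the patch replaces the fixed ->segment array (previously
"struct kexec_segment segment[KEXEC_SEGMENT_MAX];" inside struct kimage)
with a separate allocation, roughly along these lines (simplified sketch,
error unwinding trimmed, not the actual hunk):

	struct kimage *image;

	image = kzalloc(sizeof(*image), GFP_KERNEL);
	if (!image)
		return -ENOMEM;

	/* With the limit raised to 16K entries the segment list is far
	 * too big to embed in struct kimage or put on the stack, so
	 * allocate it separately for the number of segments requested. */
	image->segment = kcalloc(nr_segments, sizeof(*image->segment),
				 GFP_KERNEL);
	if (!image->segment) {
		kfree(image);
		return -ENOMEM;
	}
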
-- 
The opposite of love is not hate, it's indifference.
  - Elie Wiesel

